arXiv:2308.12953 | Lalit Vaishya | 2023-08-24T17:42:05Z | http://arxiv.org/abs/2308.12953v1

# Average behaviour of Hecke eigenvalues over certain polynomial
###### Abstract.
In this article, we investigate the average behaviour of normalised Hecke eigenvalues over certain polynomials and establish an estimate for the power moments of the normalised Hecke eigenvalues of a normalised Hecke eigenform of weight \(k\geq 2\) for the full modular group \(SL_{2}(\mathbb{Z})\) over a certain polynomial given by a sum of triangular numbers with certain positive coefficients. More precisely, for each \(r\in\mathbb{N}\), we obtain an asymptotic for the following sum
\[\sideset{}{^{\flat}}\sum_{\begin{subarray}{c}\alpha(\underline{x})+1\leq X\\ \underline{x}\in\mathbb{Z}^{4}\end{subarray}}\lambda_{f}^{r}(\alpha(\underline{x})+1),\]
where \(\sum^{\flat}\) means that the sum runs over the square-free positive integers, and \(\lambda_{f}(n)\) is the normalised \(n^{\text{th}}\)-Hecke eigenvalue of a normalised Hecke eigenform \(f\in S_{k}(SL_{2}(\mathbb{Z}))\), and \(\alpha(\underline{x})=\frac{1}{2}\left(x_{1}^{2}+x_{1}+x_{2}^{2}+x_{2}+2(x_{3 }^{2}+x_{3})+4(x_{4}^{2}+x_{4})\right)\in\mathbb{Q}[x_{1},x_{2},x_{3},x_{4}]\) is a polynomial, and \(\underline{x}=(x_{1},x_{2},x_{3},x_{4})\in\mathbb{Z}^{4}\).
Key words and phrases: Fourier coefficients of cusp forms, symmetric power \(L\)-functions, asymptotic behaviour.

2010 Mathematics Subject Classification: Primary 11F30, 11F11, 11M06; Secondary 11N37.
## 1. Introduction
Let \(S_{k}(SL_{2}(\mathbb{Z}))\) denote the \(\mathbb{C}\)-vector space of cusp forms of weight \(k\) for the full modular group \(SL_{2}(\mathbb{Z})\). A cusp form \(f\in S_{k}(SL_{2}(\mathbb{Z}))\) is said to be a Hecke eigenform if \(f\) is a simultaneous eigenfunction for all the Hecke operators. Let \(a_{f}(n)\) denote the \(n^{\text{th}}\) Fourier coefficient of a cusp form \(f\in S_{k}(SL_{2}(\mathbb{Z}))\). A cusp form \(f\) is said to be normalised if \(a_{f}(1)=1\). We define the normalised \(n^{\text{th}}\) Fourier coefficient \(\lambda_{f}(n)\) by \(\lambda_{f}(n):=a_{f}(n)/n^{\frac{k-1}{2}}\). The normalised Fourier coefficient \(\lambda_{f}(n)\) is a multiplicative function and satisfies the following Hecke relation [10, Eq. (6.83)]:
\[\lambda_{f}(m)\lambda_{f}(n)=\sum_{d|m,n}\lambda_{f}\left(\frac{mn}{d^{2}} \right), \tag{1}\]
for all positive integers \(m\) and \(n\). The Ramanujan conjecture predicts that \(|\lambda_{f}(p)|\leq 2\); this was established in the pioneering work of Deligne. More precisely, we have
\[|\lambda_{f}(n)|\leq d(n)\ll_{\epsilon}n^{\epsilon}, \tag{2}\]
for any arbitrarily small \(\epsilon>0\), where \(d(n)\) denotes the number of positive divisors of \(n\).
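For a quick illustration of the Hecke relation (1), take \(m=n=p\) prime: the common divisors of \(m\) and \(n\) are \(1\) and \(p\), so
\[\lambda_{f}(p)^{2}=\lambda_{f}(p^{2})+\lambda_{f}(1)=\lambda_{f}(p^{2})+1,\]
and Deligne's bound then gives \(|\lambda_{f}(p^{2})|\leq 3=d(p^{2})\), in agreement with (2).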
The study of the average behaviour of arithmetical functions has attracted many researchers. It is a well-known approach in analytic number theory to examine the moments of arithmetical functions in order to understand their behaviour. In this regard, Fourier coefficients of cuspidal automorphic forms (more precisely, Fourier coefficients of classical cusp forms) are one of the interesting hosts. Moreover, the average behaviour of arithmetical functions over certain sequences is riveting but quite mysterious. The randomness in the behaviour of Fourier coefficients of classical cusp forms leads to many equidistribution
results. There are very interesting results on the distribution of Fourier coefficients over certain sparse sets of natural numbers. For example, Iwaniec and Kowalski [11] studied the distribution of \(\{\lambda_{f}(p):p-\text{prime}\}\) and established an analogue of the prime number theorem for classical Hecke eigenforms, and Blomer [3] considered the distribution of the sequence \(\{\lambda_{f}(q(n)):n\in\mathbb{N}\},\) where \(q(x)\) is a monic quadratic polynomial. More precisely, he proved that
\[\sum_{n\leq X}\lambda_{f}(q(n))\ll X^{\frac{6}{7}+\epsilon},\]
for any \(\epsilon>0.\) For a polynomial in more than one variable, the problem has been studied broadly for many arithmetic functions. A two-variable analogue of the sum studied in the work of Blomer [3] has been studied by Banerjee-Pandey [2] and Acharya [1]. More precisely, they studied the distribution of \(\{\lambda_{f}(q(a,b))\}\) where \(q(x,y)=x^{2}+y^{2},\) and obtained an estimate for the summatory function \(\sum_{\begin{subarray}{c}k,l\in\mathbb{Z}\\ k^{2}+l^{2}\leq X\end{subarray}}\lambda_{f}(q(k,l)).\) In our previous work (see [18, 19]), we studied the average behaviour of the Hecke eigenvalues \(\lambda_{f}(n)\) supported at the integers represented by primitive integral positive definite binary quadratic forms of fixed negative discriminant \(D\). In a joint work with M. K. Pandey [20], we studied the higher power moments of \(\lambda_{f}(n)\) over the same set of integers. More precisely, we obtained an estimate for the sum (for each fixed \(r\in\mathbb{N}\) and sufficiently large \(X\geq 1\))
\[\sum_{\begin{subarray}{c}\underline{x}\in\mathbb{Z}^{2}\\ Q(\underline{x})\leq X\end{subarray}}(\lambda_{f}(Q(\underline{x})))^{r},\]
where \(\lambda_{f}(n)\) is the \(n^{\text{th}}\) normalised Fourier coefficient of a Hecke eigenform \(f\) and \(Q(\underline{x})\) is a primitive integral positive definite binary quadratic form (reduced form) of fixed negative discriminant \(D\) with class number \(h(D)=1.\) This was a generalisation of the previous results proved in [18, 19]. As a consequence, we established and improved previous results on the sign changes of \(\lambda_{f}(n)\) in short intervals.
In this article, we consider the polynomial
\[\alpha(\underline{x})=\frac{1}{2}\left(x_{1}^{2}+x_{1}+x_{2}^{2}+x_{2}+2(x_{3 }^{2}+x_{3})+4(x_{4}^{2}+x_{4})\right)\in\mathbb{Q}[x_{1},x_{2},x_{3},x_{4}], \tag{3}\]
and study the power moments of the Hecke eigenvalues \(\lambda_{f}(n)\) over the integers which are represented by \(\alpha(\underline{x}).\)
We fix a few more notations and state our results.
Let \(M_{k}(\Gamma_{0}(N),\chi)\) denote the space of modular forms of weight \(k\) for the congruence subgroup \(\Gamma_{0}(N)\) with nebentypus \(\chi\) (a Dirichlet character modulo \(N\)). For characters \(\chi\) and \(\psi\) of modulus \(N\), we define the generalised divisor function given by
\[\sigma_{k;\chi,\psi}(n)=\sum_{d|n}\psi(d)\chi(n/d)d^{k}.\]
Let \(E_{k;\chi,\psi}\) denote an Eisenstein series in \(M_{k}(\Gamma_{0}(N),\chi\psi)\) given by
\[E_{k;\chi,\psi}(\tau)=\sum_{n=0}^{\infty}\sigma_{k-1;\chi,\psi}(n)q^{n}.\]
Let \(\alpha(\underline{x})\in\mathbb{Q}[x_{1},x_{2},x_{3},x_{4}]\) be the polynomial given in (3). The polynomial \(\alpha(\underline{x})\) takes non-negative integer values at the integral points \(\underline{x}\in\mathbb{Z}^{4}\). Let \(\delta_{4}(\alpha;n)\) denote the number of integral representations of \(n\) by the polynomial \(\alpha\), i.e.,
\[\delta_{4}(\alpha;n)=\#\{\underline{x}\in\mathbb{Z}^{4}:n=\alpha(\underline{x})\}.\]
Associated to a polynomial \(\alpha(\underline{x})\), we define the generating function \(T(\tau)\) of \(\delta_{4}(\alpha;n)\) given by
\[T(\tau):=\sum_{\underline{x}\in\mathbb{Z}^{4}}q^{(\alpha(\underline{x}))}\quad= \quad\sum_{n=0}^{\infty}\delta_{4}(\alpha;n)q^{n},\qquad q=e^{2\pi i\tau},\tau \in\mathbb{H}\]
where \(\mathbb{H}\) denotes the complex upper half-plane. The function \(qT(\tau)\) is a modular form of weight \(2\) and level \(8\) with nebentypus \(\chi_{8}\)[17, Remark 1.2], where \(\chi_{8}\) is the character given by the Jacobi symbol, \(\chi_{8}(n):=\left(\frac{8}{n}\right)\). It is well-known that the modular space \(M_{2}(\Gamma_{0}(8),\chi_{8})\) is generated by the generalised Eisenstein series \(E_{2;\chi_{8},\mathbf{1}}\) and \(E_{2;\mathbf{1},\chi_{8}}\) where \(\mathbf{1}\) denotes the trivial character of modulus \(8\). Then, it is easy to see that \(qT(\tau)=E_{2;\chi_{8},\mathbf{1}}(\tau).\) By comparing the \(n^{\text{th}}\) Fourier coefficients, we have
\[\delta_{4}(\alpha;n-1)=\sigma_{1;\chi_{8},\mathbf{1}}(n)=\sum_{d|n}\,\chi_{8}( d)\ \frac{n}{d}. \tag{4}\]
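As a small numerical illustration of the divisor sum in (4): the character \(\chi_{8}(d)=\left(\frac{8}{d}\right)\) vanishes for even \(d\) and equals \(+1\) or \(-1\) according as \(d\equiv\pm 1\) or \(d\equiv\pm 3\pmod{8}\), so, for instance,
\[\sigma_{1;\chi_{8},\mathbf{1}}(2)=\chi_{8}(1)\cdot 2+\chi_{8}(2)\cdot 1=2\quad\text{and}\quad\sigma_{1;\chi_{8},\mathbf{1}}(3)=\chi_{8}(1)\cdot 3+\chi_{8}(3)\cdot 1=3-1=2.\]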
_Remark 1.1_.: Let \(\alpha_{1}(\underline{x})=x_{1}^{2}+2x_{2}^{2}+2(x_{3}^{2}+x_{3})+2(x_{4}^{2}+ x_{4})\in\mathbb{Z}[x_{1},x_{2},x_{3},x_{4}]\) and \(\alpha_{2}(\underline{x})=x_{1}^{2}+(x_{2}^{2}+x_{2})+(x_{3}^{2}+x_{3})+2(x_{ 4}^{2}+x_{4})\in\mathbb{Z}[x_{1},x_{2},x_{3},x_{4}]\) be the polynomials. From the theory of modular forms, it is easy to see that the number of integral representations \(R_{4}(\alpha_{1};n-1)\) (resp. \(R_{4}(\alpha_{2};n-1)\)) of a positive integer \(n\) represented by the polynomial \(\alpha_{1}\) (resp. \(\alpha_{2}\)) is given by
\[R_{4}(\alpha_{1};n-1)=\sum_{d|n}\,\chi_{8}(d)\ \frac{n}{d}\qquad\text{and} \qquad R_{4}(\alpha_{2};n-1)=\sum_{d|n}\,\chi_{8}(d)\ \frac{n}{d}.\]
Let \(\lambda_{f}(n)\) denote the normalised \(n^{\text{th}}\)-Hecke eigenvalue of a normalised Hecke eigenform \(f\in S_{k}(SL_{2}(\mathbb{Z}))\). For each fixed \(r\in\mathbb{N}\) and \(X\geq 1\), we define the following power sum:
\[S_{r}(X):=\sideset{}{^{\flat}}\sum_{\begin{subarray}{c}\alpha(\underline{x})+1\leq X\\ \underline{x}\in\mathbb{Z}^{4}\end{subarray}}\lambda_{f}^{r}(\alpha(\underline{x})+1) \tag{5}\]
where \(\alpha(\underline{x})\in\mathbb{Q}[x_{1},x_{2},x_{3},x_{4}]\) is the polynomial defined in (3), and \(\sideset{}{^{\flat}}\sum\) indicates that the sum runs over those \(\underline{x}\) for which \(\alpha(\underline{x})+1\) is square-free.
With these notations, we state our results.
**Theorem 1.1**.: _Let \(\epsilon>0\) be arbitrarily small. For sufficiently large \(X\), we have_
\[S_{1}(X)=O_{f,\epsilon}(X^{\frac{3}{2}+\epsilon}) \tag{6}\]
_and_
\[S_{2}(X)=CX^{2}+O(X^{\frac{8}{5}+\epsilon}) \tag{7}\]
_where \(C\) is a positive absolute constant._
**Theorem 1.2**.: _Let \(\epsilon>0\) be arbitrarily small. For each \(r\geq 3\) and sufficiently large \(X\), we have the following estimates for \(S_{r}(X)\)._
\[S_{r}(X)=X^{2}P_{r}(\log X)+O(X^{2-\frac{1}{2(1+\gamma_{r})}+\epsilon}), \tag{8}\]
_where for each \(r=2m\ (m\geq 2)\), \(P_{r}(t)\) is a polynomial of degree \(d_{r}=\frac{1}{m}\binom{2m}{m-1}-1\) and_
\[\gamma_{r}=\frac{13}{84m}\binom{2m}{m-1}+\frac{15}{8(m-1)}\binom{2m}{m-2}+\frac{1}{4}\left[\sum_{n=0}^{m-2}\frac{(2m-2n+1)^{2}}{n}\binom{2m}{n-1}\right],\]
_and for each \(r=2m+1\ (m\geq 1)\), \(P_{r}(t)\equiv 0\), with_
\[\gamma_{r}=\frac{2}{3m}\binom{2m+1}{m-1}+\frac{1}{4}\left[\sum_{n=0}^{m-1}\frac{ (2m+1-2n+1)^{2}}{n}\binom{2m+1}{n-1}\right]-\frac{5}{6}.\]
_Remark 1.2_.: One can obtain exactly the same results as in Theorem 1.1 and Theorem 1.2 for the polynomials \(\alpha_{1}\) and \(\alpha_{2}\) in place of \(\alpha\).
Throughout the paper, \(\epsilon\) denotes an arbitrarily small positive constant but not necessarily the same one at each place of occurrence.
## 2. Key ingredients
The sums defined in (5) can be expressed in terms of known arithmetical functions using (4), i.e.,
\[\begin{split}S_{r}(X)&=\sideset{}{^{\flat}}\sum_{\begin{subarray}{c}\alpha(\underline{x})+1\leq X\\ \underline{x}\in\mathbb{Z}^{4}\end{subarray}}\lambda_{f}^{r}(\alpha(\underline{x})+1)=\sideset{}{^{\flat}}\sum_{n\leq X}\left(\lambda_{f}^{r}(n)\left(\sum_{n=\alpha(\underline{x})+1}1\right)\right)=\sideset{}{^{\flat}}\sum_{n\leq X}\lambda_{f}^{r}(n)\delta_{4}(\alpha;n-1)\\ &=\sideset{}{^{\flat}}\sum_{n\leq X}\lambda_{f}^{r}(n)\sigma_{1;\chi_{8},\mathbf{1}}(n),\end{split} \tag{9}\]
We define the following Dirichlet series associated to the sum \(S_{r}(X)\) given by:
\[R_{r}(s)=\sideset{}{^{\flat}}\sum_{n\geq 1}\frac{\lambda_{f}^{r}(n)\sigma_{1;\chi_{8},\mathbf{1}}(n)}{n^{s}}. \tag{10}\]
The above Dirichlet series converges for \(\Re(s)>2\). We obtain an estimate for \(S_{r}(X)\) by using the decomposition of \(R_{r}(s)\) in terms of known \(L\)-functions associated to the Hecke eigenform \(f\in S_{k}(SL_{2}(\mathbb{Z}))\). Before acquiring the decomposition of \(R_{r}(s)\), we define the \(L\)-functions associated to a normalised Hecke eigenform \(f(\tau)=\sum_{n=1}^{\infty}\lambda_{f}(n)n^{\frac{k-1}{2}}q^{n}\in S_{k}(SL_{2}(\mathbb{Z})).\) The Hecke \(L\)-function associated to \(f\) is given by (\(\Re(s)>1\))
\[L(s,f)=\sum_{n\geq 1}\frac{\lambda_{f}(n)}{n^{s}}=\prod_{p}\left(1-\frac{ \lambda_{f}(p)}{p^{s}}-\frac{1}{p^{2s}}\right)^{-1}=\prod_{p}\left(1-\frac{ \alpha_{p}}{p^{s}}\right)^{-1}\left(1-\frac{\beta_{p}}{p^{s}}\right)^{-1}, \tag{11}\]
where \(\alpha_{p}+\beta_{p}=\lambda_{f}(p)\) and \(\alpha_{p}\beta_{p}=1.\) For a given Dirichlet character \(\chi\) of modulus \(N,\) the twisted Hecke \(L\)-function is defined as follows:
\[L(s,f\times\chi)=\sum_{n\geq 1}\frac{\lambda_{f}(n)\chi(n)}{n^{s}}\qquad\Re(s) >1. \tag{12}\]
The twisted Hecke \(L\)-function \(L(s,f\times\chi)\) is associated to the cusp form \(f_{\chi}\in S_{k}(\Gamma_{0}(N^{2}))\) with Fourier coefficients \(\lambda_{f}(n)\chi(n)\). Both \(L\)-functions satisfy a nice functional equation and have analytic continuation to the whole complex plane [10, Section 7.2].
For \(m\geq 2\), the \(m^{th}\) symmetric power \(L\)-function is defined as
\[L(s,sym^{m}f):=\prod_{p-\text{prime}}\prod_{j=0}^{m}\left(1-\alpha_{p}{}^{m-j }\beta_{p}{}^{j}p^{-s}\right)^{-1}=\sum_{n=1}^{\infty}\frac{\lambda_{sym^{m}f} (n)}{n^{s}}, \tag{13}\]
where \(\lambda_{sym^{m}f}(n)\) is a multiplicative arithmetical function. At prime values, it is given by
\[\lambda_{sym^{m}f}(p)=\lambda_{f}(p^{m}). \tag{14}\]
From Deligne's bound, we have
\[|\lambda_{sym^{m}f}(n)|\leq d_{m+1}(n)\ll_{\epsilon}n^{\epsilon}\]
for any \(\epsilon>0\), where \(d_{m}(n)\) denotes the \(m\)-fold divisor function.
For each \(m\geq 2\), we also define the twisted \(m^{th}\) symmetric power \(L\)-functions given by
\[L(s,sym^{m}f\times\chi):=\sum_{n\geq 1}\frac{\lambda_{sym^{m}f}(n)\chi(n)}{n^{s}}, \tag{15}\]
similar to the twisted Hecke \(L\)-function. These \(L\)-functions are automorphic (for details, see [14, 15]) and inherit properties similar to those of the Hecke \(L\)-function. For a holomorphic Hecke eigenform \(f\), J. Cogdell and P. Michel [5] have given an explicit description of the analytic continuation and functional equation of \(L(s,sym^{m}f)\), \(m\geq 3\).
Let \(\zeta(s)\) and \(L(s,\chi)\) (for a Dirichlet character \(\chi\) of modulus \(N\)) denote the Riemann zeta function and Dirichlet \(L\)-function, respectively defined by
\[\zeta(s)=\sum_{n\geq 1}n^{-s}\quad\text{and}\quad L(s,\chi)=\sum_{n\geq 1} \chi(n)n^{-s}. \tag{16}\]
We assume the following conventions:
\[\begin{cases}L(s,sym^{0}f)&=\zeta(s),\\ L(s,sym^{1}f)&=L(s,f),\end{cases}\qquad L(s,sym^{0}f\times\chi)=L(s,\chi),\]
With these definitions, we state the decomposition of \(R_{r}(s)\), \(r\in\mathbb{N}\) into well-known \(L\)-functions.
**Lemma 2.1**.: _Let \(r\in\mathbb{N}.\) We have the following decomposition for \(R_{r}(s)\)._
\[R_{r}(s)=L_{r}(s)\times U_{r}(s),\qquad\text{where} \tag{17}\]
\[L_{1}(s)=L(s-1,f)L(s,f\times\chi_{8})\]
\[L_{2}(s)=\zeta(s-1)L(s,\chi_{8})L(s-1,sym^{2}f)L(s,sym^{2}f\times\chi_{8}).\]
_and for each \(r\geq 3\),_
\[L_{r}(s)=\prod_{n=0}^{[r/2]}\left(L(s-1,sym^{r-2n}f)^{\binom{r}{n}-\binom{r}{n-1}}L(s,sym^{r-2n}f\times\chi_{8})^{\binom{r}{n}-\binom{r}{n-1}}\right),\]
_and \(\binom{r}{n}\) is the binomial coefficient with the convention \(\binom{r}{n}=0\) if \(n<0,\) and \(\chi_{8}\) is the Dirichlet character modulo \(8\) and \(U_{r}(s)\) is a Dirichlet series given by_
\[U_{r}(s)=\prod_{p}\Bigg(1+\frac{A(p^{2})-\lambda_{f}(p)^{2r}\sigma_{1;\chi_{8},\mathbf{1}}^{2}(p)}{p^{2s}}+\cdots\Bigg).\]
_It converges absolutely and uniformly for \(\Re(s)>\frac{3}{2}\) and \(U_{r}(s)\neq 0\) for \(\Re(s)=2.\)_
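As a quick check of the shape of this decomposition in the simplest case \(r=1\): the coefficient of \(p^{-s}\) in \(L_{1}(s)=L(s-1,f)L(s,f\times\chi_{8})\) is
\[p\,\lambda_{f}(p)+\chi_{8}(p)\lambda_{f}(p)=\lambda_{f}(p)\left(p+\chi_{8}(p)\right)=\lambda_{f}(p)\,\sigma_{1;\chi_{8},\mathbf{1}}(p),\]
which matches the coefficient of \(p^{-s}\) in \(R_{1}(s)\); the correction factor \(U_{1}(s)\) therefore carries no \(p^{-s}\) term, as claimed in the lemma.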
Before proving Lemma 2.1, we state the following result which explicitly governs the proof of Lemma 2.1.
**Lemma 2.2**.: _[_20_, Lemma 2.2]_ _Let \(\ell\in\mathbb{N}.\) For each \(j\) with \(0\leq j\leq\ell\) and \(j\equiv\ell\pmod{2}\), let \(A_{\ell,j}:=\binom{\ell}{\frac{\ell-j}{2}}-\binom{\ell}{\frac{\ell-j}{2}-1}\), and \(A_{\ell,j}:=0\) otherwise, and let \(T_{m}(2x):=U_{m}(x)\) where \(U_{m}(x)\) is the \(m^{\text{th}}\) Chebyshev polynomial of the second kind. Then_
\[x^{\ell}=\sum_{j=0}^{\ell}A_{\ell,j}T_{j}(x).\]
### Proof of Lemma 2.1
From Deligne's estimate, we know that \(\lambda_{f}(p)=2\cos\theta_{p}\) for some \(\theta_{p}\in[0,\pi]\), and \(\lambda_{f}(p^{r})=T_{r}(2\cos\theta_{p})=U_{r}(\cos\theta_{p})\). From Lemma 2.2, we get an expression for \(\lambda_{f}(p)^{r}\) in terms of the symmetric power Fourier coefficients \(\lambda_{sym^{j}f}(p)\), i.e.,
\[\lambda_{f}(p)^{r}=\sum_{n=0}^{[r/2]}\left(\binom{r}{n}-\binom{r}{n-1}\right)\lambda_{sym^{r-2n}f}(p). \tag{18}\]
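For example, when \(r=2\), (18) reads
\[\lambda_{f}(p)^{2}=\left(\binom{2}{0}-\binom{2}{-1}\right)\lambda_{sym^{2}f}(p)+\left(\binom{2}{1}-\binom{2}{0}\right)\lambda_{sym^{0}f}(p)=\lambda_{sym^{2}f}(p)+1,\]
which, by (14), agrees with the identity \(\lambda_{f}(p)^{2}=\lambda_{f}(p^{2})+1\) obtained from the Hecke relation (1).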
We know that \(\lambda_{f}(n)\) and \(\sigma_{1;\chi_{8},1}(n)\) are multiplicative functions. So, \(R_{r}(s)\) is given in terms of an Euler product, i.e.,
\[R_{r}(s)=\sideset{}{^{\flat}}\sum_{n\geq 1}\frac{(\lambda_{f}(n))^{r}\sigma_{1;\chi_{8},\mathbf{1}}(n)}{n^{s}}=\prod_{p}\left(1+\frac{(\lambda_{f}(p))^{r}\sigma_{1;\chi_{8},\mathbf{1}}(p)}{p^{s}}\right)\]
for \(\Re(s)>2\). From (18), we have
\[\lambda_{f}^{r}(p)\sigma_{1;\chi_{8},\mathbf{1}}(p)=\lambda_{f}^{r}(p)(p+\chi_{8}(p))=\left(\sum_{n=0}^{[r/2]}\left(\binom{r}{n}-\binom{r}{n-1}\right)\lambda_{sym^{r-2n}f}(p)\right)(p+\chi_{8}(p))\]
\[=\sum_{n=0}^{[r/2]}\left(\binom{r}{n}-\binom{r}{n-1}\right)p\,\lambda_{sym^{r-2n}f}(p)+\sum_{n=0}^{[r/2]}\left(\binom{r}{n}-\binom{r}{n-1}\right)\lambda_{sym^{r-2n}f}(p)\,\chi_{8}(p).\]
For \(\Re(s)>2\), we express the function
\[L_{r}(s)=\prod_{n=0}^{[r/2]}\Big{(}L(s-1,sym^{r-2n}f)^{\left(\binom{r}{n}- \binom{r}{n-1}\right)}L(s,sym^{r-2n}f\times\chi_{8})^{\left(\binom{r}{n}- \binom{r}{n-1}\right)}\Big{)}\]
as an Euler product of the form
\[\prod_{p}\bigg{(}1+\frac{A(p)}{p^{s}}+\frac{A(p^{2})}{p^{2s}}+\cdots\bigg{)}, \quad\text{where}\quad A(p)=-\lambda_{f}(p)^{r}\sigma_{1;\chi_{8},1}(p).\]
Moreover, for each prime \(p\), we define the sequence \(B(p)=0\) and, for each \(j\geq 2\), \(B(p^{j})=A(p^{j})+A(p^{j-1})\lambda_{f}^{r}(p)\sigma_{1;\chi_{8},\mathbf{1}}(p).\) It is easy to see that \(B(n)\ll n^{1+\epsilon}\) for any \(\epsilon>0\). Associated to this sequence, we define the Euler product for \(U_{r}(s)\) given by
\[U_{r}(s)=\prod_{p}\bigg{(}1+\frac{B(p)}{p^{s}}+\frac{B(p^{2})}{p^{2s}}+\cdots \bigg{)}.\]
Then, it is easy to see that
\[R_{r}(s)=L_{r}(s)U_{r}(s).\]
This completes the proof.
### Convexity bound and integral moment of \(L\)-functions
**Lemma 2.3**.: _Let \(\zeta(s)\) be the Riemann zeta function. Then for any \(\epsilon>0\), we have_
\[\zeta(\sigma+it)\ll_{\epsilon}(1+|t|)^{\max\{\frac{13}{42}(1-\sigma),0\}+\epsilon} \tag{19}\]
_uniformly for \(\frac{1}{2}\leq\sigma\leq 1\) and \(|t|\geq 1\) and_
\[\int_{0}^{T}\left|\zeta\left(\frac{1}{2}+it\right)\right|^{2}dt\ll_{\epsilon}T ^{1+\epsilon} \tag{20}\]
uniformly for \(T\geq 1.\) For the sub-convexity bound and the integral estimate of \(\zeta(s)\), we refer to [4, Theorem 5] and [8, Theorem 8.4] respectively._
**Lemma 2.4**.: _[_7_, eq. (1.1)]_ _Let \(L(s,\chi)\) be the Dirichlet \(L\)-function for a Dirichlet character \(\chi\) modulo N. Then for any \(\epsilon>0\), we have_
\[L(\sigma+it,\chi)\ll_{\epsilon,N}(1+|t|)^{\frac{1}{3}(1-\sigma)+\epsilon} \tag{21}\]
_uniformly for \(\frac{1}{2}\leq\sigma\leq 1\) and \(|t|\geq 1\)._
**Lemma 2.5**.: _For any \(\epsilon>0\), the sub-convexity bound of \(L(s,f)\) is given by_
\[L(\sigma+it,f)\ll_{f,\epsilon}(1+|t|)^{\max\left\{\frac{2}{3}(1-\sigma),0 \right\}+\epsilon} \tag{22}\]
_uniformly for \(\frac{1}{2}\leq\sigma\leq 1\) and \(|t|\geq 1,\) and the second integral moment of \(L(s,f)\) is given by_
\[\int_{0}^{T}\left|L\left(\frac{1}{2}+\epsilon+it,f\right)\right|^{2}dt\ll_{f,\epsilon}T^{1+\epsilon} \tag{23}\]
_uniformly for \(T\geq 1.\) The results also hold for \(f\times\chi\) in place of \(f\), with a different absolute constant depending on \(f\) and \(\epsilon\)._
Proof.: The sub-convexity bound of the Hecke \(L\)-function \(L(s,f)\) follows from the standard argument of the Phragmén–Lindelöf convexity principle and a result of A. Good [6, Corollary]. For the integral estimate, we refer to [9, Theorem 2].
**Lemma 2.6**.: _[_16_, Corollary 2.1]_ _For any arbitrarily small \(\epsilon>0\), we have_
\[L(\sigma+it,sym^{2}f)\ll_{f,\epsilon}(1+|t|)^{\max\left\{\frac{5}{4}(1-\sigma ),0\right\}+\epsilon} \tag{24}\]
_uniformly for \(\frac{1}{2}\leq\sigma\leq 1\) and \(|t|\geq 1\). A similar result also holds for \(sym^{2}f\otimes\chi\)._
**Lemma 2.7**.: _[_11_, pp. 100]_ _Let \(L(s,F)\) be an \(L\)-function of degree \(m\geq 2,\) i.e.,_
\[L(s,F)=\sum_{n\geq 1}\frac{\lambda_{F}(n)}{n^{s}}=\prod_{p-\operatorname{ prime}}\prod_{j=1}^{m}\left(1-\frac{\alpha_{p,f,j}}{p^{s}}\right)^{-1}, \tag{25}\]
_where \(\alpha_{p,f,j}\), \(1\leq j\leq m\), are the local parameters of \(L(s,F)\) at the prime \(p\) and \(\lambda_{F}(n)=O(n^{\epsilon})\) for any \(\epsilon>0.\) We assume that the series and Euler product converge absolutely for \(\Re(s)>1\) and that \(L(s,F)\) is an entire function except possibly for a pole at \(s=1\) of order \(r\), and satisfies a nice functional equation \((s\to 1-s)\). Then for any \(\epsilon>0\) and \(s=\sigma+it\), we have_
\[\left(\frac{s-1}{s+1}\right)^{r}L(\sigma+it,F)\ll_{F,\epsilon}(1+|t|)^{\frac{ m}{2}(1-\sigma)+\epsilon} \tag{26}\]
_uniformly for \(0\leq\sigma\leq 1\) and \(|t|\geq 1\)._
**Lemma 2.8**.: _[_13_, Lemma 2.6]_ _Let \(L(s,F)\) be an \(L\)-function of degree \(m\geq 2\). Then for any \(\epsilon>0\) and \(T\geq 1\), we have_
\[\int_{T}^{2T}\left|L\left(\frac{1}{2}+\epsilon+it,F\right)\right|^{2}dt\ll_{ F,\epsilon}T^{\frac{m}{2}+\epsilon}. \tag{27}\]
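For instance, applying Lemma 2.8 to the degree-three \(L\)-function \(L(s,sym^{2}f)\) gives
\[\int_{T}^{2T}\left|L\left(\frac{1}{2}+\epsilon+it,sym^{2}f\right)\right|^{2}dt\ll_{f,\epsilon}T^{\frac{3}{2}+\epsilon},\]
which is the second-moment estimate used in the proof of Theorem 1.1 below.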
## 3. Proof of results
### General philosophy:
Let \(1\leq Y<\frac{X}{2}\). In order to obtain an upper bound for the sum \(S_{r}(X)\) given in (5), we introduce a smooth compactly supported function \(w(x)\) satisfying: \(w(x)=1\) for \(x\in[2Y,X]\); \(w(x)=0\) for \(x<Y\) and \(x>X+Y\); and \(w^{(j)}(x)\ll_{j}Y^{-j}\) for all \(j\geq 0.\) In general, for any arithmetical function \(f(n),\) we have
\[\sum_{n\leq X}f(n)=\sum_{n=1}^{\infty}f(n)w(n)+O\left(\sum_{n<2Y}|f(n)|\right) +O\left(\sum_{X<n<X+Y}|f(n)|\right). \tag{28}\]
Moreover, by Mellin's inverse transform, we have
\[\sum_{n=1}^{\infty}f(n)w(n)=\frac{1}{2\pi i}\int_{(b)}\tilde{w}(s)\left(\sum_{ n\geq 1}\frac{f(n)}{n^{s}}\right)ds, \tag{29}\]
where \(b\) is a real number larger than the abscissa of absolute convergence of \(\sum_{n\geq 1}f(n)n^{-s}\), and the Mellin transform \(\tilde{w}(s)\) is given by the integral \(\tilde{w}(s)=\int_{0}^{\infty}w(x)x^{s}\frac{dx}{x}.\)
We observe that (due to integration by parts),
\[\tilde{w}(s)=\frac{1}{s(s+1)\cdots(s+m-1)}\int_{0}^{\infty}w^{(m)}(x)x^{s+m-1 }dx\ll\frac{Y}{X^{1-\sigma}}\left(\frac{X}{|s|Y}\right)^{m}, \tag{30}\]
for any \(m\geq 0,\) where \(\sigma=\Re(s).\) For details, we refer to [12, Section 3].
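To see (30) in the simplest case \(m=1\) (with \(\sigma\geq 0\)): integrating by parts once gives \(\tilde{w}(s)=-\frac{1}{s}\int_{0}^{\infty}w^{\prime}(x)x^{s}dx\), and since \(w^{\prime}\) is supported on \([Y,2Y]\cup[X,X+Y]\) with \(w^{\prime}(x)\ll Y^{-1}\), we get
\[\tilde{w}(s)\ll\frac{1}{|s|}\cdot\frac{1}{Y}\cdot Y\cdot(X+Y)^{\sigma}\ll\frac{X^{\sigma}}{|s|}=\frac{Y}{X^{1-\sigma}}\left(\frac{X}{|s|Y}\right),\]
which is exactly the bound in (30) with \(m=1\).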
From Equation (28) with \(f(n)=\lambda_{f}^{r}(n)\sigma_{1;\chi_{8},\mathbf{1}}(n),\) we have
\[\sideset{}{^{\flat}}\sum_{n\leq X}\lambda_{f}^{r}(n)\sigma_{1;\chi_{8},\mathbf{1}}(n)=\sideset{}{^{\flat}}\sum_{n=1}^{\infty}\lambda_{f}^{r}(n)\sigma_{1;\chi_{8},\mathbf{1}}(n)w(n)+O(Y^{2+\epsilon})+O(X^{1+\epsilon}Y), \tag{31}\]
and, by (29),
\[\sideset{}{^{\flat}}\sum_{n=1}^{\infty}\lambda_{f}^{r}(n)\sigma_{1;\chi_{8},\mathbf{1}}(n)w(n)=\frac{1}{2\pi i}\int_{(2+\epsilon)}\tilde{w}(s)R_{r}(s)\,ds. \tag{32}\]
Moving the line of integration to \(\Re(s)=3/2+\epsilon\) and picking up the (possible) residue at \(s=2\), we obtain
\[\frac{1}{2\pi i}\int_{(2+\epsilon)}\tilde{w}(s)R_{r}(s)\,ds=\operatorname*{Res}_{s=2}\left(R_{r}(s)\tilde{w}(s)\right)+\frac{1}{2\pi i}\int_{(3/2+\epsilon)}\tilde{w}(s)R_{r}(s)\,ds. \tag{33}\]
By (30), the contribution of the integral over \(|s|\geq T=\frac{X^{1+\epsilon}}{Y}\) on the right hand side of (33) is negligibly small, i.e., \(O(X^{-A})\) for any large \(A>0\), if one chooses sufficiently large \(m>0.\) Hence, we have
\[\frac{1}{2\pi i}\int_{(3/2+\epsilon)}\tilde{w}(s)R_{r}(s)ds =\frac{1}{2\pi i}\int_{3/2+\epsilon-iT}^{3/2+\epsilon+iT}\tilde{w }(s)R_{r}(s)ds+O(X^{-A})\] \[\ll\int_{-T}^{T}|\tilde{w}(3/2+\epsilon+it)||R_{r}(3/2+\epsilon+ it)|dt+O(X^{-A})\] \[\ll\int_{0}^{T}\frac{X^{\frac{3}{2}+\epsilon}}{|\frac{3}{2}+ \epsilon+it|}|R_{r}(3/2+\epsilon+it)|dt+O(X^{-A})\] \[\ll\left(\int_{0}^{1}+\int_{1}^{T}\right)\frac{X^{\frac{3}{2}+ \epsilon}}{|\frac{3}{2}+\epsilon+it|}|R_{r}(3/2+\epsilon+it)|dt+O(X^{-A}),\]
where the estimate in the last line is obtained by substituting the bound for \(\tilde{w}(s)\) (given in (30)) with \(m=1.\) We substitute the decomposition of \(R_{r}(s)\) (\(R_{r}(s)=L_{r}(s)U_{r}(s)\)) from Lemma 2.1, and utilize the absolute convergence of \(U_{r}(s)\) in the region \(\Re(s)>\frac{3}{2}\), to get
\[\frac{1}{2\pi i}\int_{(3/2+\epsilon)}\tilde{w}(s)R_{r}(s)ds\ll X^{\frac{3}{2}+ \epsilon}+X^{\frac{3}{2}+\epsilon}\int_{1}^{T}\frac{|L_{r}(3/2+\epsilon+it)|}{ t}dt. \tag{34}\]
Thus, combining all the estimates, we have (for each fixed \(r\in\mathbb{N}\))
\[\begin{split}\sideset{}{^{\flat}}\sum_{n\leq X}\lambda_{f}^{r}(n)\sigma_{1;\chi_{8},\mathbf{1}}(n)&=\operatorname*{Res}_{s=2}\left(R_{r}(s)\tilde{w}(s)\right)+O\left(X^{\frac{3}{2}+\epsilon}\int_{1}^{T}\frac{|L_{r}(3/2+\epsilon+it)|}{t}dt\right)\\ &\quad+O(X^{\frac{3}{2}+\epsilon})+O(X^{1+\epsilon}Y)+O(X^{-A})\end{split} \tag{35}\]
where \(T=\frac{X^{1+\epsilon}}{Y}\), \(Y\) is a suitable parameter, and the first term on the RHS of (35) is present only when \(r\) is even. So, it is enough to obtain an estimate for the integral \(I_{r}\) (say) appearing in (35) in order to get the required estimate for the sum \(S_{r}(X)\) defined in (5).
### Proof of Theorem 1.1:
Since the function \(R_{1}(s)=L(s-1,f)L(s,f\times\chi_{8})U_{1}(s)\) is holomorphic at \(s=2\), from (35) we have
\[\begin{split} S_{1}(X)&=O\left(X^{\frac{3}{2}+ \epsilon}\int_{1}^{T}\frac{|L(1/2+\epsilon+it,f)L(3/2+\epsilon+it,f\times\chi _{8})|}{t}dt\right)\\ &\quad+O(X^{\frac{3}{2}+\epsilon})+O(X^{1+\epsilon}Y)+O(X^{-A}). \end{split} \tag{36}\]
Following the argument using the dyadic division method and then the Cauchy-Schwarz inequality, we have
\[\int_{1}^{T}\frac{|L(1/2+\epsilon+it,f)L(3/2+\epsilon+it,f\times \chi_{8})|}{t}dt\ll\int_{1}^{T}\frac{|L(1/2+\epsilon+it,f)|}{t}dt\] \[\ll\log T\sup_{1\leq T_{1}\leq T}\left(\frac{1}{T_{1}}\left(\int _{1}^{T_{1}}|L(1/2+\epsilon+it,f)|^{2}dt\right)^{\frac{1}{2}}\times\left( \int_{1}^{T_{1}}dt\right)^{\frac{1}{2}}\right)\ll T^{\epsilon}.\]
Thus, substituting the integral estimate in (36), we have
\[S_{1}(X)=O(X^{\frac{3}{2}+\epsilon}T^{\epsilon})+O(X^{1+\epsilon}Y)+O(X^{-A}).\]
We substitute \(T=\frac{X^{1+\epsilon}}{Y}\) and choose \(Y=X^{\frac{1}{2}+\epsilon}\) to get
\[S_{1}(X)=O(X^{\frac{3}{2}+\epsilon}).\]
In the case \(r=2\), from (35) we have
\[S_{2}(X) =\mathop{\rm Res}\limits_{s=2}\left(R_{2}(s)\tilde{w}(s)\right)+O \left(X^{\frac{3}{2}+\epsilon}\int_{1}^{T}\frac{|L_{2}(3/2+\epsilon+it)|}{t}dt\right)\] \[\quad+O(X^{\frac{3}{2}+\epsilon})+O(X^{1+\epsilon}Y)+O(X^{-A})\]
with \(L_{2}(s)=\zeta(s-1)L(s,\chi_{8})L(s-1,sym^{2}f)L(s,sym^{2}f\times\chi_{8}).\) Let
\[I_{2}=\int_{1}^{T}\frac{|L_{2}(3/2+\epsilon+it)|}{t}dt.\]
Substituting the decomposition of \(L_{2}(s)\) and using the absolute convergence of respective \(L\)-functions, we have
\[I_{2}\ll\int_{1}^{T}\frac{|\zeta(1/2+\epsilon+it)L(1/2+\epsilon+it,sym^{2}f)|} {t}dt.\]
Appealing the dyadic division method and then the Cauchy-Schwarz inequality, we have
\[I_{2} \ll\log T\sup_{1\leq T_{1}\leq T}\left(\frac{1}{T_{1}}\left(\int _{1}^{T_{1}}|\zeta(1/2+\epsilon+it)|^{2}dt\right)^{\frac{1}{2}}\left(\int_{1} ^{T_{1}}|L(1/2+\epsilon+it,sym^{2}f)|^{2}dt\right)^{\frac{1}{2}}\right)\] \[\ll T^{\frac{1}{4}+\epsilon}.\]
This bound is obtained by using the integral estimates for the respective \(L\)-functions: by (20) and Lemma 2.8 applied to the degree-three \(L\)-function \(L(s,sym^{2}f)\), the quantity inside the supremum is \(\ll T_{1}^{-1}\left(T_{1}^{1+\epsilon}\right)^{\frac{1}{2}}\left(T_{1}^{\frac{3}{2}+\epsilon}\right)^{\frac{1}{2}}\ll T_{1}^{\frac{1}{4}+\epsilon}\). Thus, substituting the estimate of \(I_{2}\) in the expression for \(S_{2}(X)\) above, we have
\[S_{2}(X)=\mathop{\rm Res}\limits_{s=2}\left(R_{2}(s)\tilde{w}(s)\right)+O \left(X^{\frac{3}{2}+\epsilon}T^{\frac{1}{4}+\epsilon}\right)+O(X^{1+\epsilon }Y)+O(X^{-A}).\]
We substitute \(T=\frac{X^{1+\epsilon}}{Y}\) and choose \(Y=X^{\frac{3}{5}+\epsilon}\) to get
\[S_{2}(X)=CX^{2}+O\left(X^{\frac{8}{5}+\epsilon}\right),\]
where \(C\) is a positive absolute constant given by
\[C=L(2,\chi_{8})L(1,sym^{2}f)L(2,sym^{2}f\times\chi_{8})U_{2}(2).\]
This completes the proof.
### Proof of Theorem 1.2:
For each fixed \(r\in\mathbb{N}\), following the argument as in §3.1, it is enough to obtain an estimate for the integral occurring in (35) in order to get an estimate for the sum \(S_{r}(X)\) given in (5). Let
\[I_{r}=\int_{1}^{T}\frac{|L_{r}(3/2+\epsilon+it)|}{t}dt.\]
We substitute \(L_{r}(s)\) from Lemma 2.1 to get
\[\begin{split} I_{r}&=\int_{1}^{T}\frac{1}{t}\prod_{n=0}^{[r/2]}\left|L\left(\frac{1}{2}+\epsilon+it,sym^{r-2n}f\right)L\left(\frac{3}{2}+\epsilon+it,sym^{r-2n}f\times\chi_{8}\right)\right|^{\binom{r}{n}-\binom{r}{n-1}}dt\\ &\ll\int_{1}^{T}\frac{1}{t}\prod_{n=0}^{[r/2]}\left|L\left(\frac{1}{2}+\epsilon+it,sym^{r-2n}f\right)\right|^{\binom{r}{n}-\binom{r}{n-1}}dt,\end{split}\]
where we use the fact that \(L(s,sym^{\ell}f)\) converges absolutely for \(\Re(s)>1\) for each \(\ell\geq 0\). We consider two cases when \(r\) is even and \(r\) is odd separately.
**Case 1:** Let \(r\) be even, say \(r=2m\). Then
\[\begin{split} I_{r}&\ll\int_{1}^{T}\frac{1}{t}\prod_{n=0}^{m}\left|L(1/2+\epsilon+it,sym^{2m-2n}f)\right|^{\binom{2m}{n}-\binom{2m}{n-1}}dt\\ &\ll\max_{1\leq t\leq T}\left(\left|\zeta(1/2+\epsilon+it)\right|^{\frac{1}{m}\binom{2m}{m-1}}\left|L(1/2+\epsilon+it,sym^{2}f)\right|^{\frac{3}{m-1}\binom{2m}{m-2}}\right)\\ &\qquad\times\log T\sup_{1\leq T_{1}\leq T}\left(\frac{1}{T_{1}}\int_{1}^{T_{1}}\prod_{n=0}^{m-2}\left|L(1/2+\epsilon+it,sym^{2m-2n}f)\right|^{\frac{2m-2n+1}{n}\binom{2m}{n-1}}dt\right)\\ &\ll T^{\gamma_{r}+\epsilon}\end{split}\]
where we use the convexity/sub-convexity bounds of the respective \(L\)-functions to get an upper estimate for \(I_{r}\), and \(\gamma_{r}=\frac{13}{84m}\binom{2m}{m-1}+\frac{15}{8(m-1)}\binom{2m}{m-2}+\frac{1}{4}\left[\sum_{n=0}^{m-2}\frac{(2m-2n+1)^{2}}{n}\binom{2m}{n-1}\right]\). We substitute the bound for \(I_{r}\) in (35) to get (for each even \(r=2m\))
\[\underset{n\leq X}{\overset{\flat}{\sum}}\lambda_{f}^{r}(n) \sigma_{1;\chi_{8},\mathbf{1}}(n) =\underset{s=2}{\text{Res}}\left(R_{r}(s)\tilde{w}(s)\right)+O \left(X^{\frac{3}{2}+\epsilon}T^{\gamma_{r}+\epsilon}\right)\] \[\quad+O(X^{\frac{3}{2}+\epsilon})+O(X^{1+\epsilon}Y)+O(X^{-A}).\]
We substitute \(T=\frac{X^{1+\epsilon}}{Y}\) and choose \(Y=X^{1-\frac{1}{2(1+\gamma_{r})}+\epsilon}\) to get
\[\underset{n\leq X}{\overset{\flat}{\sum}}\lambda_{f}^{r}(n) \sigma_{1;\chi_{8},\mathbf{1}}(n) =X^{2}P_{r}(\log X)+O(X^{2-\frac{1}{2(1+\gamma_{r})}+\epsilon})\]
where \(P_{r}(t)\) is a polynomial of degree \(d_{r}=\frac{1}{m}\binom{2m}{m-1}-1\) and \(\gamma_{r}\) is given in Theorem 1.2. This completes the proof for even \(r\).
**Case 2:** Let \(r\) be odd, say \(r=2m+1\). Then, first using the dyadic division method and the Cauchy-Schwarz inequality, we get
\[\begin{split} I_{r}&\ll\int_{1}^{T}\frac{1}{t}\prod_{n=0}^{m}\left|L(1/2+\epsilon+it,sym^{2m+1-2n}f)\right|^{\binom{2m+1}{n}-\binom{2m+1}{n-1}}dt\\ &\ll\log T\sup_{1\leq T_{1}\leq T}\left(\int_{1}^{T_{1}}\left|L(1/2+\epsilon+it,f)\right|^{2\times\frac{2}{m}\binom{2m+1}{m-1}}dt\right)^{\frac{1}{2}}\left(\frac{1}{T_{1}}\int_{1}^{T_{1}}\prod_{n=0}^{m-1}\left|L(1/2+\epsilon+it,sym^{2m+1-2n}f)\right|^{2\times\frac{2m+1-2n+1}{n}\binom{2m+1}{n-1}}dt\right)^{\frac{1}{2}}\\ &\ll\log T\sup_{1\leq T_{1}\leq T}\max_{1\leq t\leq T_{1}}\left(\left|L(1/2+\epsilon+it,f)\right|^{\frac{2}{m}\binom{2m+1}{m-1}-1}\right)\left(\int_{1}^{T_{1}}\left|L(1/2+\epsilon+it,f)\right|^{2}dt\right)^{\frac{1}{2}}\\ &\qquad\times\left(\frac{1}{T_{1}}\int_{1}^{T_{1}}\prod_{n=0}^{m-1}\left|L(1/2+\epsilon+it,sym^{2m+1-2n}f)\right|^{2\times\frac{2m+1-2n+1}{n}\binom{2m+1}{n-1}}dt\right)^{\frac{1}{2}}\\ &\ll T^{\gamma_{r}+\epsilon},\end{split}\]
which is obtained using the convexity/sub-convexity bounds and integral estimates of the respective \(L\)-functions, and \(\gamma_{r}=\frac{2}{3m}\binom{2m+1}{m-1}+\frac{1}{4}\left[\sum_{n=0}^{m-1}\frac{(2m+1-2n+1)^{2}}{n}\binom{2m+1}{n-1}\right]-\frac{5}{6}\). We know that
\(L_{r}(s)\) (for odd integer \(r\)) does not have a pole. So, substituting the bound for \(I_{r}\) in (35), we get
\[\sideset{}{^{\flat}}\sum_{n\leq X}\lambda_{f}^{r}(n)\sigma_{1;\chi_{8},\mathbf{1}}(n)=O\left(X^{\frac{3}{2}+\epsilon}T^{\gamma_{r}+\epsilon}\right)+O(X^{\frac{3}{2}+\epsilon})+O(Y^{2+\epsilon})+O(X^{1+\epsilon}Y)+O(X^{-A}).\]
We substitute \(T=\frac{X^{1+\epsilon}}{Y}\) in the above equation and choose \(Y=X^{1-\frac{1}{2(1+\gamma_{r})}+\epsilon}\) to get
\[\sideset{}{^{\flat}}\sum_{n\leq X}\lambda_{f}^{r}(n)\sigma_{1;\chi_{8},\mathbf{1}}(n)=O(X^{2-\frac{1}{2(1+\gamma_{r})}+\epsilon}).\]
This completes the proof for odd \(r\).
**Acknowledgement:** The author would like to thank IMSc, Chennai for providing financial support through an institute fellowship.
## 4. Declarations:
**Ethical Approval:** Not applicable.
**Competing interests:** Not applicable.
**Author's contributions:** Not applicable.
**Funding:** Not applicable.
**Availability of data and materials:** This manuscript does not include any data.
arXiv:2307.13210 | Mumtaz Hussain, Benjamin Ward | 2023-07-25T02:29:05Z | http://arxiv.org/abs/2307.13210v1

# Weighted twisted inhomogeneous Diophantine approximation
###### Abstract.
We prove a multidimensional weighted analogue of the well-known theorem of Kurzweil (1955) in the metric theory of inhomogeneous Diophantine approximation. Let \(\sum_{i=1}^{m}a_{i}=m\) and \(|\cdot|_{a}=\max_{1\leq i\leq m}|\cdot|^{1/a_{i}}.\) Given an \(n\)-tuple of monotonically decreasing unit-variable functions \(\Psi=(\psi_{1},\ldots,\psi_{n})\) with \(\psi_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that each \(\psi_{i}(r)\to 0\) as \(r\to\infty\) and fixed \(A\in\mathbb{R}^{n\times m}\) define
\[W_{A}(\Psi):=\left\{\mathbf{b}\in[0,1]^{n}:\begin{array}{l}|A_{i}\cdot \mathbf{q}-b_{i}-p_{i}|<\psi_{i}(|\mathbf{q}|_{a})\quad(1\leq i\leq n),\\ \text{for infinitely many}\ (\mathbf{p},\mathbf{q})\in\mathbb{Z}^{n} \times(\mathbb{Z}^{m}\setminus\{0\})\end{array}\right\}.\]
We prove that the set \(W_{A}(\Psi)\) has zero-full Lebesgue measure under convergent-divergent sum conditions with some mild assumptions on \(A\) and the approximating functions \(\Psi.\) We also prove the Hausdorff dimension results for this set. Along with some geometric arguments, the main ingredients are the weighted ubiquity and weighted mass transference principle introduced recently by Kleinbock & Wang [Adv. Math. 428 (2023), Paper No. 109154], and Wang & Wu [Math. Ann. 381 (2021), no. 1-2, 243-317] respectively.
## 1. Introduction
For a fixed \(\xi\in\mathbb{R}\) consider the sequence \((\{\xi q\})_{q\in\mathbb{N}},\) where \(\{x\}\) denotes the fractional part of \(x\in\mathbb{R}.\) When \(\xi\in\mathbb{Q}\) the sequence is periodic, but for \(\xi\in\mathbb{R}\setminus\mathbb{Q}\) the sequence is dense on the unit interval. In 1901 Minkowski proved that for any irrational \(\xi\in\mathbb{R}\) the inequality
\[|\xi q-b-p|<\frac{1}{4q}\]
has infinitely many solutions \((q,p)\in\mathbb{N}\times\mathbb{Z}\) for any \(b\not\in\mathbb{Z}+\xi\mathbb{Z}\) [19]. The constant \(\frac{1}{4}\) was later shown to be optimal by Khintchine [12]. Note that rather than the classical setting of Diophantine approximation, where one approximates the space \([0,1]\) by rational points \(\frac{p}{q}\in\mathbb{Q},\) here we are approximating the space \([0,1]\) by the points \((\{\xi q\})_{q\in\mathbb{N}},\) hence the name "twisted" inhomogeneous Diophantine approximation. Naturally, this requires knowledge of the distribution of the points \((\{\xi q\})_{q\in\mathbb{N}},\) and so the choice of \(\xi\in\mathbb{R}\) heavily affects the rate of approximation.
A natural question to ask is how many \(b\in\mathbb{R}\) satisfy the above inequality when the right hand side is replaced by some general decreasing function \(\psi:\mathbb{N}\to\mathbb{R}_{+}.\) That is, how large is the set
\[W_{\xi}(\psi):=\left\{b\in[0,1]:|\xi q-b-p|<\psi(q)\,\text{ for infinitely many}\ (q,p)\in\mathbb{N}\times\mathbb{Z}\right\}.\]
In 1955, Kurzweil made the following contribution to answering the above question. Recall that \(\xi\) is badly approximable if there exists a constant \(c(\xi)>0\) such that
\[|\xi q-p|\geq\frac{c(\xi)}{q}\quad\text{ for all }(q,p)\in\mathbb{N}\times \mathbb{Z}.\]
Throughout, let \(\lambda_{d}(A)\) denote the \(d\)-dimensional Lebesgue measure of a set \(A\subseteq\mathbb{R}^{d}\).
**Theorem** ([17]).: _Let \(\psi:\mathbb{N}\to\mathbb{R}_{+}\) be a non-increasing function and \(\xi\in\mathbb{R}\setminus\mathbb{Q}\). Then, \(\lambda_{1}(W_{\xi}(\psi))\in\{0,1\}\). Furthermore, for \(\xi\) badly approximable_
\[\lambda_{1}(W_{\xi}(\psi))=\begin{cases}0&\text{if}\quad\sum\limits_{r\in \mathbb{N}}\psi(r)<\infty,\\ 1&\text{if}\quad\sum\limits_{r\in\mathbb{N}}\psi(r)=\infty.\end{cases}\]
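To illustrate the dichotomy, suppose \(\xi\) is badly approximable and consider, for \(q\geq 2\), the functions \(\psi(q)=\frac{1}{q\log q}\) and \(\psi(q)=\frac{1}{q(\log q)^{2}}\): the first series \(\sum_{r}\psi(r)\) diverges, so \(\lambda_{1}(W_{\xi}(\psi))=1\), while the second converges, so \(\lambda_{1}(W_{\xi}(\psi))=0\), even though the two approximating functions differ only by a logarithm.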
Heuristically, it makes sense to consider \(\xi\) badly approximable. Note that, by the Three Distance Theorem (see for example [25]), the sequence \((\{\xi q\})_{q\in\mathbb{N}}\) does not cluster in any particular region of \([0,1]\), and so such points are reasonably good at approximating the unit interval. Kurzweil's theorem has since been refined to apply to more general \(\xi\in\mathbb{R}\). In particular, Fuchs and Kim improved on the statement by considering the principal convergents of \(\xi\); see [9, Theorem 1.2] and [22] for further details and contributions to this theory.
When the Lebesgue measure of the set is null, for example, all functions of the form \(\psi_{\tau}(q)=q^{-\tau}\) with \(\tau>1\) have \(\lambda_{1}(W_{\xi}(\psi_{\tau}))=0\), one can ask for a refined statement to distinguish between the null sets. In this regard, Bugeaud [6, Theorem 1], and Schmeling and Troubetzkoy [23, Theorem 3.2] independently proved the following result.
**Theorem** ([6, 23]).: _For any \(\xi\in\mathbb{R}\setminus\mathbb{Q}\) and \(\tau>1\)_
\[\dim_{\mathrm{H}}W_{\xi}(\psi_{\tau})=\frac{1}{\tau},\]
_where \(\dim_{\mathrm{H}}\) denotes the Hausdorff dimension._
We should remark that one of the first results in this direction was proven by Bernik and Dodson [5, p. 105], who proved the above result for \(\lambda_{1}\)-almost all \(\xi\in[0,1]\). Furthermore, the above theorem has since been generalised to a range of more general approximation functions \(\psi\) [8, Theorem 3] and a restricted set of \(\xi\in\mathbb{R}\), where \(\tau\) in the dimension is replaced by the lower order at infinity of the function \(\psi\). Perhaps more importantly, it was also shown that there exists \(\xi\in\mathbb{R}\) for which the expected dimension result is false [8, Theorem 2].
### Higher dimensional twisted inhomogeneous approximation
The above setup can be readily extended to higher dimensions. Throughout suppose \(v=(v_{1},\ldots,v_{n})\in\ \mathbb{R}_{+}^{n}\) and \(\alpha=(\alpha_{1},\ldots,\alpha_{m})\in\mathbb{R}_{+}^{m}\) are vectors satisfying
\[\sum\limits_{i=1}^{n}v_{i}=n\,,\quad\sum\limits_{i=1}^{m}\alpha_{i}=m,\]
and let
\[|\cdot|_{v}=\max_{1\leq i\leq n}|\cdot|^{1/v_{i}}\,,\quad|\cdot|_{\alpha}=\max_{1 \leq i\leq m}|\cdot|^{1/\alpha_{i}}\,.\]
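To fix ideas with these exponents: if \(n=2\) and \(v=(\frac{3}{2},\frac{1}{2})\), then \(|x|_{v}=\max\{|x_{1}|^{2/3},|x_{2}|^{2}\}\), and the corresponding 'ball' \(\{x\in\mathbb{R}^{2}:|x|_{v}<r\}\) is the rectangle \((-r^{3/2},r^{3/2})\times(-r^{1/2},r^{1/2})\). Thus the weights \(v\) and \(\alpha\) encode approximation at different rates in the different coordinate directions.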
Let \(\mathbb{R}^{n\times m}\) denote the set of \(n\times m\) matrices with real number entries and fix some \(A\in\mathbb{R}^{n\times m}\). Given an \(n\)-tuple of monotonic decreasing functions \(\Psi=(\psi_{1},\ldots,\psi_{n})\) with \(\psi_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\), we say \(\mathbf{b}=(b_{1},\ldots,b_{n})\in[0,1]^{n}\) is \(\Psi\)-approximable for \(A\) if there exist infinitely many \((\mathbf{q},\mathbf{p})=(q_{1},\ldots,q_{m},p_{1},\ldots,p_{n})\in(\mathbb{Z}^{m}\setminus\{0\})\times\mathbb{Z}^{n}\) solving
\[|A_{i}\cdot\mathbf{q}-b_{i}-p_{i}|<\psi_{i}(|\mathbf{q}|_{\alpha})\quad(1 \leq i\leq n), \tag{1.1}\]
where \(A_{i}\) denotes the \(i\)th row of \(A\). Denote by \(W_{A}(\Psi)\) the set of such \(\Psi\)-approximable vectors for \(A\), that is
\[W_{A}(\Psi):=\left\{\mathbf{b}\in[0,1]^{n}:\,\text{(1.1) is solved for infinitely many }(\mathbf{q},\mathbf{p})\in(\mathbb{Z}^{m}\setminus\{0\})\times\mathbb{Z}^{n}\right\}.\]
One should note that the one-dimensional Lebesgue measure results presented above, that is, the theorems of Kurzweil and of Fuchs and Kim, rely in part on the theory of continued fractions, and so higher dimensional analogues do not readily follow. Similarly, the dimension theory result of Bugeaud, Schmeling, and Troubetzkoy uses the Three Distance Theorem, again a result strongest in the one-dimensional setting.
In generalising the technique presented in [15] to the weighted ubiquitous setting we are able to prove higher dimensional weighted analogues of the classical results. In order to state our results we need the following definitions.
We say, \(A\in\mathbb{R}^{n\times m}\) is \((v,\alpha)\)-singular if for any \(\varepsilon>0\) and for all sufficiently large \(N\geq 1\), there exists \((\mathbf{q},\mathbf{p})\in\mathbb{Z}^{m+n}\) solving the inequalities
\[\left\{\begin{aligned} &|A_{i}\cdot\mathbf{q}-p_{i}|< \varepsilon N^{-v_{i}\frac{m}{n}}\quad(1\leq i\leq n),\\ & 0<|\mathbf{q}|_{\alpha}<N\,.\end{aligned}\right. \tag{1.2}\]
Let \(Sing_{\alpha}(v)\) denote the set of all \((v,\alpha)\)-singular matrices. That is,
\[Sing_{\alpha}(v)=\left\{A\in\mathbb{R}^{n\times m}:\lim_{N\to\infty}\left(N \min_{0<|\mathbf{q}|_{\alpha}<N}\max_{1\leq i\leq n}|A_{i}\cdot\mathbf{q}-p_{ i}|^{\frac{n}{mv_{i}}}\right)=0\right\}.\]
We also define the set of \((v,\alpha)\)-badly approximable points as
\[\mathbf{Bad}_{\alpha}(v):=\left\{A\in\mathbb{R}^{n\times m}:\liminf_{|\mathbf{ q}|_{\alpha}\to\infty}\left(\max_{1\leq i\leq n}|\mathbf{q}|_{\alpha}|A_{i} \cdot\mathbf{q}-p_{i}|^{\frac{n}{mv_{i}}}\right)>0\right\}.\]
Lastly, define the set
\[L_{\alpha}(v,A,\varepsilon):=\left\{\ell\in\mathbb{N}:\left\{\begin{aligned} &|A_{i}\cdot\mathbf{q}-p_{i}|< \varepsilon 2^{-\ell v_{i}\frac{m}{n}}\quad(1\leq i\leq n),\\ & 0<|\mathbf{q}|_{\alpha}<2^{\ell}\end{aligned}\right.\quad \text{has no solution $(\mathbf{q},\mathbf{p})\in\mathbb{Z}^{m+n}$}\right\}.\]
Given the above definitions, we are able to state our results.
**Theorem 1**.: _Let \(A\in Sing_{\alpha}(v)^{c}\) and let \(\Psi=(\psi_{1},\ldots,\psi_{n})\) be an \(n\)-tuple of monotonic approximation functions with each_
\[\psi_{i}(r)\ll r^{-\tau_{i}\frac{m}{n}}\quad(1\leq i\leq n)\]
_for all \(r\in\mathbb{R}_{+}\) with the implied constants independent of \(r\). Then_
\[\lambda_{n}\left(W_{A}(\Psi)\right)=1\quad\mathrm{if}\ \sum_{\ell\in L_{\alpha}(v,A, \varepsilon)}2^{m\ell}\prod_{i=1}^{n}\psi_{i}\left(2^{\ell}\right)=\infty.\]
**Corollary 1**.: _Let \(A\in Bad_{\alpha}(v)\) and let \(\Psi=(\psi_{1},\ldots,\psi_{n})\) be an \(n\)-tuple of monotonic approximation functions with each_
\[\psi_{i}(r)\ll r^{-\tau_{i}\frac{m}{n}}\quad(1\leq i\leq n)\]
_for all \(r\in\mathbb{R}_{+}\) with the implied constants independent of \(r\). Then_
\[\lambda_{n}\left(W_{A}(\Psi)\right)=\begin{cases}0\quad\mathrm{if}\ \sum_{r\in\mathbb{N}}r^{m-1}\prod_{i=1}^{n}\psi_{i}(r)<\infty,\\ 1\quad\mathrm{if}\ \sum_{r\in\mathbb{N}}r^{m-1}\prod_{i=1}^{n}\psi_{i}(r)= \infty.\end{cases}\]
Proof.: This easily follows from Theorem 1 on the observation that \(\mathbf{Bad}_{\alpha}(v)\subset Sing_{\alpha}(v)^{c}\) and that \(L_{\alpha}(v,A,\varepsilon)\supset\mathbb{N}_{\geq k}\) for some \(\varepsilon>0\) and \(k\in\mathbb{N}\) when \(A\in\mathbf{Bad}_{\alpha}(v)\) (see Lemma 2 in §2). The convergence case will be proven at the end of §3.1.
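As a concrete instance of Corollary 1, take \(\psi_{i}(r)=r^{-\tau_{i}}\) (assumed to satisfy the growth condition in the statement). Then
\[\sum_{r\in\mathbb{N}}r^{m-1}\prod_{i=1}^{n}\psi_{i}(r)=\sum_{r\in\mathbb{N}}r^{m-1-\sum_{i=1}^{n}\tau_{i}},\]
which diverges precisely when \(\sum_{i=1}^{n}\tau_{i}\leq m\); so for \(A\in\mathbf{Bad}_{\alpha}(v)\) the set \(W_{A}(\Psi)\) has full Lebesgue measure when \(\sum_{i}\tau_{i}\leq m\) and zero measure otherwise.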
In [15], Kim proves the analogue of Theorem 1 in the case where \(\alpha=(1,\ldots,1)\), \(v=(1,\ldots,1)\) and \(\Psi=(\psi,\ldots,\psi)\). It is from this paper that we draw our inspiration to prove our weighted analogues; see [15, Theorem 1.3, Corollary 1.6] for more details. It should be noted, in particular, that Kim's version of our result is in terms of the Hausdorff \(s\)-measure for \(0\leq s\leq n\); in particular, they prove, for \(A\in\mathbf{Bad}_{(1,\ldots,1)}(1,\ldots,1)\) and \(\Psi(q)=(q^{-\tau},\ldots,q^{-\tau})\) with \(\tau>\frac{m}{n}\), that
\[\dim_{\mathrm{H}}W_{A}(\Psi)=\frac{m}{\tau}.\]
Given our setup, we are able to prove the weighted analogue of their result as follows.
**Theorem 2**.: _Let \(v=(1,\ldots,1)\). Let \(A\in Bad_{\alpha}(v)\) and let \(\Psi=(\psi_{1},\ldots,\psi_{n})\) be an \(n\)-tuple of functions with each_
\[\psi_{i}(r)=r^{-\tau_{i}}\quad\text{ and }\quad\tau_{i}>\frac{m}{n}\quad(1 \leq i\leq n).\]
_Then,_
\[\dim_{\mathrm{H}}W_{A}(\Psi)=\min_{1\leq j\leq n}\left\{\frac{m+\sum_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}}\right\}=s.\]
_Furthermore, for any ball \(B\subset[0,1]^{n}\) we have that_
\[\mathcal{H}^{s}\left(B\cap W_{A}(\Psi)\right)=\infty.\]
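As a sanity check of the dimension formula, if \(\tau_{1}=\cdots=\tau_{n}=\tau>\frac{m}{n}\), then each sum \(\sum_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})\) is empty and the formula collapses to
\[\dim_{\mathrm{H}}W_{A}(\Psi)=\frac{m}{\tau},\]
recovering the unweighted result of Kim stated above.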
**Theorem 3**.: _Fix \(n=2\) and let \(v=(v_{1},v_{2})\). Let \(A\in Bad_{\alpha}(v)\) and let \(\Psi=(\psi_{1},\psi_{2})\) be a pair of functions given by_
\[\psi_{i}(r)=r^{-\tau_{i}}\,,\quad\text{with}\quad\tau_{i}\geq v_{i}\frac{m}{2}\quad(i=1,2).\]
_Furthermore, suppose that_
\[\min\left\{\frac{\min\{v_{1},v_{2}\}m}{\min\{\tau_{1},\tau_{2}\}n},\frac{\min\{ v_{1},v_{2}\}}{\max\{v_{1},v_{2}\}}\right\}\geq\frac{m-\min\{\tau_{1},\tau_{2} \}}{\max\{\tau_{1},\tau_{2}\}}. \tag{1.3}\]
_Then,_
\[\dim_{\rm H}W_{A}(\Psi)=\frac{m+\max\{\tau_{1},\tau_{2}\}-\min\{\tau_{1},\tau_ {2}\}}{\max\{\tau_{1},\tau_{2}\}}.\]
_Remark 1_.: The difference between Theorem 2 and Theorem 3 is subtle. When each of the approximation functions \(\psi_{i}\) decreases at a faster rate than the Dirichlet exponent of approximation (in this setting the exponent is \(\frac{m}{n}\)) then Theorem 2 is optimal. However, when this is not the case, Theorem 3 is required.
Condition (1.3) seems unnecessary but it is required for our method to work in the lower bound. In particular, observe that if \(\min\{\tau_{1},\tau_{2}\}\geq m\), that is the approximation functions are decreasing fast enough, then (1.3) is automatically satisfied.
It should be noted that a set of particular interest within the setting of twisted inhomogeneous approximation is the set of \(\xi\)-badly approximable points, which is, roughly, the set of points \(b\in[0,1]\) such that the inequality in Minkowski's theorem above becomes false when the right hand side is multiplied by some arbitrarily small constant. In this article, we do not consider such a set (although we can deduce a metric result on the weighted higher dimensional analogue of \(\xi\)-badly approximable points from Theorem 1). For more details on the metric properties of these sets, we refer the reader to [1, 10, 11, 13, 14, 18, 21, 24] and references within. This is a particularly active area of research; as far as we are aware, there are at least two forthcoming papers in this area. In [3] the metric results on higher dimensional analogues of \(\xi\)-badly approximable points are studied in detail and generalised to the \(S\)-arithmetic setting, and in [20] the measure of the set of \(\xi\)-badly approximable points shifted by some constant is proven to be null for any algebraic measure on the \(m\)-dimensional torus (see [18, Theorem 1.6]). In both these settings \(\xi\) satisfies certain properties.
The rest of the paper is laid out as follows. In the next section, we recall and prove a few basic properties of the set of \((v,\alpha)\)-non-singular matrices. We also recall the setup for weighted ubiquitous systems as introduced in [16, 26]. Lastly, in §3, we give the proofs of our main results.
**Acknowledgments:** The research of both authors is supported by the Australian Research Council discovery project 200100994. We would also like to thank Victor Beresnevich for many useful comments on an earlier draft.
## 2. Preliminaries and Auxiliary Results
### Properties of singular and badly approximable matrices
The following observation was made in [15] in the case \(v=(1,\ldots,1)\) and \(\alpha=(1,\ldots,1)\). As we will require such an observation in our results we state and prove the following easy lemma.
**Lemma 1**.: \(A\in Sing_{\alpha}(v)\) _if and only if for any \(\varepsilon>0\) the set_
\[L_{\alpha}(v,A,\varepsilon):=\left\{\ell\in\mathbb{N}:\begin{cases}|A_{i} \cdot\mathbf{q}-p_{i}|<\varepsilon 2^{-\ell v_{i}\frac{m}{n}}&(1\leq i\leq n),\\ 0<|\mathbf{q}|_{\alpha}<2^{\ell}\end{cases}\quad\text{has no solution }(\mathbf{q}, \mathbf{p})\in\mathbb{Z}^{m+n}\right\}\]
_is finite._
_Remark 2_.: Observe that this result naturally implies that if \(A\) is \((v,\alpha)\)-non-singular, that is, \(A\in Sing_{\alpha}(v)^{c}=\mathbb{R}^{n\times m}\setminus Sing_{\alpha}(v)\), then \(L_{\alpha}(v,A,\varepsilon)\) is unbounded for some \(\varepsilon>0\). In particular, this means the summation appearing in Theorem 1 is infinite.
Proof.: The forward implication, that \(A\in Sing_{\alpha}(v)\) implies \(L_{\alpha}(v,A,\varepsilon)\) is finite, is immediate from the definition upon setting \(N=2^{\ell}\). To see the reverse implication, note that if \(L_{\alpha}\left(v,A,\frac{\varepsilon}{2^{\frac{m}{n}\max v_{i}}}\right)\) is finite, say \(L_{\alpha}\left(v,A,\frac{\varepsilon}{2^{\frac{m}{n}\max v_{i}}}\right)\subset\{1,\ldots,k\}\), then for all \(\ell>k\) and any \(2^{\ell}\leq N<2^{\ell+1}\) we have that
\[\begin{cases}|A_{i}\cdot\mathbf{q}-p_{i}|<\frac{\varepsilon}{2^{\frac{m}{n} \max v_{i}}}2^{-\ell v_{i}\frac{m}{n}}\leq\varepsilon 2^{-(\ell+1)v_{i}\frac{m}{n}} <\varepsilon N^{-v_{i}\frac{m}{n}}&(1\leq i\leq n),\\ 0<|\mathbf{q}|_{\alpha}<2^{\ell}\leq N\end{cases}\]
has solution \((\mathbf{q},\mathbf{p})\in\mathbb{Z}^{m+n}\). Since this is true for any choice of \(\varepsilon>0\) we have that \(A\in Sing_{\alpha}(v)\).
As stated in [15] in the case of \(v=(1,\ldots,1)\) and \(\alpha=(1,\ldots,1)\) we have the following result.
**Lemma 2**.: \(A\in\mathbf{Bad}_{\alpha}(v)\) _if and only if for some \(\varepsilon>0\) there exists \(k\in\mathbb{N}\) such that \(L_{\alpha}(v,A,\varepsilon)\supset\mathbb{N}_{\geq k}\)._
Proof.: If \(A\in\mathbf{Bad}_{\alpha}(v)\) then there exists some \(c(A)>0\) such that
\[\max_{1\leq i\leq n}|A_{i}\cdot\mathbf{q}-p_{i}|^{1/v_{i}}>c(A)|\mathbf{q}|_{ \alpha}^{-\frac{m}{n}}\]
for all sufficiently large \(\mathbf{q}\in\mathbb{Z}^{m}\setminus\{0\}\). Without loss of generality assume this is true for all \(\mathbf{q}\in\mathbb{Z}^{m}\) such that \(|\mathbf{q}|_{\alpha}\geq 2^{t}\). Hence for any \(\ell>t\) and any \(|\mathbf{q}|_{\alpha}<2^{\ell}\) we have that
\[\max_{1\leq i\leq n}|A_{i}\cdot\mathbf{q}-p_{i}|^{1/v_{i}}>c(A)|\mathbf{q}|_{ \alpha}^{-\frac{m}{n}}>c(A)2^{-\ell\frac{m}{n}}.\]
Thus \(L_{\alpha}\left(v,A,c(A)^{\min v_{i}}\right)\supset\mathbb{N}_{>t}\).
For the reverse implication, we have that for all \(\ell>k\in\mathbb{N}\) and all \((\mathbf{q},\mathbf{p})\in\mathbb{Z}^{m+n}\) with \(0<|\mathbf{q}|_{\alpha}<2^{\ell}\),
\[\max_{1\leq i\leq n}|A_{i}\cdot\mathbf{q}-p_{i}|^{1/v_{i}}>\varepsilon 2^{-\ell\frac{m}{n}}.\]
Hence for all sufficiently large \(\mathbf{q}\in\mathbb{Z}^{m}\) (all \(\mathbf{q}\in\mathbb{Z}^{m}\) such that \(|\mathbf{q}|_{\alpha}>2^{k}\)) there exists \(\ell>k\) such that \(2^{\ell}<|\mathbf{q}|_{\alpha}\leq 2^{\ell+1}\) so that
\[\left\{\begin{array}{l}\max_{1\leq i\leq n}|A_{i}\cdot\mathbf{q}-p_{i}|^{1/v_ {i}}>\varepsilon 2^{-(\ell+1)\frac{m}{n}}=\frac{\varepsilon}{2^{\frac{m}{n}}}2^{- \ell\frac{m}{n}}>\frac{\varepsilon}{2^{\frac{m}{n}}}|\mathbf{q}|_{\alpha}^{- \frac{m}{n}},\\ \mbox{ for all }0<|\mathbf{q}|_{\alpha}<2^{\ell+1}\,.\end{array}\right.\]
This is true for all \(\ell>k\), hence \(A\in\mathbf{Bad}_{\alpha}(v)\).
### A Dirichlet-type theorem in the case of non-singular matrices
We need the following weighted analogue of [7, Chapter V, Theorem VI], which can readily be proven from [7, Chapter V, Theorem V]. For completeness, we prove the result here.
**Lemma 3**.: _Let \(A\in\mathbb{R}^{n\times m}\) and suppose there are no integer solutions \((\mathbf{q},\mathbf{p})\in\mathbb{Z}^{m+n}\setminus\{0\}\) to_
\[\left\{\begin{array}{l}|A\mathbf{q}-\mathbf{p}|_{v}<C\,,\\ |\mathbf{q}|_{\alpha}<N\,,\end{array}\right.\]
_for norms \(|\cdot|_{v}=\max_{1\leq i\leq n}|\cdot|^{1/v_{i}}\), \(|\cdot|_{\alpha}=\max_{1\leq i\leq m}|\cdot|^{1/\alpha_{i}}\) and vectors \(v=(v_{1},\ldots,v_{n})\in\mathbb{R}_{+}^{n}\) and \(\alpha=(\alpha_{1},\ldots,\alpha_{m})\in\mathbb{R}_{+}^{m}\) satisfying_
\[\sum_{i=1}^{n}v_{i}=n\quad\mbox{ and }\quad\sum_{i=1}^{m}\alpha_{i}=m\,.\]
_Then for any \(\mathbf{b}\in\mathbb{R}^{n}\) there exists \((\mathbf{q},\mathbf{p})\in\mathbb{Z}^{m+n}\) solving_
\[\left\{\begin{array}{l}|A\mathbf{q}-\mathbf{b}-\mathbf{p}|_{v}\leq c_{1}C\,, \\ |\mathbf{q}|_{\alpha}<c_{1}N\,,\end{array}\right.\]
_for constant_
\[c_{1}=\max_{\begin{subarray}{c}1\leq i\leq n\\ 1\leq j\leq m\end{subarray}}\left\{\left(\frac{1}{2}\left(C^{-n}N^{-m}+1\right)\right)^{1/v_{i}},\left(\frac{1}{2}\left(C^{-n}N^{-m}+1\right)\right)^{1/\alpha_{j}}\right\}.\]
Proof.: Consider the system of inequalities on \(n+m\) variables \((\mathbf{q},\mathbf{p})\in\mathbb{Z}^{m+n}\)
\[\left\{\begin{array}{l}|C^{-v_{i}}(A_{i}\cdot\mathbf{q}-p_{i})|<1\,,\quad(1 \leq i\leq n)\\ |N^{-\alpha_{j}}q_{j}|<1\,,\quad(1\leq j\leq m).\end{array}\right.\]
For ease of notation let \(f_{i}(\mathbf{q},\mathbf{p})=C^{-v_{i}}(A_{i}\cdot\mathbf{q}-p_{i})\) for \(1\leq i\leq n\) and \(f_{n+j}(\mathbf{q},\mathbf{p})=N^{-\alpha_{j}}q_{j}\) for \(1\leq j\leq m\). Then the above system is equivalent to
\[\max_{1\leq i\leq n+m}|f_{i}(\mathbf{q},\mathbf{p})|<1\,. \tag{2.1}\]
By assumption, (2.1) has no integer solutions \((\mathbf{q},\mathbf{p})\in\mathbb{Z}^{m+n}\setminus\{0\}\). Furthermore, observe that the \((n+m)\times(n+m)\) matrix associated to the system of linear forms (2.1) has determinant
\[C^{-\sum_{i=1}^{n}v_{i}}N^{-\sum_{i=1}^{m}\alpha_{i}}=C^{-n}N^{-m},\]
and so by [7, Theorem V] for any real number \(\mathbf{b}^{*}\in\mathbb{R}^{n+m}\) there are integer solutions to
\[\max_{1\leq i\leq n+m}|f_{i}(\mathbf{q},\mathbf{p})-b_{i}^{*}|<\frac{1}{2}(C^{-n }N^{-m}+1). \tag{2.2}\]
Setting \(\mathbf{b}^{*}=(C^{-v_{1}}b_{1},\ldots,C^{-v_{n}}b_{n},0,\ldots,0)\in\mathbb{R}^{n+m}\) in (2.2) gives us that the system of inequalities
\[\left\{\begin{array}{c}|A_{i}\cdot\mathbf{q}-b_{i}-p_{i}|\leq\frac{1}{2}(C^{-n}N^{-m}+1)C^{v_{i}},\quad(1\leq i\leq n)\\ |q_{j}|<\frac{1}{2}(C^{-n}N^{-m}+1)N^{\alpha_{j}},\quad(1\leq j\leq m)\end{array}\right.\]
has integer solutions \((\mathbf{q},\mathbf{p})\in\mathbb{Z}^{n+m}\setminus\{0\}\) for any \(\mathbf{b}\in\mathbb{R}^{n}\). Rearranging in terms of the norms \(|\cdot|_{v}\) and \(|\cdot|_{\alpha}\) completes the proof.
From Lemma 3 we can easily deduce the following corollary.
**Corollary 2**.: _Let \(A\in Sing_{\alpha}(v)^{c}=[0,1]^{nm}\setminus Sing_{\alpha}(v)\). Then for any \(\mathbf{b}\in\mathbb{R}^{n}\) there exists some \(\varepsilon>0\) such that for all \(\ell\in L_{\alpha}(v,A,\varepsilon)\) the system of inequalities_
\[\begin{cases}|A_{i}\cdot\mathbf{q}-p_{i}-b_{i}|<\varepsilon c_{2}2^{-\ell v_{ i}\frac{m}{n}}\quad(1\leq i\leq n),\\ |\mathbf{q}|_{\alpha}<c_{2}2^{\ell}\end{cases}\]
_has an integer solution \((\mathbf{q},\mathbf{p})\in\mathbb{Z}^{m+n}\) for \(c_{2}=\left(\frac{1}{2}(\varepsilon^{-n}+1)\right)^{\frac{1}{\min_{i,j}\{v_{i},\alpha_{j}\}}}\)._
Proof.: Note that since \(A\in Sing_{\alpha}(v)^{c}\), there exists \(\varepsilon>0\) such that the set \(L_{\alpha}(v,A,\varepsilon)\) has infinite cardinality by Lemma 1. Now take \(C=\varepsilon 2^{-\ell\frac{m}{n}}\) and \(N=2^{\ell}\) as in Lemma 3.
Observe that Corollary 2 implies that \(A\mathbf{q}\ (\mathrm{mod}\ 1)\) is dense in \([0,1]^{n}\), and so, by Kronecker's Theorem (see for example [7, Chapter III, Theorem IV]), the subgroup \(G({}^{t}A):={}^{t}A\mathbb{Z}^{n}+\mathbb{Z}^{m}\subset\mathbb{R}^{m}\) has maximal rank \(n+m\) over \(\mathbb{Z}\). This allows us to use the following result.
**Lemma 4** ([15, Proposition 3.5]).: _Suppose \(A\in Sing_{\alpha}(v)^{c}\). Then for any ball \(B\subset[0,1]^{n}\)_
\[\frac{\#\{A\mathbf{q}\in B:|\mathbf{q}|_{\alpha}\leq N\}}{\#\{\mathbf{q}\in\mathbb{Z}^{m}:|\mathbf{q}|_{\alpha}\leq N\}}\to\lambda_{n}(B)\quad\text{as }N\to\infty.\]
The proof of this Lemma follows in exactly the same way as [15, Proposition 3.5], the only difference being in the latter stages of the proof, where the summation on \(q_{1}\) ranges over \(-N^{\alpha_{1}}\) to \(N^{\alpha_{1}}\) and is averaged over \(N^{\alpha_{1}}\).
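The counting statement in Lemma 4 is easy to probe numerically. The sketch below is a minimal illustration (ours, not part of [15]), using an arbitrary randomly drawn \(A\), which lies in \(Sing_{\alpha}(v)^{c}\) for Lebesgue-almost every choice: the proportion of points \(\{A\mathbf{q}\}\), \(|\mathbf{q}|_{\alpha}\leq N\), landing in a test box should approach the box's Lebesgue measure as \(N\) grows.

```python
import numpy as np

# Numerical probe of Lemma 4 (a sketch under our own conventions, not the
# authors' code): for a randomly drawn A -- almost surely non-singular --
# the fractional parts {Aq} with |q|_alpha <= N equidistribute in [0,1]^n.
n, m = 2, 2
alpha = np.array([1.2, 0.8])                  # weights, sum(alpha) = m
rng = np.random.default_rng(0)
A = rng.random((n, m))

N = 50
# |q|_alpha = max_j |q_j|^(1/alpha_j) <= N  iff  |q_j| <= N^alpha_j
ranges = [np.arange(-int(N**a), int(N**a) + 1) for a in alpha]
Q = np.stack(np.meshgrid(*ranges), axis=0).reshape(m, -1).T

pts = (Q @ A.T) % 1.0                         # the points {Aq} in [0,1]^n
lo, hi = 0.2, 0.5                             # test box B = [0.2, 0.5]^n
ratio = np.all((pts >= lo) & (pts < hi), axis=1).mean()
print(ratio, (hi - lo) ** n)                  # ratio -> lambda_n(B)
```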
### Weighted ubiquitous systems
In this section we give the definition of local ubiquity for rectangles as given in [16]. This definition is a generalisation of ubiquity for rectangles as found in [26], which is in turn a generalisation of local ubiquity for balls introduced in [4]. For brevity, we will state the results of [16, 26] in the special setting of \(n\)-dimensional real space. We will also assume that each resonant set, see definition below, is a finite collection of points and so we can omit the notion of \(\kappa\)-scaling. See [16, 26] for the full statements.
Consider the product space \((\mathbb{R}^{n},|\cdot|_{\infty},\lambda_{n})\), where \(|\cdot|_{\infty}=\max_{1\leq i\leq n}|\cdot|\). For any \(x\in\mathbb{R}^{n}\) and \(r\in\mathbb{R}_{+}\) define the open ball
\[B(x,r)=\{y\in\mathbb{R}^{n}:|x-y|_{\infty}<r\}=\prod_{i=1}^{n}B_{1}(x_{i},r),\]
where \(B_{1}\) are the usual open intervals with centre \(x_{i}\) and diameter \(2r\) in \(\mathbb{R}\). Let \(J\) be a countably infinite index set, and \(\beta:J\to\mathbb{R}_{+}\), \(\alpha\to\beta_{\alpha}\) a positive function satisfying the condition that for any \(N\in\mathbb{N}\)
\[\#\left\{\alpha\in J:\beta_{\alpha}<N\right\}<\infty.\]
Let \(l_{k},u_{k}\) be two sequences in \(\mathbb{R}_{+}\) such that \(u_{k}\geq l_{k}\) with \(l_{k}\to\infty\) as \(k\to\infty\). Define
\[J_{k}=\{\alpha\in J:l_{k}\leq\beta_{\alpha}\leq u_{k}\}.\]
Let \(\rho=(\rho_{1},\ldots,\rho_{n})\) be an \(n\)-tuple of non-increasing functions \(\rho_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that each \(\rho_{i}(x)\to 0\) as \(x\to\infty\). For each \(1\leq i\leq n\), let \((R_{\alpha,i})_{\alpha\in J}\) be a sequence of finite collections of points in \(\mathbb{R}\). The family of sets \((R_{\alpha})_{\alpha\in J}\) where
\[R_{\alpha}=\prod_{i=1}^{n}R_{\alpha,i},\]
for each \(\alpha\in J\), are called _resonant sets_.
Define
\[\Delta(R_{\alpha},\rho(r))=\prod_{i=1}^{n}\Delta_{i}(R_{\alpha,i},\rho_{i}(r)),\]
where for some set \(Y\subset\mathbb{R}\) and \(b\in\mathbb{R}_{+}\)
\[\Delta_{i}(Y,b)=\bigcup_{a\in Y}B_{1}(a,b)\]
is the union of balls in \(\mathbb{R}\) of radius \(b\) centred at all possible points in \(Y\).
The following notion of ubiquity for rectangles can be found in [16].
**Definition 1** (Local ubiquitous system of rectangles).: Call the pair \(\left((R_{\alpha})_{\alpha\in J},\beta\right)\)_a local ubiquitous system of rectangles with respect to \(\rho\)_ if there exists a constant \(c>0\) such that for any ball \(B\subset\mathbb{R}^{n}\) and all sufficiently large \(k\in\mathbb{N}\)
\[\lambda_{n}\left(B\cap\bigcup_{\alpha\in J_{k}}\Delta(R_{\alpha},\rho(u_{k}))\right)\geq c\lambda_{n}(B).\]
For \(n\)-tuple of approximation functions \(\Psi=(\psi_{1},\ldots,\psi_{n})\) with each \(\psi_{i}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) define
\[W(\Psi)=\{x\in\mathbb{R}^{n}:x\in\Delta\left(R_{\alpha},\Psi(\beta_{\alpha}) \right)\text{ for infinitely many }\alpha\in J\}.\]
The following theorem, due to Kleinbock and Wang [16], provides the Lebesgue measure theory on \(W(\Psi)\).
**Theorem 4**.: _Let \(0<c<1.\) A function \(f\) is said to be \(c\)-regular with respect to a sequence \(\{r_{i}\}_{i\in\mathbb{N}}\) if \(f(r_{i+1})\leq cf(r_{i})\) for all sufficiently large \(i.\) Let \(W(\Psi)\) be defined as above and assume that \(((R_{\alpha})_{\alpha\in J},\beta)\) is a local ubiquitous system of rectangles with respect to \(\rho.\) Suppose that_
1. _each_ \(\psi_{i}\) _is decreasing,_
2. _for each_ \(1\leq i\leq n,\)__\(\psi_{i}(r)\leq\rho_{i}(r)\) _for all_ \(r\in\mathbb{R}_{+}\) _and_ \(\rho_{i}(r)\to 0\) _as_ \(r\to\infty,\)__
3. _either_ \(\rho_{i}\) _is c-regular on_ \(\{u_{k}\}_{k\in\mathbb{N}}\) _for all_ \(1\leq i\leq n\) _or_ \(\psi_{i}\) _is c-regular on_ \(\{u_{k}\}_{k\in\mathbb{N}}\) _for all_ \(1\leq i\leq n\) _for some_ \(0<c<1.\)__
_Then_
\[\lambda_{n}(W(\Psi))=\text{full}\quad\text{if}\quad\sum_{k=1}^{\infty}\prod_{ i=1}^{n}\left(\frac{\psi_{i}(u_{k})}{\rho_{i}(u_{k})}\right)=\infty.\]
Here, by full we mean that the complement is a Lebesgue nullset.
For the Hausdorff theory we have the following theorem due to Wang and Wu [26].
**Theorem 5**.: _Let \(W(\Psi)\) be defined as above and assume that \(((R_{\alpha})_{\alpha\in J},\beta)\) is a local ubiquitous system of rectangles with respect to \(\rho=(\rho^{a_{1}},\ldots,\rho^{a_{n}})\) for some function \(\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}\) and \((a_{1},\ldots,a_{n})\in\mathbb{R}_{+}^{n}\) with \(\rho(N)\to 0\) as \(N\to\infty.\) Then, for \(\Psi=(\rho^{a_{1}+t_{1}},\ldots,\rho^{a_{n}+t_{n}})\) for some \(t=(t_{1},\ldots,t_{n})\in\mathbb{R}_{+}^{n}\)_
\[\dim_{\mathbb{H}}W(\Psi)\geq\min_{A_{i}\in A}\left\{\sum_{j\in\mathcal{K}_{1} }1+\sum_{j\in\mathcal{K}_{2}}1+\frac{\sum_{j\in\mathcal{K}_{3}}a_{j}-\sum_{j \in\mathcal{K}_{2}}t_{j}}{A_{i}}\right\}=s,\]
_where \(A=\{a_{i},a_{i}+t_{i},1\leq i\leq n\}\) and \(\mathcal{K}_{1},\mathcal{K}_{2},\mathcal{K}_{3}\) are a partition of \(\{1,\ldots,n\}\) defined as_
\[\mathcal{K}_{1}=\{j:a_{j}\geq A_{i}\},\quad\mathcal{K}_{2}=\{j:a_{j}+t_{j}\leq A_{i}\}\setminus\mathcal{K}_{1},\quad\mathcal{K}_{3}=\{1,\ldots,n\}\setminus(\mathcal{K}_{1}\cup\mathcal{K}_{2}).\]
_Furthermore, for any ball \(B\subset\mathbb{R}^{n}\)_
\[\mathcal{H}^{s}(B\cap W(\Psi))=\mathcal{H}^{s}(B).\]
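Once the exponent vectors \(a\) and \(t\) are fixed, the minimum in Theorem 5 is a finite computation over the candidate set \(A=\{a_{i},a_{i}+t_{i}\}\). The following sketch (ours; the function name and example exponents are purely illustrative) evaluates it directly.

```python
import numpy as np

# Sketch: evaluate the lower-bound exponent of Theorem 5 for given
# exponent vectors a = (a_1,...,a_n) and t = (t_1,...,t_n).
def wang_wu_bound(a, t):
    a, t = np.asarray(a, float), np.asarray(t, float)
    best = np.inf
    for A in np.concatenate([a, a + t]):   # candidates A_i in {a_i, a_i + t_i}
        K1 = a >= A
        K2 = (a + t <= A) & ~K1
        K3 = ~(K1 | K2)
        d = K1.sum() + K2.sum() + (a[K3].sum() - t[K2].sum()) / A
        best = min(best, d)
    return best

# e.g. m = n = 2, unweighted a_i = m/n = 1, tau = (1.5, 2):
print(wang_wu_bound([1.0, 1.0], [0.5, 1.0]))  # 1.25 = (m + tau_2 - tau_1)/tau_2
```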
## 3. Proof of main results
The main results will be proven by showing that we have a weighted ubiquitous system for a certain setup. Given this statement, we can use the theorems of Kleinbock & Wang (Theorem 4), and Wang & Wu (Theorem 5) to obtain the divergence case of Theorem 1 and the lower bound of Theorem 2. The corresponding convergence and upper bound statements use standard covering arguments, but for completeness, we include them in their relevant sections.
### Proof of the weighted ubiquity statement
Choose any \(A\in Sing_{\alpha}(v)^{c}\) and fix such a choice for the remainder of this section. Let
\[J=\mathbb{Z}^{m}\,,\qquad\beta:J\to\mathbb{R}_{+},\ \mathbf{q}\mapsto\beta_{\mathbf{q}}=|\mathbf{q}|_{\alpha}\,,\qquad R_{\mathbf{q},i}=\{A_{i}\cdot\mathbf{q}\}\,,\qquad R_{\mathbf{q}}=\{A\mathbf{q}\},\]
where \(\{X\}\) denotes the vector composed of the fractional part of each coordinate of \(X\). Define the sequences
\[l_{i}=c_{3}u_{i}\text{ for constant }0<c_{3}<1\text{ dependent on }\varepsilon,\quad u_{i}=c_{2}2^{\ell_{i}}\]
where \(\varepsilon>0\) is the real number such that the set \(L_{\alpha}(v,A,\varepsilon)\) is infinite, and \(\{\ell_{i}\}_{i\in\mathbb{N}}\) corresponds to the ordered sequence of such integers (\(c_{2}\) is the constant given in Corollary 2).
**Proposition 1**.: _Let each_
\[\rho_{i}(r)=\varepsilon c_{2}^{1+v_{i}\frac{m}{n}}r^{-v_{i}\frac{m}{n}}\quad(1\leq i\leq n)\,.\]
_Then for any ball \(B\subset[0,1]^{n}\)_
\[\lambda_{n}\left(B\cap\bigcup_{\mathbf{q}\in J_{k}}\Delta\big{(}R_{\mathbf{q} },\rho(u_{k})\big{)}\right)\geq\frac{1}{2}\lambda_{n}(B),\]
_for all sufficiently large \(k\in\mathbb{N}\)._
Proof.: Fix \(B\subset[0,1]^{n}\). By Corollary 2 we have that for any \(\ell_{k}\in L_{\alpha}(v,A,\varepsilon)\)
\[\lambda_{n}\left(B\cap\bigcup_{\mathbf{q}\in\mathbb{Z}^{m}:|\mathbf{q}|_{ \alpha}\leq c_{2}2^{\ell_{k}}}\Delta\Big{(}R_{\mathbf{q}},\Big{(}\varepsilon c _{2}2^{-\ell_{k}v_{1}\frac{m}{n}},\ldots,\varepsilon c_{2}2^{-\ell_{k}v_{n} \frac{m}{n}}\Big{)}\Big{)}\right)=\lambda_{n}(B).\]
Note that each
\[\varepsilon c_{2}2^{-\ell_{k}v_{i}\frac{m}{n}}=\varepsilon c_{2}^{1+v_{i}\frac{m}{n}}\left(c_{2}2^{\ell_{k}}\right)^{-v_{i}\frac{m}{n}}=\rho_{i}(u_{k})\quad(1\leq i\leq n).\]
This allows us to deduce that for any \(k\in\mathbb{N}\)
\[\lambda_{n}(B)\leq\lambda_{n}\left(B\cap\bigcup_{\mathbf{q}\in\mathbb{Z}^{m}:| \mathbf{q}|_{\alpha}\leq l_{k}}\Delta\big{(}R_{\mathbf{q}},\rho(u_{k})\big{)} \right)+\lambda_{n}\left(B\cap\bigcup_{\mathbf{q}\in\mathbb{Z}^{m}:\mathbf{q} \in J_{k}}\Delta\big{(}R_{\mathbf{q}},\rho(u_{k})\big{)}\right),\]
and so the proof is complete in showing that
\[\lambda_{n}\left(B\cap\bigcup_{\mathbf{q}\in\mathbb{Z}^{m}:|\mathbf{q}|_{ \alpha}\leq l_{k}}\Delta\big{(}R_{\mathbf{q}},\rho(u_{k})\big{)}\right)\leq \frac{1}{2}\lambda_{n}(B). \tag{3.1}\]
To show (3.1) we note, by Lemma 4, that there exists sufficiently large \(k_{B}\in\mathbb{N}\) such that for all \(k>k_{B}\)
\[\#\{A\mathbf{q}\in 2B:|\mathbf{q}|_{\alpha}\leq l_{k}\}\leq 2l_{k}^{m}\lambda_{ n}(B).\]
Hence, taking \(k\in\mathbb{N}\) sufficiently large such that \(\max_{1\leq i\leq n}\rho_{i}(u_{k})<r(B)\), we have that
\[\lambda_{n}\left(B\cap\bigcup_{\mathbf{q}\in\mathbb{Z}^{m}:|\mathbf{q}|_{\alpha}\leq l_{k}}\Delta\left(R_{\mathbf{q}},\rho(u_{k})\right)\right) \leq 2l_{k}^{m}\lambda_{n}(B)2^{n}\prod_{i=1}^{n}\rho_{i}(u_{k})\] \[=2^{n+1}\varepsilon^{n}c_{3}^{m}c_{2}^{n+m}\lambda_{n}(B)\] \[\leq\frac{1}{2}\lambda_{n}(B),\]
where the last inequality follows since we can choose the constant \(c_{3}\) so that
\[c_{3}<\left(2^{-(n+2)}\varepsilon^{-n}c_{2}^{-(n+m)}\right)^{1/m}\]
independent of \(B\) and \(k\).
### Proof of Theorem 1
To prove Theorem 1 we apply Theorem 4. The bulk of the proof (the weighted ubiquity statement) has been proven in the previous section. It remains to verify that the choice of \(\rho\) function satisfies the conditions of Theorem 4 and that the summations given in Theorem 4 and Theorem 1 are equivalent.
By the conditions of Theorem 1, each \(\psi_{i}\) is monotonically decreasing. Furthermore each \(\rho_{i}(r)\geq\psi_{i}(r)\). Note that each \(\psi_{i}\) may need to be multiplied by some constant to make this strictly true, but, as shown for example in [2, Lemma 5.7], this would not change the Lebesgue measure of \(W_{A}(\Psi)\). Observe that each
\[\rho_{i}(u_{k+1})=\varepsilon c_{2}^{1+v_{i}\frac{m}{n}}u_{k+1}^{-v_{i}\frac{ m}{n}}=\varepsilon c_{2}2^{-\ell_{k+1}v_{i}\frac{m}{n}}\leq\varepsilon c_{2}2^{-( \ell_{k}+1)v_{i}\frac{m}{n}}=2^{-v_{i}\frac{m}{n}}\rho_{i}(u_{k})\leq 2^{- \frac{m}{n}\min_{i}v_{i}}\rho_{i}(u_{k})\]
and so \(\rho\) is \(2^{-\frac{m}{n}\min_{i}v_{i}}\)-regular. Thus, conditions \((I)-(III)\) of Theorem 4 are satisfied, and so
\[\lambda_{n}(W_{A}(\Psi))=1\quad\text{if }\sum_{k=1}^{\infty}\prod_{i=1}^{n} \left(\frac{\psi_{i}(u_{k})}{\rho_{i}(u_{k})}\right)=\infty.\]
Lastly, observe that
\[\sum_{k=1}^{\infty}2^{m\ell_{k}}\prod_{i=1}^{n}\psi_{i}(2^{\ell_ {k}}) =\varepsilon^{n}c_{2}^{n}\sum_{k=1}^{\infty}\varepsilon^{-n}c_{2} ^{-(n+m)}(c_{2}2^{\ell_{k}})^{m}\prod_{i=1}^{n}\psi_{i}(2^{\ell_{k}})\] \[=\varepsilon^{n}c_{2}^{n}\sum_{k=1}^{\infty}\prod_{i=1}^{n}\frac{ \psi_{i}(2^{\ell_{k}})}{\rho_{i}(u_{k})}\] \[\geq\varepsilon^{n}c_{2}^{n}\sum_{k=1}^{\infty}\prod_{i=1}^{n} \frac{\psi_{i}(u_{k})}{\rho_{i}(u_{k})}\]
where the last line follows by the monotonicity of \(\psi_{i}\) and that \(c_{2}\geq 1\). Hence
\[\sum_{k=1}^{\infty}2^{m\ell_{k}}\prod_{i=1}^{n}\psi_{i}(2^{\ell_{k}})=\infty\]
is sufficient to prove full measure of \(W_{A}(\Psi)\).
For the convergence case of Corollary 1, apply the Borel-Cantelli Lemma to see that
\[\lambda_{n}(W_{A}(\Psi))=0\quad\text{if}\quad\sum_{r\in\mathbb{N}}\ \sum_{2^{r}<|\mathbf{q}|_{\alpha}\leq 2^{r+1}}\lambda_{n}\left(\Delta(\{A\mathbf{q}\},\Psi(|\mathbf{q}|_{\alpha}))\right)<\infty.\]
Observe that
\[\sum_{r\in\mathbb{N}}\ \sum_{2^{r}<|\mathbf{q}|_{\alpha}\leq 2^{r+1}}\lambda_{n}\left(\Delta(\{A\mathbf{q}\},\Psi(|\mathbf{q}|_{\alpha}))\right) \leq\sum_{r\in\mathbb{N}}c2^{rm}2^{n}\prod_{i=1}^{n}\psi_{i}(2^{r}),\] \[\leq c2^{n+1}\sum_{r=1}^{\infty}r^{m-1}\prod_{i=1}^{n}\psi_{i}(r),\]
for constant \(c=m2^{m+1}3^{m-1}\left(1-2^{-\min_{i}\alpha_{i}}\right)\) independent of \(r\), and so the convergence summation condition of Corollary 1 implies zero measure.
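For the model case of power functions \(\psi_{i}(r)=r^{-\tau_{i}}\), both summation criteria reduce to a \(p\)-series, which one can check at a glance; a small sketch (ours, with hypothetical exponents):

```python
import numpy as np

# Sketch: for psi_i(r) = r^{-tau_i} the series of Corollary 1 is
# sum_r r^{m - 1 - sum_i tau_i}, so it converges iff sum_i tau_i > m.
def partial_sum(m, tau, R=10**6):
    r = np.arange(1.0, R + 1)
    return np.sum(r ** (m - 1 - sum(tau)))

print(partial_sum(2, [1.5, 2.0]))   # convergent: 1.5 + 2.0 > 2
print(partial_sum(2, [1.0, 1.0]))   # divergent: partial sums grow like log R
```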
### Proof of Theorems 2 and 3
For the upper bound consider the standard cover
\[\bigcup_{\mathbf{q}\in\mathbb{Z}^{m}\setminus\{0\}:|\mathbf{q}|_{\alpha}\geq N}\Delta(\{A\mathbf{q}\},\Psi(|\mathbf{q}|_{\alpha}))\]
of \(W_{A}(\Psi)\). Observe that each rectangle \(\Delta(\{A\mathbf{q}\},\Psi(|\mathbf{q}|_{\alpha}))\) can be covered by
\[\prod_{i=1}^{n}\max\left\{1,\frac{|\mathbf{q}|_{\alpha}^{-\tau_{i}}}{|\mathbf{ q}|_{\alpha}^{-\tau_{j}}}\right\}\]
balls of radius \(|\mathbf{q}|_{\alpha}^{-\tau_{j}}\). Thus
\[\mathcal{H}^{s}(W_{A}(\Psi)) \leq\sum_{\mathbf{q}\in\mathbb{Z}^{m}:|\mathbf{q}|_{\alpha}\geq N}\prod_{i=1}^{n}\max\left\{1,\frac{|\mathbf{q}|_{\alpha}^{-\tau_{i}}}{|\mathbf{q}|_{\alpha}^{-\tau_{j}}}\right\}|\mathbf{q}|_{\alpha}^{-s\tau_{j}},\] \[=\sum_{r\geq N}\sum_{\mathbf{q}\in\mathbb{Z}^{m}:|\mathbf{q}|_{\alpha}=r}r^{\sum_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})-s\tau_{j}},\] \[\leq 2^{m}\sum_{r\geq N}r^{m-1+\sum_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})-s\tau_{j}}\to 0\]
as \(N\to\infty\) for any
\[s>\frac{m+\sum_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}}.\]
This argument is true for any choice of \(1\leq j\leq n\), hence
\[\dim_{\mathrm{H}}W_{A}(\Psi)\leq\min_{1\leq j\leq n}\left\{\frac{m+\sum_{i:\tau_{j}>\tau_{i}}(\tau_{j}-\tau_{i})}{\tau_{j}}\right\},\]
completing the upper bound result.
For the lower bound dimension result we appeal to Theorem 5. Given the setup provided in Theorem 5 we have that
\[a_{i}=\frac{m}{n},\quad t_{i}=\tau_{i}-\frac{m}{n}\quad(1\leq i\leq n),\quad \rho(r)=\left(\varepsilon c_{2}^{1+\frac{m}{n}}\right)^{\frac{n}{m}}r^{-1}.\]
For ease of notation, let
\[d(A)=\sum_{j\in\mathcal{K}_{1}}1+\sum_{j\in\mathcal{K}_{2}}1+\frac{\sum_{j\in \mathcal{K}_{3}}a_{j}-\sum_{j\in\mathcal{K}_{2}}t_{j}}{A}\]
with each set \(\mathcal{K}_{1},\mathcal{K}_{2}\) and \(\mathcal{K}_{3}\) defined by \(A\). Consider the two cases:
* \(A=\frac{m}{n}\): Then the sets appearing in Theorem 5 are \[\mathcal{K}_{1}=\{1,\ldots,n\}\,,\quad\mathcal{K}_{2}=\emptyset\,,\quad \mathcal{K}_{3}=\emptyset\] and so \(d(A)=n\), giving us a trivial lower bound.
* \(A=\tau_{j}\) for some \(1\leq j\leq n\): Then \[\mathcal{K}_{1}=\emptyset\,,\quad\mathcal{K}_{2}=\{i:\tau_{i}\leq\tau_{j}\}\,, \quad\mathcal{K}_{3}=\{i:\tau_{i}>\tau_{j}\}=\{1,\ldots,n\}\setminus\mathcal{ K}_{2}\] and so \[d(A) =\#\mathcal{K}_{2}+\frac{\frac{m}{n}(n-\#\mathcal{K}_{2})-\sum_{ i:\tau_{i}\leq\tau_{j}}\left(\tau_{i}-\frac{m}{n}\right)}{A}\] \[=\frac{m+(A-\frac{m}{n})\#\mathcal{K}_{2}-\sum_{i\in\mathcal{K}_{ 2}}\left(\tau_{i}-\frac{m}{n}\right)}{A}\] \[=\frac{m+\sum_{i\in\mathcal{K}_{2}}(A-\tau_{i})}{A}\,.\] Hence, replacing \(A\) by some \(\tau_{j}\) and inputting the definition of \(\mathcal{K}_{2}\) we have that \[d(\tau_{j})=\frac{m+\sum_{i:\tau_{i}\leq\tau_{j}}\left(\tau_{j}-\tau_{i}\right) }{\tau_{j}}\] Taking the minimum over all possible choices of \(1\leq j\leq n\) we have the desired lower bound. The measure result is simply due to the second part of Theorem 5 and the observation that \(s<n\).
For the lower bound of Theorem 3 let
\[a_{i}=v_{i}\frac{m}{n}\,,\quad t_{i}=\tau_{i}-v_{i}\frac{m}{n}\quad(i=1,2)\,,\quad\rho(r)=\left(\varepsilon c_{2}^{1+\frac{m}{n}}\right)^{\frac{n}{m}}r^{-1}\,.\]
Without loss of generality we assume \(v_{1}<v_{2}\).
* \(v_{1}\frac{m}{n}<v_{2}\frac{m}{n}<\tau_{1}<\tau_{2}\): Consider the sets \(\mathcal{K}_{1},\mathcal{K}_{2}\) and \(\mathcal{K}_{3}\), and the corresponding dimension bound for each of the following
* \(A=v_{1}\frac{m}{n}\): \[\mathcal{K}_{1}=\{1,2\}\,,\quad\mathcal{K}_{2}=\emptyset\,,\quad\mathcal{K}_{ 3}=\emptyset\,.\] So \[d(A)=2.\]
* \(A=v_{2}\frac{m}{n}\): \[\mathcal{K}_{1}=\{2\}\,,\quad\mathcal{K}_{2}=\emptyset\,,\quad\mathcal{K}_{3}=\{1 \}\,.\] So \[d(A)=1+\frac{v_{1}\frac{m}{n}}{v_{2}\frac{m}{n}}=1+\frac{v_{1}}{v_{2}}.\]
* \(A=\tau_{1}\): \[\mathcal{K}_{1}=\emptyset\,,\quad\mathcal{K}_{2}=\{1\}\,,\quad\mathcal{K}_{3}= \{2\}\,.\] So \[d(A)=1+\frac{v_{1}\frac{m}{n}+v_{2}\frac{m}{n}-\tau_{1}}{\tau_{1}}=1+\frac{m- \tau_{1}}{\tau_{1}}.\]
* \(A=\tau_{2}\): \[\mathcal{K}_{1}=\emptyset\,,\quad\mathcal{K}_{2}=\{1,2\}\,,\quad\mathcal{K}_{3 }=\emptyset\,.\] So \[d(A)=2-\frac{\left(\tau_{1}-v_{1}\frac{m}{n}\right)+\left(\tau_{2}-v_{2}\frac{ m}{n}\right)}{\tau_{2}}=1+\frac{m-\tau_{1}}{\tau_{2}}.\] Since \(\tau_{2}>\tau_{1}\) we have, in this case, that \[\min d(A)=1+\min\left\{\frac{m-\tau_{1}}{\tau_{2}},\frac{v_{1}}{v_{2}}\right\}.\]
* \(v_{1}\frac{m}{n}<v_{2}\frac{m}{n}<\tau_{2}<\tau_{1}\): This is similar to the previous case with \(\tau_{1}\) and \(\tau_{2}\) switching roles. In particular we get \[\min d(A)=1+\min\left\{\frac{v_{1}}{v_{2}},\frac{m-\tau_{2}}{\tau_{1}}\right\}.\]
Combining these cases together, and using the condition that
\[\frac{v_{1}}{v_{2}}\geq\frac{m-\min\{\tau_{1},\tau_{2}\}}{\max\{\tau_{1},\tau_ {2}\}}\]
we obtain our lower bound. Lastly, consider the following case:
* \(v_{1}\frac{m}{n}<\tau_{1}<v_{2}\frac{m}{n}<\tau_{2}\): Consider the sets \(\mathcal{K}_{1},\mathcal{K}_{2}\) and \(\mathcal{K}_{3}\), and the corresponding dimension bound for each of the following
* \(A=v_{1}\frac{m}{n}\): \[\mathcal{K}_{1}=\{1,2\}\,,\quad\mathcal{K}_{2}=\emptyset\,,\quad\mathcal{K}_{3 }=\emptyset\,.\] So \[d(A)=2.\]
* \(A=v_{2}\frac{m}{n}\): \[\mathcal{K}_{1}=\{2\}\,,\quad\mathcal{K}_{2}=\{1\}\,,\quad\mathcal{K}_{3}= \emptyset\,.\] So \[d(A)=2-\frac{\tau_{1}-v_{1}\frac{m}{n}}{v_{2}\frac{m}{n}}=1+\frac{v_{1}\frac{m} {n}+v_{2}\frac{m}{n}-\tau_{1}}{v_{2}\frac{m}{n}}=1+\frac{m-\tau_{1}}{v_{2} \frac{m}{n}}.\]
* \(A=\tau_{1}\): \[\mathcal{K}_{1}=\{2\}\,,\quad\mathcal{K}_{2}=\{1\}\,,\quad\mathcal{K}_{3}= \emptyset\,.\] So, \[d(A)=2-\frac{\tau_{1}-v_{1}\frac{m}{n}}{\tau_{1}}=1+\frac{v_{1}\frac{m}{n}}{\tau _{1}}.\]
* \(A=\tau_{2}\): \[\mathcal{K}_{1}=\emptyset\,,\quad\mathcal{K}_{2}=\{1,2\}\,,\quad\mathcal{K}_{3 }=\emptyset\,.\] So \[d(A)=2-\frac{\left(\tau_{1}-v_{1}\frac{m}{n}\right)+\left(\tau_{2}-v_{2}\frac {m}{n}\right)}{\tau_{2}}=1+\frac{m-\tau_{1}}{\tau_{2}}.\] In this case, noting \(\tau_{2}>v_{2}\frac{m}{n}\), we have that \[\min d(A)=1+\min\left\{\frac{v_{1}\frac{m}{n}}{\tau_{1}},\frac{m-\tau_{1}}{ \tau_{2}}\right\}.\]
Using (1.3), this minimum again becomes our desired lower bound.
|
2302.08950 | Handling the Alignment for Wake Word Detection: A Comparison Between
Alignment-Based, Alignment-Free and Hybrid Approaches | Wake word detection exists in most intelligent homes and portable devices. It
offers these devices the ability to "wake up" when summoned at a low cost of
power and computing. This paper focuses on understanding alignment's role in
developing a wake-word system that answers a generic phrase. We discuss three
approaches. The first is alignment-based, where the model is trained with
frame-wise cross-entropy. The second is alignment-free, where the model is
trained with CTC. The third, proposed by us, is a hybrid solution in which the
model is trained with a small set of aligned data and then tuned with a
sizeable unaligned dataset. We compare the three approaches and evaluate the
impact of the different aligned-to-unaligned ratios for hybrid training. Our
results show that the alignment-free system performs better than the
alignment-based for the target operating point, and with a small fraction of
the data (20%), we can train a model that complies with our initial
constraints. | Vinicius Ribeiro, Yiteng Huang, Yuan Shangguan, Zhaojun Yang, Li Wan, Ming Sun | 2023-02-17T15:33:47Z | http://arxiv.org/abs/2302.08950v3 | # Handling the Alignment for Wake Word Detection: A Comparison Between Alignment-Based, Alignment-Free and Hybrid Approaches
###### Abstract
Wake word detection exists in most intelligent homes and portable devices. It offers these devices the ability to "wake up" when summoned at a low cost of power and computing. This paper focuses on understanding alignment's role in developing a wake-word system that answers a generic phrase. We discuss three approaches. The first is alignment-based, where the model is trained with frame-wise cross-entropy. The second is alignment-free, where the model is trained with CTC. The third, proposed by us, is a hybrid solution in which the model is trained with a small set of aligned data and then tuned with a sizeable unaligned dataset. We compare the three approaches and evaluate the impact of the different aligned-to-unaligned ratios for hybrid training. Our results show that the alignment-free system performs better than the alignment-based one for the target operating point, and with a small fraction of the data (\(20\%\)), we can train a model that complies with our initial constraints.
Vinicius Ribeiro\({}^{1,2}\), Yiteng Huang\({}^{1}\), Yuan Shangguan\({}^{1}\), Zhaojun Yang\({}^{1}\), Li Wan\({}^{1}\), Ming Sun\({}^{1}\)†

\({}^{1}\)Meta AI
\({}^{2}\)Universite de Lorraine, CNRS, Inria, LORIA, F-54000, Nancy, France
{ribeirovinicius, yah, yuansg, zhaojuny, wwanli, sunming425}@meta.com
Footnote †: Work was done when Vinicius Ribeiro was an intern at Meta AI. Correspondence to Yiteng Huang: [email protected]
**Index Terms**: wake word detection, keyword spotting, speech recognition, alignment-free
## 1 Introduction
Wake word detection, also known in the literature as keyword spotting, refers to identifying if target phrases appear in an audio sequence. With the advancement of virtual assistants like Google Assistant, Amazon Alexa, Apple Siri, etc., wake word engines are present in most of the edge devices available in the market, be they phones, tablets, watches, or glasses. Wake words work as gateways to these devices. Due to energy consumption constraints, these devices operate most of the time in a low energy consumption state. They are thus not expected to recognize any commands until they register a call to the wake word. Once activated, they shift to a high energy consumption state with more powerful computation to completely recognize the user's instructions. Wake word detection accuracy is thus essential for a smooth user experience. To be specific, a system with a high False Accept Rate will trigger too frequently, being annoying to the user and raising concerns about privacy issues [1, 2]. Conversely, a system with a high False Reject Rate will prevent the user from using the product. Most often, these two aspects represent a trade-off, i.e., changing the activation threshold towards one metric will inevitably damage the other.
Initially, wake word detection models were developed using Hidden Markov Models, which model both the voice activity and the background noise [3, 4], and later they shifted to deep neural networks [5, 6] as most of the literature in complex data processing did. Wake word detection models are traditionally built on top of two main techniques. On the one hand, there are alignment-based approaches [7, 8], which assume that the exact alignment between the phoneme targets of the utterances and the corresponding audio is available during model training. Such alignment simplifies the task since the models are trainable with traditional cross-entropy (CE) loss. The alignments, however, obtained using forced alignment algorithms, not only are computationally expensive, but also might introduce annotation errors and be unavailable for low-resource languages [9]. On the other hand, there exist alignment-free techniques [10, 11], with which phonetic alignments of audio transcriptions are not needed during model training. Alignment-free approaches are common in Automatic Speech Recognition (ASR) [12, 13], typically in combination with the Connectionist Temporal Classification (CTC) loss [14]. CTC uses dynamic programming to search for the most likely alignment between all possibilities in an efficient manner. The most significant advantage of using alignment-free detection is that it enables the usage of much larger datasets. Additionally, it allows federated learning with edge devices [15] since the users' data do not have to leave their devices to be pre-processed. Alternatively, we might have at our disposal some aligned data that we would like to benefit from. The idea proposed by us is that this small dataset could provide a good starting point for the acoustic model of the alignment-free model. We refer to this approach as hybrid alignment. We initially hypothesize that the model trained with a large aligned set would represent a performance upper bound; likewise, the model trained with the small unaligned set would represent a performance lower bound. However, when training the model with a small aligned set using cross-entropy loss, and then continuing the training with a large unaligned set using CTC loss, the model can find an intermediate spot where the performance is improved in comparison to an alignment-based system trained with the smaller share of the corpus and to an alignment-free system trained with the larger share of the corpus.
To our knowledge, few previous works in wake word detection have compared alignment-based and alignment-free approaches under the same setup. Also, this work is the first to propose a hybrid alignment method to benefit from aligned and unaligned data in the same system. Our main contribution is to fill the gap by comparing the three alternatives described under the same conditions and with the same dataset for training and evaluation. Our second contribution is to present results that contradict our initial hypothesis: the alignment-based training approach does not represent a performance upper bound, and the CTC-based approach does not represent a lower bound. The best approach should be decided based on the operating point chosen for the target use case.
In this work, we trained a wake word detection model on \(274\,194\) utterances, totaling \(523\) hours of speech. We explore the amount of utterances with wake words, referred to as "positive" data, needed to achieve reasonable results with alignment-based model training. Similarly, we identify the amount of positive data for the alignment-free approach. We show that the alignment-based training performs better for a high FAh (\(>0.5\) FAh), while the alignment-free performs better for low levels of FAh. We then combine the two approaches and observe that with a \(50\%/50\%\) aligned vis-a-vis unaligned data ratio, the model retains the best of the two approaches.
## 2 Data Preparation
The dataset used in this work contains positive samples with 4 274 unique speakers (2 945 female and 1 329 male) collected under the users' agreement via dogfooding and paid recording sessions. Speakers have unequal contributions to the dataset, meaning that a few speakers supplied hundreds of utterances while many speakers contributed only a few. This imbalance could bias the model towards the most significant contributors, preventing it from generalizing. To handle this imbalance, we hold out speakers with a low contribution (less than 50 utterances) for the final evaluation. Additionally, we limit the individual contribution in training (100 utterances/speaker) and the evaluation set (10 utterances/speaker) such that no speaker has a disproportionate number of samples in the dataset. Finally, we discard positive utterances longer than 20 seconds. The final train and evaluation datasets are organized in a speaker-independent way.
Additionally, we use irrelevant speech that is very unlikely to contain the wake word as negative data. The same negative set was used in all of the experiments, which differed only in the positive data. [1] summarizes the number of speakers, utterances, and duration in hours available for train and evaluation.
The phonetic annotations are first obtained by running a non-streaming ASR model to obtain the audios' transcription; then, we run forced alignment on all samples. We augment the positive train set by applying speed distortion and adding background noise. For each original audio sample, five new augmented copies are created. To evaluate how much data is necessary for training each approach, we split the set into several subsets referred to as A and B. We denote A[X] the dataset containing X% of the data and B[Z] the dataset containing the remaining samples, i.e., A[X] and B[Z] are complementary to each other (\(Z=100-X\)). In addition, we denote CE-A[X] and CE-B[Z] the alignment-based models trained with the datasets A[X] and B[Z], respectively. Similarly, we denote CTC-A[X] and CTC-B[Z] the alignment-free models trained with the datasets A[X] and B[Z], respectively. Finally, we denote CE-A[X]-CTC-B[Z] the alignment-hybrid models trained with A[X] using CE followed by B[Z] using CTC. It is important to highlight that the set A[\(X_{i}\)] contains the set A[\(X_{j}\)] for all \(X_{i}>X_{j}\). Likewise, the set B[\(Z_{i}\)] contains the set B[\(Z_{j}\)] for all \(Z_{i}>Z_{j}\).
## 3 Methods
Our network is based on the neural network topology called SVDF (single value decomposition filter), first introduced by [16] and discussed in detail in [17]. The network is composed of an encoder that emits per-frame class probabilities while the decoder predicts the occurrence of the wake word. Differently from [17], we utilize only the encoding path. Our model takes 80-dimensional log-Mel filter-bank energies computed over a 25ms window every 10ms as input and is trained to recognize the nine phonemes of the wake word plus three extra tokens corresponding to silence, unknown and blank (used for CTC). The decoder is not based on learnable parameters. Instead, we run a rule-based procedure. We run a sliding window where we expect to observe the wake word through the audio file. The emission probabilities inside the decoding window are smoothed, and the log probabilities are computed. Finally, the best decoding path for a given wake word candidate is selected using the Max Pooling Viterbi algorithm, i.e., instead of summing the log probabilities, the maximum log probability in the sequence of the same token predictions is computed.
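As a rough illustration of the rule-based decoding just described, the sketch below implements one reading of the Max Pooling Viterbi rule (a sketch, not the authors' code): along the best monotone alignment of the wake-word phonemes to the window, each run of frames assigned to the same token contributes only its maximum log-probability.

```python
import numpy as np

# Sketch of a max-pooling Viterbi score (our reading of the description;
# `logp` would hold the smoothed per-frame log-probabilities of a window).
def max_pool_viterbi_score(logp, phonemes):
    """logp: (T, C) array; phonemes: token ids of the wake word, in order.
    dp[k] = best score matching phonemes[:k+1] so far, where each phoneme
    contributes the max log-prob over its (contiguous) segment of frames."""
    K = len(phonemes)
    dp = np.full(K, -np.inf)
    for frame in logp:                       # frames in temporal order
        prev = dp.copy()
        for k in range(K):
            enter = prev[k - 1] if k > 0 else 0.0
            dp[k] = max(prev[k], enter + frame[phonemes[k]])
    return dp[-1]                            # decoder score of the window
```

A wake-word candidate would then fire when this window score, compared across the sliding windows, exceeds the detection threshold.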
The models are trained for 180 epochs. In the hybrid-alignment case, the training is divided such that during the first 90 epochs, the model runs with cross-entropy loss and the following 90 epochs with CTC loss. The models are trained with Adam optimizer [18] using a weight decay of \(1\mathrm{e}{-2}\) and a learning rate of \(5\mathrm{e}{-3}\) during the first \(60\) epochs and then decreased by a factor of \(0.96\) per epoch. For hybrid alignment, the CTC training phase uses a fixed learning rate of \(5\mathrm{e}{-4}\). The code was implemented using PyTorch [19].
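Schematically, the hybrid schedule can be written as below (a sketch with a toy recurrent encoder and random tensors standing in for the SVDF model and the data; only the stated hyper-parameters are taken from the text).

```python
import torch
import torch.nn as nn

# Sketch of the hybrid CE -> CTC schedule: Adam with weight decay 1e-2,
# lr 5e-3 decayed by 0.96 per epoch after epoch 60, a fixed 5e-4 for the
# CTC phase, 90 + 90 epochs. The GRU and random tensors are placeholders.
C = 12                                   # 9 phonemes + silence/unknown/blank
enc = nn.GRU(80, 64, batch_first=True)
head = nn.Linear(64, C)
opt = torch.optim.Adam([*enc.parameters(), *head.parameters()],
                       lr=5e-3, weight_decay=1e-2)
ce, ctc = nn.CrossEntropyLoss(), nn.CTCLoss(blank=C - 1)

for epoch in range(180):
    aligned = epoch < 90                 # CE phase, then CTC phase
    if epoch == 90:
        for g in opt.param_groups:
            g["lr"] = 5e-4               # fixed learning rate under CTC
    x = torch.randn(8, 100, 80)          # log-Mel features (B, T, 80)
    logits = head(enc(x)[0])             # (B, T, C)
    if aligned:                          # frame targets from forced alignment
        y = torch.randint(0, C - 1, (8, 100))
        loss = ce(logits.reshape(-1, C), y.reshape(-1))
    else:                                # token sequences only, no alignment
        y = torch.randint(0, C - 1, (8, 9))
        loss = ctc(logits.log_softmax(-1).transpose(0, 1), y,
                   torch.full((8,), 100), torch.full((8,), 9))
    opt.zero_grad(); loss.backward(); opt.step()
    if aligned and epoch >= 60:          # geometric decay in the CE phase
        for g in opt.param_groups:
            g["lr"] *= 0.96
```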
We evaluate the models in terms of Detection Error Trade-off (DET) curves [20], in which the x-axis represents the number of false alarms per hour (FAh), and the y-axis represents the chance of false rejects per utterance (FRR). User experience research indicates that the users are usually satisfied with an FRR around \(5\%\) at the \(0.1\) FAh level, even though an FRR of \(10\%\) is acceptable for many use cases. Therefore we evaluate model performances at this level. In addition, we measure the latency introduced by CTC relative to the alignment-based method. It is important to stress that we are not measuring the latency between the prediction and the actual occurrence of the wake word but the difference in the triggering points of alignment-based and alignment-free engines. To perform such a calculation, for each utterance, we get the peak value of each system and compute the point where the decoder score is greater than \(40\%\) of the peak value. We force this threshold to be greater or equal to \(0.20\) to guarantee that we are not computing cases where the system did not fire. Then we calculate the difference between the triggering point for cross entropy and CTC - positive measures mean that the alignment-based system triggered first, and negative measures represent the opposite.
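The relative-latency computation is compact enough to state as code; the sketch below (ours) assumes a 10 ms frame hop, matching the feature stride, to convert frame indices to milliseconds.

```python
import numpy as np

# Sketch of the relative-latency measurement described above: the trigger
# point of each system is the first frame where its decoder score exceeds
# max(0.20, 0.4 * peak); positive values mean the CE system triggered first.
def trigger_point(scores, frac=0.4, floor=0.20):
    thr = max(floor, frac * scores.max())
    idx = np.flatnonzero(scores >= thr)
    return idx[0] if idx.size else None   # None: the system did not fire

def relative_latency_ms(ce_scores, ctc_scores, frame_ms=10):
    t_ce, t_ctc = trigger_point(ce_scores), trigger_point(ctc_scores)
    if t_ce is None or t_ctc is None:
        return None
    return (t_ctc - t_ce) * frame_ms      # positive: CE triggered first
```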
## 4 Results
Figure 1 presents the DET curves for the alignment-based, alignment-free, and hybrid alignment models for each A/B split separately. Table 1 presents the FRR for each model at the \(0.1\) FAh level. Note that the hybrid-alignment system only contains results for half of the settings. In these cases, the result for set B[100-X] refers to the hybrid model trained first with A[X] with cross-entropy and then with B[100-X] with CTC. Figure 2 presents the decoder scores for four of our models for the same positive utterance (CE-A20, CE-B80, CTC-A20, and CTC-B80). We can observe the model triggering in the presence of the wake word. Table 2 presents the measured latency (\(\mu\pm\sigma\)) of the CTC model with relation to the equivalent cross entropy one in milliseconds.
## 5 Discussion
Even though we did not confirm our initial hypothesis, this work presents many exciting findings. First, we were able to train,
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & Align.-Based & Align.-Free & Hybrid-Align.* \\ \hline A01 & \(16.50\%\) & \(77.99\%\) & \(-\) \\ A10 & \(7.19\%\) & \(5.98\%\) & \(-\) \\ A20 & \(7.07\%\) & \(4.95\%\) & \(-\) \\ A50 & \(7.24\%\) & \(4.74\%\) & \(-\) \\ B50 & \(6.51\%\) & \(5.03\%\) & \(5.69\%\) \\ B80 & \(7.28\%\) & \(4.97\%\) & \(4.97\%\) \\ B90 & \(7.43\%\) & \(6.53\%\) & \(6.53\%\) \\ B99 & \(6.77\%\) & \(5.69\%\) & \(5.69\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: False Rejection Rate at the \(0.1\) FAh level. For the Hybrid Alignment, the B-set trained models were initialized with the weights of its complementary A-set model.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & Latency (ms) & \multicolumn{2}{c}{False Rejection Rate} \\ \hline A01 & \(-99.2\pm 487.6\) & \(2.7\%\) & \(0.0\%\) \\ A10 & \(-62.2\pm 162.8\) & \(1.1\%\) & \(0.0\%\) \\ A20 & \(-54.9\pm 161.9\) & \(1.0\%\) & \(0.0\%\) \\ A50 & \(-47.1\pm 177.1\) & \(1.0\%\) & \(0.0\%\) \\ B50 & \(-64.8\pm 165.6\) & \(1.0\%\) & \(0.0\%\) \\ B80 & \(87.1\pm 90.1\) & \(1.0\%\) & \(0.0\%\) \\ B90 & \(101.5\pm 114.9\) & \(1.0\%\) & \(0.0\%\) \\ B99 & \(100.7\pm 129.9\) & \(0.9\%\) & \(0.0\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Latency between the cross entropy and the CTC trained models with a threshold of \(40\%\) of the peak value (minimum of \(0.20\)). Negative values indicate that the CTC model triggered first, while positive values indicate the opposite.
Figure 1: DET curves for the pairs A01-B99, A10-B90, A20-B80, and A50-B50, for the alignment-based, alignment-free, and hybrid alignment settings.
Figure 2: Decoder scores of one positive utterance for the CE-A20, CE-B80, CTC-A20, and CTC-B80 models.
evaluate and compare the two traditional approaches to wake word detection, plus the proposed hybrid method. Except for CE-A01 and CTC-A01, the models reached an FRR below \(8\%\) at the 0.1 FAh level, which is very encouraging given that many of the training sets are considered small.
According to our experimentation, even though the best alignment-based performance occurred with the most extensive set (B99), with only \(10\%\) of the positive data (A10), around 23 000 positive utterances, we were capable of training a wake word system that closely matches the user requirements and has an indistinguishable performance from that of its complementary set (B90). As Figure 1 shows, A01 yields inferior results since most of the data observed during training are negative samples, and the model is not exposed enough to the wake word. Nonetheless, when the model is trained with only \(10\%\) of the data, the alignment-based model reaches an FRR of \(7.19\%\) at the \(0.1\) FAh level, which is competitive with the models trained with more data, even though it is above the target (\(5\%\)). This is an important finding compared to previous works, which report results with a total of 1 million training utterances [17]. Collecting data for wake word detection incurs a tremendous cost for research institutions and organizations, especially in the initial phases of product development. Reducing the data needed for wake-word detection model training not only reduces the expenses of data collection but also facilitates the documentation of the dataset and reduces GPU hours spent on training, consequently improving the carbon footprint of the systems.
Our alignment-free experiments show that the models trained with CTC improve the performance for lower levels of FAh when compared to cross-entropy. However, for FAh above \(0.5\), the alignment-based model achieves a lower FRR. We observed that with only \(20\%\) (about 46 000 samples) of the data, we achieve a performance that is below the target operating point, reaching an FRR of \(4.95\%\) at the \(0.1\) FAh level, which is indistinguishable from the performance observed with the complementary set (B80). The best performance is achieved with \(50\%\) (about 115 000 samples) of the data (FRR \(4.74\%\) at 0.1 FAh). The comparison between the alignment-based and alignment-free configurations shows that it is unreasonable to claim that one approach is strictly better than the other. The choice should consider the user needs, the operating point, and the availability of resources for developing the wake word system.
The result is encouraging since unaligned data is easier to obtain. Datasets for wake word detection are available such as the SNIPS dataset for keyword spotting [10], which contains 5,876 positive (1,179 speakers) and 45,344 negative (3,330 speakers) utterances in the train set, the Mobvoi single wake word (private) [21], with 19,684 positive and 54,450 negative utterances for training, and the Mobvoi (SLR87) [11] datasets, with 43,625 positive and 130,967 negative utterances. Results reported in these sets [10, 22] are better than the ones presented in this study, but it is important to highlight that we use a completely different dataset for evaluation, with a different wake word, and it is not trivial to estimate how these works would perform with our data. In addition, the main outcome of our study is not beating state-of-the-art wake word detection but systematically comparing different alignment approaches using the same setup.
A secondary observation is related to the latency of CTC-based systems that have been reported in the literature [23]. Figure 2 shows an interesting behavior of the decoder scores for the two approaches. The scores for the CTC systems behave as a step function, reaching their peak as soon as the system activates. Contrarily, the cross entropy scores gradually grow until they reach their peak. In any case, it is unclear which system produces a more significant latency. Table 2 shows that for most of the splits, the alignment-free system triggers faster (negative values), which is probably explained by the abrupt transition between inactive and active states seen in Figure 2. However, the substantial standard deviations indicate that we cannot take that as a rule. Using the alignment-free approach _in lieu_ of the cross entropy requires further exploration for use cases where latency is a strong constraint.
To evaluate the hybrid model, we must carefully analyze Figure 1. We observe that with the 1/99, 10/90, and 20/80 ratios, the hybrid training converges to the alignment-free system. Considering the relationship cross entropy and CTC have with the blank token, we can reason on top of this behavior. For cross-entropy training, the blank token does not exist; hence, the model should always emit zero probability for it. However, the blank token is critical for training CTC-based systems. The model first learns to emit blank tokens and then fills the gaps with actual ones so that the final emissions matrix will decode to the target sequence.
Nevertheless, the model benefits from both approaches when we have an aligned-to-unaligned ratio of 50/50. It behaves like the alignment-free model for lower FAh levels, but it deviates towards the performance of the alignment-based model for higher FAh. This peculiar behavior indicates that when the model is trained with a balanced amount of data in the two phases, it can improve the performance of the two traditional approaches. However, future research and experiments are needed to understand if this result holds for different amounts of data, for example, varying the split size while holding the ratio constant.
## 6 Conclusions
Most publicly available ASR datasets [24, 25, 26] do not contain alignment information. Even private datasets from industry will most likely lack phonetic alignment [21, 11]. For many languages other than English, a forced aligner might not exist to pre-process the data. All of these raise the relevance of alignment-free systems. Nonetheless, fine-grained phonetic annotations are still helpful for many speech applications [27], and benefiting from the available data is desirable. For future work, more experiments in the hybrid approach should be conducted. Moreover, a fruitful research line is to explore how to reconcile the blank token in the hybrid training. Alternatively, semi-supervised training using pseudo-labels [28] is an exciting direction, since we showed how to achieve an outstanding model with a limited amount of positive data.
This work leaves a few learned lessons related to the wake word detection task. We tested our initial hypothesis, which was proved to be wrong, and confidently addressed the research questions proposed. Our experiments provide a reasonable estimation of the data collection needs for training wake word detection models, which is especially useful for teams that do not yet have a final product deployed in the market. In addition, to the best of our knowledge, this is the first study to evaluate and compare alignment-based and alignment-free methods for wake word detection under the same settings and resources and the first proposal of a hybrid alignment system that is compared to the traditional ones. We hope that the results of our study will benefit academia and the industry in developing more efficient voice applications in the future. |
2310.18952 | Exploring the phase diagrams of multidimensional Kuramoto models | The multidimensional Kuramoto model describes the synchronization dynamics of
particles moving on the surface of D-dimensional spheres, generalizing the
original model where particles were characterized by a single phase. In this
setup, particles are more easily represented by $D$-dimensional unit vectors
than by $D-1$ spherical angles, allowing for the coupling constant to be
extended to a coupling matrix acting on the vectors. As in the original
Kuramoto model, each particle has a set of $D(D-1)/2$ natural frequencies,
drawn from a distribution. The system has a large number of independent
parameters, given by the average natural frequencies, the characteristic widths
of their distributions plus $D^2$ constants of the coupling matrix. General
phase diagrams, indicating regions in parameter space where the system exhibits
different behaviors, are hard to derive analytically. Here we obtain the
complete phase diagram for $D=2$ and Lorentzian distributions of natural
frequencies using the Ott-Antonsen ansatz. We also explore the diagrams
numerically for different distributions and some specific choices of parameters
for $D=2$, $D=3$ and $D=4$. In all cases the system exhibits at most four
different phases: disordered, static synchrony, rotation and active synchrony.
Existence of specific phases and boundaries between them depend strongly on the
dimension $D$, the coupling matrix and the distribution of natural frequencies. | Ricardo Fariello, Marcus A. M. de Aguiar | 2023-10-29T09:46:33Z | http://arxiv.org/abs/2310.18952v1 | # Exploring the phase diagrams of multidimensional Kuramoto models
###### Abstract
The multidimensional Kuramoto model describes the synchronization dynamics of particles moving on the surface of D-dimensional spheres, generalizing the original model where particles were characterized by a single phase. In this setup, particles are more easily represented by \(D\)-dimensional unit vectors than by \(D-1\) spherical angles, allowing for the coupling constant to be extended to a coupling matrix acting on the vectors. As in the original Kuramoto model, each particle has a set of \(D(D-1)/2\) natural frequencies, drawn from a distribution. The system has a large number of independent parameters, given by the average natural frequencies, the characteristic widths of their distributions plus \(D^{2}\) constants of the coupling matrix. General phase diagrams, indicating regions in parameter space where the system exhibits different behaviors, are hard to derive analytically. Here we obtain the complete phase diagram for \(D=2\) and Lorentzian distributions of natural frequencies using the Ott-Antonsen ansatz. We also explore the diagrams numerically for different distributions and some specific choices of parameters for \(D=2\), \(D=3\) and \(D=4\). In all cases the system exhibits at most four different phases: disordered, static synchrony, rotation and active synchrony. Existence of specific phases and boundaries between them depend strongly on the dimension \(D\), the coupling matrix and the distribution of natural frequencies.
Introduction
Many natural and artificial systems can be described mathematically by a set of coupled oscillators. Examples include neuronal networks [1; 2; 3; 4], power grids [5; 6; 7; 8], active matter [9], sperm motion [10; 11], coupled metronomes [12] and circadian rhythms [13; 14]. In all these examples, synchronous motion of the oscillators is a key feature, leading to macroscopic behaviors with important consequences. The model introduced by Kuramoto provided the first detailed study of synchronization in a simple setup, becoming a paradigm in the area [15; 16]. In this model the oscillators are represented only by their phases \(\theta_{i}\) and evolve according to the equations
\[\dot{\theta}_{i}=\omega_{i}+\frac{k}{N}\sum_{j=1}^{N}\sin\left(\theta_{j}- \theta_{i}\right) \tag{1}\]
where \(\omega_{i}\) are their natural frequencies, selected from a symmetric distribution \(g(\omega)\), \(k\) is the coupling strength and \(i=1,...,N\). Kuramoto showed that, for sufficiently large \(k\), the oscillators synchronize their phases. A measure of synchronization is given by the complex order parameter
\[z=pe^{i\psi}\equiv\frac{1}{N}\sum_{i=1}^{N}e^{i\theta_{i}} \tag{2}\]
which is \(p\approx 0\) for independent oscillators and \(p\approx 1\) when motion is coherent. In the limit \(N\rightarrow\infty\), the onset of synchronization can be described as a continuous phase transition, where \(p=0\) for \(k<k_{c}=2/\pi g(0)\) and increases as \(p=\sqrt{1-k_{c}/k}\) for \(k>k_{c}\)[17; 18].
Since its original inception, the model was extended in many ways, with the introduction of frustration [19; 20; 21; 22], different types of coupling functions [23; 24; 25], networks [18; 26], distributions of the oscillator's natural frequencies [27; 28], inertial terms [17; 29; 30], external periodic driving forces [31; 32; 33] and coupling with particle swarms [34; 35; 36].
The Kuramoto model was also extended to higher dimensions with the help of the unit vectors \(\vec{\sigma_{i}}=(\cos\theta_{i},\sin\theta_{i})\equiv(\sigma_{ix},\sigma_{iy})\)[37]. Computing \(\dot{\sigma}_{ix}=-\dot{\theta}_{i}\sigma_{iy}\), \(\dot{\sigma}_{iy}=\dot{\theta}_{i}\sigma_{ix}\) and using Eq.(1) it follows that
\[\frac{d\vec{\sigma_{i}}}{dt}=\mathbf{W}_{i}\vec{\sigma_{i}}+\frac{k}{N}\sum_{j }[\vec{\sigma_{j}}-(\vec{\sigma_{i}}\cdot\vec{\sigma_{j}})\vec{\sigma_{i}}] \tag{3}\]
where \(\mathbf{W}_{i}\) is the anti-symmetric matrix
\[\mathbf{W}_{i}=\left(\begin{array}{cc}0&-\omega_{i}\\ \omega_{i}&0\end{array}\right). \tag{4}\]
The complex order parameter \(z\), Eq.(2), can be written in terms of the real vector
\[\vec{p}=\frac{1}{N}\sum_{i}\vec{\sigma_{i}}=(p\cos\psi,p\sin\psi) \tag{5}\]
describing the center of mass of the system.
Eq.(3) can be extended to higher dimensions by simply considering unit vectors \(\vec{\sigma}_{i}\) in \(D\) dimensions, rotating on the surface of the corresponding \((D-1)\)-dimensional unit sphere [37]. Particles are now represented by \(D-1\) spherical angles, generalizing the single phase \(\theta_{i}\) of the original model. The matrices \({\bf W}_{i}\) become \(D\times D\) anti-symmetric matrices containing the \(D(D-1)/2\) natural frequencies of each oscillator. Finally, the \(D\)-dimensional model is further extended by replacing the coupling constant \(k\) by a coupling matrix \({\bf K}\) acting on the vectors [21; 22; 38]:
\[\frac{d\vec{\sigma_{i}}}{dt}={\bf W}_{i}\vec{\sigma_{i}}+\frac{1}{N}\sum_{j}[{ \bf K}\vec{\sigma_{j}}-(\vec{\sigma_{i}}\cdot{\bf K}\vec{\sigma_{j}})\vec{ \sigma_{i}}]. \tag{6}\]
Using Eq.(5) and defining the _rotated_ order parameter
\[\vec{q}={\bf K}\vec{p} \tag{7}\]
we obtain the compact equation
\[\frac{d\vec{\sigma_{i}}}{dt}={\bf W}_{i}\vec{\sigma_{i}}+[\vec{q}-(\vec{ \sigma_{i}}\cdot\vec{q})\vec{\sigma_{i}}]. \tag{8}\]
The coupling matrix breaks the rotational symmetry and plays the role of a generalized frustration: it rotates \(\vec{\sigma}_{j}\), hindering its alignment with \(\vec{\sigma}_{i}\) and inhibiting synchronization. The angle of rotation depends on \(\sigma_{j}\), generalizing the constant frustration angle of the Sakaguchi model [19]. Norm conservation, \(|\vec{\sigma_{i}}|=1\), is guaranteed, as can be seen by taking the scalar product of Eqs.(6) with \(\vec{\sigma_{i}}\). Similar extensions of the Kuramoto model with symmetry breaking and higher dimensions were also considered in refs. [39; 40; 41; 42; 43].
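Eq. (8) also suggests a direct numerical scheme. The sketch below (our choice of an Euler step followed by renormalisation, not the integrator used in the paper) advances a population of \(N\) unit vectors under Eqs. (6)-(8).

```python
import numpy as np

# Minimal sketch of a direct simulation of Eqs. (6)-(8): Euler steps
# followed by renormalisation to keep each sigma_i on the unit sphere.
def simulate(K, W, sigma, dt=0.01, steps=5000):
    """K: (D, D) coupling; W: (N, D, D) antisymmetric frequency matrices;
    sigma: (N, D) array of unit vectors."""
    for _ in range(steps):
        p = sigma.mean(axis=0)                     # order-parameter vector
        q = K @ p                                  # rotated order parameter
        drift = np.einsum('nij,nj->ni', W, sigma)  # W_i sigma_i
        proj = (sigma @ q)[:, None] * sigma        # (sigma_i . q) sigma_i
        sigma = sigma + dt * (drift + q - proj)
        sigma /= np.linalg.norm(sigma, axis=1, keepdims=True)
    return sigma, np.linalg.norm(sigma.mean(axis=0))  # final p = |vec p|
```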
For the case of identical oscillators with zero natural frequencies, the eigenvalues and eigenvectors of the coupling matrix completely determine the dynamics [22]. Complete synchronization occurs if the real part of the dominant eigenvalue is positive. If the corresponding eigenvector is real the order parameter converges to the direction of the eigenvector (static sync). If it is complex, the order parameter rotates in the plane defined by the real and imaginary parts of the corresponding eigenvector. However, for non-identical oscillators, the behavior changes considerably, depending on the distribution of natural frequencies.
Not only the center \(\omega_{0}\) and width \(\Delta\) of the distribution matter (since rotational symmetry is broken) but also the type of distribution. Here we construct phase diagrams in the \(\omega_{0}\times\Delta\) plane for different distributions and dimensions 2, 3 and 4, exploring the effects of these parameters in the dynamics of the extended Kuramoto model.
## II Representations in 2, 3 and 4 dimensions
In \(D=2\) the coupling matrix \(\mathbf{K}\) can be conveniently written as a sum of symmetric and anti-symmetric parts
\[\mathbf{K}=K\left(\begin{array}{cc}\cos\alpha&\sin\alpha\\ -\sin\alpha&\cos\alpha\end{array}\right)+J\left(\begin{array}{cc}-\cos \beta&\sin\beta\\ \sin\beta&\cos\beta\end{array}\right)\equiv\mathbf{K}_{R}+\mathbf{K}_{S} \tag{9}\]
where \(\mathbf{K}_{R}\) is \(K\) times a rotation matrix. In this case the equations of motion can still be written in terms of a single phase and read
\[\dot{\theta}_{i}=\omega_{i}+\frac{1}{N}\sum_{j=1}^{N}\left[K\sin(\theta_{j}- \theta_{i}-\alpha)+J\sin(\theta_{j}+\theta_{i}+\beta)\right]. \tag{10}\]
For \(J=0\) the system reduces to the Kuramoto-Sakaguchi model, but for \(J\neq 0\) new _active_ states are obtained [21]. We review the main properties of the 2D system in the next section.
As the coupling matrix has \(D^{2}\) independent real entries, the model equations are hard to handle explicitly if \(D\geq 3\). For identical oscillators, \(\mathbf{W}_{i}=0\), the dynamics is completely determined by the eigenvalues and eigenvectors of \(\mathbf{K}\)[22], but for general distributions of natural frequencies the dynamics changes considerably, and so does the phase diagram of the system in the space of parameters. In order to simplify matters, we choose to work with particular forms of the coupling matrices that make it easy to identify the leading eigenvectors and, therefore, to predict the behavior of the system in the limit where the width of the distribution of natural frequencies goes to zero.
For \(D=3\) we set
\[\mathbf{K}=\left(\begin{array}{ccc}a\cos\alpha&a\sin\alpha&0\\ -a\sin\alpha&a\cos\alpha&0\\ 0&0&b\end{array}\right). \tag{11}\]
The eigenvalues are \(\lambda_{\pm}=ae^{\pm i\alpha}\), with eigenvectors \((1,\pm i,0)/\sqrt{2}\), and \(\lambda_{3}=b\), with eigenvector \((0,0,1)\). The matrices of natural frequencies have three components each and are given
by
\[{\bf W}_{i}=\left(\begin{array}{ccc}0&-\omega_{3i}&\omega_{2i}\\ \omega_{3i}&0&-\omega_{1i}\\ -\omega_{2i}&\omega_{1i}&0\end{array}\right). \tag{12}\]
In this case it is possible to associate \({\bf W}_{i}\) to the vector
\[\vec{\omega}_{i}^{T}=\omega_{i}(\sin\beta_{i}\cos\alpha_{i},\sin\beta_{i}\sin \alpha_{i},\cos\beta_{i}), \tag{13}\]
where the superscript \(T\) stands for transpose. Clearly \({\bf W}_{i}\vec{\sigma}=\vec{\omega}_{i}\times\vec{\sigma}\).
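This identity is immediate to verify numerically; a two-line check (ours):

```python
import numpy as np

# Quick check of W_i sigma = omega_i x sigma in D = 3 for a random draw,
# with W built exactly as in Eq. (12) from omega = (w1, w2, w3).
w = np.random.randn(3)
W = np.array([[0., -w[2], w[1]], [w[2], 0., -w[0]], [-w[1], w[0], 0.]])
s = np.random.randn(3)
assert np.allclose(W @ s, np.cross(w, s))
```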
For \(D=4\) we choose the coupling matrix as
\[{\bf K}=\left(\begin{array}{cccc}a_{1}\cos\alpha&a_{1}\sin\alpha&0&0\\ -a_{2}\sin\alpha&a_{2}\cos\alpha&0&0\\ 0&0&b\cos\beta&b\sin\beta\\ 0&0&-b\sin\beta&b\cos\beta\end{array}\right), \tag{14}\]
representing a rotation in the lower block and another rotation (if \(a_{1}=a_{2}\)) or two real eigenvectors (if \(\alpha=0\) and \(a_{1}\neq a_{2}\)) in the upper block. We note that any real coupling matrix could be used and that only the eigenvalues and eigenvectors matter for the asymptotic behavior of the system in the case of identical oscillators. This choice is only to facilitate the determination of the eigenvectors. The matrices \({\bf W}_{i}\) can be parametrized as [44]
\[{\bf W}_{i}=\left(\begin{array}{cccc}0&-\omega_{6i}&\omega_{5i}&-\omega_{4i }\\ \omega_{6i}&0&-\omega_{3i}&\omega_{2i}\\ -\omega_{5i}&\omega_{3i}&0&-\omega_{1i}\\ \omega_{4i}&-\omega_{2i}&\omega_{1i}&0\end{array}\right) \tag{15}\]
and have six independent entries.
## III Exact results for \(D=2\)
### Dimensional reduction
In two dimensions we can use the dimensional reduction approach of Ott and Antonsen for Lorentzian distributions of natural frequencies [45]. In this case one can derive differential equations for the modulus and phase of the order parameter [21] and we will use these
equations to construct the full 2D phase diagram analytically. Taking the limit \(N\rightarrow\infty\) we define the function \(f(\omega,\theta,t)\) describing the density of oscillators with natural frequency \(\omega\) at position \(\theta\) in time \(t\). Since the total number of oscillators is conserved, \(f\) satisfies a continuity equation with velocity field
\[\vec{v}=\textbf{W}\vec{\sigma}+\vec{q}-(\vec{\sigma}\cdot\vec{q})\vec{\sigma}=[ \omega+q\sin(\xi-\theta)]\hat{\theta}\equiv v_{\theta}\hat{\theta} \tag{16}\]
where \(\vec{\sigma}=(\cos\theta,\sin\theta)\), \(\vec{q}\equiv q(\cos\xi,\sin\xi)\) and \(\hat{\theta}=(-\sin\theta,\cos\theta)\). The continuity equation reads
\[\frac{\partial f}{\partial t}+\frac{\partial(v_{\theta}f)}{\partial\theta}=0. \tag{17}\]
The Ott-Antonsen ansatz consists in expanding \(f\) in a Fourier series and choosing the coefficients so that all of them depend on a single complex parameter \(\nu(t)\) as
\[f(\omega,\theta,t)=\frac{g(\omega)}{2\pi}\left[1+\sum_{m=1}^{\infty}\nu^{m}e^ {-im\theta}+\sum_{m=1}^{\infty}\nu^{*m}e^{im\theta}\right]. \tag{18}\]
Inserting (18) and (16) into (17) we obtain the following differential equation for \(\nu(t)\):
\[\dot{\nu}=i\omega\nu-\frac{1}{2}u^{*}\nu^{2}+\frac{1}{2}u \tag{19}\]
where we defined the complex number \(u=qe^{i\xi}\). Setting \(z=pe^{i\psi}\) and using the definition \(\vec{q}=\textbf{K}\vec{p}\) (see Eq.(7)), we find
\[u=Kze^{-i\alpha}-Jz^{*}e^{-i\beta}. \tag{20}\]
The last step is to relate the ansatz parameter \(\nu\) with the complex order parameter \(z\). To do that we note that in the limit of infinitely many oscillators, Eq.(2) becomes
\[z=\int f(\omega,\theta,t)e^{i\theta}d\theta d\omega. \tag{21}\]
Using the Fourier series (18) we see that only the term proportional to \(e^{-i\theta}\) contributes to the integral and we are left with
\[z=\int g(\omega)\nu(\omega)d\omega. \tag{22}\]
This equation can be integrated exactly for the Lorentzian distribution [45]
\[g(\omega)=\frac{\Delta}{\pi}\frac{1}{(\omega-\omega_{0})^{2}+\Delta^{2}} \tag{23}\]
In this case we can write \(\nu=\rho e^{i\Phi}\) and we obtain
\[z=\frac{\Delta}{\pi}\int\frac{\rho e^{i\Phi}}{(\omega-\omega_{0}+i\Delta)( \omega-\omega_{0}-i\Delta)}d\omega. \tag{24}\]
The integral can now be performed in the complex \(\omega\)-plane using a closed path from \(-R\) to \(+R\) over the real line and closing back from \(R\) to \(-R\) with a half circle in the positive complex half plane, taking \(R\rightarrow\infty\). From Eq.(19) we see that \(\Phi\) should be proportional to \(\omega\) for large \(\omega\), implying that the integral over the half circle goes to zero. Using the residue theorem at the pole \(\omega=\omega_{0}+i\Delta\) we obtain
\[z=\nu(\omega_{0}+i\Delta). \tag{25}\]
Calculating Eq.(19) at \(\omega_{0}+i\Delta\) allows us to replace \(\nu\) by \(z\), resulting in
\[\dot{z}=i(\omega_{0}+i\Delta)z-\frac{1}{2}(Kz^{*}e^{i\alpha}-Jze^{i\beta})z^{2 }+\frac{1}{2}(Kze^{-i\alpha}-Jz^{*}e^{-i\beta}). \tag{26}\]
Finally, separating real and imaginary parts we obtain equations for the module and phase of the order parameter \(z=pe^{i\psi}\):
\[\dot{p}=-p\Delta+\frac{p}{2}(1-p^{2})[K\cos\alpha-J\cos\theta] \tag{27}\]
and
\[\dot{\theta}=2\omega_{0}-(1+p^{2})[K\sin\alpha-J\sin\theta] \tag{28}\]
where we have defined \(\theta=2\psi+\beta\).
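For illustration, the reduced system (27)-(28) can be integrated directly; the sketch below (ours) uses parameter values chosen only for demonstration, for which the trajectory converges to a static sync state, and recovers \(\psi\) from \(\theta=2\psi+\beta\).

```python
import numpy as np
from scipy.integrate import solve_ivp

K, J, alpha, beta = 2.5, 0.5, 0.5, 0.0      # illustrative values (ours)
omega0, Delta = 0.8, 0.2

def rhs(t, y):
    p, theta = y
    dp = -p * Delta + 0.5 * p * (1 - p**2) * (K * np.cos(alpha) - J * np.cos(theta))
    dtheta = 2 * omega0 - (1 + p**2) * (K * np.sin(alpha) - J * np.sin(theta))
    return [dp, dtheta]

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.0], max_step=0.1)
p_inf, theta_inf = sol.y[0, -1], sol.y[1, -1]
psi_inf = (theta_inf - beta) / 2            # theta = 2*psi + beta
print(p_inf, theta_inf, psi_inf)
```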
### Phase Diagram
Non-trivial stationary solutions of Eqs.(27) and (28) are given by
\[p=\sqrt{1-\frac{2\Delta}{K\cos\alpha-J\cos\theta}} \tag{29}\]
and
\[\sin\theta=-\frac{a}{b} \tag{30}\]
where \(a=2\omega_{0}-(1+p^{2})K\sin\alpha\) and \(b=J(1+p^{2})\). These solutions are real provided
\[2\Delta<K\cos\alpha-J\cos\theta \tag{31}\]
and \(|a|<|b|\), or
\[\frac{1}{2}(1+p^{2})(K\sin\alpha-|J|)<\omega_{0}<\frac{1}{2}(1+p^{2})(K\sin\alpha +|J|) \tag{32}\]
Therefore, to find \(p\) and \(\psi\) for each pair \((\omega_{0},\Delta)\) we need to solve Eqs.(29) and (30) and check that conditions (31) and (32) are satisfied.
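This procedure is easy to automate. A hedged sketch (function name ours; only the principal branch of Eq. (30) is followed, so it should be taken as illustrative rather than exhaustive) that reproduces the asymptotic state found by the direct integration above:

```python
import numpy as np

def stationary_state(K, J, alpha, omega0, Delta, iters=500):
    """Iterate Eqs. (29)-(30); return None when (31) or (32) fails."""
    theta = 0.0
    for _ in range(iters):
        denom = K * np.cos(alpha) - J * np.cos(theta)
        if 2 * Delta >= denom:                      # condition (31)
            return None
        p = np.sqrt(1 - 2 * Delta / denom)          # Eq. (29)
        a = 2 * omega0 - (1 + p**2) * K * np.sin(alpha)
        b = J * (1 + p**2)
        if abs(a) >= abs(b):                        # condition (32)
            return None
        theta = np.arcsin(-a / b)                   # Eq. (30), principal branch
    return p, theta

print(stationary_state(K=2.5, J=0.5, alpha=0.5, omega0=0.8, Delta=0.2))
```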
The trivial (asynchronous) state \(p=0\) is always a solution of Eq.(27). Although the phase of \(z\) for \(p=0\) is mostly irrelevant, it plays a role in the analysis of its stability. At \(p=0\) the linearized version of Eq.(27) is
\[\delta\dot{p}=\left(-\Delta+\frac{K\cos\alpha}{2}-\frac{J\cos\theta}{2}\right) \delta p \tag{33}\]
whose solution is
\[\delta p(t)=\exp\bigg{\{}-\left(\Delta-\frac{K\cos\alpha}{2}\right)t-\frac{J} {2}\int_{0}^{t}\cos\theta(t^{\prime})dt^{\prime}\bigg{\}}. \tag{34}\]
Therefore, if \(\theta\) oscillates (for \(\omega_{0}\) outside the interval in Eq.(32)), \(p=0\) is stable for \(\Delta>K\cos\alpha/2\equiv\Delta_{0}\). When \(\theta\) converges to a constant, stability requires \(\Delta>K\cos\alpha/2-J\cos\theta/2\). Since the linearized equation for \(\theta\) at \(p=0\) is \(\delta\dot{\theta}=J\cos\theta\,\delta\theta\), for \(J>0\) the trivial solution is stable if \(\cos\theta<0\) (\(\pi/2<\theta<3\pi/2\)) and for \(J<0\) if \(\cos\theta>0\) (\(-\pi/2<\theta<\pi/2\)).
For \(\Delta>\Delta_{0}\) the line separating the async from the static sync region is given in parametric form by \((\omega(\theta),\Delta(\theta))\) with \(\omega(\theta)=(K\sin\alpha-|J|\sin\theta)/2\), \(\Delta(\theta)=(K\cos\alpha-|J|\cos\theta)/2\), for \(\theta\in(\pi/2,3\pi/2)\).
For \(\Delta<\Delta_{0}\) the solution \(p=0\) is unstable and \(p\) either converges to a constant (static solutions) or it oscillates (active solutions). The boundary between the two kinds of behavior is obtained by setting \(\sin\theta=\pm 1\) and \(\cos\theta=0\). From Eq.(29) we find
\[\frac{1+p^{2}}{2}=1-\frac{\Delta}{K\cos\alpha}=1-\frac{\Delta}{2\Delta_{0}}. \tag{35}\]
Setting \(a=b\) we obtain
\[\frac{1+p^{2}}{2}=\frac{\omega_{0}}{K\sin\alpha+|J|}=\frac{\omega_{0}}{2\omega _{max}} \tag{36}\]
where \(\omega_{max}\equiv(K\sin\alpha+|J|)/2\). Equating these two relations gives the boundary curve
\[\omega_{0}(\Delta)=\omega_{max}(2-\Delta/\Delta_{0}). \tag{37}\]
Setting \(a=-b\) gives the other boundary curve
\[\omega_{0}(\Delta)=\omega_{min}(2-\Delta/\Delta_{0}). \tag{38}\]
where \(\omega_{min}\equiv(K\sin\alpha-|J|)/2\). The value of \(p\) along the curve is given by Eq.(29) with \(\cos\theta=0\), and it approaches \(1\) as \(\Delta\to 0\). The phase is \(\psi=3\pi/4-\beta/2\) on the upper curve and \(\psi=\pi/4-\beta/2\) on the lower curve (from \(\theta=2\psi+\beta\)). Between these two curves the solution is static and outside this interval the phase \(\theta\), and therefore \(\psi\), oscillates, leading to an oscillation of \(p(t)\), corresponding to active states.
For \(J=0\) the static sync region collapses and the active regions become regions of rotation, where the module of \(p\) remains constant whereas the phase \(\psi\) rotates with angular velocity \(\omega_{0}-K\sin\alpha+\Delta\tan\alpha\). A line of static sync appears for \(\omega_{0}=K\sin\alpha-\Delta\tan\alpha\). The full diagram is illustrated in Figure 1.
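The boundary curves of Fig. 1 follow directly from the formulas above and can be tabulated as in the sketch below (parameter values ours); plotting `(omega_line, Delta_line)` together with the two straight lines reproduces the skeleton of the diagram.

```python
import numpy as np

K, J, alpha = 2.5, 0.5, 0.5                   # illustrative values (ours)
Delta0 = K * np.cos(alpha) / 2
w_max = (K * np.sin(alpha) + abs(J)) / 2
w_min = (K * np.sin(alpha) - abs(J)) / 2

# async/static-sync boundary, parametrized by theta in (pi/2, 3pi/2)
theta = np.linspace(np.pi / 2, 3 * np.pi / 2, 200)
omega_line = (K * np.sin(alpha) - abs(J) * np.sin(theta)) / 2
Delta_line = (K * np.cos(alpha) - abs(J) * np.cos(theta)) / 2

# static/active boundaries for Delta < Delta0, Eqs. (37) and (38)
Delta = np.linspace(0.0, Delta0, 200)
omega_upper = w_max * (2 - Delta / Delta0)    # Eq. (37)
omega_lower = w_min * (2 - Delta / Delta0)    # Eq. (38)
```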
## IV Simulations
In this section we present simulations of the generalized Kuramoto model for \(D=2\), \(3\) and \(4\) for the coupling matrices described in section II. For each coupling matrix and distribution of natural frequencies we integrate the system for different values of \(\omega_{0}\) and \(\Delta\). In order to characterize the asymptotic state of the system we computed the following
quantities: (i) \(\langle p\rangle\), the time average of the module of the order parameter; (ii) \(\delta p\), the mean square deviation around the average; and (iii) \(\delta_{max}\), the maximum of the mean square deviations of the components of \(\vec{p}\). The first quantity informs about the degree of synchronization whereas the second indicates if \(p\) is constant or displays oscillatory motion (active state). Finally, \(\delta_{max}\) indicates if \(\vec{p}\) is rotating (\(\delta_{max}>0\)) or not (\(\delta_{max}=0\)).
Capital letters in the phase diagrams indicate the asymptotic state of the system as follows:
D - disordered (not synchronized) - \(\langle p\rangle\approx 0\).
S - static sync - \(\langle p\rangle>0\), \(\delta p\approx 0\), \(\delta_{max}\approx 0\).
R - rotation - \(\langle p\rangle>0\), \(\delta p\approx 0\), \(\delta_{max}>0\).
A - active sync - \(\langle p\rangle>0\), \(\delta p>0\), \(\delta_{max}>0\).
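The labels above translate into a simple classification rule. The sketch below (ours) assumes `p_traj` holds the order-parameter vector over the last part of the run, one row per time step; the threshold `eps` is our choice and not a value from the paper.

```python
import numpy as np

def classify(p_traj, eps=0.05):
    """Map diagnostics of an order-parameter trajectory to a phase label."""
    p = np.linalg.norm(p_traj, axis=1)
    p_mean = p.mean()                         # <p>
    dp = p.std()                              # delta p
    dmax = p_traj.std(axis=0).max()           # delta_max
    if p_mean < eps:
        return "D"                            # disordered
    if dp < eps:
        return "S" if dmax < eps else "R"     # static sync vs rotation
    return "A"                                # active sync
```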
In all simulations we set \(N=5000\) oscillators for a total integration time of \(t=2000\), using the last \(500\) units of time to compute the asymptotic results. The only exception is the case of the Lorentz distribution, for which we used \(N=200\), \(t=1000\) and the last \(250\) units of time for averages, due to the very slow convergence of the system. Integration of the equations of motion was performed with a \(4\)th order Runge-Kutta algorithm with precision parameter \(10^{-6}\). Convergence of the results was also checked against the method proposed in [46]. Initial conditions for the oscillators were chosen randomly on the sphere. The parameters \(\omega_{0}\) and \(\Delta\) in heat maps were varied from \(0\) to \(2\) in steps of \(0.05\).
### Phase diagrams in \(D=2\)
Although in \(D=2\) we have the exact phase diagrams for the Lorentz distribution, simulations are important for two reasons: first, we don't have the diagrams for other distributions; second, since we need to automate the simulations for other distributions and higher dimensions, we need to make sure we can extract the different phases from the simulated diagrams. Therefore, we first simulate the diagrams for the Lorentz distribution
\[g_{L}(\omega)=\frac{\Delta}{\pi}\frac{1}{(\omega-\omega_{0})^{2}+\Delta^{2}}; \tag{39}\]
and see how they compare with the exact solutions. We also simulated the equations for the Gaussian
\[g_{G}(\omega)=\frac{1}{\sqrt{2\pi\Delta^{2}}}e^{-\frac{(\omega-\omega_{0})^{2 }}{2\Delta^{2}}}; \tag{40}\]
and uniform distributions,
\[g_{U}(\omega)=\left\{\begin{array}{ll}\frac{1}{\Delta}&\quad\mbox{if }\omega_{0}-\frac{\Delta}{2}\leq\omega\leq\omega_{0}+\frac{\Delta}{2},\\ 0&\quad\mbox{otherwise.}\end{array}\right. \tag{41}\]
Note that \(\Delta\) is a measure of the width of the distributions, but has a different meaning in each case. For the Gaussian distribution \(\Delta^{2}\) is the variance; for the uniform distribution the variance is \(\Delta^{2}/12\), whereas the Lorentz distribution has infinite variance.
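Sampling from (39)-(41) is straightforward; in the sketch below (function name ours) the Lorentzian is drawn by inverting its cumulative distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_omega(dist, omega0, Delta, N):
    if dist == "lorentz":
        # inverse CDF of the Lorentzian (39); note the heavy tails
        return omega0 + Delta * np.tan(np.pi * (rng.random(N) - 0.5))
    if dist == "gauss":
        return rng.normal(omega0, Delta, N)                            # Eq. (40)
    if dist == "uniform":
        return rng.uniform(omega0 - Delta / 2, omega0 + Delta / 2, N)  # Eq. (41)
    raise ValueError(dist)
```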
For each distribution we simulated the dynamics with three coupling matrices (see Eq.(9)):
\[{\bf K}_{S}=\left(\begin{array}{cc}2.5&0\\ 0&0.8\end{array}\right) \tag{42}\]
\[{\bf K}_{R}=2.5\left(\begin{array}{cc}\cos 0.5&\sin 0.5\\ -\sin 0.5&\cos 0.5\end{array}\right) \tag{43}\]
and
\[{\bf K}_{A}={\bf K}_{R}+\left(\begin{array}{cc}0.2&0.1\\ 0&0\end{array}\right). \tag{44}\]
These choices correspond to coupling matrices with real eigenvalues, leading to static states for \(\Delta=\omega_{0}=0\) (\({\bf K}_{S}\)), purely complex eigenvalues, leading to rotations as in the Kuramoto-Sakaguchi model (\({\bf K}_{R}\)) and complex eigenvalues corresponding to active states (\({\bf K}_{A}\)).
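For reference, the three matrices and their spectra can be generated as in the following small sketch (ours); the printed eigenvalues are what determines the \(\Delta=\omega_{0}=0\) behavior.

```python
import numpy as np

def rot(a, alpha):
    """Scaled 2D rotation block, as in Eq. (43)."""
    return a * np.array([[np.cos(alpha), np.sin(alpha)],
                         [-np.sin(alpha), np.cos(alpha)]])

K_S = np.diag([2.5, 0.8])                          # Eq. (42): real eigenvalues
K_R = rot(2.5, 0.5)                                # Eq. (43): complex pair
K_A = K_R + np.array([[0.2, 0.1], [0.0, 0.0]])     # Eq. (44): active states

for K in (K_S, K_R, K_A):
    print(np.linalg.eigvals(K))
```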
Fig. 2 displays results for the Lorentz distribution, which we simulate only for \({\bf K}_{S}\) (top panels) and \({\bf K}_{R}\) (lower panels), as our intention is to validate the numerical results by comparing them with the exact diagrams. Continuous black lines show the boundary curves as in Fig.1 and they divide the plane into three (top) or two (bottom) regions. In both cases the rightmost region corresponds to non-synchronized states. The top panels show static synchronization, with \(\vec{p}\) constant, in the lower left corner, as can be seen by the low values of both \(\delta p\) and \(\delta_{max}\). The upper left region, on the other hand, shows active states, with \(\langle p\rangle\) constant but rotating and oscillating \(\vec{p}(t)\), as indicated by significant values of \(\delta p\) and large values of \(\delta_{max}\). For the bottom panels, corresponding to the Kuramoto-Sakaguchi model, \(\vec{p}\) rotates but keeps its module constant (\(\delta p\approx 0\)).
Fig. 3 shows similar plots for the Gaussian distribution, Eq.(40). The top two panels are qualitatively similar to the panels in Fig.2, although not identical: critical transition values of \(\Delta\) and \(\omega_{0}\) are different and transitions are much sharper, as natural frequencies far from
\(\omega_{0}\) are much less likely to be sampled. Interestingly, for the case of \({\bf K}_{A}\), the line of fixed \(\vec{p}\) immersed in the rotating zone (see Fig.1(b)) is enlarged into an area (red stripe in the \(\delta_{max}\) panel), showing the great sensitivity of the phase diagram to the coupling matrix.
Finally, Fig. 4 shows results for the uniform distribution, Eq.(41), and the same coupling matrices, Eqs.(42)-(44). Except for the case of \({\bf K}_{R}\) (middle row), which is qualitatively similar to the cases of the Lorentz and Gaussian distributions, new regions develop in this case. For \({\bf K}_{S}\) part of the region corresponding to non-synchronized states for Gaussian and Lorentz distributions becomes partially synchronized with static states (first row) and for \({\bf K}_{A}\) the active states near the line of static states develop larger oscillations (yellow area in the middle plot in the third row). Enlargement of the line of static sync states is also observed in this case.
Figure 2: Heat maps in the \(\omega_{0}\)-\(\Delta\) plane for the Lorentz distribution. Panels show results for coupling matrices \({\bf K}_{S}\) (first line) and \({\bf K}_{R}\) (second line). Along each line of plots, panels show the average value of the order parameter \(\langle p\rangle\), its mean square deviation \(\delta p\) and \(\delta_{max}\). Black lines show the theoretical results (see Fig.1).
### Phase diagrams in \(D=3\)
To explore the phase diagrams in 3D we set the coupling matrix as in Eq.(11) with \(a=1\) and \(\alpha=0.5\) and consider two values of the parameter \(b\). For \(b=0.5\) we define
\[\mathbf{K}_{3R}=\left(\begin{array}{ccc}\cos 0.5&\sin 0.5&0\\ -\sin 0.5&\cos 0.5&0\\ 0&0&0.5\end{array}\right) \tag{45}\]
Figure 3: Heat maps in the \(\omega_{0}\)-\(\Delta\) plane for the Gaussian distribution. Panels show results for coupling matrices \(\mathbf{K}_{S}\) (first line), \(\mathbf{K}_{R}\) (second line) and \(\mathbf{K}_{A}\) (third line). Each line shows the average value of the order parameter \(\langle p\rangle\), its mean square deviation \(\delta p\) and \(\delta_{max}\).
and for \(b=1\)
\[{\bf K}_{3S}=\left(\begin{array}{ccc}\cos 0.5&\sin 0.5&0\\ -\sin 0.5&\cos 0.5&0\\ 0&0&1\end{array}\right). \tag{46}\]
In the first case the dominant eigenvalues have complex eigenvectors in the \(\hat{e}_{1}\)-\(\hat{e}_{2}\) plane, corresponding to rotations for \(\Delta=\omega_{0}=0\). In the second case the dominant eigenvector is real in the \(\hat{e}_{3}\) direction, leading to static synchronization. For each case we use three different types of natural frequency distributions, described either in terms of the matrix \({\bf W}_{i}\) entries
Figure 4: Heat maps in the \(\omega_{0}\)-\(\Delta\) plane for the uniform distribution. Panels show results for coupling matrices \({\bf K}_{S}\) (first line), \({\bf K}_{R}\) (second line) and \({\bf K}_{A}\) (third line). Plots show the average value of the order parameter \(\langle p\rangle\), its mean square deviation \(\delta p\) and \(\delta_{max}\).
as in Eq.(12) or in terms of the associated vector \(\vec{\omega}_{i}\), Eq.(13), as follows:
- \(g_{ang}(\vec{\omega})\); for each oscillator a vector \(\vec{\omega}_{i}\) is sampled with uniform distribution of angles \(\alpha_{i}\) and \(\beta_{i}\) and Gaussian distribution of module \(\omega_{i}\) centered at \(\omega_{0}\) with width \(\Delta\).
- \(g_{gauss}(\vec{\omega})\); each entry of \({\bf W}_{i}\) is sampled from a Gaussian distribution centered at \(\omega_{0}\) with width \(\Delta\).
- \(g_{uni}(\vec{\omega})\); each entry of \({\bf W}_{i}\) is sampled from a uniform distribution centered at \(\omega_{0}\) with width \(\Delta\).
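The three ensembles above can be sampled as in the following sketch (helper names ours); for \(g_{ang}\) the angles of Eq. (13) are drawn uniformly, as stated in the text, and each row collects the entries \((\omega_{1i},\omega_{2i},\omega_{3i})\) of \({\bf W}_{i}\).

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_g_ang(omega0, Delta, N):
    """Uniform angles alpha_i, beta_i; Gaussian module centered at omega0."""
    a = rng.uniform(0, 2 * np.pi, N)
    b = rng.uniform(0, np.pi, N)
    w = rng.normal(omega0, Delta, N)
    return np.column_stack([w * np.sin(b) * np.cos(a),
                            w * np.sin(b) * np.sin(a),
                            w * np.cos(b)])

def sample_g_gauss(omega0, Delta, N):
    return rng.normal(omega0, Delta, size=(N, 3))

def sample_g_uni(omega0, Delta, N):
    return rng.uniform(omega0 - Delta / 2, omega0 + Delta / 2, size=(N, 3))
```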
Figs. 5 to 7 show results of numerical simulations in each case. As in the 2D case we show the time-average value of the order parameter after the transient, \(\langle p\rangle\), its mean square deviation \(\delta p\), and \(\delta_{max}\), the maximum of the mean square deviations of the components of \(\vec{p}\), indicating if \(\vec{p}\) is rotating (\(\delta_{max}>0\)) or not (\(\delta_{max}=0\)). Capital letters indicate the asymptotic state of the system as in \(D=2\). In all cases, when \(\Delta\) and \(\omega_{0}\) are sufficiently
Figure 5: Heat maps in the \(\omega_{0}\)-\(\Delta\) plane for 3D and distribution \(g_{ang}(\vec{\omega})\). Panels show results for coupling matrices \({\bf K}\) as in Eq.(11) with \(a=1\), \(\alpha=0.5\). The value of \(b\) is 0.5 (first line) and 1.0 (second line). Plots show the average value of the order parameter, its mean square deviation and \(\delta_{max}\).
small the oscillators synchronize and rotate (case 1, first line in all figures) or converge to a static configuration (case 2, second line of figures). However, as \(\Delta\) and \(\omega_{0}\) increase, the behavior of the system depends significantly on the distribution of natural frequencies.
For \(g_{ang}(\vec{\omega})\), synchronization decreases as \(\Delta\) and \(\omega_{0}\) increase. For the coupling matrix \(\mathbf{K}_{3R}\) a phase transition from rotation (R) to static sync (S) and back to rotation (R) is observed as \(\Delta\) and \(\omega_{0}\) increase. A thin region of active states is also noted for small \(\Delta\) and large \(\omega_{0}\). For \(\mathbf{K}_{3S}\) the diagram is dominated by a large area of static sync (S), although a similar region of active states is observed, next to rotations (R). Notice that, since the direction of the vectors \(\vec{\omega}_{i}\) is uniformly sampled, the average value of these vectors is zero for \(g_{ang}\), independent of the value of \(\omega_{0}\). This makes the phases observed at \(\omega_{0}=\Delta=0\) extend over large regions of the diagram as compared to the other two distributions in Figs. 6 and 7.
The phase diagrams for \(g_{gauss}\) and \(g_{uni}\) are similar, but very different from that of \(g_{ang}\).
Figure 6: Heat maps in the \(\omega_{0}\)-\(\Delta\) plane for the 3D case. Here \(g(\vec{\omega})\) is given by Eq.(12) with all entries Gaussian distributed around \(\omega=1\). Panels show results for coupling matrices \(\mathbf{K}\) as in Eq.(11) with \(a=1\), \(\alpha=0.5\). The value of \(b\) is 0.5 (first line) and 1.0 (second line). Plots show the average value of the order parameter, its mean square deviation and \(\delta_{max}\).
Synchronization is much facilitated in these cases, as we don't see regions of disordered motion in this range of parameters. Moreover, states with nearly complete sync are possible even for large \(\Delta\) in the static phase. The phase of active states (A) is also much larger than in Fig. 5 and occurs for small values of \(\omega_{0}\) and large values of \(\Delta\). Pure rotations tend to be suppressed for \(g_{gauss}\), occurring in small regions for both \(\mathbf{K}_{3R}\) and \(\mathbf{K}_{3S}\), and in a larger region for \(g_{uni}\), especially for \(\mathbf{K}_{3S}\). Finally we note that the uniform distribution, Fig. 7, produces sharper transitions between the different phases.
### Phase diagrams in \(D=4\)
In this section we illustrate the phase diagrams in \(D=4\) with two instances of the coupling matrix Eq. (14). In both cases we fixed \(\alpha=0\), \(\beta=0.5\) and \(a_{2}=0.8\). Similar to the 3D system we choose the remaining parameters \(a_{1}\) and \(b\) as follows: first we set \(a_{1}=2.5\)
Figure 7: Heat maps in the \(\omega_{0}\)-\(\Delta\) plane for the 3D case. Here \(g(\vec{\omega})\) is given by Eq.(12) with all entries uniformly distributed around \(\omega=1\). Panels show results for coupling matrices \(\mathbf{K}\) as in Eq.(11) with \(a=1\), \(\alpha=0.5\). The value of \(b\) is \(0.5\) (first line) and \(1.0\) (second line). Plots show the average value of the order parameter, its mean square deviation and \(\delta_{max}\).
and \(b=0.5\), so that the leading eigenvalue of \(\mathbf{K}\) is real in the direction of \(\hat{e}_{1}\):
\[\mathbf{K}_{4S}=\left(\begin{array}{cccc}2.5&0&0&0\\ 0&0.8&0&0\\ 0&0&0.5\cos 0.5&0.5\sin 0.5\\ 0&0&-0.5\sin 0.5&0.5\cos 0.5\end{array}\right). \tag{47}\]
For the second case we set \(a_{1}=0.5\) and \(b=2.5\), with a complex leading eigenvalue in the \(\hat{e}_{3}\)-\(\hat{e}_{4}\) plane:
\[\mathbf{K}_{4R}=\left(\begin{array}{cccc}0.5&0&0&0\\ 0&0.8&0&0\\ 0&0&2.5\cos 0.5&2.5\sin 0.5\\ 0&0&-2.5\sin 0.5&2.5\cos 0.5\end{array}\right) \tag{48}\]
In \(D=4\) there are six independent entries for each matrix of natural frequencies \(\mathbf{W}_{i}\) and many ways to choose these values. For each choice of coupling matrix above we performed
Figure 8: Heat maps in the \(\omega_{0}\)-\(\Delta\) plane for the 4D case with distribution of natural frequencies \(g_{ang4}\). Panels show results for coupling matrices \(\mathbf{K}_{4S}\) (first line) and \(\mathbf{K}_{4R}\) (second line), as in Eqs.(47) and (48). Plots show the average value of the order parameter, its mean square deviation and \(\delta_{max}\).
simulations for the following three distributions:
- \(g_{ang4}\); in order to have a distribution similar to that used in Fig. 5, we grouped the six entries \(\omega_{ji}\) for each oscillator \(i\) into two vectors \(\vec{\xi}_{1}=(\omega_{1i},\omega_{2i},\omega_{3i})\) and \(\vec{\xi}_{2}=(\omega_{4i},\omega_{5i},\omega_{6i})\) and sampled \(\vec{\xi}_{1}\) and \(\vec{\xi}_{2}\) with uniform angular distribution and Gaussian distribution of modules \(\xi_{1}\) and \(\xi_{2}\), centered at \(\omega_{0}\).
- \(g_{gauss4}\); all entries are Gaussian distributed around \(\omega_{0}\).
- \(g_{uni4}\); all entries are uniformly distributed around \(\omega_{0}\).
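A sketch of the corresponding samplers, together with the assembly of \({\bf W}_{i}\) from its six entries according to Eq. (15), follows (helper names ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def w_matrix_4d(w):
    """Antisymmetric 4x4 matrix of Eq. (15) from w = (w1, ..., w6)."""
    w1, w2, w3, w4, w5, w6 = w
    return np.array([[0.0, -w6,  w5, -w4],
                     [w6,  0.0, -w3,  w2],
                     [-w5,  w3, 0.0, -w1],
                     [w4,  -w2,  w1, 0.0]])

def sample_vec3(omega0, Delta, N):
    """One 3-vector per oscillator: uniform angles, Gaussian module."""
    a = rng.uniform(0, 2 * np.pi, N)
    b = rng.uniform(0, np.pi, N)
    w = rng.normal(omega0, Delta, N)
    return np.column_stack([w * np.sin(b) * np.cos(a),
                            w * np.sin(b) * np.sin(a),
                            w * np.cos(b)])

def sample_g_ang4(omega0, Delta, N):
    """xi1 = (w1, w2, w3) and xi2 = (w4, w5, w6), as described above."""
    return np.hstack([sample_vec3(omega0, Delta, N),
                      sample_vec3(omega0, Delta, N)])

def sample_g_gauss4(omega0, Delta, N):
    return rng.normal(omega0, Delta, size=(N, 6))

def sample_g_uni4(omega0, Delta, N):
    return rng.uniform(omega0 - Delta / 2, omega0 + Delta / 2, size=(N, 6))
```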
Fig. 8 shows results for \(g_{ang4}\). Similar to the 3D case with \(g_{ang}\), the average of the natural frequencies is zero for all \(\omega_{0}\) and \(\Delta\), since the entries of \(\mathbf{W}_{i}\) are uniformly distributed in all directions, with only their average intensity controlled by \(\omega_{0}\). The basic phases S and R dominate much of the phase diagrams for \(\mathbf{K}_{4S}\) and \(\mathbf{K}_{4R}\) respectively.
For \(g_{gauss4}\) and \(g_{uni4}\) we obtain qualitatively similar diagrams (Figs. 9 and 10), where
Figure 9: Heat maps in the \(\omega_{0}\)-\(\Delta\) plane for the 4D case with distribution of natural frequencies \(g_{gauss4}\). Panels show results for coupling matrices \(\mathbf{K}_{4S}\) (first line) and \(\mathbf{K}_{4R}\) (second line), as in Eqs.(47) and (48). Plots show the average value of the order parameter, its mean square deviation and \(\delta_{max}\).
rotations are suppressed for \(\mathbf{K}_{4S}\) and active states are suppressed for \(\mathbf{K}_{4R}\). The phase diagrams are very different from their 3D counterparts exhibited in Figs. 6 and 7, but somewhat similar to the 2D case, Figs. 3 and 4, highlighting the role of dimensional parity (even or odd) in the dynamics and equilibrium properties of the model [37]. Synchronization happens only for limited values of \(\Delta\), especially for the Gaussian case.
## V Conclusions
The multidimensional Kuramoto model was proposed in [37] as a natural extension of the original system of coupled oscillators. In the extended model, oscillators are first reinterpreted as interacting particles moving on the unit circle. The system is then generalized allowing the particles to move on the surface of unit spheres embedded in D-dimensional spaces. For \(D=2\) the original model is recovered. The equations of motion of the multi
Figure 10: Heat maps in the \(\omega_{0}\)-\(\Delta\) plane for the 4D case with distribution of natural frequencies \(g_{uni4}\). Panels show results for coupling matrices \(\mathbf{K}_{4S}\) (first line) and \(\mathbf{K}_{4R}\) (second line), as in Eqs.(47) and (48). Plots show the average value of the order parameter, its mean square deviation and \(\delta_{max}\).
dimensional model, Eq. (3), are formally identical in any number of dimensions, provided they are written in terms of the unit vectors determining the particles' positions in the corresponding space.
The vector form of equations (3) describing the multidimensional model admits a further natural extension, where the coupling constant is replaced by a coupling matrix as in Eq. (6), breaking the rotational symmetry and promoting generalized frustration between the particles [21; 22]. For identical oscillators, when all natural frequencies are set to zero, the asymptotic dynamics is completely determined by the eigenvectors and eigenvalues of the coupling matrix \(\mathbf{K}\). If the leading eigenvalue is real and positive the order parameter \(\vec{p}\) converges to \(p=1\) in the direction of the eigenvector (static sync). If the leading eigenvalue is complex, \(\vec{p}\) rotates in the plane defined by the real and imaginary parts of the corresponding eigenvector, also with \(p=1\). These results hold for all dimensions \(D\).
Here we have shown that this simple behavior changes dramatically for non-identical oscillators. The asymptotic nature of the system depends strongly on the distribution of natural frequencies, on the coupling matrix and on the dimension \(D\). In order to simplify the calculations we parametrized the distributions of natural frequencies by only two quantities related to their average value, \(\omega_{0}\), and width \(\Delta\). Because the coupling matrix breaks the rotational symmetry, the magnitude of \(\omega_{0}\) plays a key role in the dynamics. We constructed phase diagrams in the \(\omega_{0}\times\Delta\) plane for different types of distributions and for dimensions \(2\), \(3\) and \(4\). In the case of \(D=2\) we computed the phase diagram analytically for the Lorentz distribution and numerically for the Gaussian and uniform distributions.
In \(D=3\) synchronization starts at \(p=0.5\) if the real part of the leading eigenvalue of \(\mathbf{K}\) is positive [37; 38] and the phase diagram exhibits all phases: static sync, rotation and active states. The size and disposition of each phase changes according to the coupling matrix and distribution \(g(\vec{\omega})\). All phase diagrams in \(D=3\) are remarkably different from their counterparts in \(D=2\). Finally, for \(D=4\) the phase diagrams have structures similar to their equivalents in \(D=2\), showing that the parity of \(D\) matters as in the case of diagonal coupling matrices [37].
As active states prevent full synchronization of the particles, knowledge of their location in parameter space is an important information. In general terms we can say that 3D systems are characterized by large regions of active states that appear for small values of \(\omega_{0}\) and large values of \(\Delta\). For even dimensional systems, on the other hand, active states
require large values of \(\omega_{0}\) and small values of \(\Delta\).
###### Acknowledgements.
This work was partly supported by FAPESP, grant 2021/14335-0 (ICTP-SAIFR) and CNPq, grant 301082/2019-7. |
2306.14656 | Discrete Bessel functions and discrete wave equation | In this paper, we study discrete Bessel functions which are solutions to the
discretization of Bessel differential equations when the forward and the
backward difference replace the time derivative. We focus on the discrete
Bessel equations with the backward difference and derive their solutions. We
then study the transformation properties of those functions, describe their
asymptotic behaviour and compute their Laplace transforms. As an application, we study
the discrete wave equation on the integers in timescale $T=\mathbb{Z}$ and
express its fundamental and general solution in terms of the discrete
$J$-Bessel function. Going further, we show that the first fundamental solution
of this equation oscillates with the exponentially decaying amplitude as time
tends to infinity. | Amar Bašić, Lejla Smajlović, Zenan Šabanac | 2023-06-26T12:49:12Z | http://arxiv.org/abs/2306.14656v1 | # Discrete Bessel functions and discrete wave equation
###### Abstract.
In this paper we study discrete Bessel functions which are solutions to the discretization of Bessel differential equations when the forward and the backward difference replace the time derivative. We focus on the discrete Bessel equations with the backward difference and derive their solutions. We then study transformation properties of those functions, describe their asymptotic behaviour and compute their Laplace transforms. As an application, we study the discrete wave equation on the integers in timescale \(T=\mathbb{Z}\) and express its fundamental and general solution in terms of the discrete \(J\)-Bessel function. Going further, we show that the first fundamental solution of this equation oscillates with exponentially decaying amplitude as time tends to infinity.
Key words and phrases:difference equation, discrete Bessel functions, asymptotic behaviour, discrete wave equation 2020 Mathematics Subject Classification: 39A12, 39A14, 39A22
## 1. Introduction and statement of results
The classical \(J\)-Bessel and \(I\)-Bessel functions of the first kind are important mathematical objects arising in different fields of mathematics and its applications. They are defined for \(z\in\mathbb{C}\) with \(|\arg z|<\pi\), and a complex index \(\nu\) by the absolutely convergent series
\[\mathcal{J}_{\nu}(z)=\left(\frac{z}{2}\right)^{\nu}\sum_{k=0}^{\infty}\frac{( -1)^{k}}{k!\Gamma(\nu+k+1)}\left(\frac{z}{2}\right)^{2k}, \tag{1.1}\]
and
\[\mathcal{I}_{\nu}(z)=\left(\frac{z}{2}\right)^{\nu}\sum_{k=0}^{\infty}\frac{1 }{k!\Gamma(\nu+k+1)}\left(\frac{z}{2}\right)^{2k}. \tag{1.2}\]
For a non-negative integer \(\nu\), the \(J\)-Bessel and \(I\)-Bessel functions are well defined by the series (1.1) and (1.2) for all complex arguments \(z\). When the index is a negative integer \(-n\), then \(\mathcal{J}_{-n}(z)=(-1)^{n}\mathcal{J}_{n}(z)\) and \(\mathcal{I}_{-n}(z)=\mathcal{I}_{n}(z)\), for all complex arguments \(z\).
For the purposes of this paper, it is important to notice that functions \(\mathcal{J}_{\nu}(z)\) and \(\mathcal{I}_{\nu}(z)\) are solutions to the Bessel differential equations
\[z^{2}\frac{d^{2}f}{dz^{2}}+z\frac{df}{dz}-(\nu^{2}-z^{2})f=0\quad\text{and}\quad z^{2}\frac{d^{2}f}{dz^{2}}+z\frac{df}{dz}-(\nu^{2}+z^{2})f=0, \tag{1.3}\]
respectively.
Moreover, the classical Bessel functions arise in solutions of diffusion and wave equations in different settings in which the solutions depend only on the distance between the spatial variables (the so-called radial dependence, see [3]). For example, the solution to the wave equation on homogeneous trees, derived in [15], is expressed in terms of the \(J\)-Bessel function. In contrast, the solution to the diffusion equation on any \(q\)-regular graph deduced in [12] is expressed in terms of the \(I\)-Bessel functions. (See also [32] for a more general diffusion equation and [30] for the wave equation.)
### Discretizations of Bessel functions
There exist many analogues and generalizations of Bessel functions. For example, the \(q-\)Bessel functions are well studied (for an excellent introduction, see the thesis [33]), and their further generalizations to discrete timescales are also developed and applied in [25], [29] or [36], to name a few.
The starting point of this paper is discretizations of differential equations (1.3) in which the forward or backward difference operator replaces the classical derivative. More precisely, we will study discretizations of Bessel functions that satisfy discrete analogues of the Bessel differential equations (1.3) when the timescale equals \(\mathbb{Z}\) and when the derivative is either the forward or the backward difference. Those functions will also arise as solutions of the diffusion/wave equation when the timescale is \(\mathbb{Z}\), the time derivative is the forward or the backward difference and the spatial variable belongs to \(\mathbb{Z}\).
The discretizations of equations (1.3), for any integer \(n\) and any nonzero complex parameter \(c\), are as follows:
* The forward difference equation \[t\left(t-1\right)\partial_{t}^{2}y\left(t-2\right)+t\partial_{t}y\left(t-1\right)\pm c^{2}t\left(t-1\right)y\left(t-2\right)-n^{2}y\left(t\right)=0, \tag{1.4}\]
* The backward difference equation \[t\left(t+1\right)\overline{\partial}_{t}^{2}y_{n}\left(t+2\right)+t\overline{\partial}_{t}y_{n}\left(t+1\right)\pm c^{2}t\left(t+1\right)y_{n}\left(t+2\right)-n^{2}y_{n}\left(t\right)=0. \tag{1.5}\]
Here \(\partial_{t}\) denotes the _forward difference operator_ which acts on functions \(g\) defined on \(\mathbb{Z}\) as
\[\partial_{t}g(t)=g(t+1)-g(t),\quad t\in\mathbb{Z}\]
while \(\overline{\partial}_{t}\) is the _backward difference operator_ acting as
\[\overline{\partial}_{t}g(t)=g(t)-g(t-1),\quad t\in\mathbb{Z}.\]
Note that difference equations (1.4) and (1.5) are different from equations in [9] where the discretized Laplace equation in spherical coordinates was studied; see also [23, example on p. 187] for a different type of discretization of equation (1.3).
The forward difference equation (1.4) was first studied in [4] with the plus sign in front of \(c^{2}\), and for \(c=1\). The detailed study of this equation for a general \(c\in\mathbb{C}\setminus\{0\}\) was carried out in [31], where it was proved that the discrete \(J\)-Bessel function
\[J_{n}^{c}\left(t\right)=\frac{\left(-c/2\right)^{n}\left(-t\right)_{n}}{n!}{} _{2}F_{1}\!\left(\frac{n-t}{2},\frac{n-t}{2}+\frac{1}{2};n+1;-c^{2}\right),\ t\in \mathbb{N}_{0},\ n\in\mathbb{N}_{0}, \tag{1.6}\]
and the discrete \(I\)-Bessel function
\[I_{n}^{c}\left(t\right)=\frac{\left(-c/2\right)^{n}\left(-t\right)_{n}}{n!}{}_{2 }F_{1}\!\left(\frac{n-t}{2},\frac{n-t}{2}+\frac{1}{2};n+1;c^{2}\right),\ t\in \mathbb{N}_{0},\ n\in\mathbb{N}_{0}, \tag{1.7}\]
are solutions to the forward difference equation (1.4), with the plus and minus sign, respectively. Here, \({}_{2}F_{1}\) is the Gauss hypergeometric function and \(\left(t\right)_{k}\) is the Pochhammer symbol, see equation (2.1) below. Given that the Gauss hypergeometric function \({}_{2}F_{1}(\alpha,\beta;\gamma;z)\) can be analytically continued into the complex \(z\)-plane cut along \(\left[1,\infty\right]\), the function \(J_{n}^{c}\left(t\right)\) is well-defined for \(t\in\mathbb{Z}_{<0}\) and \(c\in\mathbb{C}\setminus\{i\alpha:\,\alpha\in\mathbb{R},\,|\alpha|\geq 1\}\). Analogously, the function \(I_{n}^{c}\left(t\right)\) is well defined for \(t\in\mathbb{Z}_{<0}\) and \(c\in\mathbb{C}\setminus\{\alpha:\,\alpha\in\mathbb{R},\,|\alpha|\geq 1\}\).
The first main result of this paper is the following theorem which describes two solutions to the backward difference equation (1.5), thus providing definitions for two new discretizations of \(J\)-Bessel and \(I\)-Bessel functions, when the timescale is \(\mathbb{Z}\), and the delta derivative is the backward difference.
**Theorem 1.1**.: _Let \(c\in\mathbb{C}\setminus\{i\alpha:\,\alpha\in\mathbb{R},\,|\alpha|\geq 1\}\). Then, the function_
\[\overline{J}_{n}^{c}\left(t\right)=\frac{\left(c/2\right)^{n}\left(t\right)_{ n}}{n!}{}_{2}F_{1}\!\left(\frac{n+t}{2},\frac{n+t}{2}+\frac{1}{2};n+1;-c^{2} \right),\ t\in\mathbb{Z},\ n\in\mathbb{N}_{0}, \tag{1.8}\]
_is the solution to the backward difference equation (1.5), with the plus sign._
_For \(c\in\mathbb{C}\setminus\{\alpha:\,\alpha\in\mathbb{R},\,|\alpha|\geq 1\}\), the function_
\[\overline{I}_{n}^{c}\left(t\right)=\frac{\left(c/2\right)^{n}\left(t\right)_{ n}}{n!}{}_{2}F_{1}\!\left(\frac{n+t}{2},\frac{n+t}{2}+\frac{1}{2};n+1;c^{2} \right),\ t\in\mathbb{Z},\ n\in\mathbb{N}_{0}, \tag{1.9}\]
_is the solution to the backward difference equation (1.5), with the minus sign._
We will call functions \(J_{n}^{c}\) and \(I_{n}^{c}\)_forward_ discrete Bessel functions, while functions \(\overline{J}_{n}^{c}\) and \(\overline{I}_{n}^{c}\) will be called _backward_ discrete Bessel functions.
The complex parameter \(c\) in the above theorem is chosen so that the hypergeometric function \({}_{2}F_{1}(\alpha,\beta;\gamma;z)\) appearing in (1.8) and (1.9) is the principal branch of the analytic continuation of this function from the disc \(|z|<1\) to the complex \(z\)-plane cut from \(1\) to \(\infty\) along the real axis. Therefore, for any integer \(t\in\mathbb{Z}\) functions \(\overline{J}_{n}^{c}\left(t\right)\) and \(\overline{I}_{n}^{c}\left(t\right)\) are holomorphic functions of \(c\) in the given range. When \(t\in\mathbb{Z}_{<0}\), the hypergeometric series in (1.8) and (1.9) is a polynomial in \(c\), and hence holomorphic on the entire complex \(c\)-plane, see Proposition 3.1 below.
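Theorem 1.1 lends itself to a direct numerical verification. The following sketch (ours, not part of the paper) evaluates \(\overline{J}_{n}^{c}\) from (1.8) with mpmath and checks that the residual of equation (1.5), with the plus sign, vanishes to working precision for a sample of integers \(t\), including negative ones where the hypergeometric series terminates.

```python
import mpmath as mp
mp.mp.dps = 30

def Jbar(n, t, c):
    """Backward discrete J-Bessel function, Eq. (1.8)."""
    a = mp.mpf(n + t) / 2
    return (c / 2)**n * mp.rf(t, n) / mp.factorial(n) * \
        mp.hyp2f1(a, a + mp.mpf(1) / 2, n + 1, -c**2)

c, n = mp.mpf('0.7'), 3
y = lambda t: Jbar(n, t, c)
for t in range(-5, 6):
    d2 = y(t + 2) - 2 * y(t + 1) + y(t)     # backward second difference at t+2
    d1 = y(t + 1) - y(t)                    # backward difference at t+1
    res = t * (t + 1) * d2 + t * d1 + c**2 * t * (t + 1) * y(t + 2) - n**2 * y(t)
    assert abs(res) < mp.mpf('1e-18')
```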
The forward discrete \(J\)-Bessel and \(I\)-Bessel functions have been studied in [31], where many properties of those functions have been derived, including various transformation laws and analysis of sign changes for nonzero real \(c\) and positive integers \(t\). An expression as a polynomial in \(c\), a precise asymptotic behaviour
\[I_{n}^{c}\left(t\right)\sim\left(\text{sgn}(c)\right)^{n}\frac{\left(1+|c| \right)^{t+\frac{1}{2}}}{\sqrt{2\pi t\,|c|}},\,\text{as}\ t\to\infty, \tag{1.10}\]
of \(I_{n}^{c}(t)\), for \(c\in\mathbb{R}\setminus\{0\}\), and a generating function for \(I_{n}^{c}(t)\) were deduced in [10, Section 3]. A generating function for the backward discrete \(I\)-Bessel function \(\overline{I}_{n}^{c}\) was derived in [21] where some further properties of this function were established.
In this paper, we complete studies of analytic properties of the forward discrete \(J\)-Bessel function \(J_{n}^{c}\) and the backward discrete \(I\)-Bessel function \(\overline{I}_{n}^{c}\) and present a detailed study of properties of the backward discrete \(J\)-Bessel function \(\overline{J}_{n}^{c}\), which is introduced in this paper.
We prove that \(\overline{J}_{n}^{c}\) satisfies the properties analogous to the properties of the Bessel function \(\mathcal{J}_{n}(t)\), see Lemma 3.2 below. Then, we proceed with the study of the asymptotic behaviour of discrete Bessel functions as \(t\to\infty\), which is summarized in the following theorem.
**Theorem 1.2**.: _For any real, nonzero parameter \(c\) and a fixed \(n\in\mathbb{N}\), we have that_
\[J_{n}^{c}\left(t\right)\sim\left(\operatorname{sgn}(c)\right)^{n}\frac{\sqrt{ 2}}{\sqrt{\pi t\left|c\right|}}\left(1+c^{2}\right)^{\frac{t}{2}+\frac{1}{4}} \cos\left(\left(t+\frac{1}{2}\right)\theta-\frac{\pi}{4}+\frac{n\pi}{2}\right)\text {, as }t\to\infty, \tag{1.11}\]
\[\overline{J}_{n}^{c}\left(t\right)\sim\left(\operatorname{sgn}(c)\right)^{n} \frac{\sqrt{2}}{\sqrt{\pi t\left|c\right|}}\left(1+c^{2}\right)^{-\frac{t}{2} +\frac{1}{4}}\cos\left(\left(t-\frac{1}{2}\right)\theta-\frac{\pi}{4}+\frac{n \pi}{2}\right)\text{, as }t\to\infty, \tag{1.12}\]
_where \(\operatorname{sgn}(c)\) denotes the sign of \(c\) and \(\theta\in\left(0,\frac{\pi}{2}\right)\) is such that \(\cos\theta=\left(1+c^{2}\right)^{-\frac{1}{2}}\)._
_For any real, nonzero parameter \(c\) with \(\left|c\right|<1\) we have_
\[\overline{I}_{n}^{c}\left(t\right)\sim\left(\operatorname{sgn}(c)\right)^{n} \frac{\left(1-\left|c\right|\right)^{-t+\frac{1}{2}}}{\sqrt{2\pi t\left|c \right|}}\text{, as }t\to\infty, \tag{1.13}\]
Functions \(J_{n}^{c}\left(t\right)\) and \(I_{n}^{c}\left(t\right)\) are equal to zero when \(n>t\), however, \(\overline{J}_{n}^{c}\left(t\right)\) and \(\overline{I}_{n}^{c}\left(t\right)\) are nonzero when \(n>t\), hence it is of interest to deduce their asymptotic behaviour when \(t\) is fixed and \(n\to\infty\). This was carried out in Proposition 3.3 below.
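The rate in (1.12) is also easy to observe numerically. In the sketch below (ours; \(c\) and \(n\) are arbitrary choices) the ratio of \(\overline{J}_{n}^{c}(t)\) to the right-hand side of (1.12) is printed for increasing \(t\); it approaches \(1\), apart from values of \(t\) that fall near a zero of the cosine.

```python
import mpmath as mp
mp.mp.dps = 50

def Jbar(n, t, c):
    """Backward discrete J-Bessel function, Eq. (1.8)."""
    a = mp.mpf(n + t) / 2
    return (c / 2)**n * mp.rf(t, n) / mp.factorial(n) * \
        mp.hyp2f1(a, a + mp.mpf(1) / 2, n + 1, -c**2)

c, n = mp.mpf('0.5'), 2
theta = mp.acos(1 / mp.sqrt(1 + c**2))
for t in (25, 50, 100):
    rhs = mp.sqrt(2 / (mp.pi * t * c)) * (1 + c**2)**(-mp.mpf(t) / 2 + mp.mpf(1) / 4) \
        * mp.cos((t - mp.mpf(1) / 2) * theta - mp.pi / 4 + n * mp.pi / 2)
    print(t, mp.nstr(Jbar(n, t, c) / rhs, 8))
```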
We also study the Laplace transform \(\mathcal{L}_{\partial_{t}}\) associated to \(\partial_{t}\) of functions \(J_{n}^{c}\) and \(I_{n}^{c}\) and the Laplace transform \(\mathcal{L}_{\overline{\partial}_{t}}\) associated to \(\overline{\partial}_{t}\) of functions \(\overline{J}_{n}^{c}\) and \(\overline{I}_{n}^{c}\) (precise definitions of \(\mathcal{L}_{\partial_{t}}\) and \(\mathcal{L}_{\overline{\partial}_{t}}\) are given in Section 2.3) and prove the following theorem.
**Theorem 1.3**.: _For \(n\in\mathbb{N}_{0}\), \(c\in\mathbb{C}\setminus\{0\}\) and any \(z\in\mathbb{C}\), \(z\neq\pm ic\), we have_
\[\mathcal{L}_{\partial_{t}}\{J_{n}^{c}\}(z)=\mathcal{L}_{\overline{\partial}_{ t}}\{\overline{J}_{n}^{c}\}(z)=\frac{c^{-n}\left(\sqrt{z^{2}+c^{2}}-z \right)^{n}}{\sqrt{z^{2}+c^{2}}}, \tag{1.14}\]
_while for \(n\in\mathbb{N}_{0}\), \(c\in\mathbb{C}\setminus\{0\}\) and any \(z\in\mathbb{C}\), \(z\neq\pm c\), we have_
\[\mathcal{L}_{\partial_{t}}\{I_{n}^{c}\}(z)=\mathcal{L}_{\overline{\partial}_{ t}}\{\overline{I}_{n}^{c}\}(z)=\frac{c^{-n}\left(z-\sqrt{z^{2}-c^{2}}\right)^{n}}{ \sqrt{z^{2}-c^{2}}}. \tag{1.15}\]
**Remark 1.4**.: According to [18, formulas 17.13.103 and 17.13.109], the right-hand side of (1.14) equals the classical Laplace transform of the Bessel function of the first kind \(\mathcal{J}_{n}\left(cx\right)\) at \(z\), for \(\operatorname{Re}(z)>\left|\operatorname{Im}(c)\right|\) while the right-hand side of (1.15) equals the Laplace transform of the modified Bessel function \(\mathcal{I}_{n}\left(cx\right)\) at \(z\), for \(\operatorname{Re}(z)>\left|\operatorname{Re}(c)\right|\).
In the terminology of [16, p. 1298], this means that classical Bessel functions \(\mathcal{J}_{n}\left(cx\right)\) and \(\mathcal{I}_{n}\left(cx\right)\) viewed as functions of \(x\in\mathbb{R}\) are _shadow_ functions for \(J_{n}^{c}(t)\), \(\overline{J}_{n}^{c}(t)\) and \(I_{n}^{c}(t)\), \(\overline{I}_{n}^{c}(t)\) respectively. Hence, those functions are indeed the _appropriate timescale analogues_ of classical Bessel functions.1
Footnote 1: We are thankful to Tom Cuchta for this remark.
### Discrete time wave equation on integers
The classical wave equation in one dimension is the equation
\[\frac{\partial^{2}u(x;t)}{\partial^{2}t}=c^{2}\frac{\partial^{2}u(x;t)}{ \partial^{2}x},\]
where \(c>0\) is the propagation speed, \(t\in[0,\infty)\) is the time variable, \(x\in\mathbb{R}\) is a spatial variable and derivatives are classical partial derivatives of real functions of two real variables.
In a more general setting, the wave equation on timescale \(T\) on the spatial space \(X\) with the propagation speed \(c>0\) can be viewed as the equation
\[\Delta_{t}^{2}u(x;t)+c^{2}\Delta_{X}u(x;t)=0, \tag{1.16}\]
where \(\Delta_{X}\) is usually the (weighted) Laplacian on the spatial space \(X\) (or some other generalization of the second derivative in the space variable, e.g. fractional Laplacian, see [17] or [24]) and \(\Delta_{t}\) stands for the delta (timescale) derivative with respect to \(t\) on a given timescale \(T\), as described e.g. in [6] and [8]. The initial conditions on \(u\) and its derivative with respect to \(t\) for (1.16) are defined in a natural way, depending on the timescale.
When the timescale \(T=[0,\infty)\) and \(X\) is a homogeneous tree, the explicit solution to (1.16) with natural initial conditions was deduced in [26]; see also [34]. In the special situation of 2-regular tree (when \(X=\mathbb{Z}\)) two independent fundamental solutions were given in terms of the classical \(J\)-Bessel function [26, Proposition 3], while the asymptotic behaviour of the energy was studied in [27].
When the timescale is discrete; more precisely when \(T=\mathbb{Z}\), there are different ways of discretizing the continuous time second derivative \(\frac{\partial^{2}}{\partial^{2}t}\). For example, the forward difference \(\partial_{t}\) and the backward difference \(\overline{\partial}_{t}\) are two natural delta derivatives in time \(t\in T=\mathbb{Z}\).2
Footnote 2: The reason for a chosen notation for those derivatives stems from the fact that \(\overline{\partial}_{t}\partial_{t}=-\Delta_{\mathbb{Z}}\), where \(\Delta_{\mathbb{Z}}\) is the combinatorial Laplacian on \(\mathbb{Z}\). This is reminiscent of the relation \(\overline{\partial}_{z}\partial_{z}=-\frac{1}{4}\Delta_{\mathbb{R}^{2}}\) between the Wirtinger derivatives \(\partial_{z}=\frac{1}{2}\left(\frac{\partial}{\partial x}-i\frac{\partial}{ \partial y}\right)\), \(\overline{\partial}_{z}=\frac{1}{2}\left(\frac{\partial}{\partial x}+i\frac{ \partial}{\partial y}\right)\) in \(z=x+iy\) and the Laplacian \(\Delta_{\mathbb{R}^{2}}\) on \(\mathbb{R}^{2}\).
Therefore, the continuous time second derivative \(\frac{\partial^{2}}{\partial^{2}t}\) may be discretized as \(\partial_{t}^{2}\) or \(\overline{\partial}_{t}^{2}\), depending on whether one chooses the forward or the backward difference operator as a discretization.
There are other possibilities; for example one may discretize \(\frac{\partial^{2}}{\partial^{2}t}\) as \(\partial_{t}\overline{\partial}_{t}=\overline{\partial}_{t}\partial_{t}=- \Delta_{\mathbb{Z}}\). Such a discretization is studied in [15], where an explicit solution of the wave equation on a homogeneous tree was found. The same timescale second derivative was used in [2], where the shifted wave equation on a homogeneous tree of degree \(q+1>2\) is solved by applying a discrete version of Asgeirsson's mean value theorem and by using the inverse dual Abel transform that can be explicitly computed on the homogeneous tree. A more general discrete
wave equation in which both operators are second-order differentials in different timescales was studied in [19, Section 3.2].
In this paper, we will be interested in discrete analogues of the equation (1.16) when the timescale \(T=\mathbb{Z}\) and the spatial space is \(X=\mathbb{Z}\). The Laplacian on \(X=\mathbb{Z}\) is the combinatorial Laplacian on \(X\) viewed as a \(2\)-regular tree, in which every vertex \(n\in\mathbb{Z}\) is adjacent only to its neighbouring vertices \(n-1\), \(n+1\). In other words, in the setting of this paper, the action of the Laplacian \(\Delta_{\mathbb{Z}}\) on any function \(f:\mathbb{Z}\to\mathbb{R}\) is defined as
\[\Delta_{X}f(n)=\Delta_{\mathbb{Z}}f(n)=2f(n)-f(n+1)-f(n-1),\quad n\in\mathbb{ Z}.\]
The difference equation
\[\partial_{t}^{2}u\left(n;t\right)+c^{2}\Delta_{\mathbb{Z}}u(n,t)=0,\ n\in \mathbb{Z},\ t\in\mathbb{N}_{0}, \tag{1.17}\]
with initial conditions
\[u\left(n;0\right)=\left\{\begin{array}{ll}1&\text{if }n=0,\\ 0&\text{if }n\neq 0,\end{array}\right.,\quad\partial_{t}u\left(n;0\right)=0, \ \ n\in\mathbb{Z}, \tag{1.18}\]
was studied in [31, Section 3], where it is proved that the function
\[u_{1}\left(n;t\right)=J_{2|n|}^{2c}\left(t\right),\ \ n\in\mathbb{Z},\ \ t\in\mathbb{N}_{0}, \tag{1.19}\]
is its (fundamental) solution. In [31, Section 3] Slavik also deduced the solution to the wave equation (1.17) with general initial conditions.
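Formula (1.19) is easy to confirm numerically: rearranging (1.17) gives the explicit update \(u(n;t+2)=2u(n;t+1)-u(n;t)-c^{2}\Delta_{\mathbb{Z}}u(n;t)\), whose output can be compared with \(J_{2|n|}^{2c}(t)\) computed from the finite sum (3.1) of Proposition 3.1 below. A sketch (ours; lattice size and horizon are arbitrary):

```python
import numpy as np
from math import factorial

def J(n, t, c):
    """Forward discrete J-Bessel function via the finite sum (3.1)."""
    if n > t:
        return 0.0
    return sum((-1)**k * factorial(t) / (factorial(k) * factorial(t - 2 * k - n)
               * factorial(n + k)) * (c / 2)**(2 * k + n)
               for k in range((t - n) // 2 + 1))

c, T, M = 0.4, 12, 40                           # speed, horizon, half lattice
u_prev = np.zeros(2 * M + 1); u_prev[M] = 1.0   # u(n; 0) = delta_{n,0}
u_curr = u_prev.copy()                          # u(n; 1) = u(n; 0), from (1.18)
for t in range(1, T):
    lap = 2 * u_curr - np.roll(u_curr, 1) - np.roll(u_curr, -1)
    u_prev, u_curr = u_curr, 2 * u_curr - u_prev - c**2 * lap
for n in range(-3, 4):
    assert abs(u_curr[M + n] - J(2 * abs(n), T, 2 * c)) < 1e-10
```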
In Section 4.3 we study the analogue of the equation (1.17) with the derivative \(\partial_{t}^{2}\) replaced by \(\overline{\partial}_{t}^{2}\) with arbitrary initial conditions, given by bounded real sequences indexed by integers. In Theorem 5.1 below we prove that the first fundamental solution to (1.17) with \(\partial_{t}^{2}\) replaced by \(\overline{\partial}_{t}^{2}\) subject to initial conditions (1.18) is
\[u_{1}\left(n;t\right)=\overline{J}_{2|n|}^{2c}\left(t\right),\ \ n\in\mathbb{Z},\ \ t\in\mathbb{N}_{0}.\]
We find the second fundamental solution and express the general solution as a series involving \(\overline{J}_{2|n|}^{2c}\) and the initial data, see Theorem 5.3 for the exact statement.
Asymptotic behaviour of discretizations of \(J\)-Bessel functions proved in parts (i) and (ii) of Theorem 1.2 yields asymptotic behaviour of fundamental solutions to (1.17) subject to initial conditions (1.18) with timescale derivatives being both the forward and the backward difference. In both cases solutions have oscillatory behaviour as \(t\to\infty\); however, the amplitude in the case when the time derivative is the forward difference grows exponentially with time, while the amplitude in the case when the time derivative is the backward difference decays exponentially with time; for precise statements, see Corollaries 5.4 and 5.5.
This can be compared with the behaviour of the solution to (1.17) when \(\partial_{t}^{2}\) is replaced by \(\partial_{t}\overline{\partial}_{t}\), studied in [15], which equals zero when \(t-n\) is odd.
### Organization of the paper
The structure of the paper is the following: In Section 2, with the aim to make the paper self-contained, we provide a brief overview of necessary definitions and formulas related to the hypergeometric and Legendre functions and the Laplace transform for the forward and the backward difference. Section 3 is devoted to proving some properties of the backward discrete \(J\)-Bessel and \(I\)-Bessel functions and deducing their asymptotic behaviour for large \(n\). Proofs of the main theorems stated above are given in Section 4. In Section 5 we derive the solution to the discrete wave equation with initial conditions given in terms of arbitrary bounded real sequences indexed by integers. We end the paper by describing the asymptotic behaviour of fundamental solutions of the forward and backward discrete wave equations.
## 2. Preliminaries
In this section we recall well known results and definitions that we will need in the sequel. More precisely, we define the hypergeometric function and the Legendre function of the first kind and recall some of their transformation properties and asymptotic behavior as certain parameters tend to infinity. In the last subsection we introduce the (unilateral) Laplace transform in timescales and express the Laplace transform in terms of a certain series when the timescale \(T\) is the set of integers and the timescale (delta) derivative is the forward and the backward difference.
### Hypergeometric function
In this subsection, we recall the basic properties of the Gauss hypergeometric function defined for complex values of \(z\) in the unit disc \(\left|z\right|<1\) and parameters \(\alpha,\beta\in\mathbb{C}\), \(\gamma\in\mathbb{C}\setminus\mathbb{N}_{0}\) as the absolutely convergent series
\[{}_{2}F_{1}(\alpha,\beta;\gamma;z) = \sum_{k=0}^{\infty}\frac{\left(\alpha\right)_{k}\left(\beta \right)_{k}}{\left(\gamma\right)_{k}k!}z^{k},\] \[\left(t\right)_{k} = \frac{\Gamma\left(t+k\right)}{\Gamma\left(t\right)}=\left\{ \begin{array}{ll}t\left(t+1\right)\cdots\left(t+k-1\right)&\text{for }k\in \mathbb{N},\\ 1&\text{for }k=0.\end{array}\right. \tag{2.1}\]
If \(\alpha\) or \(\beta\) is a nonpositive integer, then the series (2.1) reduces to a finite sum and converges everywhere. Otherwise, the hypergeometric series is convergent in the unit disc \(\left|z\right|<1\), but can be analytically continued into the complex plane cut along \(\left[1,\infty\right]\) (see e.g. [28, Section 15.2] or [22, Section 9.1]). The analytic continuation is denoted by the same symbol \({}_{2}F_{1}(\alpha,\beta;\gamma;z)\).
For the sake of brevity and simplicity, we will use \(F\left(\alpha,\beta;\gamma;z\right)\) to denote the Gaussian hypergeometric function instead of \({}_{2}F_{1}(\alpha,\beta;\gamma;z)\).
It is obvious from the series representation that \(F\left(\alpha,\beta;\gamma;z\right)=F\left(\beta,\alpha;\gamma;z\right)\). We will need several recursion formulas for the Gauss hypergeometric function, which we recall from [18, Section 9.137] and [1, formula 15.2.18] and list in the following lemma.
**Lemma 2.1**.: _The Gauss hypergeometric function satisfies the following recursion relations:_
1. \(\gamma F\left(\alpha,\beta;\gamma;z\right)-\gamma F\left(\alpha,\beta+1; \gamma;z\right)+\alpha zF\left(\alpha+1,\beta+1;\gamma+1;z\right)=0\)_,_
2. \(\gamma F\left(\alpha,\beta;\gamma;z\right)-\gamma F\left(\alpha+1,\beta;\gamma;z \right)+\beta zF\left(\alpha+1,\beta+1;\gamma+1;z\right)=0\)_,_
3. \(\gamma F\left(\alpha,\beta;\gamma;z\right)-\left(\gamma-\alpha\right)F\left( \alpha,\beta;\gamma+1;z\right)-\alpha F\left(\alpha+1,\beta;\gamma+1;z\right)=0\)_,_
4. \(\left(\gamma-\alpha-\beta\right)F\left(\alpha,\beta;\gamma;z\right)-\left( \gamma-\alpha\right)F\left(\alpha-1,\beta;\gamma;z\right)\\ +\beta\left(1-z\right)F\left(\alpha,\beta+1;\gamma;z\right)=0\)_._
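Each of these relations can be spot-checked numerically; for instance, for the first one (a sketch, ours, with arbitrarily chosen parameters):

```python
import mpmath as mp

a, b, g, z = mp.mpf('0.3'), mp.mpf('1.2'), mp.mpf('2.5'), mp.mpf('-0.4')
lhs = g * mp.hyp2f1(a, b, g, z) - g * mp.hyp2f1(a, b + 1, g, z) \
    + a * z * mp.hyp2f1(a + 1, b + 1, g + 1, z)
assert abs(lhs) < mp.mpf('1e-12')               # relation 1 of Lemma 2.1
```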
The hypergeometric function satisfies many transformation formulas. In the sequel, we will use the following formula, which we quote from [18, Section 9.131]:
\[F(\alpha,\beta;\gamma;z)=(1-z)^{-\alpha}F\left(\alpha,\gamma-\beta;\gamma; \frac{z}{z-1}\right) \tag{2.2}\]
and which is valid for \(|\arg(1-z)|<\pi\).
We will also need the asymptotic behaviour of \(F(\alpha,\beta;\gamma;z)\) when some of the parameters are large. More precisely, we will make use of the following two asymptotic formulas which we quote from [37, Section 9, p. 289]. Let \(\alpha\), \(\beta\), \(\gamma\) be arbitrary (fixed) complex numbers, \(z=\cosh\zeta=\xi+i\eta\in\mathbb{C}\setminus(-\infty,1]\) with \(\xi,\,\eta\) real, \(\xi\geq 0\), \(-\pi<\eta\leq\pi\). The first formula states that
\[\left(\frac{z-1}{2}\right)^{-\alpha-\lambda}F\left(\alpha+\lambda,\alpha+\lambda-\gamma+1;\alpha-\beta+2\lambda+1;\frac{2}{1-z}\right)\\ \sim\frac{2^{\alpha+\beta}\Gamma\left(\alpha-\beta+2\lambda+1\right)}{\Gamma\left(\alpha+\lambda-\gamma+1\right)\Gamma\left(\gamma-\beta+\lambda\right)}e^{-\left(\alpha+\lambda\right)\zeta}\left(1-e^{-\zeta}\right)^{\frac{1}{2}-\gamma}\left(1+e^{-\zeta}\right)^{\gamma-\alpha-\beta-\frac{1}{2}}\\ \times\sum_{s=0}^{\infty}c_{s}^{\prime}\frac{\Gamma\left(s+\frac{1}{2}\right)}{\lambda^{s+\frac{1}{2}}}, \tag{2.3}\]
as \(|\lambda|\to+\infty\), where \(\lambda\) is such that \(|\arg\lambda|\leq\pi-\delta<\pi\). Constants \(c_{s}^{\prime}\) are independent of \(\lambda\) and \(c_{0}^{\prime}=1\).
Assume \(|\arg\lambda|\leq\pi/2-\delta<\pi/2\). The second formula from [37, Section 9] states that
\[F\left(\alpha+\lambda,\beta-\lambda;\gamma;\frac{1}{2}-\frac{1}{ 2}z\right)\\ \sim\frac{\Gamma\left(1-\beta+\lambda\right)\Gamma\left(\gamma \right)}{\pi\Gamma\left(\gamma-\beta+\lambda\right)}2^{\alpha+\beta-1}\left(1- e^{-\zeta}\right)^{\frac{1}{2}-\gamma}\left(1+e^{-\zeta}\right)^{\gamma- \alpha-\beta-\frac{1}{2}}\\ \times\left[e^{\left(\lambda-\beta\right)\zeta}\sum_{s=0}^{ \infty}c_{s}\frac{\Gamma\left(s+\frac{1}{2}\right)}{\lambda^{s+\frac{1}{2}}}+e ^{\mp\pi i\left(\frac{1}{2}-\gamma\right)}e^{-\left(\lambda+\alpha\right) \zeta}\sum_{s=0}^{\infty}c_{s}^{\prime}\frac{\Gamma\left(s+\frac{1}{2}\right) }{\lambda^{s+\frac{1}{2}}}\right], \tag{2.4}\]
as \(|\lambda|\to+\infty\), where \(c_{s}\), \(c_{s}^{\prime}\) are independent of \(\lambda\), \(c_{0}=c_{0}^{\prime}=1\) and in the second term the upper or lower sign is taken according as \(\operatorname{Im}(z)\gtrless 0\).
### Legendre function of the first kind
The Legendre function of the first kind of degree \(\nu\) and order \(\mu\), where \(\nu,\mu\in\mathbb{C}\) is defined as
\[P_{\mu}^{\nu}(z)=\frac{1}{\Gamma(1-\mu)}\left(\frac{z+1}{z-1}\right)^{\frac{\mu }{2}}F\left(-\nu,\nu+1;1-\mu;\frac{1-z}{2}\right).\]
When \(\beta=\alpha+1/2\), the hypergeometric function \(F(\alpha,\beta;\gamma;z)\) can be expressed in terms of the Legendre function \(P_{\mu}^{\nu}\). We quote here the formula 15.4.11 from [1], which is valid for real, negative values of \(z=x\) and any complex numbers \(a,\,c\)
\[F(a,a+1/2;c;x)=2^{c-1}\Gamma(c)(-x)^{1/2-c/2}(1-x)^{c/2-a-1/2}P_{2a-c}^{1-c} \left[(1-x)^{-1/2}\right]. \tag{2.5}\]
When the order \(\mu\) is an integer, say \(m\), combining formulas 8.2.1. and 8.2.5. of [1], we arrive at the following equation
\[P_{-\nu-1}^{-m}(z)=P_{\nu}^{-m}(z)=\frac{\Gamma(\nu-m+1)}{\Gamma(\nu+m+1)}P_{ \nu}^{m}(z). \tag{2.6}\]
When \(z=\cos\theta\in(0,1)\) in the sequel we will need the asymptotic formula for \(P_{\nu}^{\mu}(\cos\theta)\), which we quote from [18, formula 8.721.3] (see also [35])
\[P_{\nu}^{\mu}(\cos\theta)=\frac{2}{\sqrt{\pi}}\frac{\Gamma(\nu+\mu+1)}{\Gamma(\nu+3/2)}\frac{\cos\left(\left(\nu+\frac{1}{2}\right)\theta-\frac{\pi}{4}+\frac{\mu\pi}{2}\right)}{\sqrt{2\sin\theta}}\left(1+O\left(\frac{1}{\nu}\right)\right), \tag{2.7}\]
as \(|\nu|\to\infty\).
### Laplace transform for the forward and the backward difference
The unilateral Laplace transform on timescales was introduced by Bohner and Peterson in [7] and further used, generalized and studied in numerous works. In this section, we recall results from [5] on Laplace transform on timescale \(T=\mathbb{Z}\), with respect to both the forward difference operator \(\partial_{t}\) and the backward difference operator \(\overline{\partial}_{t}\). Given any timescale \(T\) such that \(0\in T\) and \(\sup T=\infty\), the Laplace transform of the regulated function \(x:T\to\mathbb{R}\) is defined by
\[\mathcal{L}_{\Delta}\{x\}(z)=\int\limits_{0}^{\infty}\frac{x(t)}{e_{z}(t+\mu ^{*}(t),0)}\Delta t,\]
where \(\Delta\) is the delta derivative on the timescale \(T\); the integral is an improper integral with respect to the derivative \(\Delta\), \(e_{z}(t,0)\) is the exponential function in timescale \(T\) and \(z\) belongs to a subset of complex numbers (depending on the function \(x\)) such that the above improper integral is convergent. The function \(\mu^{*}(t)\) depends on the definition of the \(\Delta\)-derivative in timescale \(T\).
The set of points \(z\) for which the above integral converges is generally not easy to find. When \(T=\mathbb{Z}\) and \(\Delta\) is either the forward or the backward difference, the region of convergence will be a certain disc in the extended complex plane. We refer the interested reader to the paper [20] in which a very detailed study of the Laplace transform, including the discussion on the region of convergence, is presented.
In this paper, we are interested only in the case when \(T=\mathbb{Z}\) and \(\Delta\) is either the forward or the backward difference. In our setup \(\mu^{*}(t)=1\), for all \(t\) when \(\Delta=\partial_{t}\) and \(\mu^{*}(t)=-1\) for all \(t\) when \(\Delta=\overline{\partial}_{t}\).
When \(\Delta=\partial_{t}\), the exponential function is \(e_{z}(t+1,0)=(1+z)^{t+1}\) (see, e.g. [13], p. 29, formula (49)) and hence the Laplace transform of the sequence \(x(t):\mathbb{N}_{0}\rightarrow\mathbb{C}\) is given by
\[\mathcal{L}_{\partial_{t}}\{x\}(z)=\sum_{t=0}^{\infty}\frac{x(t)}{(1+z)^{t+1}}, \tag{2.8}\]
for all complex values of \(z\neq-1\) such that the above series converges. When \(\Delta=\overline{\partial}_{t}\), the exponential function is given by \(\hat{e}_{z}(t-1,0)=(1-z)^{-(t-1)}\), see [13], p. 29, formula (50), hence the Laplace transform of the sequence \(x(t):\mathbb{N}_{0}\rightarrow\mathbb{C}\) in this case is
\[\mathcal{L}_{\overline{\partial}_{t}}\{x\}(z)=\sum_{t=0}^{\infty}x(t)(1-z)^{ t-1}, \tag{2.9}\]
for all complex numbers \(z\neq 1\) such that the above series converges.
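For concreteness, here is a small numerical sketch (ours) of the transforms (2.8) and (2.9) with the series truncated at \(T\) terms; the geometric example \(x(t)=a^{t}\) has forward transform \(1/(z+1-a)\).

```python
# Truncated versions (our sketch) of the transforms (2.8) and (2.9).
import numpy as np

def laplace_forward(x, z, T=2000):
    # (2.8): sum_t x(t)/(1+z)^(t+1); accurate when |1+z| is large enough
    t = np.arange(T)
    return np.sum(x(t) / (1.0 + z) ** (t + 1))

def laplace_backward(x, z, T=2000):
    # (2.9): sum_t x(t)(1-z)^(t-1); accurate roughly when |(1-z)| is small
    t = np.arange(T)
    return np.sum(x(t) * (1.0 - z) ** (t - 1))

a, z = 0.5, 1.0
print(laplace_forward(lambda t: a ** t, z), 1.0 / (z + 1 - a))  # should agree
print(laplace_backward(lambda t: a ** t, 0.8))  # converges: |a(1-z)| < 1
```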
## 3. Properties of backward discrete Bessel functions
In this section, we derive some basic properties of backward discrete \(J\)-Bessel and \(I\)-Bessel functions. We start by stating simple relations between four discrete Bessel functions that stem directly from their definitions (1.6) - (1.9). Namely, we have the following identities which hold true for all \(n\in\mathbb{N}_{0}\) and \(t\in\mathbb{Z}\):
\[\overline{J}_{n}^{c}\left(t\right)=(-1)^{n}J_{n}^{c}(-t),\quad\text{for}\quad c \in\mathbb{C}\setminus\{i\alpha:\,\alpha\in\mathbb{R},\,|\alpha|\geq 1\},\]
\[\overline{I}_{n}^{c}\left(t\right)=(-1)^{n}I_{n}^{c}(-t),\quad\text{for}\quad c \in\mathbb{C}\setminus\{\alpha:\,\alpha\in\mathbb{R},\,|\alpha|\geq 1\},\]
and
\[I_{n}^{c}\left(t\right)=(-i)^{n}J_{n}^{ic}(t),\quad\overline{I}_{n}^{c}\left( t\right)=(-i)^{n}\overline{J}_{n}^{ic}\left(t\right),\quad\text{for}\quad c \in\mathbb{C}\setminus\{\alpha:\,\alpha\in\mathbb{R},\,|\alpha|\geq 1\}.\]
When \(n>t\), it is well known that \(I_{n}^{c}(t)=J_{n}^{c}(t)=0\). Therefore, for \(t\in\mathbb{Z}_{<0}\), for a suitable range of \(c\) we have that \(n>-t\) implies \(\overline{J}_{n}^{c}\left(t\right)=\overline{I}_{n}^{c}\left(t\right)=0\).
The following proposition shows that for \(t\in\mathbb{N}_{0}\), the function \(J_{n}^{c}\left(t\right)\) can be viewed as a polynomial in the variable \(c\), while for \(t\in\mathbb{Z}_{<0}\) the functions \(\overline{J}_{n}^{c}\left(t\right)\) and \(\overline{I}_{n}^{c}\left(t\right)\) can be viewed as polynomials in \(c\).
**Proposition 3.1**.:
1. _Let_ \(t,n\in\mathbb{N}_{0}\) _such that_ \(n\leq t\)_. Set_ \(\ell=\left\lfloor\left(t-n\right)/2\right\rfloor\)_. Then for any_ \(c\in\mathbb{C}\setminus\{0\}\)_, we have that_ (3.1) \[J_{n}^{c}\left(t\right)=\sum_{k=0}^{\ell}\frac{\left(-1\right)^{k}t!}{k!\left( t-2k-n\right)!\left(n+k\right)!}\left(\frac{c}{2}\right)^{2k+n}.\]
2. _Let_ \(t\in\mathbb{Z}_{<0}\)_,_ \(c\in\mathbb{C}\setminus\{0\}\) _and_ \(n\in\mathbb{N}_{0}\) _such that_ \(n\leq-t\)_. Set_ \(\ell=\left\lfloor\left(-t-n\right)/2\right\rfloor\)_. Then, we have that_ \[\overline{J}_{n}^{c}\left(t\right)=\sum_{k=0}^{\ell}\frac{\left(-1\right)^{k+n }\left(-t\right)!}{k!\left(-t-2k-n\right)!\left(n+k\right)!}\left(\frac{c}{2} \right)^{2k+n},\] _and_ \[\overline{I}_{n}^{c}\left(t\right)=(-1)^{n}\sum_{k=0}^{\ell}\frac{\left(-t \right)!}{k!\left(-t-2k-n\right)!\left(n+k\right)!}\left(\frac{c}{2}\right)^{2 k+n}.\]
Proof.: According to [10, Proposition 3.2.], for any \(c\in\mathbb{C}\), we have
\[I_{n}^{c}\left(t\right)=\sum_{k=0}^{\ell}\frac{t!}{k!\left(t-2k-n\right)!\left( n+k\right)!}\left(\frac{c}{2}\right)^{2k+n}.\]
Combining it with \(I_{n}^{c}\left(t\right)=\left(-i\right)^{n}J_{n}^{ic}\left(t\right)\), we easily deduce (3.1), which proves part (i). Part (ii) stems from the identities \(\overline{J}_{n}^{c}\left(t\right)=\left(-1\right)^{n}J_{n}^{c}\left(-t\right)\) and \(\overline{I}_{n}^{c}\left(t\right)=\left(-1\right)^{n}I_{n}^{c}\left(-t\right)\).
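The finite sums in Proposition 3.1 are easy to implement directly; the following sketch (ours) also spot-checks the identity \(I_{n}^{c}(t)=(-i)^{n}J_{n}^{ic}(t)\) using complex arithmetic.

```python
# Direct implementation (our sketch) of formula (3.1) and of I_n^c(t).
from math import factorial

def J(n, t, c):
    # J_n^c(t) for integers 0 <= n <= t, formula (3.1); c may be complex
    ell = (t - n) // 2
    return sum((-1) ** k * factorial(t)
               / (factorial(k) * factorial(t - 2 * k - n) * factorial(n + k))
               * (c / 2) ** (2 * k + n) for k in range(ell + 1))

def I(n, t, c):
    # same sum without the alternating sign, as quoted from [10]
    ell = (t - n) // 2
    return sum(factorial(t)
               / (factorial(k) * factorial(t - 2 * k - n) * factorial(n + k))
               * (c / 2) ** (2 * k + n) for k in range(ell + 1))

n, t, c = 3, 9, 0.4
print(I(n, t, c), ((-1j) ** n * J(n, t, 1j * c)).real)  # should coincide
```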
Transformation formulas for \(\partial_{t}J_{n}^{c}\), \(\partial_{t}I_{n}^{c}\) and \(\overline{\partial}_{t}\overline{I}_{n}^{c}\) have been deduced in [31], [10] and [21], respectively (see also [14] for recurrence formulas satisfied by matrix analogues of \(J_{n}^{1}\)). In the following lemma we prove transformation formulas for \(\overline{\partial}_{t}\overline{J}_{n}^{c}\).
**Lemma 3.2**.: _For \(c\in\mathbb{C}\setminus\{i\alpha:\,\alpha\in\mathbb{R},\,|\alpha|\geq 1\}\), the discrete \(J\)-Bessel function \(\overline{J}_{n}^{c}\) has the following properties:_
1. \(\overline{J}_{0}^{c}\left(0\right)=1\)_._
2. \(\overline{\partial}_{t}\overline{J}_{0}^{c}\left(t\right)=-c\overline{J}_{1} ^{c}\left(t\right)\) _for all_ \(t\geq 1\)_._
3. \(t\overline{\partial}_{t}\overline{J}_{n}^{c}\left(t+1\right)=n\overline{J}_{n }^{c}\left(t\right)-ct\overline{J}_{n+1}^{c}\left(t+1\right)\) _for any_ \(n\geq 0\) _and_ \(t\geq 0\)_._
4. \(t\overline{\partial}_{t}\overline{J}_{n}^{c}\left(t+1\right)=-n\overline{J}_{n }^{c}\left(t\right)+ct\overline{J}_{n-1}^{c}\left(t+1\right)\) _for any_ \(n\geq 1\) _and_ \(t\geq 0\)_._
5. \(\overline{\partial}_{t}\overline{J}_{n}^{c}\left(t\right)=\frac{c}{2}\left( \overline{J}_{n-1}^{c}\left(t\right)-\overline{J}_{n+1}^{c}\left(t\right)\right)\) _for any_ \(n\geq 1\) _and_ \(t\geq 1\)_._
Proof.: The first statement follows from the definition of \(\overline{J}_{n}^{c}\left(t\right)\).
Formula (ii) can be written as
\[\overline{J}_{0}^{c}\left(t\right)-\overline{J}_{0}^{c}\left(t-1\right)=-c \overline{J}_{1}^{c}\left(t\right).\]
Using recursive formula (ii) from Lemma 2.1 with \(\alpha=\frac{t-1}{2},\beta=\frac{t}{2},\gamma=1\) and \(z=-c^{2}\), and the symmetry of \(F\) in the first two arguments, we obtain
\[F\left(\frac{t-1}{2},\frac{t}{2};1;-c^{2}\right)-F\left(\frac{t}{2},\frac{t+1 }{2};1;-c^{2}\right)=\frac{tc^{2}}{2}F\left(\frac{t+1}{2},\frac{t+2}{2};2;-c^{ 2}\right).\]
Therefore,
\[F\left(\frac{t}{2},\frac{t}{2}+\frac{1}{2};1;-c^{2}\right)-F\left( \frac{t-1}{2},\frac{t-1}{2}+\frac{1}{2};1;-c^{2}\right)\\ =-c\frac{tc}{2}F\left(\frac{t+1}{2},\frac{t+1}{2}+\frac{1}{2};2;- c^{2}\right),\]
which proves (ii) for all \(t\geq 1\).
The identity in part (iii) can be written as
\[t\overline{J}_{n}^{c}\left(t+1\right)-\left(n+t\right)\overline{J}_{n}^{c} \left(t\right)+ct\overline{J}_{n+1}^{c}\left(t+1\right)=0. \tag{3.2}\]
Using recursive relation (i) from Lemma 2.1 with \(\alpha=\frac{n+t+1}{2},\beta=\frac{n+t}{2},\gamma=n+1\) and \(z=-c^{2}\), and the symmetry of \({}_{2}F_{1}\) in the first two arguments, we obtain
\[\left(n+1\right)F\left(\frac{n+t}{2},\frac{n+t}{2}+\frac{1}{2};n +1;-c^{2}\right)\\ -\left(n+1\right)F\left(\frac{n+t+1}{2},\frac{n+t+1}{2}+\frac{1} {2};n+1;-c^{2}\right)\\ -\frac{n+t+1}{2}c^{2}F\left(\frac{n+t+2}{2},\frac{n+t+2}{2}+\frac {1}{2};n+2;-c^{2}\right)=0.\]
Multiplying the above display by \(\frac{c^{n}}{2^{n}\left(n+1\right)!}\left(t\right)_{n+1}\), and using the recurrent relations \(\left(t\right)_{n+1}=t\left(t+1\right)_{n}\), \(\left(t\right)_{n+1}=\left(n+t\right)\left(t\right)_{n}\), and \(\left(n+t+1\right)\left(t\right)_{n+1}=t\left(t+1\right)_{n+1}\), we deduce (3.2).
The proof of part (iv) is analogous to the proof of (iii); it follows from a simple manipulation of the relation (iii) given in Lemma 2.1 with \(\alpha=\frac{n+t}{2},\beta=\frac{n+t}{2}+\frac{1}{2},\gamma=n\) and \(z=-c^{2}\).
Formula (v) is deduced by adding formulas (iii) and (iv), dividing the result by \(2t\), and replacing \(t\) by \(t-1\).
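The hypergeometric expression \(\overline{J}_{n}^{c}\left(t\right)=\frac{(c/2)^{n}(t)_{n}}{n!}F\big(\frac{n+t}{2},\frac{n+t}{2}+\frac{1}{2};n+1;-c^{2}\big)\), which appears again in the proof of Theorem 1.2 below, makes the recurrences easy to test numerically; here is a small sketch (ours) checking part (v).

```python
# Numerical spot-check (ours) of Lemma 3.2 (v), using scipy's rising
# factorial poch(t, n) = (t)_n and the Gauss hypergeometric function hyp2f1.
from math import factorial
from scipy.special import hyp2f1, poch

def J_bar(n, t, c):
    a = (n + t) / 2.0
    return (c / 2.0) ** n * poch(t, n) / factorial(n) \
        * hyp2f1(a, a + 0.5, n + 1, -c ** 2)

c, n, t = 0.7, 3, 10
lhs = J_bar(n, t, c) - J_bar(n, t - 1, c)                  # backward difference
rhs = 0.5 * c * (J_bar(n - 1, t, c) - J_bar(n + 1, t, c))  # Lemma 3.2 (v)
print(lhs, rhs)  # should agree up to rounding
```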
We will now explore the asymptotic behaviour of \(\overline{J}_{n}^{c}\left(t\right)\) and \(\overline{I}_{n}^{c}\left(t\right)\) as \(n\rightarrow\infty\), when \(c\) is real-valued.
**Proposition 3.3**.:
1. _For any real nonzero parameter_ \(c\) _and a fixed_ \(t\in\mathbb{N}\)_, we have that_ (3.3) \[\overline{J}_{n}^{c}\left(t\right)\sim\left(\operatorname{sgn}\left(c\right) \right)^{n}\frac{n^{t-1}}{\left(\frac{1+\sqrt{1+c^{2}}}{\left|c\right|} \right)^{n}}\frac{\left(1+c^{2}\right)^{-t/2}}{\Gamma\left(t\right)},\text{ as }n\rightarrow\infty.\]
2. _For any real nonzero parameter_ \(c\) _such that_ \(\left|c\right|<1\) _and a fixed_ \(t\in\mathbb{N}\)_, we have that_ (3.4) \[\overline{I}_{n}^{c}\left(t\right)\sim\left(\operatorname{sgn}\left(c\right) \right)^{n}\frac{n^{t-1}}{\left(\frac{1+\sqrt{1-c^{2}}}{\left|c\right|} \right)^{n}}\frac{\left(1-c^{2}\right)^{-t/2}}{\Gamma\left(t\right)},\text{ as }n\rightarrow\infty.\]
Proof.: i) We will use the asymptotic formula (2.3) with \(\lambda=\frac{n}{2}\), \(\alpha=\beta=\frac{t}{2}\), \(\gamma=\frac{1}{2}\), \(z=1+\frac{2}{c^{2}}>1\) and \(z+\sqrt{z^{2}-1}=e^{\zeta}=\frac{\left(1+\sqrt{1+c^{2}}\right)^{2}}{c^{2}}.\) This, together with the asymptotic formula [1, formula (6.1.39)] for the gamma function
\[\Gamma\left(aw+b\right)\sim\sqrt{2\pi}e^{-aw}\left(aw\right)^{aw+b-\frac{1}{2 }},\ \ \left(\left|\arg w\right|<\pi,a>0\right),\ \ \ \left|w\right|\rightarrow\infty \tag{3.5}\]
and the fact that the series \(\sum\limits_{s=1}^{\infty}c_{s}^{\prime}\frac{\Gamma\left(s+\frac{1}{2}\right) }{n^{s}}\) is convergent yields the following asymptotics
\[\overline{J}_{n}^{c}\left(t\right) \sim \frac{\left(\operatorname{sgn}(c)\right)^{n}2^{t}}{2^{n}\Gamma \left(t\right)\left|c\right|^{n+t}}\frac{\sqrt{2\pi}e^{-n}n^{n+t-\frac{1}{2}}} {2\pi e^{-n}\left(\frac{n}{2}\right)^{n}}\left(\frac{c^{2}}{\left(1+\sqrt{1+c ^{2}}\right)^{2}}\right)^{\frac{n+t}{2}}\] \[\times\left(1+\frac{c^{2}}{\left(1+\sqrt{1+c^{2}}\right)^{2}} \right)^{-t}\cdot\frac{\sqrt{2\pi}}{\sqrt{n}}\left(1+O\left(\frac{1}{n}\right)\right)\] \[= \left(\operatorname{sgn}(c)\right)^{n}\frac{n^{t-1}}{\left(\frac{ 1+\sqrt{1+c^{2}}}{\left|c\right|}\right)^{n}}\frac{1}{\Gamma\left(t\right)} \left(\frac{2\left(1+\sqrt{1+c^{2}}\right)}{\left(1+\sqrt{1+c^{2}}\right)^{2} +c^{2}}\right)^{t}\left(1+O\left(\frac{1}{n}\right)\right),\]
as \(n\rightarrow\infty\). This proves (3.3).
ii) Assume \(c\in\mathbb{R}\) such that \(0<\left|c\right|<1\). We apply equation (2.2) with \(\alpha=\frac{n+t}{2}\), \(\beta=\frac{n+t}{2}+\frac{1}{2}\), \(\gamma=n+1\) and \(z=c^{2}\) to deduce
\[F\left(\frac{n+t}{2},\frac{n+t}{2}+\frac{1}{2};n+1;c^{2}\right)=\left(1-c^{2} \right)^{-\frac{n+t}{2}}F\left(\frac{n+t}{2},\frac{n-t}{2}+\frac{1}{2};n+1; \frac{c^{2}}{c^{2}-1}\right),\]
hence
\[\overline{I}_{n}^{c}\left(t\right)=\frac{\left(c/2\right)^{n}\Gamma(t+n)}{n! \Gamma(t)}\left(1-c^{2}\right)^{-\frac{n+t}{2}}F\left(\frac{n+t}{2},\frac{n-t }{2}+\frac{1}{2};n+1;\frac{c^{2}}{c^{2}-1}\right). \tag{3.6}\]
We can write \(\frac{c^{2}}{c^{2}-1}=\frac{2}{1-z}\) for \(z=\frac{2-c^{2}}{c^{2}}>1\). Hence,
\[z+\sqrt{z^{2}-1}=e^{\zeta}=\frac{\left(1+\sqrt{1-c^{2}}\right)^{2}}{c^{2}},\]
for \(\zeta>0\).
Now, we proceed in the same way as above, i.e., apply the asymptotic formula (2.3) with \(\lambda=\frac{n}{2}\), \(\alpha=\beta=\frac{t}{2}\), \(\gamma=t+\frac{1}{2}\) and use the asymptotic behaviour (3.5) of the gamma function to derive (3.4) for \(0<\left|c\right|<1\).
Since \(\frac{1+\sqrt{1+c^{2}}}{\left|c\right|}>1\) for all real nonzero \(c\) and \(\frac{1+\sqrt{1-c^{2}}}{\left|c\right|}>1\) for all real nonzero \(c\) such that \(\left|c\right|<1\), it is obvious that the right-hand sides of equations (3.3) and (3.4), for a fixed positive integer \(t\), decay exponentially as \(n\rightarrow\infty\). This proves the following corollary.
**Corollary 3.4**.:
1. _For any real nonzero parameter_ \(c\) _and a fixed_ \(t\in\mathbb{N}\)_, we have that_ \(\overline{J}_{n}^{c}\left(t\right)\to 0\) _as_ \(n\to\infty\)_._
2. _For any real nonzero parameter_ \(c\)_,_ \(\left|c\right|<1\) _and a fixed_ \(t\in\mathbb{N}\)_, we have that_ \(\overline{I}_{n}^{c}\left(t\right)\to 0\) _as_ \(n\to\infty\)_._
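The rate in (3.3) can also be observed numerically; in the following sketch (ours) the printed ratios should approach \(1\) as \(n\) grows (accuracy of `hyp2f1` for very large parameters is the main caveat).

```python
# Ratio of bar-J_n^c(t) to the right-hand side of (3.3) (our sketch).
from math import factorial
import numpy as np
from scipy.special import hyp2f1, poch, gamma

def J_bar(n, t, c):
    a = (n + t) / 2.0
    return (c / 2.0) ** n * poch(t, n) / factorial(n) \
        * hyp2f1(a, a + 0.5, n + 1, -c ** 2)

def rhs(n, t, c):
    base = (1 + np.sqrt(1 + c ** 2)) / abs(c)
    return np.sign(c) ** n * n ** (t - 1) * base ** (-n) \
        * (1 + c ** 2) ** (-t / 2.0) / gamma(t)

c, t = 0.6, 4
for n in [10, 20, 40, 80]:
    print(n, J_bar(n, t, c) / rhs(n, t, c))  # tends to 1 as n grows
```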
## 4. Proofs of main results
In this section, we will prove our main results. First, we will show that our backward discrete Bessel functions satisfy the corresponding backward difference equations.
### Proof of Theorem 1.1
Consider the backward difference equation (1.5). By expanding the differences, one can obtain the following equivalent form of the equation (1.5) with the \(+\) sign
\[t\left(t+1\right)(1+c^{2})y_{n}\left(t+2\right)-t\left(2t+1\right)y_{n}\left(t +1\right)-\left(n^{2}-t^{2}\right)y_{n}\left(t\right)=0. \tag{4.1}\]
Since \(\overline{I}_{n}^{c}(t)=(-i)^{n}\,\overline{J}_{n}^{ic}(t)\), it is enough to prove that \(\overline{J}_{n}^{c}\left(t\right)\) satisfies the difference equation (4.1).
Using relation (iv) from Lemma 2.1 with \(\alpha=\frac{n+t}{2}+1,\beta=\frac{n+t}{2}+\frac{1}{2},\gamma=n+1\) and \(z=-c^{2}\), we get
\[-\left(t+\frac{1}{2}\right)F\left(\frac{n+t}{2}+1,\frac{n+t}{2}+ \frac{1}{2};n+1;-c^{2}\right)\\ -\frac{n-t}{2}F\left(\frac{n+t}{2},\frac{n+t}{2}+\frac{1}{2};n+1; -c^{2}\right)\\ +\frac{n+t+1}{2}(1+c^{2})F\left(\frac{n+t+2}{2},\frac{n+t+2}{2}+ \frac{1}{2};n+1;-c^{2}\right)=0.\]
Multiplying by \(\frac{2c^{n}}{2^{n}n!}\left(t\right)_{n+1}\) and using the definition of \(\overline{J}_{n}^{c}\left(t\right)\), we obtain
\[-t\left(2t+1\right)\overline{J}_{n}^{c}\left(t+1\right)-\left(n^{2}-t^{2} \right)\overline{J}_{n}^{c}\left(t\right)+t\left(t+1\right)(1+c^{2})\overline {J}_{n}^{c}\left(t+2\right)=0,\]
which is (4.1).
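The computation above can also be confirmed numerically; in the sketch below (ours) the residual of (4.1) evaluated on \(\overline{J}_{n}^{c}\) should vanish up to rounding.

```python
# Residual of the difference equation (4.1) on bar-J_n^c (our sketch).
from math import factorial
from scipy.special import hyp2f1, poch

def J_bar(n, t, c):
    # hypergeometric expression for bar-J_n^c(t), t >= 0, as in the text
    a = (n + t) / 2.0
    return (c / 2.0) ** n * poch(t, n) / factorial(n) \
        * hyp2f1(a, a + 0.5, n + 1, -c ** 2)

c, n = 0.4, 2
for t in range(1, 6):
    res = (t * (t + 1) * (1 + c ** 2) * J_bar(n, t + 2, c)
           - t * (2 * t + 1) * J_bar(n, t + 1, c)
           - (n ** 2 - t ** 2) * J_bar(n, t, c))
    print(t, res)  # ~ 0
```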
### Proof of Theorem 1.2
First, we will prove the asymptotic formula (1.11). Applying the identity (2.5) with \(a=(n-t)/2\), with the parameter \(c\) of (2.5) equal to \(n+1\), and with \(x=-c^{2}<0\), we get
\[F\left(\frac{n-t}{2},\frac{n-t}{2}+\frac{1}{2};n+1;-c^{2}\right)=2^{n}n!\frac{ \left(1+c^{2}\right)^{\frac{t}{2}}}{\left|c\right|^{n}}P_{-t-1}^{-n}\left( \left(1+c^{2}\right)^{-\frac{1}{2}}\right). \tag{4.2}\]
We put \(\cos\theta=\left(1+c^{2}\right)^{-\frac{1}{2}}\in\left(0,1\right)\) and use (2.6) with \(\nu=t\) and \(m=n\) to deduce
\[P_{-t-1}^{-n}\left(\cos\theta\right)=P_{t}^{-n}\left(\cos\theta\right)=\frac{ \Gamma\left(t-n+1\right)}{\Gamma\left(t+n+1\right)}P_{t}^{n}\left(\cos\theta \right). \tag{4.3}\]
Combining (4.2) and (4.3) with the asymptotic formula (2.7) (in which we take \(\nu=t\) and \(\mu=n\)) we arrive at
\[F\left(\frac{n-t}{2},\frac{n-t}{2}+\frac{1}{2};n+1;-c^{2}\right)\sim\frac{2^{n+1 }n!\left(1+c^{2}\right)^{\frac{t}{2}}}{\left|c\right|^{n}\sqrt{\pi}}\frac{ \Gamma\left(t-n+1\right)}{\Gamma\left(t+\frac{3}{2}\right)}\frac{\cos\left( \left(t+\frac{1}{2}\right)\theta-\frac{\pi}{4}+\frac{n\pi}{2}\right)}{\sqrt{2 \sin\theta}},\]
as \(t\to\infty\).
Since \(\sin\theta=\frac{\left|c\right|}{\sqrt{1+c^{2}}}\), we deduce the following asymptotic formula for \(J_{n}^{c}\left(t\right)\) as \(t\to\infty\):
\[J_{n}^{c}\left(t\right)=\frac{\left(-c/2\right)^{n}\left(-t \right)_{n}}{n!}F\left(\frac{n-t}{2},\frac{n-t}{2}+\frac{1}{2};n+1;-c^{2}\right)\] \[\sim\frac{\sqrt{2}}{\sqrt{\pi}}\left(-1\right)^{n}\left(\text{ sgn}c\right)^{n}\frac{\left(-t\right)_{n}\Gamma\left(t-n+1\right)}{\Gamma\left(t+ \frac{3}{2}\right)}\frac{\left(1+c^{2}\right)^{\frac{t}{2}+\frac{1}{4}}}{ \sqrt{\left|c\right|}}\cos\left(\left(t+\frac{1}{2}\right)\theta-\frac{\pi}{4 }+\frac{n\pi}{2}\right). \tag{4.4}\]
Using the asymptotic formula (3.5) for the gamma function, we get
\[(-1)^{n}\frac{\left(-t\right)_{n}\Gamma\left(t-n+1\right)}{\Gamma\left(t+ \frac{3}{2}\right)}=\frac{\Gamma(t+1)}{\Gamma(t+3/2)}\sim\frac{1}{\sqrt{t}},\text{ as }t\to\infty.\]
Inserting this into (4.4) we arrive at (1.11).
Now, we prove (1.12). Applying the identity (2.5) with \(a=(n+t)/2\), with the parameter \(c\) of (2.5) equal to \(n+1\), and with \(x=-c^{2}<0\), together with the identity (2.6), we get
\[F\left(\frac{n+t}{2},\frac{n+t}{2}+\frac{1}{2};n+1;-c^{2}\right)=2^{n}n!\frac {\left(1+c^{2}\right)^{-\frac{t}{2}}}{\left|c\right|^{n}}\frac{\Gamma\left(t- n\right)}{\Gamma\left(t+n\right)}P_{t-1}^{n}\left(\left(1+c^{2}\right)^{-\frac{1}{2}} \right).\]
The asymptotic formula (2.7) with \(\nu=t-1\) and \(\mu=n\) yields
\[\overline{J}_{n}^{c}\left(t\right) = \frac{\left(c/2\right)^{n}\left(t\right)_{n}}{n!}F\left(\frac{n+t }{2},\frac{n+t}{2}+\frac{1}{2};n+1;-c^{2}\right)\] \[\sim \frac{\sqrt{2}}{\sqrt{\pi}}\left(\text{sgn}c\right)^{n}\frac{ \Gamma(t)}{\Gamma\left(t+\frac{1}{2}\right)}\frac{\left(1+c^{2}\right)^{- \frac{t}{2}+\frac{1}{4}}}{\sqrt{\left|c\right|}}\cos\left(\left(t-\frac{1}{2} \right)\theta-\frac{\pi}{4}+\frac{n\pi}{2}\right),\]
as \(t\to\infty\). Applying the asymptotic formula (3.5) to the quotient \(\frac{\Gamma(t)}{\Gamma\left(t+\frac{1}{2}\right)}\) in the above display immediately yields (1.12).
It remains to prove (1.13). Assume \(c\in\mathbb{R}\), \(\left|c\right|<1\). Our starting point is equation (3.6), for which we need the asymptotics of the hypergeometric function as \(t\to\infty\). In order to do so, we write \(\frac{c^{2}}{c^{2}-1}=\frac{1}{2}\left(1-z\right)\) for \(z=\frac{1+c^{2}}{1-c^{2}}>1\). Hence
\[z+\sqrt{z^{2}-1}=e^{\zeta}=\frac{1+\left|c\right|}{1-\left|c\right|},\]
for \(\zeta>0\), which justifies the application of Watson's second asymptotic formula (2.4) with \(\alpha=\frac{n}{2}\), \(\beta=\frac{n+1}{2}\), \(\gamma=n+1\), and \(\lambda=\frac{t}{2}\). In this case \(e^{-\zeta}<1\), hence the second term in the sum in (2.4) decays exponentially as \(\lambda=t/2\to\infty\), meaning that the first sum gives the leading term in the asymptotics. We get
\[F\left(\frac{n+t}{2},\frac{n-t}{2}+\frac{1}{2};n+1;\frac{c^{2}}{c^{ 2}-1}\right)\\ \sim\frac{n!\Gamma\left(\frac{t-n+1}{2}\right)}{\pi\Gamma\left( \frac{t+n+1}{2}\right)}2^{n-1/2}\left(\frac{1+|c|}{2|c|}\right)^{n+1/2}\left( \frac{1+|c|}{1-|c|}\right)^{(t-n-1)/2}\left(\frac{\sqrt{\pi}}{\sqrt{t/2}}+O \left(\frac{1}{t}\right)\right),\]
as \(t\to\infty\). Inserting this into the formula (3.6) we deduce
\[\overline{I}_{n}^{c}\left(t\right)\sim\left(\text{sgn}(c)\right)^{n}\frac{2^{ -n}\Gamma\left(t+n\right)\Gamma\left(\frac{t-n+1}{2}\right)}{\Gamma\left(t \right)\Gamma\left(\frac{t+n+1}{2}\right)\sqrt{2\pi t\,|c|}}\left(1-|c|\right) ^{-t+\frac{1}{2}},\quad\text{as }t\to\infty.\]
Formula (3.5) yields that \(\frac{\Gamma(t+n)\Gamma\left(\frac{t-n+1}{2}\right)}{\Gamma(t)\Gamma\left( \frac{t+n+1}{2}\right)}\sim 2^{n}\) as \(t\to\infty\), which completes the proof of (1.13).
### Proof of Theorem 1.3
Let \(c\in\mathbb{C}\setminus\left\{0\right\}\) and \(n\in\mathbb{N}_{0}\). We begin by recalling the result of [10] which states that for any \(n\in\mathbb{N}\) the generating function
\[g_{n}^{c}(z):=\sum_{t=0}^{\infty}I_{n}^{c}\left(t\right)z^{t}\]
of the sequence \(\left\{I_{n}^{c}\left(t\right)\right\}_{t\in\mathbb{N}_{0}}\) is holomorphic in the disc \(|z|<\frac{1}{1+|c|}\) and possesses meromorphic continuation to the whole \(z\)-plane given by
\[g_{n}^{c}(z)=\frac{1}{\sqrt{(1-z)^{2}-c^{2}z^{2}}}\left(\frac{(1-z)-\sqrt{(1- z)^{2}-c^{2}z^{2}}}{cz}\right)^{n}.\]
According to the asymptotic relation (1.10), the Laplace transform \(\mathcal{L}_{\partial_{t}}\) of the sequence \(\left\{I_{n}^{c}\left(t\right)\right\}_{t\in\mathbb{N}_{0}}\) is a holomorphic function in the region \(|1+z|>1+|c|\) and
\[\mathcal{L}_{\partial_{t}}\{I_{n}^{c}\}(z)=\sum_{t=0}^{\infty}\frac{I_{n}^{c} \left(t\right)}{(1+z)^{t+1}}=\frac{1}{1+z}g_{n}^{c}\left(\frac{1}{1+z}\right) =\frac{c^{-n}\left(z-\sqrt{z^{2}-c^{2}}\right)^{n}}{\sqrt{z^{2}-c^{2}}}.\]
The right-hand side provides the meromorphic continuation of \(\mathcal{L}_{\partial_{t}}\{I_{n}^{c}\}(z)\) to all complex values of \(z\) such that \(z\neq\pm c\).
The obvious inequality \(|J_{n}^{c}\left(t\right)|\leq J_{n}^{|c|}\left(t\right)\) for all integers \(n,t\geq 0\), combined with the asymptotic formula (1.11) implies that the radius of convergence of the power series
\[\sum_{t=0}^{\infty}J_{n}^{c}\left(t\right)z^{t} \tag{4.5}\]
equals \(\frac{1}{\sqrt{1+\left|c\right|^{2}}}\). Therefore, the series (4.5) defines a holomorphic function \(f_{n}^{c}\left(z\right)\) in the disc \(\left|z\right|<\frac{1}{\sqrt{1+\left|c\right|^{2}}}\), which is the generating function of the sequence \(\left\{J_{n}^{c}\left(t\right)\right\}_{t\in\mathbb{N}_{0}}\).
The identity \(I_{n}^{c}\left(t\right)=\left(-i\right)^{n}J_{n}^{ic}\left(t\right)\), which is valid for all nonzero complex \(c\) and all \(n,t\geq 0\), yields \(J_{n}^{c}\left(t\right)=i^{n}I_{n}^{-ic}\left(t\right)\). Therefore, \(f_{n}^{c}(z)=i^{n}g_{n}^{-ic}(z)\), for all \(z\) in the disc \(\left|z\right|<\frac{1}{1+\left|c\right|}\leq\frac{1}{\sqrt{1+\left|c\right|^{ 2}}}\). A simple computation, amounting to algebraic manipulation and a choice of the principal branch of the square root, yields that
\[f_{n}^{c}\left(z\right)=\frac{1}{\sqrt{(z-1)^{2}+c^{2}z^{2}}}\left(\frac{z-1} {cz}+\sqrt{\frac{(z-1)^{2}}{c^{2}z^{2}}+1}\right)^{n}, \tag{4.6}\]
for any \(c\in\mathbb{C}\setminus\left\{0\right\}\), \(n\in\mathbb{N}_{0}\) and \(\left|z\right|<\frac{1}{\sqrt{1+\left|c\right|^{2}}}\). The right-hand side of (4.6) provides the meromorphic continuation of \(f_{n}^{c}\left(z\right)\) to the whole \(z\)-plane.
The asymptotic formula (1.11) ensures that the Laplace transform \(\mathcal{L}_{\partial_{t}}\) of the sequence \(\left\{J_{n}^{c}\left(t\right)\right\}_{t\in\mathbb{N}_{0}}\), as given in (2.8), is a well-defined holomorphic function in the region \(\left|1+z\right|>\sqrt{1+\left|c\right|^{2}}\) and moreover
\[\mathcal{L}_{\partial_{t}}\{J_{n}^{c}\}(z)=\sum_{t=0}^{\infty}\frac{J_{n}^{c} \left(t\right)}{(1+z)^{t+1}}=\frac{1}{1+z}f_{n}^{c}\left(\frac{1}{1+z}\right) =\frac{c^{-n}\left(\sqrt{z^{2}+c^{2}}-z\right)^{n}}{\sqrt{z^{2}+c^{2}}}.\]
The right-hand side provides the holomorphic continuation of \(\mathcal{L}_{\partial_{t}}\{J_{n}^{c}\}(z)\) to all complex values of \(z\) such that \(z\neq\pm ic\).
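One can compare a truncated version of the series against this closed form; since the alternating sum (3.1) suffers heavy cancellation in floating point for large \(t\), the sketch below (ours) evaluates \(J_{n}^{c}(t)\) in exact rational arithmetic for a rational \(c\).

```python
# Truncated series vs. closed form for L_{partial_t}{J_n^c}(z) (our sketch).
from fractions import Fraction
from math import factorial
import numpy as np

def J_exact(n, t, c):
    # formula (3.1), exact for rational c (avoids catastrophic cancellation)
    ell = (t - n) // 2
    s = Fraction(0)
    for k in range(ell + 1):
        coef = Fraction(factorial(t),
                        factorial(k) * factorial(t - 2 * k - n) * factorial(n + k))
        s += (-1) ** k * coef * (c / 2) ** (2 * k + n)
    return s

n, z = 2, 1.2
c = Fraction(1, 2)                      # |1+z| = 2.2 > sqrt(1 + c^2)
series = sum(float(J_exact(n, t, c)) / (1 + z) ** (t + 1) for t in range(n, 200))
cf = float(c)
closed = cf ** (-n) * (np.sqrt(z ** 2 + cf ** 2) - z) ** n / np.sqrt(z ** 2 + cf ** 2)
print(series, closed)  # should agree to many digits
```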
This proves the claim of the theorem for the Laplace transform \(\mathcal{L}_{\partial_{t}}\) associated to the forward difference operator. The Laplace transform for the backward difference operator is evaluated analogously.
Namely, assume \(c\in\mathbb{C}\setminus\left\{\alpha:\,\alpha\in\mathbb{R},\left|\alpha\right| \geq 1\right\}\). In [21, formula (3.28)] Kan and Shiraishi computed the generating function for the sequence \(\left\{\overline{I}_{n}^{c}\left(t\right)\right\}_{t\in\mathbb{N}_{0}}\); its meromorphic continuation to the whole \(z\)-plane is given for \(n\geq 0\) by
\[\overline{g}_{n}^{c}(z)=\frac{z}{\sqrt{(1-z)^{2}-c^{2}}}\left(\frac{\left(1-z \right)}{c}-\sqrt{\frac{\left(1-z\right)^{2}-c^{2}}{c^{2}}}\right)^{n}.\]
The definition (2.9) of the Laplace transform associated with the backward difference operator, combined with the asymptotic formula (1.13) yields that for complex values \(c\) in the unit disc and for \(z\in\mathbb{C}\) such that \(\left|1-z\right|<1-\left|c\right|\) we have
\[\mathcal{L}_{\overline{\partial}_{t}}\{\overline{I}_{n}^{c}\}(z)=\sum_{t=0}^{ \infty}\overline{I}_{n}^{c}\left(t\right)\left(1-z\right)^{t-1}=\frac{1}{1-z} \overline{g}_{n}^{c}(1-z)=\frac{c^{-n}\left(z-\sqrt{z^{2}-c^{2}}\right)^{n}}{ \sqrt{z^{2}-c^{2}}}.\]
The right-hand side of the above equation provides the meromorphic continuation of \(\mathcal{L}_{\overline{\partial}_{t}}\{\overline{I}_{n}^{c}\}(z)\) to all complex, nonzero \(c\) and all \(z\in\mathbb{C}\) with \(z\neq\pm c\).
Computation of \(\mathcal{L}_{\overline{\partial}_{t}}\{\overline{J}_{n}^{c}\}(z)\) is analogous, so we provide only the two key steps. The first step is the computation of the generating function \(\overline{f}_{n}^{c}\left(z\right)\) of the sequence \(\left\{\overline{J}_{n}^{c}\left(t\right)\right\}_{t\in\mathbb{N}_{0}}\), which follows by using the identity \(\overline{J}_{n}^{c}\left(t\right)=i^{n}\overline{I}_{n}^{-ic}\left(t\right)\) to relate \(\overline{f}_{n}^{c}\left(z\right)\) to \(\overline{g}_{n}^{-ic}\left(z\right)\) and deduce that
\[\overline{f}_{n}^{c}\left(z\right)=\frac{z}{\sqrt{(z-1)^{2}+c^{2}}}\left( \frac{z-1}{c}+\sqrt{\frac{(z-1)^{2}}{c^{2}}+1}\right)^{n}.\]
The second step is the evaluation of the Laplace transform, which follows from the observation that
\[\mathcal{L}_{\overline{\partial}_{t}}\{\overline{J}_{n}^{c}\}(z)=\frac{1}{1- z}\overline{f}_{n}^{c}(1-z)=\frac{c^{-n}\left(\sqrt{z^{2}+c^{2}}-z\right)^{n}}{ \sqrt{z^{2}+c^{2}}}.\]
## 5. Backward discrete wave equation
In this section, we will study the backward discrete wave equation
\[\overline{\partial}_{t}^{2}u\left(n;t\right)=c^{2}\left(u\left(n+1;t\right)-2 u\left(n;t\right)+u\left(n-1;t\right)\right),\;\;n\in\mathbb{Z},\;\;t\in \mathbb{N}_{0}, \tag{5.1}\]
which is the backward analogue of (1.17) and find its fundamental and general solutions under natural initial conditions. Then, we will study the asymptotic behaviour of the first fundamental solutions of both forward and backward discrete wave equations when the time variable tends to infinity.
### Fundamental solutions to the backward discrete wave equation
The first fundamental solution to the backward discrete wave equation is described in the following theorem.
**Theorem 5.1**.: _Let \(c>0\). The solution of the backward wave equation (5.1) with initial conditions_
\[u\left(n;0\right)=\left\{\begin{array}{ll}1&\mbox{if }n=0,\\ 0&\mbox{if }n\neq 0,\end{array}\right.,\;\;\;\overline{\partial}_{t}u\left(n;0 \right)=0,\;\;n\in\mathbb{Z}, \tag{5.2}\]
_is given by_
\[u\left(n;t\right)=\overline{J}_{2\left|n\right|}^{2c}\left(t\right),\;\;n\in \mathbb{Z},\;\;t\in\mathbb{N}_{0}. \tag{5.3}\]
Proof.: Let us define \(u\left(n;t\right)\) by (5.3). It is easy to verify that \(\overline{J}_{2\left|n\right|}^{2c}\left(t\right)\) satisfies initial conditions (5.2). We will now check if \(\overline{J}_{2\left|n\right|}^{2c}\left(t\right)\) satisfies (5.1). For \(n\geq 1\), using Lemma 3.2 (v) we obtain
\[\overline{\partial}_{t}u\left(n;t\right) = \overline{\partial}_{t}\overline{J}_{2n}^{2c}\left(t\right)=c \left(\overline{J}_{2n-1}^{2c}\left(t\right)-\overline{J}_{2n+1}^{2c}\left(t \right)\right),\] \[\overline{\partial}_{t}^{2}u\left(n;t\right) = \overline{\partial}_{t}^{2}\overline{J}_{2n}^{2c}\left(t\right)=c ^{2}\left(\overline{J}_{2n-2}^{2c}\left(t\right)-2\overline{J}_{2n}^{2c} \left(t\right)+\overline{J}_{2n+2}^{2c}\left(t\right)\right)\] \[= c^{2}\left(u\left(n-1;t\right)-2u\left(n;t\right)+u\left(n+1;t \right)\right).\]
Similarly, if \(n\leq-1\), we obtain
\[\overline{\partial}_{t}u\left(n;t\right) = \overline{\partial}_{t}\overline{J}_{-2n}^{2c}\left(t\right)=c \left(\overline{J}_{-2n-1}^{2c}\left(t\right)-\overline{J}_{-2n+1}^{2c}\left( t\right)\right),\] \[\overline{\partial}_{t}^{2}u\left(n;t\right) = \overline{\partial}_{t}^{2}\overline{J}_{-2n}^{2c}\left(t\right)= c^{2}\left(\overline{J}_{-2n-2}^{2c}\left(t\right)-2\overline{J}_{-2n}^{2c} \left(t\right)+\overline{J}_{-2n+2}^{2c}\left(t\right)\right)\] \[= c^{2}\left(u\left(n+1;t\right)-2u\left(n;t\right)+u\left(n-1;t \right)\right).\]
For \(n=0\), using Lemma 3.2 (ii) and (v), we obtain
\[\overline{\partial}_{t}u\left(n;t\right) = \overline{\partial}_{t}\overline{J}_{0}^{2c}\left(t\right)=-2c \overline{J}_{1}^{2c}\left(t\right),\] \[\overline{\partial}_{t}^{2}u\left(n;t\right) = \overline{\partial}_{t}^{2}\overline{J}_{0}^{2c}\left(t\right)= -2c\overline{\partial}_{t}\overline{J}_{1}^{2c}\left(t\right)=-2c^{2}\left( \overline{J}_{0}^{2c}\left(t\right)-\overline{J}_{2}^{2c}\left(t\right)\right)\] \[= c^{2}\left(u\left(1;t\right)-2u\left(0;t\right)+u\left(-1;t \right)\right).\]
Therefore, the equation (5.1) holds for all \(n\in\mathbb{Z}\) and \(t\in\mathbb{N}_{0}\).
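Because the backward difference couples \(u(\cdot;t)\) to itself through the spatial Laplacian, equation (5.1) can be time-stepped by solving a tridiagonal linear system at each step. The following simulation (ours; the lattice is truncated to \(|n|\leq N\), which is harmless since the solution decays rapidly in \(|n|\)) reproduces \(\overline{J}_{2|n|}^{2c}(t)\).

```python
# Implicit time-stepping of the backward wave equation (5.1) (our sketch):
# (1+2c^2) u(n;t) - c^2 u(n+1;t) - c^2 u(n-1;t) = 2 u(n;t-1) - u(n;t-2).
from math import factorial
import numpy as np
from scipy.special import hyp2f1, poch

def J_bar(n, t, c):
    a = (n + t) / 2.0
    return (c / 2.0) ** n * poch(t, n) / factorial(n) \
        * hyp2f1(a, a + 0.5, n + 1, -c ** 2)

c, N, T = 0.3, 40, 12
A = ((1 + 2 * c ** 2) * np.eye(2 * N + 1)
     - c ** 2 * np.eye(2 * N + 1, k=1) - c ** 2 * np.eye(2 * N + 1, k=-1))
u_prev = np.zeros(2 * N + 1); u_prev[N] = 1.0   # u(n;0) = delta_{n,0}
u_prev2 = u_prev.copy()                         # u(n;-1) = u(n;0), from (5.2)
for t in range(1, T + 1):
    u = np.linalg.solve(A, 2 * u_prev - u_prev2)
    u_prev2, u_prev = u_prev, u
for n in [0, 1, 3]:
    print(n, u_prev[N + n], J_bar(2 * abs(n), T, 2 * c))  # should match
```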
The second fundamental solution to the backward discrete wave equation is the solution \(u_{2}(n;t)\), \(n\in\mathbb{Z}\), \(t\in\mathbb{N}_{0}\), to (5.1) satisfying the initial conditions
\[u_{2}\left(n;0\right)=0,\quad\overline{\partial}_{t}u_{2}\left(n;0\right)= \left\{\begin{array}{ll}1&\text{if }n=0,\\ 0&\text{if }n\neq 0,\end{array}\right.\quad n\in\mathbb{Z}. \tag{5.4}\]
We have the following proposition.
**Proposition 5.2**.: _Let \(c>0\). The solution of the backward wave equation (5.1) with initial conditions (5.4) is given by_
\[u_{2}\left(n;t\right)=\sum_{s=0}^{t}\overline{J}_{2|n|}^{2c}\left(s\right)- \overline{J}_{2|n|}^{2c}\left(-1\right), \tag{5.5}\]
_where we assume that \(t\in\mathbb{Z}_{\geq-1}\) and set the empty sum to be identically zero._
Proof.: Let \(t\geq 1\). By definition, \(\overline{\partial}_{t}u_{2}(n;t)=\overline{J}_{2|n|}^{2c}\left(t\right)\). Now, using part (v) of Lemma 3.2, it is straightforward to check that the function \(u_{2}\) is the solution of the backward discrete wave equation (5.1).
It remains to check the initial conditions. When \(t=0\) we have \(u_{2}(n;0)=\overline{J}_{2|n|}^{2c}\left(0\right)-\overline{J}_{2|n|}^{2c}\left(-1\right)=\overline{J}_{2|n|}^{2c}\left(0\right)-J_{2|n|}^{2c}\left(1\right)\), where we used the identity \(\overline{J}_{n}^{c}(-t)=(-1)^{n}J_{n}^{c}(t)\), valid for non-negative integers \(t\). Both terms on the right-hand side are equal to zero for \(|n|\geq 1\) and equal to one when \(n=0\), which proves that \(u_{2}(n;0)=0\). Finally, \(\overline{\partial}_{t}u_{2}\left(n;0\right)=u_{2}(n;0)-u_{2}(n;-1)=\overline {J}_{2|n|}^{2c}\left(0\right)\), which is zero unless \(n=0\), in which case it equals one. This proves that \(u_{2}(n;t)\), given by (5.5), satisfies the initial conditions (5.4) and completes the proof.
### General solution to the discrete backward equation
Now that we have found two fundamental solutions to (5.1), we are in a position to find the solution to (5.1) under general initial conditions given by arbitrary bounded real sequences indexed by integers.
**Theorem 5.3**.: _For each \(c>0\) and arbitrary bounded real sequences \(\left\{u_{n}^{0}\right\}_{n\in\mathbb{Z}}\) and \(\left\{v_{n}^{0}\right\}_{n\in\mathbb{Z}}\), the general solution of the wave equation (5.1) with initial conditions_
\[u\left(n;0\right)=u_{n}^{0},\ \overline{\partial}_{t}u\left(n;0\right)=v_{n}^{0}, \ n\in\mathbb{Z}, \tag{5.6}\]
_is given by_
\[u\left(n;t\right)=\sum_{k\in\mathbb{Z}}\left(u_{k}^{0}\cdot u_{1}\left(n-k;t \right)+v_{k}^{0}\cdot u_{2}\left(n-k;t\right)\right),\ \ n\in\mathbb{Z},\ \ t\in\mathbb{N}_{0}, \tag{5.7}\]
_where \(u_{1}\left(n;t\right)=\overline{J}_{2|n|}^{2c}\left(t\right)\), and \(u_{2}\left(n;t\right)=\sum_{s=0}^{t}u_{1}\left(n;s\right)-u_{1}\left(n;-1\right)\)._
Proof.: From [30, Theorem 2.5] with the backward difference as the timescale derivative, it follows that in order to prove that the function (5.7) is the unique solution of the backward discrete wave equation (5.1) satisfying (5.6) it suffices to prove that the series on the right-hand side of (5.7) is absolutely convergent for all \(t\in\mathbb{N}_{0}\). (The proof is analogous to the proof of [30, Theorem 3.2], so we omit it here.)
According to Proposition 3.3, for \(t\in\mathbb{N}_{0}\), the numbers \(\left|\overline{J}_{2|n-k|}^{2c}\left(t\right)\right|\) decay exponentially as \(\left|k\right|\rightarrow\infty\), hence the series
\[\sum_{k\in\mathbb{Z}}u_{k}^{0}\cdot\overline{J}_{2|n-k|}^{2c}\left(t\right)\]
is absolutely convergent for every bounded real sequence \(\left\{u_{n}^{0}\right\}_{n\in\mathbb{Z}}\), fixed \(t\in\mathbb{N}_{0}\) and \(c>0\). Since \(u_{2}\left(n;t\right)\) is a finite sum of functions \(\overline{J}_{2|n-k|}^{2c}\left(t\right)\), using the same argument as above, we conclude that the series
\[\sum_{k\in\mathbb{Z}}v_{k}^{0}\cdot u_{2}\left(n-k;t\right)\]
is also absolutely convergent for every bounded real sequence \(\left\{v_{n}^{0}\right\}_{n\in\mathbb{Z}}\), fixed \(t\in\mathbb{N}_{0}\) and \(c>0\). Therefore, the general solution to the backward discrete wave equation (5.1) with initial conditions (5.6) is given by (5.7).
### Asymptotic behaviour of solutions to discrete wave equations
Applying the asymptotic formula (1.11) of Theorem 1.2 to the solution (1.19) of the wave equation (1.17) subject to the initial conditions (1.18) we easily deduce the limiting behaviour of solutions when the time variable tends to infinity, as described in the following corollary.
**Corollary 5.4**.: _For \(c>0\), the solution \(u(n;t)=J_{2|n|}^{2c}(t)\), \(n\in\mathbb{Z}\), \(t\in\mathbb{N}_{0}\) to the discrete wave equation (1.17) subject to the initial conditions (1.18) has the following asymptotic behavior_
\[u\left(n;t\right)\sim\frac{2}{\sqrt{\pi tc}}\left(1+\frac{c^{2}}{4}\right)^{ \frac{t}{2}+\frac{1}{4}}\cos\left(\left(t+\frac{1}{2}\right)\theta+\frac{|n|-1 }{4}\pi\right),\ \ \text{as $t\rightarrow\infty$.}\]
Oscillations with exponentially growing amplitude are somewhat unexpected; however, as seen in [11], in some situations solutions to a discrete semilinear wave equation can blow up in finite time (on a continuous timescale).
From Theorem 1.2 we can derive the following asymptotic behaviour of the solution to the backward discrete wave equation.
**Corollary 5.5**.: _For \(c>0\), the solution \(u\left(n;t\right)=\overline{J}_{2|n|}^{2c}\left(t\right)\), \(n\in\mathbb{Z}\), \(t\in\mathbb{N}_{0}\) to the discrete wave equation (5.1) subject to the initial conditions (5.2) has the following asymptotic behavior_
\[u\left(n;t\right)\sim\frac{2}{\sqrt{\pi t}c}\left(1+\frac{c^{2}}{4}\right)^{- \frac{t}{2}+\frac{1}{4}}\cos\left(\left(t-\frac{1}{2}\right)\theta+\frac{|n|-1 }{4}\pi\right),\ \ \text{as }t\rightarrow\infty.\]
# When Collaborative Filtering is not Collaborative: Unfairness of PCA for Recommendations

David Liu, Jackie Baek, Tina Eliassi-Rad (arXiv:2310.09687, October 2023)
###### Abstract.
We study the fairness of dimensionality reduction methods for recommendations. We focus on the established method of principal component analysis (PCA), which identifies latent components and produces a low-rank approximation via the leading components while discarding the trailing components. Prior works have defined notions of "fair PCA"; however, these definitions do not answer the following question: what makes PCA _unfair_? We identify two underlying mechanisms of PCA that induce unfairness at the item level. The first negatively impacts less popular items, due to the fact that less popular items rely on trailing latent components to recover their values. The second negatively impacts the highly popular items, since the leading PCA components specialize in individual popular items instead of capturing similarities between items. To address these issues, we develop a polynomial-time algorithm, _Item-Weighted PCA_, a modification of PCA that uses item-specific weights in the objective. On a stylized class of matrices, we prove that _Item-Weighted PCA_ using a specific set of weights minimizes a popularity-normalized error metric. Our evaluations on real-world datasets show that _Item-Weighted PCA_ not only improves overall recommendation quality by up to 0.1 item-level AUC-ROC but also improves on both popular and less popular items.
the approach of setting the weights to be inversely proportional to an item's norm is an _interpolation_ between the two benchmark algorithms. We use this weighting procedure for all of our numerical experiments.
3. We present empirical results demonstrating that our algorithm yields improved collaborative filtering recommendations compared to PCA baselines. Interestingly, we characterize how our algorithm improves recommendation quality for both popular and less popular artists.
We conclude with a discussion of limitations and recommended use cases for our algorithm.
### Relation to Fair PCA Literature
While we provide a more extensive literature review in Section 5, we believe it is important to describe the connection of our work to the existing literature that studies fairness in the context of PCA.
_Brief summary of literature._ The existing literature on fair PCA can be summarized as imposing a fairness constraint on the PCA problem and developing a new algorithm to satisfy this constraint. Specifically, existing works assume that the set of users is partitioned into pre-defined groups (e.g., race, gender). There is a series of papers (Han et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015) that define fairness as enforcing that the reconstruction error across groups of users be "balanced", for different definitions of balance. Alternatively, (Krizhevsky et al., 2015) defines the output of a PCA algorithm as fair if the group label cannot be inferred from the projected data, while (Krizhevsky et al., 2015) aims to minimize the difference in the conditional distributions of the projected data.
_Comparison to our work._ Table 1 summarizes the differences between our work and existing literature. One difference is that prior works focus on _user_-level fairness with pre-defined groups, whereas we focus on _item_-level fairness, with no reliance on group labels.
However, there is also a major difference in the _motivation_ of our work compared to existing works, which induces a distinction in the types of situations that the works apply to. Specifically, the methods from existing works address situations where an algorithm designer knows, a priori, that they would like to enforce a certain type of fairness constraint. That is, there is an _external constraint_ that deems a particular fairness notion necessary, and these fairness constraints are _generic_, in the sense that they can be defined in a general machine learning context.
On the other hand, the motivation of our work is to _identify_ unfairness issues that arise specifically from the PCA algorithm. The issues that we identify are not generic machine learning issues, and hence they would not necessarily be issues that one would be concerned about a priori. Our work helps elucidate the black-box nature of the PCA algorithm and contributes to situations where one does not have a particular fairness notion in mind but would like to understand what types of issues can arise from this specific algorithm.
Analogs of this distinction appear in other areas. For example, in prediction, the seminal work of (Krizhevsky et al., 2015) studies how to learn a classifier with an external fairness constraint (equality of opportunity). In contrast, (Krizhevsky et al., 2015; Krizhevsky et al., 2015) also study fairness in prediction, but the goal is to identify the reasons why bias may arise in a prediction setting, rather than developing algorithms that satisfy a fairness notion.
Figure 1. These figures are generated from computing vanilla PCA on the LastFM dataset for varying values of the rank \(r\). Subfigure (A) shows the normalized item error as a function of the rank for six different artists, as well as the overall error in the dotted line. Subfigure (B) shows the relationship between an artist’s popularity (weighted number of listeners) and the number of principal components needed to halve the initial item reconstruction error. Subfigure (C) shows the average diagonal value of the projection matrix outputted by PCA, where artists are grouped by their popularity.
### Background on PCA
Let \(X\in\mathbb{R}^{n\times d}\) be a matrix of preferences over \(n\) users and \(d\) items. PCA applied to \(X\) projects the matrix into an \(r\)-dimensional space yielding an approximation matrix \(\widehat{X}\), where \(r\ll d\) is a user-determined rank hyperparameter. Formally, PCA solves:
\[\begin{split}\operatorname*{argmin}_{P=UU^{T}}&\|X-XP \|_{F}^{2}\\ \text{s.t.}& U\in\mathbb{R}^{d\times r},U^{T}U=I_{r} \end{split} \tag{1}\]
The optimization is over projection matrices \(P=UU^{T}\) where the columns of \(U\in\mathbb{R}^{d\times r}\) form an orthonormal basis. The optimal projection matrix \(P^{*}\) minimizes the reconstruction error \(\|X-\widehat{X}\|_{F}^{2}\) between the original matrix and the approximation, \(\hat{X}=XP^{*}\).
Note that the approximation matrix \(\hat{X}=XP^{*}\) is equivalent to taking the \(r\)-truncated Singular Value Decomposition (SVD) of \(X\). Henceforth, when referring to collaborative filtering we refer to the problem of identifying a suitable projection matrix \(P=UU^{T}\), and we refer to the solution of Equation (1) as the _vanilla PCA_ baseline.
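A minimal implementation (ours) of the vanilla PCA baseline via the truncated SVD; the function name `vanilla_pca` and the random test matrix are our own choices for illustration.

```python
# Vanilla PCA baseline for Equation (1), computed from the truncated SVD.
import numpy as np

def vanilla_pca(X, r):
    # rows of Vt are the right singular vectors (principal item directions)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    U = Vt[:r].T            # d x r with orthonormal columns
    return U @ U.T          # the projection matrix P = U U^T

rng = np.random.default_rng(0)
X = rng.random((100, 20))   # toy stand-in for a users-by-items matrix
P = vanilla_pca(X, r=5)
X_hat = X @ P               # rank-r approximation of X
print(np.linalg.norm(X - X_hat) ** 2)   # reconstruction error
```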
## 2. Unfairness of PCA for collaborative filtering
In this section, we begin with a motivating empirical example illustrating two mechanisms in which PCA exhibits unfairness towards items for collaborative filtering. Then, we show that these mechanisms provably occur in a stylized class of matrices that represent user-item preferences.
### Empirical Example: LastFM
Our motivating empirical example uses the lastfm-2k dataset (Beng et al., 2015) which records the number of times a user of the LastFM1 music platform listened to their favorite artists. Specifically, if artist \(j\) is one of user \(i\)'s top 25 artists, then \(X_{ij}\) is the number of times user \(i\) listened to artist \(j\). Otherwise \(X_{ij}=0\). We use a dataset with \(n=920\) users and \(d=316\) artists. To account for heterogeneity in user listening volume we row-normalize the matrix. See the Experiments section for a detailed description of this dataset. We compute PCA on this matrix \(X\) for all possible values of the rank \(r\), from \(0\) to \(d\). Let \(P_{r}\in\mathbb{R}^{d\times d}\) be the projection matrix corresponding to the output of PCA for rank \(r\).
Footnote 1: [http://www.lastfm.com](http://www.lastfm.com)
We now describe two ways in which PCA induces unfairness for the items (artists).
#### 2.1.1. Mechanism 1: Unfairness for unpopular items
The overall reconstruction error, \(\|X-XP_{r}\|_{F}^{2}\), decreases as \(r\) increases in a diminishing returns fashion: see the dashed grey line in Figure 1, Subfigure A. In fact, it can be shown that the reconstruction error decreases by exactly \(\sigma_{r}^{2}\) at rank \(r\) compared to \(r-1\), where \(|\sigma_{1}|\geq\cdots\geq|\sigma_{d}|\) are the ordered singular values of \(X\) (see Theorem 8 in the Appendix).
However, this pattern of diminishing returns does not occur at the individual item level. We define the _normalized item error_ for item \(j\) as \(\|X_{j}-XP_{r,j}\|_{F}^{2}/W_{j}\), where \(X_{j}\) is the \(j\)'th column of \(X\), \(P_{r,j}\) is the \(j\)'th column of \(P_{r}\), and \(W_{j}=\|X_{j}\|_{F}^{2}\) is a normalizing factor. Subfigure A in Figure 1 plots the normalized item error for six individual artists (items), which displays the large heterogeneity in how the errors decrease as a function of the rank. For each artist, the normalized error is initially \(1\) when the rank is \(0\) since \(P=0\), and drops sharply after some threshold rank is reached, where this threshold varies greatly by the artist. Certain artists, such as Jessica Simpson, require the rank to be over \(200\) before their normalized error decreases below \(80\%\).
In general, the leading components of PCA capture the artists who are popular. Subfigure B in Figure 1 shows the relationship between artist popularity, where the popularity for artist \(j\) is \(\sum_{i=1}^{n}X_{ij}\) following row normalization, and the number of principal components needed to halve the initial reconstruction error of \(\|X_{j}\|_{2}^{2}\). The subfigure shows that the leading principal components greatly reduce reconstruction error for popular artists. The top-\(20\%\) most popular artists require \(36\) components, on average, to halve their error while the bottom \(80\%\) requires \(147\) of \(316\) components.
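The per-item diagnostics above are straightforward to compute; the sketch below (ours, on a synthetic stand-in for the listening matrix) evaluates the normalized item error \(\|X_{j}-XP_{r,j}\|_{F}^{2}/W_{j}\) for every item over a range of ranks.

```python
# Normalized item error per item and rank (our sketch).
import numpy as np

def normalized_item_errors(X, ranks):
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = np.sum(X ** 2, axis=0)              # normalizers W_j = ||X_j||^2
    out = {}
    for r in ranks:
        U = Vt[:r].T
        R = X - X @ (U @ U.T)               # residual at rank r
        out[r] = np.sum(R ** 2, axis=0) / W
    return out

rng = np.random.default_rng(0)
X = rng.random((200, 40))
for r, e in normalized_item_errors(X, [1, 5, 20]).items():
    print(r, float(e.min()), float(e.max()))  # heterogeneity across items
```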
#### 2.1.2. Mechanism 2: Unfairness for popular items
We now describe a completely different mechanism that negatively impacts popular items. The previous mechanism showed that the leading components favor the popular items. However, we find that the leading components can become _specialized_ in _individual_ items, which has undesirable consequences in the context of collaborative filtering.
Recall that PCA outputs a projection matrix \(P\in\mathbb{R}^{d\times d}\). We claim that it is undesirable for item \(j\) for the diagonal entry, \(P_{jj}\), to be close to \(1\) at low values of \(r\), which is the case for popular artists as seen in Subfigure C of Figure 1.
For an artist \(j\), the approximation of its listening count for user \(i\) is \(\hat{X}_{ij}=\sum_{k=1}^{d}X_{ik}P_{kj}\). Then, for an item \(k\neq j\), the entry \(P_{kj}\) can be interpreted as a "similarity" between items \(j\) and \(k\). A non-zero entry for \(P_{kj}\) implies that the preference towards artist \(k\) contributes to the reconstructed preference towards item \(j\).
Now, if it is the case that the diagonal entry is \(1\) (\(P_{jj}=1\)) and \(P_{kj}=0\) for all \(k\neq j\), we recover a perfect reconstruction (\(\hat{X}_{ij}=X_{ij}\)). However, this implies that the reconstructed preference of item \(j\) is simply the original preference towards item \(j\), which is not useful information in the context of collaborative filtering. This does not give us a way to infer whether a user will like item \(j\) given their preferences over other items. The diagonal entry \(P_{jj}\) being close
to \(1\) implies that most of the reconstructed value for \(\hat{X}_{ij}\) is coming from \(X_{ij}\).

| **Algorithm** | **User** | **Item** | **Labels** | **Fairness Notion** |
| --- | --- | --- | --- | --- |
| Olfat and Aswani (Olfat and Aswani, 2018; Lee et al., 2018) | ✓ | | ✓ | obfuscate group identifiability |
| Samadi et al. (Samadi et al., 2016), Tantipongpipat et al. (Tantipongpipat et al., 2017), Kamani et al. (Kamani et al., 2015), Pelegrina and Duarte (Pelegrina and Duarte, 2014) | ✓ | | ✓ | balance reconstruction error across groups |
| _Item-Weighted PCA_ | | ✓ | | improve collaborative-filtering recommendations |

Table 1. Comparison with existing papers studying fair PCA.
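The specialization mechanism just described can be reproduced on synthetic data (our illustration; the popularity boost and matrix sizes are arbitrary choices): inflating a handful of columns drives their diagonal entries \(P_{jj}\) toward \(1\) at low rank.

```python
# Diagonal entries of the PCA projection grouped by popularity (our sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 50))
X[:, :5] *= 20.0                      # make the first five items very popular
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:5].T @ Vt[:5]                 # rank-5 projection matrix
popularity = X.sum(axis=0)
order = np.argsort(-popularity)
print("mean P_jj, 5 most popular items:", np.diag(P)[order[:5]].mean())
print("mean P_jj, remaining items:    ", np.diag(P)[order[5:]].mean())
```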
### Theoretical Result
We demonstrate that PCA exhibits the above two phenomena in a class of matrices that represent user-item preferences, where a subset of items is highly popular. We consider a sequence of systems of increasing size, where both the number of users and the number of items grow. Concretely, consider a sequence of matrices \(\{X_{n}\}_{n\geq 1}\), where \(X_{n}\in\{0,1\}^{n\times d_{n}}\) and \(d_{n}=o(n)\). The \((i,j)\)'th entry of \(X_{n}\) is \(1\) if user \(i\) likes item \(j\), and \(0\) otherwise.
We assume that the items can be partitioned into two classes: popular items and unpopular items. We assume that the first \(M_{n}\) items are the popular items for \(X_{n}\), for \(M_{n}\leq d_{n}\), that satisfy the following assumption.
Assumption A (Popular items).: _Let \(X_{n}^{\prime}\in\{0,1\}^{n\times d_{n}}\) be a copy of \(X_{n}\) where all entries in columns \(j>M_{n}\) are set to zero. Then, we assume that the \(M_{n}\)'th largest singular value of \(X_{n}^{\prime}\), which we denote by \(s_{M_{n}}(X_{n}^{\prime})\), grows as \(\Omega(\sqrt{n})\)._
Note that Assumption A is satisfied with high probability if all entries of \(X_{n}^{\prime}\) are i.i.d. mean zero subgaussian random variables with unit variance; see Theorem 1.1 in Rudelson and Vershynin (2005) and Figure 6 in the Appendix for empirical validation.
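As a quick empirical sanity check of this growth rate (a sketch under our own choice of constants, not the validation from Figure 6), one can track the \(M\)'th singular value of random matrices of increasing height:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 20                                    # number of popular items
for n in [1_000, 4_000, 16_000]:
    Xp = rng.standard_normal((n, M))      # i.i.d. mean-zero, unit-variance entries
    s = np.linalg.svd(Xp, compute_uv=False)
    print(n, s[M - 1] / np.sqrt(n))       # ratio stabilises, i.e. s_M = Theta(sqrt(n))
```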
Next, we assume that for unpopular items, the number of users that like the item is bounded by a constant.
Assumption B (Unpopular items). _There exists a constant \(K\) such that for all \(n\), \(\sum_{i=1}^{n}(X_{n})_{i,j}\leq K\) for any \(j>M_{n}\)._
Then, we show that PCA on the matrix \(X_{n}\) using the top \(M_{n}\) principal components admits the two undesirable mechanisms. Let \(I_{n,M_{n}}\in\mathbb{R}^{d_{n}\times d_{n}}\) be the matrix where all entries are zero except for the first \(M_{n}\) diagonal entries, which are \(1\).
Theorem 1. _Let \(P_{n}\in\mathbb{R}^{d_{n}\times d_{n}}\) be the projection matrix given by performing PCA on matrix \(X_{n}\), taking the largest \(M_{n}\) principal components. Then, \(||P_{n}-I_{n,M_{n}}||_{F}\to 0\) as \(n\to\infty\)._
Theorem 1 states that as the system gets large, the projection matrix outputted by PCA with \(M_{n}\) components converges to the matrix \(I_{n,M_{n}}\). The projection matrix being \(P=I_{n,M_{n}}\) demonstrates both undesirable mechanisms. The proof, given in the Appendix, makes use of the Davis-Kahan theorem from perturbation theory.
Firstly, all columns \(j>M_{n}\), which represent the less popular items, are the \(0\) vector in the projection matrix; i.e., the projection does not contain _any_ information about item \(j\). Then, the reconstruction \(\hat{X}_{j}\) will also be the \(0\) vector; that is, the reconstructed preference of every user for every unpopular item is outputted as \(0\).
Next, fix a popular item \(j\leq M_{n}\). Then, column \(j\) of the projection matrix approaches \(e_{j}\), the unit vector with \(1\) in the \(j\)'th entry. Then, the reconstruction of the preference of user \(i\) for item \(j\), \(\hat{X}_{ij}\), is exactly \(X_{ij}\). That is, the reconstruction for the \((i,j)\)'th entry just "reads" the value that was there in the original matrix. This is a perfect reconstruction, but it provides no useful information in the context of collaborative filtering: the reconstruction only assigns non-zero values to entries that already existed in the original matrix, which does not serve the purpose of using this method as a recommendation tool. A projection matrix that is useful for recommendations should contain many non-zero entries for column \(j\): then, the preference of user \(i\) towards item \(j\) can be inferred through the existing preferences of user \(i\) towards _other_ items \(k\neq j\).
## 3. Item-Weighted PCA
We propose an algorithm named _Item-Weighted PCA_ that counters the unfairness mechanisms introduced in the previous section. We will formally state the problem we aim to solve and present _Item-Weighted PCA_ as an algorithm solving it. Then, on a stylized class of matrices, we provide a theoretical justification for this approach, and we also show that two baseline approaches are special cases of _Item-Weighted PCA_.
### Algorithm Description
#### 3.1.1. Problem Statement
Let \(X\in\mathbb{R}^{n\times d}\) be an input matrix, where entries can be positive or negative and missing values are set to zero, \(r\leq\min\{n,d\}\) be a rank parameter, and \(S\in\{-1,0,+1\}^{n\times d}\) be the _sign matrix_ of \(X\), where \(S_{ij}=1\) for positive \(X_{ij}\), \(-1\) for negative \(X_{ij}\), and \(0\) when \(X_{ij}=0\). Let \(w_{j}\geq 0\) for \(j\in[d]\) be item-specific weights. We aim to solve the following problem:
\[\operatorname*{argmax}_{P=UU^{T}}\quad\sum_{j=1}^{d}w_{j}\,\langle S_{j},\hat{X}_{j}\rangle\tag{2}\]
\[\text{s.t.}\quad U\in\mathbb{R}^{d\times r},\ U^{T}U=I\tag{3}\]
where \(\hat{X}_{ij}=\langle X_{i},P_{j}\rangle\) for all \(i,j\), with \(X_{i}\) the \(i\)'th row of \(X\) and \(P_{j}\) the \(j\)'th column of \(P\).
Note that the weights \(w_{j}\) must be given as input. In all of our experiments, we use the weights \(w_{j}=1/||S_{j}||_{2}\). In Section 3.2, we study a simple class of matrices where we specify how the weights should be chosen.
#### 3.1.2. Algorithm
We propose the algorithm _Item-Weighted PCA_, which solves (2)-(3) by relaxing the feasible set. Instead of constraining to projection matrices \(P=UU^{T}\), _Item-Weighted PCA_ relaxes to optimize over positive semi-definite matrices (PSD) with bounded trace and eigenvalues and solves for an extreme-point optimal solution to the following Semi-Definite Program (SDP):
\[\operatorname*{argmax}_{P}\quad\sum_{j=1}^{d}w_{j}\,\langle S_{j},\hat{X}_{j}\rangle\tag{4}\]
\[\text{s.t.}\quad\operatorname{tr}\left(P\right)\leq r,\quad 0\preceq P\preceq I\tag{5}\]
We observe that the set of PSD matrices with trace \(\leq r\) and eigenvalues \(\in[0,1]\) is a superset of rank \(r\) projection matrices. In the Appendix, we prove that the extreme-point optimal solution _Item-Weighted PCA_ yields is indeed a projection matrix and thus solves the original problem.
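A minimal sketch of the SDP relaxation (4)-(5) using cvxpy; the default weights \(w_{j}=1/\|S_{j}\|_{2}\) follow the paper, while the function name, the solver choice, and the omission of the extreme-point rounding step are our simplifications.

```python
import cvxpy as cp
import numpy as np

def item_weighted_pca_sdp(X, r, w=None):
    n, d = X.shape
    S = np.sign(X)                                 # sign matrix of X
    if w is None:                                  # default item weights
        w = 1.0 / np.maximum(np.linalg.norm(S, axis=0), 1e-12)
    P = cp.Variable((d, d), symmetric=True)
    X_hat = X @ P                                  # linear in P
    # objective (4): sum_j w_j <S_j, X_hat_j>
    obj = cp.Maximize(cp.sum(cp.multiply(w, cp.sum(cp.multiply(S, X_hat), axis=0))))
    # constraints (5): trace bound, eigenvalues in [0, 1]
    cons = [cp.trace(P) <= r, P >> 0, np.eye(d) - P >> 0]
    cp.Problem(obj, cons).solve(solver=cp.SCS)
    return P.value
```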
Theorem 2. _Item-Weighted PCA is a polynomial-time algorithm to solve the optimization problem of (2)-(3)._
#### 3.1.3. Discussion
We now describe the intuition and motivation of this algorithm, and in Section 3.2, we provide a theoretical justification for a special class of matrices.
Recall that vanilla PCA finds the projection matrix of rank \(r\) that minimizes the overall reconstruction error, \(||X-XP||_{F}^{2}\). Then, _Item-Weighted PCA_ makes the following modifications to vanilla PCA:
- (a) We use the _sign matrix_ \(S\) of \(X\), which discards the magnitude of the original entries.
- (b) The objective function uses _item-specific weights_ \(w_{j}\).
- (c) Instead of minimizing the reconstruction error between the columns \(S_{j}\) and \(\hat{X}_{j}\), we maximize the inner product between the two vectors.
_Modification (a)_. The motivation for (a) is a normalization of the original matrix that aligns with the downstream goal of _recommendations_, rather than _reconstruction_. That is, the goal of recommendations is to identify the \((i,j)\) pairs where user \(i\) would enjoy item \(j\), rather than reconstructing the exact entries \(X_{ij}\). Because the entry magnitudes often vary greatly across users and contain outliers, using the sign matrix effectively introduces a normalization across all entries.
_Modification (b)_. The item-specific weights aim to address both of the unfairness mechanisms. Suppose we use the weights \(w_{j}=1/||S_{j}||_{2}\), which we use for all of our experiments. Since less popular items have a smaller norm, this normalization up-weights these items, directly addressing the issue of unfairness towards less popular items (Mechanism 1). Moreover, this normalization also _down-weights_ the significance of the highly popular items in the objective, which also addresses Mechanism 2. Recall that Mechanism 2 occurs when one of the components of PCA _specializes_ in representing a _single_ item. Since all items are effectively treated equally in (2), if the number of components is small (i.e. rank is small), then one cannot "afford" to dedicate one component to a single item - it is more efficient if each component contained information about multiple items.
_Modification (c)_. Given modification (b), a natural alternative objective would be to keep the same error metric as vanilla PCA (square of entry-wise differences), with column-specific weights \(w_{j}\), minimizing the least squares objective \(\sum_{j=1}^{d}w_{j}||S_{j}-\hat{X}_{j}||_{2}^{2}\). Unfortunately, this objective does not yield a computationally efficient method. The allure of the objective (2) is that it is _linear_ in \(P\), which is not the case in the least squares objective; hence Theorem 2 would not hold. Therefore, the motivation for (c) is strictly for computational efficiency.
One interpretation of the \(w_{j}\langle S_{j},\hat{X}_{j}\rangle\) term in the objective (2) is as an approximation to the _cosine similarity_ between columns \(S_{j}\) and \(\hat{X}_{j}\). The exact cosine similarity would include an additional \(1/||\hat{X}_{j}||_{2}\) term, hence using the exact cosine similarity would incorporate non-linearities into the objective, which would again be undesirable.
Note that because of modification (c), the objective (2) does not at all aim to reconstruct the original matrix \(X\). However, it is possible to add constraints to enforce a small error if desired. Suppose \(E_{r}=||\hat{X}^{\text{PCA}}-X||_{F}^{2}\) is the reconstruction error of the vanilla PCA solution (which is the smallest possible reconstruction error). Then, one can add a constraint to the optimization (2)-(3) of the form \(||\hat{X}-X||_{F}^{2}\leq(1+\alpha)E_{r}\) for some parameter \(\alpha>0\), so that the reconstruction error of the output \(\hat{X}\) is at most a \((1+\alpha)\) factor of \(E_{r}\). In the Appendix, we show that Theorem 2 holds with the added constraint.
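In the sketch above, this cap is a convex constraint that could hypothetically be appended to the constraint list `cons` before solving; \(E_{r}\) follows from the SVD of \(X\), and the value of `alpha` below is our illustrative choice:

```python
# Vanilla-PCA error E_r is the energy outside the top-r singular values.
_, s, _ = np.linalg.svd(X, full_matrices=False)
E_r = float(np.sum(s[r:] ** 2))
alpha = 0.1
cons.append(cp.sum_squares(X - X @ P) <= (1 + alpha) * E_r)
```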
### Theoretical Result and Comparison with Baseline Algorithms
We show that for a stylized class of matrices, _Item-Weighted PCA_ yields the optimal solution to a popularity-normalized loss function. For the same class of matrices, we show that two baseline PCA algorithms are instantiations of _Item-Weighted PCA_ with a specific set of weights. We then instantiate _Item-Weighted PCA_ with weights that interpolate between the two baselines. In a specific setting, such an instantiation of _Item-Weighted PCA_ balances popular and unpopular items, while the baselines offer two extremal solutions. The proofs for all propositions and theorems are included in Appendix A.3.
#### 3.2.1. Optimality of Item-Weighted PCA
As in Theorem 1, we consider binary preference matrices \(X\in\{0,1\}^{n\times d}\) in which there are \(d_{p}\) popular items, corresponding to columns \(I_{p}=\{1,\ldots,d_{p}\}\) and \(d_{u}\) unpopular items, corresponding to columns \(I_{u}=\{d_{p}+1,\ldots,d\}\).
We make the following assumption on \(X\):
Assumption C (Exclusivity). _Each user likes either only popular items or only unpopular items._
Let \(\mathcal{B}\) be the set of binary matrices that satisfy Assumption C. By constraining users to like only one class of items, we ensure that individual principal components correspond exclusively to either popular or unpopular items.
In light of the imbalance in item popularities, given a matrix \(X\), we introduce the following objective function that normalizes item reconstruction error by group popularity, quantified as the number of ratings for all items in the group:
\[l(P)=\left(\frac{||X_{p}-\widehat{X}_{p}||_{F}}{||X_{p}||_{F}}\right)^{2}+ \left(\frac{||X_{u}-\widehat{X}_{u}||_{F}}{||X_{u}||_{F}}\right)^{2}\quad \text{where}\quad\widehat{X}=XP \tag{6}\]
In Equation (6), \(X_{p}\) denotes a copy of \(X\) with the entries for all unpopular items set to zero, and \(X_{u}\) denotes a copy of \(X\) with the entries for all popular items set to zero.
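Evaluating this loss is straightforward; a numpy sketch (the boolean column mask is our own encoding of the popular/unpopular split):

```python
import numpy as np

def popularity_adjusted_loss(X, P, popular_cols):
    """Equation (6); popular_cols is a boolean mask over the d columns."""
    X_hat = X @ P
    loss = 0.0
    for mask in (popular_cols, ~popular_cols):
        Xg, Xg_hat = X * mask, X_hat * mask      # zero out the other group
        loss += (np.linalg.norm(Xg - Xg_hat) / np.linalg.norm(Xg)) ** 2
    return loss
```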
Theorem 3 (Item-Weighted PCA Optimality). _For \(X\in\mathcal{B}\), Item-Weighted PCA yields the optimal solution for the popularity-adjusted loss function in Equation (6) when \(w_{j}=||X_{p}||^{-2}\ \forall j\in I_{p}\) and \(w_{j}=||X_{u}||^{-2}\ \forall j\in I_{u}\)._
Henceforth, we will call the weights in Theorem 3 the _proper_ weights \(w_{j}\).
Remark 1. _For \(X\in\mathcal{B}\), there is a closed-form solution minimizing Equation (6), but for general \(X\) there is not._
#### 3.2.2. Baseline Algorithms as a Special Case
We compare against two baselines: vanilla PCA, and column-normalized PCA, which scales each column of \(X\) to unit norm before performing vanilla PCA. In the case of matrices in \(\mathcal{B}\), we can interpret vanilla PCA and column-normalized PCA as specific instantiations of _Item-Weighted PCA_ given:
Assumption D (Constant Popularity). _For all popular items, there are \(n_{p}\) users that like the item, and for all unpopular items, there are \(n_{u}\) users that like the item, where \(n_{u}<n_{p}\)._
Proposition 4. _For \(X\in\mathcal{B}\) satisfying Assumption D, vanilla PCA instantiates Item-Weighted PCA with \(w_{j}=1\ \forall j\in[d]\), and column-normalized PCA instantiates Item-Weighted PCA with \(w_{j}=n_{p}^{-1}\ \forall j\in I_{p}\) and \(w_{j}=n_{u}^{-1}\ \forall j\in I_{u}\)._
Placing the instantiations in the context of the proper weights identified in Theorem 3, we see that vanilla PCA yields the proper weights when \(d_{u}=\frac{n_{p}}{n_{u}}d_{p}\). Column-normalized PCA is optimal when \(d_{p}=d_{u}\). As the baselines use weights that are not a function of \(d_{p}\), \(d_{u}\), which are generally unknown, the baselines are suboptimal in minimizing Equation (6) in all other settings.
To show that _Item-Weighted PCA_ provides a flexible framework, we define an instantiation that interpolates between vanilla PCA and column-normalized PCA. Let _Interpolate-Item-Weighted PCA_ be the instantiation in which \(w_{j}=n_{p}^{-1/2}\ \forall j\in I_{p}\) and \(w_{j}=n_{u}^{-1/2}\ \forall j\in I_{u}\).
We now provide a concrete instance in which _Interpolate-Item-Weighted PCA_ balances popular and unpopular items while the baselines yield extreme, undesirable outcomes. For the specific example, we introduce an additional assumption:
Assumption E (Exponential Decay). _\(X_{p}^{T}X_{p}\) and \(X_{u}^{T}X_{u}\) are both of rank \(r\) and their respective eigenvalues decay exponentially such that, for each matrix, the \(i\)'th largest eigenvalue is \(\lambda_{i}=\beta^{-(i-1)}\lambda_{1}\), where \(\beta>1\) and \(i\leq r\)._
Theorem 5. _For any binary preference matrix \(X\in\mathcal{B}\) satisfying Assumption E, if \(\frac{n_{u}}{n_{p}}<\beta^{-2(r-1)}\) and \(d_{u}=\sqrt{\frac{n_{p}}{n_{u}}}d_{p}\), then the leading \(r\) vanilla PCA components are \(V_{p}\), the eigenvectors of \(X_{p}^{T}X_{p}\), and the leading \(r\) column-normalized PCA components are \(V_{u}\), the eigenvectors of \(X_{u}^{T}X_{u}\). For Interpolate-Item-Weighted PCA, half of the leading components are in \(V_{p}\) and the other half in \(V_{u}\)._
Theorem 5 states that when the popularity gap is large enough and there are sufficiently many unpopular items, for a rank \(r\) projection, vanilla PCA only reconstructs popular items whereas column-normalized PCA only reconstructs unpopular items. _Interpolate-Item-Weighted PCA_, on the other hand, reconstructs both popular and unpopular items in parallel. We observe that the above conditions mimic real-world settings in which there is a long tail of unpopular items.
## 4. Experiments
### Datasets
_LastFM._ We use the lastfm-2k dataset of user listening counts introduced in our motivating example where entry \(ij\) is the number of times user \(i\) listened to artist \(j\) if artist \(j\) is one of user \(i\)'s top-25 most-listened artists, otherwise \(X_{ij}=0\). We filter the dataset to keep only artists with at least 50 top listeners and then users with at least 20 listening counts among the remaining artists, leaving a \(920\times 316\) matrix. We row normalize the listening counts for all users.
_MovieLens._ We use the MovieLens-1M dataset, in which users provide ratings for movies on a scale from \(1\) to \(5\).
normalization baseline, especially for \(r\in[50,250]\). For MovieLens, normalizing the columns performs comparably with our algorithm.
#### 4.3.3. Performance by Item Popularity
_Item-Weighted PCA_ is able to increase user-classification performance for all item popularity groups, instead of increasing performance for one group at the expense of another. Figure 4 shows that the user-classification performance increased for all popularity groups relative to vanilla PCA. The popularity groups were defined to be of approximately equal size.
The collective benefit illustrates the limitations of vanilla PCA. The recommendation quality for high-popularity items is lowest under vanilla PCA for both datasets because these items rely heavily on the diagonal values of the projection matrix and rely less on item similarities. By limiting the focus on any individual item, the overall item-level similarities captured in \(P\) are improved, which benefits items of all popularity levels.
#### 4.3.4. Robustness to Missing Data
We also assess our algorithm's robustness to missing data. In Figure 5, we plot the recommendation performance as training data points are gradually set to \(0\), where \(\alpha\) is the fraction of training data points that have been uniformly randomly removed, for a fixed value of \(r=106\). In the case of LastFM, our algorithm outperforms both baselines for \(\alpha<0.6\). For MovieLens, our algorithm is not as robust as column normalization, though all three algorithms perform similarly for all values of \(\alpha\). We chose to fix \(r=106\) because our algorithm outperforms the baselines in Figure 3 when all data are available. In the Appendix, we include robustness results for all values of \(r\).
## 5. Related Work
### Fair PCA
We provide further background on past fair PCA works that balance reconstruction errors across groups of users (Han et al., 2016; Wang et al., 2017; Wang et al., 2018). Many existing approaches solve a convex optimization problem of the following structure: let \(X_{g}\) denote the sub-matrix of \(X\) comprising all individuals in group \(g\in\{1,2,\cdots,G\}\), \(f_{P}\) be the reconstruction error using projection matrix \(P\), \(A\) be an aggregation function, and \(U\) be an \(n\times r\) matrix with orthonormal columns; then, existing fair PCA algorithms can be generalized as:
\[\underset{P=UU^{T}}{\operatorname{argmin}}\quad A\left(f_{P}\left(X_{1} \right),f_{P}\left(X_{2}\right),\cdots,f_{P}\left(X_{G}\right)\right) \tag{7}\]
By considering the reconstruction error of individual groups, existing fair PCA algorithms can ensure more balanced approximation quality. Common instances of the aggregation function \(A\) are the max function or the product function \(\prod_{g=1}^{G}f_{P}\left(X_{g}\right)\). A cost of the above convex optimization approach is that the solution projection matrix is not guaranteed to be of rank \(r\), where the rank increase is a function of the number of groups. More recent work has also presented non-convex optimization methods (Kang et al., 2018).
Figure 3. For both LastFM and MovieLens, _Item-Weighted PCA_ improves the ability of collaborative filtering to identify relevant listeners for each artist.
Figure 2. _Item-Weighted PCA (solid) reduces the unfairness mechanism identified in vanilla PCA (dashed) in which leading components specialize in individual items. High diagonal entries suggest specialization._
### Trustworthy Recommender Systems
Fairness in the context of recommender systems and rankings has frequently been posed as a two-sided problem, balancing the interests of users and items/producers (Friedman, 2011; Krizhevsky et al., 2012). On the user side, fairness definitions typically center on user utility either at the group level, defined by demographics (Krizhevsky et al., 2012), or at more granular levels, such as the notion of envy-freeness (Krizhevsky et al., 2012; Krizhevsky et al., 2012), which states that no user should prefer another user's recommendations. In contrast, our work is more connected to notions of item fairness which are defined in terms of item exposure (Beng et al., 2015).
Additional prior work has focused on improving long-tail recommendations. Because many recommendation datasets feature a large number of items but a small number of highly popular "head" items, recommender systems are prone to popularity bias in disproportionately recommending popular items (Krizhevsky et al., 2012). Over time this can lead to a "rich getting richer" effect, which is undesirable because many of the unpopular "tail" items may be desirable (Krizhevsky et al., 2012). While many trustworthy recommender systems works are focused on introducing new exposure to unpopular items, our work is more focused on preserving existing preferences for less popular items.
## 6. Limitations
We discuss several known and potential limitations of our algorithm _Item-Weighted PCA_. First, the SDP has a runtime complexity of \(\mathcal{O}\left(d^{6.5}\right)\)(Beng et al., 2015), which means that _Item-Weighted PCA_ can be prohibitively slow for large values of \(d\). Second, it is possible that _Item-Weighted PCA_ can overfit the input matrix in cases where the solution matrix is used to project out-of-sample matrices. Last, compared to vanilla PCA, the projection components are not ordered, so it is not possible to deduce \(P_{r}\) from \(P_{r+1}\).
## 7. Conclusion
By analyzing within the context of collaborative filtering and recommender systems, we identify two mechanisms of unfairness in PCA. First, information relevant to less popular items is lacking in the leading components. Second, the leading components specialize in individual popular items instead of capturing similarities between items. These mechanisms arise from heterogeneity in item popularities and do not require external group labels to analyze. We illustrate the consequences of these mechanisms in a motivating real-world example and show that the mechanisms provably occur in a stylized setting. To mitigate unfairness, we introduce an algorithm, _Item-Weighted PCA_, that is designed to preserve user preferences for both popular and less popular items. _Item-Weighted PCA_ is optimal in a stylized setting and our evaluations show that _Item-Weighted PCA_ not only improves recommendations in aggregate but benefits both popular and less popular items.
Figure 4. _Item-Weighted PCA (solid) is able to improve recommendation performance for items of all popularity levels relative to vanilla PCA (dashed). The improvement arises from projection matrices that better capture item similarities for collaborative filtering._
Figure 5. The above charts show the robustness of the PCA algorithms as training examples are gradually removed at a fixed value of \(r=106\). \(\alpha\) is the fraction of training examples that are removed (set to zero). _Item-Weighted PCA is more robust than both baselines for LastFM and performs comparably with the baselines for MovieLens. |
2301.10497 | E(n)-equivariant Graph Neural Cellular Automata | Cellular automata (CAs) are computational models exhibiting rich dynamics
emerging from the local interaction of cells arranged in a regular lattice.
Graph CAs (GCAs) generalise standard CAs by allowing for arbitrary graphs
rather than regular lattices, similar to how Graph Neural Networks (GNNs)
generalise Convolutional NNs. Recently, Graph Neural CAs (GNCAs) have been
proposed as models built on top of standard GNNs that can be trained to
approximate the transition rule of any arbitrary GCA. Existing GNCAs are
anisotropic in the sense that their transition rules are not equivariant to
translation, rotation, and reflection of the nodes' spatial locations. However,
it is desirable for instances related by such transformations to be treated
identically by the model. By replacing standard graph convolutions with
E(n)-equivariant ones, we avoid anisotropy by design and propose a class of
isotropic automata that we call E(n)-GNCAs. These models are lightweight, but
can nevertheless handle large graphs, capture complex dynamics and exhibit
emergent self-organising behaviours. We showcase the broad and successful
applicability of E(n)-GNCAs on three different tasks: (i) pattern formation,
(ii) graph auto-encoding, and (iii) simulation of E(n)-equivariant dynamical
systems. | Gennaro Gala, Daniele Grattarola, Erik Quaeghebeur | 2023-01-25T10:17:07Z | http://arxiv.org/abs/2301.10497v1 | # E(\(n\))-Equivariant Graph Neural Cellular Automata
###### Abstract
Cellular automata (CAs) are computational models exhibiting rich dynamics emerging from the local interaction of cells arranged in a regular lattice. Graph CAs (GCAs) generalise standard CAs by allowing for arbitrary graphs rather than regular lattices, similar to how Graph Neural Networks (GNNs) generalise Convolutional NNs. Recently, Graph Neural CAs (GNCAs) have been proposed as models built on top of standard GNNs that can be trained to approximate the transition rule of any arbitrary GCA. Existing GNCAs are anisotropic in the sense that their transition rules are not equivariant to translation, rotation, and reflection of the nodes' spatial locations. However, it is desirable for instances related by such transformations to be treated identically by the model. By replacing standard graph convolutions with E(\(n\))-equivariant ones, we avoid anisotropy by design and propose a class of isotropic automata that we call E(\(n\))-GNCAs. These models are lightweight, but can nevertheless handle large graphs, capture complex dynamics and exhibit emergent self-organising behaviours. We showcase the broad and successful applicability of E(\(n\))-GNCAs on three different tasks: (i) pattern formation, (ii) graph auto-encoding, and (iii) simulation of E(\(n\))-equivariant dynamical systems.
Machine Learning, ICML
## 1 Introduction
The design of collective intelligence, i.e. the ability of a group of simple agents to collectively cooperate towards a unifying goal, is a growing area of machine learning research aimed at solving complex tasks through _emergent computation_(Ha & Tang, 2022). The interest in these techniques stems from their striking similarity to real biological systems--such as insect swarms and bacteria colonies--and from their natural scalability as distributed systems (Mitchell, 2009).
Cellular automata (CAs) (von Neumann, 1963) represent a natural playground for studying collective intelligence and morphogenesis (shape-forming processes), because of their discrete-time and Markovian dynamics (Turing, 1990). CAs are computational models inspired by the biological behaviors of cellular growth. As such, they are capable of producing complex emergent _global_ dynamics from the iterative, possibly asynchronous application of _localized_ transition rules (aka update rules), that can but do not need to have an analytical formulation (Adamatzky, 2010).
Research on applying neural nets for learning and designing CA rules can be traced back to Wulff & Hertz (1992), with subsequent notable contributions by Elmenreich & Fehervari (2011), Nichele et al. (2017), and Gilpin (2019). Recently, Neural Cellular Automata (NCAs) have been proposed as CAs with transition rules encoded as--typically light-weight--neural networks. They have been successfully applied for designing self-organizing systems for morphogenesis in 2D and 3D (Mordvintsev et al., 2020; Sudhakaran et al., 2021), image generation and classification (Palm et al., 2022; Randazzo et al., 2020), and reinforcement learning
Figure 1: E(\(n\))-GNCA commutative diagram: For any number of time steps \(t\) the transition rule \(\tau_{\theta}\) is run, output coordinates \(\mathbf{X}^{\prime}\) and output node features \(\mathbf{H}^{\prime}\) are respectively E(\(n\))-equivariant and E(\(n\))-invariant to rigid transformations of input coordinates \(\mathbf{X}\). Node features are represented with 3 colored dots attached to each node.
(Huang et al., 2020). This line of work has a common theme: It assumes a fixed discrete geometry for the CA cells, which are typically arranged in \(n\)-dimensional, equispaced, and oriented lattices.
Subsequently, Grattarola et al. (2021) introduced GNCAs (Graph NCAs) by extending NCAs to the general setting of graphs, and showed that Graph Neural Networks are natural and universal engines for learning any desired transition rule. However, their formulation does not consider the possible symmetries in the state space, instead relying on a fixed frame of reference even for states representing spatial information like position and velocity. Further, their architecture does not allow nodes to have hidden states, which have been proven to be useful for perception and for keeping track of evolution history (Mordvintsev et al., 2020).
By building on Satorras et al.'s (2021) work on E(\(n\))-equivariant Graph Neural Networks (EGNNs), we overcome these limitations and present GNCAs that respect isometries in the state space _by design_. We name the resulting class of models E(\(n\))-GNCAs. In section 4, we showcase the broad, successful applicability of E(\(n\))-GNCAs on three different tasks: (i) pattern formation, (ii) graph auto-encoding, and (iii) simulation of E(\(n\))-equivariant dynamical systems.
## 2 Preliminaries and Related Work
In this section, we introduce necessary concepts of and relevant prior work on graphs, cellular automata, and (equivariant) graph neural networks. These support and contextualize the definition of the class of models we propose.
**Graphs.** A graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) consists of an unordered set of nodes \(\mathcal{V}=\{1,\ldots,|\mathcal{V}|\}\) and a set of edges \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\). Its neighbourhood function \(\mathcal{N}\) is defined for every node \(i\in\mathcal{V}\) by \(\mathcal{N}(i)=\{j\in\mathcal{V}:(i,j)\in\mathcal{E}\}\). A graph can be equivalently defined with an adjacency matrix \(A\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\), where \(A_{ij}\) is \(1\) if and only if \((i,j)\in\mathcal{E}\).
We can attach a state \(\mathbf{s}_{i}\in\mathcal{S}\) to each node \(i\) and an attribute \(\mathbf{e}_{ij}\in\mathcal{A}\) to each edge \((i,j)\), where for now we leave the state space \(\mathcal{S}\) and attribute set \(\mathcal{A}\) unspecified. A node state \(\mathbf{s}_{i}\) typically consists of components such as location \(\mathbf{x}_{i}\), velocity \(\mathbf{v}_{i}\), and node features \(\mathbf{h}_{i}\). Jointly for all nodes or edges, we write **S**--with components **X**, **V**, and **H**--and **E**, which implicitly carry with them the underlying graph.
### Graph (Neural) Cellular Automata
**Graph Cellular Automata.** A _Graph Cellular Automaton_ (GCA) is a triple \((\mathcal{G},\mathcal{S},\tau)\), where \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is a graph and \(\mathcal{S}\) is a discrete or continuous state space. The map \(\tau:\mathcal{S}\times 2^{\mathcal{S}}\rightarrow\mathcal{S}\) is used as a _local_ transition rule to update the state \(\mathbf{s}_{i}\in\mathcal{S}\) of each of the graph's nodes \(i\in\mathcal{V}\) as a function of its current state and its neighbours' states:
\[\mathbf{s}_{i}^{\prime}=\tau\big{(}\mathbf{s}_{i},\{\mathbf{s}_{j}:j\in \mathcal{N}(i)\}\big{)}. \tag{1}\]
We compactly write \(\mathbf{S}^{\prime}=\tau(\mathbf{S})\) to indicate the synchronous application of \(\tau\) to all nodes. Standard CAs--like elementary CAs (Wolfram, 2018) and Conway's Game of Life (Adamatzky, 2010)--use a grid for the graph, have integer-valued locations \(\mathbf{x}_{i}\in\mathbb{Z}^{n}\) and use a single binary value for their features \(\mathbf{h}_{i}\).
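As a concrete illustration, a minimal Python sketch of one synchronous GCA update, with a Conway-style rule as the transition function (the function names and the adjacency-matrix representation are our choices):

```python
import numpy as np

def gca_step(A, s, tau):
    """One synchronous update S' = tau(S) over adjacency matrix A."""
    return np.array([tau(s[i], s[A[i].astype(bool)]) for i in range(len(s))])

def life_rule(s_i, neighbour_states):
    """Conway-style rule on an arbitrary graph with binary states."""
    alive = int(neighbour_states.sum())
    return 1 if alive == 3 or (s_i == 1 and alive == 2) else 0
```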
**Anisotropy & Isotropy.** Of great importance for CAs are the properties _anisotropy_ and _isotropy_: The former implies being directionally dependent, as opposed to the latter, which indicates homogeneity in all directions. Specifically, anisotropic transition rules are _not_ invariant to rotations, translations and reflections of the states, thus resulting in nodes being oriented in a specific direction and prohibiting the existence of differently oriented states of interest (Mordvintsev et al., 2022; Grattarola et al., 2021). In contrast, isotropy allows transition rules to act similarly regardless of how the nodes are oriented, thus allowing proper design of self-organising (and living) systems.
**Neural Cellular Automata.** A neural cellular automaton (NCA) is a light-weight neural net with parameters \(\theta\) representing a parameterised transition rule \(\tau_{\theta}\) (Mordvintsev et al., 2020). In this setting, states are represented as typically low-dimensional vectors and the differentiability of the transition rule allows optimising its parameters \(\theta\) via backpropagation through time (Lillicrap & Santoro, 2019). Recent work has shown the successful application of deep learning techniques for NCAs, showing that neural transition rules can be efficiently learned to exhibit complex desired behaviors (Mordvintsev et al., 2020, 2022; Tesfaldet et al., 2022; Grattarola et al., 2021; Palm et al., 2022).
As already pointed out by Tesfaldet et al. (2022), NCAs are not structurally equivalent to (deep) feed-forward neural nets, where an _acyclic_ directed computation graph induces a _finite_ impulse response. Instead, NCAs can be viewed as Recurrent Neural Networks (RNNs) (Rumelhart et al., 1985), where a _cyclic_ directed computation graph induces an _infinite_ impulse response, enabling feedback and time-delayed interactions. Notably, RNNs and CAs--and therefore NCAs--are known to be Turing complete (Perez et al., 2019; Rendell, 2002).
### Graph Neural Networks
Graph Neural Networks (GNNs) (Gori et al., 2005; Scarselli et al., 2008) have become the go-to method for representation learning on graphs. The core functionality of GNNs is the message-passing scheme. Let \(\mathbf{s}_{i}\in\mathbb{R}^{s}\) represent the feature vector of node \(i\) and \(\mathbf{e}_{ij}\in\mathbb{R}^{e}\) the (possibly available) feature vector of edge \((i,j)\). A message-passing layer
updates the features of node \(i\) as follows:
\[\mathbf{s}_{i}^{\prime}=\gamma(\mathbf{s}_{i},\bigoplus_{j\in\mathcal{N}(i)}\phi( \mathbf{s}_{i},\mathbf{s}_{j},\mathbf{e}_{ji})), \tag{2}\]
where \(\phi\) is the message function, \(\bigoplus\) is a permutation-invariant operation to aggregate the set of incoming messages (usually a sum or an average), and \(\gamma\) is the node update function. The operators \(\phi\), \(\bigoplus\), and \(\gamma\) must all be differentiable, allowing message-passing layers to be stacked sequentially and then optimised end-to-end with stochastic gradient descent.
### E(_n_)-Equivariant Graph Neural Networks
Our work builds on E(_n_)-Equivariant GNNs (EGNNs) (Satorras et al., 2021). In this setting, every graph node \(i\) has coordinates \(\mathbf{x}_{i}\in\mathbb{R}^{n}\) and node features \(\mathbf{h}_{i}\in\mathbb{R}^{h}\), and an edge \((i,j)\in\mathcal{E}\) can possibly have attributes \(\mathbf{e}_{ij}\in\mathbb{R}^{e}\). EGNNs represent a class of GNNs explicitly designed to be permutation equivariant with respect to the nodes (like any GNN), and translation, rotation and reflection equivariant with respect to nodes' coordinates. The isometry group corresponding to these symmetries acting in an \(n\)-dimensional Euclidean space is called the Euclidean group E(\(n\)). We will formally discuss the _key features_ of EGNNs (cf. Equation 12) while presenting our method in section 3.
**E(_n_)-Equivariant Graph Convolutions.** Given a graph \(\mathcal{G}\), node coordinates \(\left\{\mathbf{x}_{i}\right\}\), node features \(\left\{\mathbf{h}_{i}\right\}\) and _optional_ edge attributes \(\left\{\mathbf{e}_{ij}\right\}\), an E(\(n\))-Equivariant Graph Convolution (EGC) sequentially performs:
\[\mathbf{m}_{ij} =\phi_{m}(\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2},\mathbf{h}_{i}, \mathbf{h}_{j},\mathbf{e}_{ij}) \tag{3}\] \[\mathbf{x}_{i}^{\prime} =\mathbf{x}_{i}+\frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}( i)}(\mathbf{x}_{i}-\mathbf{x}_{j})\phi_{x}(\mathbf{m}_{ij})\] (4) \[\mathbf{m}_{i} =\sum_{j\in\mathcal{N}(i)}\mathbf{m}_{ij}\] (5) \[\mathbf{h}_{i}^{\prime} =\phi_{h}(\mathbf{h}_{i},\mathbf{m}_{i}) \tag{6}\]
where \(\phi_{m}:\mathbb{R}^{2h+e+1}\rightarrow\mathbb{R}^{m},\phi_{x}:\mathbb{R}^{m} \rightarrow\mathbb{R}^{1}\) and \(\phi_{h}:\mathbb{R}^{h+m}\rightarrow\mathbb{R}^{h^{\prime}}\) are dense neural networks. Concisely, we write \(\mathbf{X}^{\prime},\mathbf{H}^{\prime}=\text{EGC}(\mathbf{X},\mathbf{H}, \mathbf{E})\). Note that when \(h^{\prime}=h\) we can use a skip connection in Equation 6 as follows:
\[\mathbf{h}_{i}^{\prime}=\phi_{h}^{+}(\mathbf{h}_{i},\mathbf{m}_{i})=\phi_{h}( \mathbf{h}_{i},\mathbf{m}_{i})+\mathbf{h}_{i}. \tag{7}\]
Skip connections have been proven to improve model resilience to vanishing gradients and over-smoothing (Zhao and Akoglu, 2019).
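A self-contained PyTorch sketch of a single EGC following Equations 3-7 (without edge attributes); the layer sizes mirror the paper's \(h=16\) and \(m=32\), while the MLP depths and activation choices are our assumptions:

```python
import torch
import torch.nn as nn

class EGC(nn.Module):
    def __init__(self, h=16, m=32):
        super().__init__()
        self.phi_m = nn.Sequential(nn.Linear(2 * h + 1, m), nn.SiLU(),
                                   nn.Linear(m, m), nn.SiLU())
        self.phi_x = nn.Sequential(nn.Linear(m, m), nn.SiLU(), nn.Linear(m, 1))
        self.phi_h = nn.Sequential(nn.Linear(h + m, m), nn.SiLU(), nn.Linear(m, h))

    def forward(self, x, h, edge_index):
        src, dst = edge_index                      # message j -> i is (src=j, dst=i)
        diff = x[dst] - x[src]                     # x_i - x_j
        sq_dist = (diff ** 2).sum(-1, keepdim=True)
        m_ij = self.phi_m(torch.cat([sq_dist, h[dst], h[src]], -1))      # Eq. 3
        ones = torch.ones_like(sq_dist)
        deg = torch.zeros(x.size(0), 1, device=x.device).index_add_(0, dst, ones)
        x_upd = torch.zeros_like(x).index_add_(0, dst, diff * self.phi_x(m_ij))
        x = x + x_upd / deg.clamp(min=1)                                 # Eq. 4
        m_i = torch.zeros(x.size(0), m_ij.size(-1), device=x.device)
        m_i = m_i.index_add_(0, dst, m_ij)                               # Eq. 5
        h = h + self.phi_h(torch.cat([h, m_i], -1))                      # Eqs. 6-7
        return x, h
```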
**E(_n_)-Equivariant Graph Convolutions with Attention.** In order to give the model the freedom to assign different weights when aggregating messages, we can use attention weights and replace Equation 5 with:
\[\mathbf{m}_{i}=\sum_{j\in\mathcal{N}(i)}\phi_{a}(\mathbf{m}_{ij})\mathbf{m}_{ij} \tag{8}\]
where \(\phi_{a}:\mathbb{R}^{m}\rightarrow[0,1]^{1}\) is a dense neural network that takes a message \(\mathbf{m}_{ij}\) as input and outputs its attention weight \(\phi_{a}(\mathbf{m}_{ij})\). Attention weights are particularly advantageous when a fully connected graph is used (Satorras et al., 2021; Vaswani et al., 2017).
**E(_n_)-Equivariant Graph Convolutions with Velocity.** When nodes represent bodies with velocities, we can extend the previous formulation to explicitly take velocities into account. Specifically, given node velocities \(\left\{\mathbf{v}_{i}\right\}\) we can replace the coordinate update in Equation 4 with the following two steps:
\[\mathbf{v}_{i}^{\prime}=\phi_{v}(\mathbf{h}_{i},\|\mathbf{v}_{i}\|)\,\mathbf{v}_{i}+\frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)}(\mathbf{x}_{i}-\mathbf{x}_{j})\,\phi_{x}(\mathbf{m}_{ij})\tag{9}\]
\[\mathbf{x}_{i}^{\prime}=\mathbf{x}_{i}+\mathbf{v}_{i}^{\prime}\tag{10}\]
where \(\phi_{v}:\mathbb{R}^{h+1}\rightarrow\mathbb{R}^{1}\) is a dense neural network. Without affecting equivariance, and different from Satorras et al. (2021), we input \(\|\mathbf{v}_{i}\|\) (and not only \(\mathbf{h}_{i}\)) to \(\phi_{v}\) since we found it to be very beneficial in practice (cf. subsection 4.3). Concisely, we write \(\mathbf{X}^{\prime},\mathbf{V}^{\prime},\mathbf{H}^{\prime}=\text{EGC}( \mathbf{X},\mathbf{V},\mathbf{H},\mathbf{E})\).
**E(_n_)-Equivariant Graph Neural Networks.** An E(\(n\))-Equivariant GNN is a stack of \(\ell\geq 1\) EGCs applied sequentially. Concisely, we write \(\mathbf{X}^{\prime},\mathbf{H}^{\prime}=\text{EGNN}_{\ell}(\mathbf{X},\mathbf{H},\mathbf{E})\) to denote the application of an EGNN with \(\ell\) layers.
## 3 E(_n_)-equivariant Graph Neural CAs
Our work builds on the connection between isotropic GCAs and EGNNs. This can be seen by comparing Equation 1 with Equations 4 and 6. Specifically, we consider a setting in which a _parametrised_ transition rule is implemented with a single E(_n_)-equivariant Graph Convolution (EGC) acting on a continuous state space \(\mathcal{S}\equiv\mathbb{R}^{n+h}\), or \(\mathcal{S}\equiv\mathbb{R}^{2n+h}\) when velocity is included. A layered EGNN is also a possible and viable approach to modeling transition rules, especially if one wants to account for higher-order neighbours when performing a single state update (Chan, 2019).
We introduce E(_n_)-Equivariant Graph Neural Cellular Automata, E(_n_)-GNCAs for short. They use a single EGC for the parameterised transition rule \(\tau_{\theta}\). Similarly to plain cellular automata, \(\tau_{\theta}\) is repeatedly applied over time:
\[\mathbf{X}^{\prime},\mathbf{H}^{\prime}=\tau_{\theta}^{t}([\mathbf{X},\mathbf{H }])=\underbrace{\tau_{\theta}\circ\cdots\circ\tau_{\theta}}_{t\text{ times}}([\mathbf{X},\mathbf{H}]), \tag{11}\]
where \(\mathbf{X}\) represents input node coordinates and \(\mathbf{H}\) represents input node features. Note that (i) to avoid clutter in Equation 11 we did not consider possibly available edge attributes \(\mathbf{E}\), (ii) a similar formulation is possible when velocities \(\mathbf{V}\) are available (cf. Equation 9), and (iii) the dependency of \(\tau_{\theta}\) on a static graph \(\mathcal{G}\) is left implicit in order to keep notation uncluttered. The overall state configuration \(\mathbf{S}\) of an E(\(n\))-GNCA is defined as \(\mathbf{S}=[\mathbf{X},\mathbf{H}]\), or \(\mathbf{S}=[\mathbf{X},\mathbf{H},\mathbf{V}]\) when velocity is available, and consequently we denote the \(t\)-times application of the model transition rule as \(\mathbf{S}^{\prime}=\tau_{\theta}^{t}(\mathbf{S})\). Importantly, the transition rules we consider are 1-step Markovian, meaning that automaton state at step \(t+1\) is fully determined by the state at step \(t\).
**E(_n_)-equivariance, E(_n_)-invariance and Isotropy.** Analogously to plain EGNNs (Satorras et al., 2021), for any positive integer \(t\in\mathbb{N}^{+}\), orthogonal matrix \(Q\in\mathbb{R}^{n\times n}\) and translation vector \(b\in\mathbb{R}^{n}\), our neural transition rule \(\tau_{\theta}\) satisfies the following:
\[\psi(\mathbf{X}^{\prime}),\mathbf{H}^{\prime}=\tau_{\theta}^{t}([\psi(\mathbf{ X}),\mathbf{H}]), \tag{12}\]
where \(\mathbf{X}^{\prime},\mathbf{H}^{\prime}=\tau_{\theta}^{t}([\mathbf{X},\mathbf{ H}])\) and \(\psi(\mathbf{X})=Q\mathbf{X}+b\) is shorthand for \((Q\mathbf{x}_{1}+b,\dots,Q\mathbf{x}_{|\mathcal{V}|}+b)\). The map \(\psi\) is a _rigid transformation_ (aka _isometry_), and represents a rotation-reflection-translation of the coordinates. As such, \(\psi\) preserves the Euclidean distance between every pair of nodes. As illustrated with the commutative diagram in Figure 1, applying \(\psi\) to input coordinates \(\mathbf{X}\) and then running transition rule \(\tau_{\theta}^{t}\) will give the same results as first running \(\tau_{\theta}^{t}\) and then applying \(\psi\) to \(\mathbf{X}^{\prime}\). Thus, output coordinates \(\mathbf{X}^{\prime}\) and output node features \(\mathbf{H}^{\prime}\) are respectively E(\(n\))-equivariant and E(\(n\))-invariant to rigid transformations of input coordinates \(\mathbf{X}\). Intuitively, these properties are a consequence of the model only processing relative distances and never being aware of absolute node locations (cf. Equation 3 and Equation 4).1 Importantly, the E(\(n\))-invariance of the node features and the E(\(n\))-equivariance of the node coordinates make E(\(n\))-GNCAs isotropic _by design_.
Footnote 1: We refer the reader to the appendix of Satorras et al. (2021) for a formal proof of the equivariance/invariance of EGNNs.
**Hidden States & Perception.** Similarly to Mordvintsev et al. (2020, 2022), Palm et al. (2022), and Chan (2019), but different from Grattarola et al. (2021), our model has the necessary inductive bias for modelling hidden states, as it offers location-independent node features \(\mathbf{H}\). As Mordvintsev et al. (2020), we interpret hidden states as a signal mechanism for orchestrating morphogenesis: All nodes share the same genome, i.e. the transition rule, and only differ in the information encoded by the signaling they receive, emit, and store internally, i.e. their node features. In case node features \(\mathbf{H}\) are _not_ available in advance, we can either set them to \(\mathbf{1}\) or randomly initialize them, and give the model the freedom to learn and use them while evolving. Further, messages \(\left\{\mathbf{m}_{ij}\right\}\) (cf. Equation 3) are similar in spirit to the perception vectors of Mordvintsev et al. (2020, 2022), as they encode what nodes perceive of the environment from communicating with their neighbors.
Given (i) the interest in what would happen as \(t\rightarrow\infty\) and (ii) the recurrent architecture of our model, we normalise node features \(\mathbf{H}\) after each transition rule application so as to mitigate problems like over-smoothing, exploding/vanishing gradients, and training instabilities. Specifically, after every transition rule application, we normalise node features \(\mathbf{H}\) with either PairNorm (Zhao & Akoglu, 2019) or NodeNorm (Zhou et al., 2021), helpful _parameter-free_ normalisation techniques for deep GNNs. Further, we use the hyperbolic tangent \(\mathsf{TanH}()\) as non-linear activation function--a common design choice in RNNs (Lipton et al., 2015)--and the skip connection defined in Equation 7, which has proven to be very beneficial for deep GNNs (Zhao & Akoglu, 2019).
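Putting the pieces together, a sketch of the iterated rule of Equation 11 with a simplified PairNorm-style normalisation; the exact placement of the normalisation and activation is our assumption, and `EGC` refers to the sketch above:

```python
import torch

def pair_norm(h, eps=1e-6):
    # Parameter-free, PairNorm-style: centre, then rescale to unit mean row norm.
    h = h - h.mean(dim=0, keepdim=True)
    return h / (h.pow(2).sum(-1).mean().sqrt() + eps)

def rollout(egc, x, h, edge_index, steps):
    """Repeated application of the transition rule (Equation 11)."""
    for _ in range(steps):
        x, h = egc(x, h, edge_index)
        h = torch.tanh(pair_norm(h))
    return x, h
```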
**Global propagation from local interactions.** Message-passing GNNs require \(\ell\) layers to allow communication between nodes that are \(\ell\) hops away. Several tasks in graph ML tend to be very challenging when the diameter of the underlying graph \(\mathcal{G}\) is larger than the number of layers used, and that is because the receptive field of the network may not comprise the whole graph (Zhao & Akoglu, 2019; Alon & Yahav, 2021). Further, to avoid severe over-smoothing (Li et al., 2018), most popular GCN-style networks (Kipf & Welling, 2017) tend to be shallow, with narrow receptive fields, leading to _under-reaching_ (Wenkel et al., 2022). To avoid this limitation, it is common to exchange messages among all nodes and provide the edge information \((i,j)\in\mathcal{E}\) as a boolean flag within the edge attributes (Liu et al., 2019; Satorras et al., 2021b). This is computationally quadratic in the number of nodes, and therefore very expensive and challenging when processing large graphs.
In our setting, due to the 1-step Markovian property of \(\tau_{\theta}\), the effective receptive field of E(\(n\))-GNCA localized message-passing grows larger with each state update until eventually encompassing the whole graph. In this way global propagation of information arises from spatially localized interactions of nodes. In other words, iterative local message-passing circumvents the quadratic complexity and related challenges of exchanging messages among all nodes at each step. This self-organizing process does not require any external control or centralized leader: nodes communicate with their neighbors to make collective decisions about the final configuration of the nodes. This globally consistent and complex behaviour, which arises from strictly local interactions, is a particular feature of (N)CAs as we show in our experiments. Finally, we emphasise that--despite the localized computation--we are allowed to express global information within the loss function employed.
## 4 Experiments
We showcase the broad and successful applicability of E(\(n\))-GNCAs in three different tasks: (i) pattern formation, (ii) graph auto-encoding, and (iii) simulation of E(\(n\))-equivariant dynamical systems. We set \(h=16\) (hidden state dimension) and \(m=32\) (message dimension) throughout _all_ experiments, which leads to an overall automaton size of only 5K parameters _irrespective_ of the coordinate dimension \(n\) being used. Our code is available at github.com/gengala/egnca. We are grateful to the developers of the main software packages used for this work: PyTorch (Paszke et al., 2019), PyTorch Geometric (Fey and Lenssen, 2019) and PyTorch Lightning (Falcon and The PyTorch Lightning team, 2019).
### Pattern Formation
Inspired by prior work on CA morphogenesis (Mordvintsev et al., 2020, 2022; Grattarola et al., 2021), we show how E(\(n\))-GNCAs can be trained to converge to a given fixed target state. In our case, the target is a sparse geometric graph \(\mathcal{G}\) that visually defines a recognisable 2D or 3D shape. Specifically, the goal is to learn a transition rule \(\tau_{\theta}\) that _morphs_ randomly initialised coordinates \(\overline{\mathbf{X}}\) to a given target point cloud \(\hat{\mathbf{X}}\) by convolving over \(\mathcal{G}\) and assuming a prior 1-to-1 correspondence between nodes in \(\overline{\mathbf{X}}\) and \(\hat{\mathbf{X}}\).
**E(_n_)-invariant objective.** Contrary to Grattarola et al. (2021), we are _not_ interested in a specific orientation of \(\hat{\mathbf{X}}\) and therefore we do _not_ optimise the model by minimising the MSE between coordinates reached by the model and target coordinates, i.e. \(\|\mathbf{X}^{\prime}-\hat{\mathbf{X}}\|^{2}\) where \([\mathbf{X}^{\prime},\mathbf{H}^{\prime}]=\tau_{\theta}^{t}([\overline{\mathbf{X}},\mathbf{1}])\). The former, moreover, would _not_ be a suitable objective for our automata since it accounts for specific locations whereas our model only uses relative distances during its computation (cf. Equations 3 and 4). Therefore, for every pair of nodes \((i,j)\in\mathcal{V}\times\mathcal{V}\), we minimise the MSE between their distance in the model's final configuration and the target one. Formally, we define an E(\(n\))-invariant objective as follows:
\[\mathcal{L}_{\text{INV}}=\frac{1}{|\mathcal{V}|^{2}}\sum_{(i,j)\in\mathcal{V} \times\mathcal{V}}(\|\mathbf{x}_{i}^{\prime}-\mathbf{x}_{j}^{\prime}\|-\| \hat{\mathbf{x}}_{i}-\hat{\mathbf{x}}_{j}\|)^{2}. \tag{13}\]
This objective (cf. Equation 13) provides a weaker supervision signal than the one by Grattarola et al. (2021), therefore leading to a much more challenging task. That is because for Grattarola et al. (2021) every node has only a single constraint to satisfy, i.e. being close to a specific global location, whereas in our case every node has \(|\mathcal{V}|-1\) constraints to satisfy, i.e. its distances w.r.t. all the other nodes in \(\mathcal{G}\), _not_ only its neighbors. Interestingly, optimizing for Equation 13 results in learning a transition rule \(\tau_{\theta}\) such that \([\mathbf{X}^{\prime},\mathbf{H}^{\prime}]=\tau_{\theta}^{t}([\overline{ \mathbf{X}},\mathbf{1}])\) and \(\mathbf{X}^{\prime}=\psi(\hat{\mathbf{X}})\) for any arbitrary rigid transformation \(\psi(\cdot)\). In other words, our objective gives the model the freedom to converge in any possible orientation of the target. In practice, one could avoid evaluating \(\mathcal{O}(|\mathcal{V}|^{2})\) distances by only considering a randomly sampled subset of edges when computing Equation 13. Our objective is similar in spirit to the one by Mordvintsev et al. (2022), where a rotation-reflection invariant objective is used in the image domain. Similarly, the loss function we employ is an E(\(n\))-invariant point cloud description and fits well with the isotropy of our model.
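A PyTorch sketch of this loss via pairwise distance matrices (computing all \(\mathcal{O}(|\mathcal{V}|^{2})\) pairs, i.e. without the edge-subsampling shortcut mentioned above):

```python
import torch

def loss_inv(x_out, x_target):
    """E(n)-invariant objective of Equation 13."""
    d_out = torch.cdist(x_out, x_out)        # |V| x |V| pairwise distances
    d_tgt = torch.cdist(x_target, x_target)
    return ((d_out - d_tgt) ** 2).mean()
```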
**Training.** We mostly follow the experimental setup in (Mordvintsev et al., 2020; Grattarola et al., 2021). First, we create a large pool (aka _cache_) of \(K\) states \(\{\mathbf{S}^{(k)}\}_{k=1}^{K}=\{[\mathbf{X}^{(k)},\mathbf{H}^{(k)}]\}_{k=1}^{K}\), each initialised as \([\overline{\mathbf{X}},\mathbf{1}]\), where \(\overline{\mathbf{X}}\sim\mathcal{N}(\mathbf{0},\sigma\mathbf{1})\). Then, we randomly sample a mini-batch from the pool and use it as input to transition rule \(\tau_{\theta}\), which runs for a number of time steps \(t\) sampled uniformly from the interval \([15,25]\)2. Once a mini-batch is processed, we apply backpropagation through time (BPTT) (Lillicrap and Santoro, 2019) to update parameters \(\theta\) according to Equation 13. To promote persistency, we use the pool as a _replay memory_, so that, once an optimisation step is performed, we replace the pool state \(\mathbf{S}^{(k)}\) with \(\tau_{\theta}^{t}(\mathbf{S}^{(k)})\) for every \(\mathbf{S}^{(k)}\) in the current mini-batch. This allows next training iterations to account for states that already result from a repeated application of the transition rule, thus encouraging the model to persist in the target state after reaching it. Further, before processing a mini-batch, the state with the highest loss value is replaced with the initial state \([\overline{\mathbf{X}},\mathbf{1}]\) so as to both stabilise training and, more importantly, avoid catastrophic forgetting. Finally, to also promote regeneration, we perturb half of the point clouds in the batch by adding Gaussian noise. Specifically, one quarter is perturbed globally and another only locally.
Footnote 2: The interval considered represents a trade-off between computational complexity, stability during training, and a sufficient number of time steps to allow the model to learn dynamical patterns for the desired behavior.
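A compact sketch of this pool-based training loop, reusing `rollout` and `loss_inv` from the sketches above (worst-state reseeding and the damage-based perturbations are omitted for brevity):

```python
import random
import torch

def train_step(egc, pool, x_target, edge_index, optimizer, batch_size=8):
    idx = random.sample(range(len(pool)), batch_size)
    optimizer.zero_grad()
    loss = 0.0
    for k in idx:
        x, h = pool[k]
        t = random.randint(15, 25)                   # random rollout length
        x, h = rollout(egc, x, h, edge_index, t)     # BPTT through t steps
        loss = loss + loss_inv(x, x_target)
        pool[k] = (x.detach(), h.detach())           # replay memory update
    (loss / batch_size).backward()
    optimizer.step()
```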
**Results.** We consider the following geometric graphs available in PyGSP (Defferrard et al., 2017): a regular 2D grid (256 nodes), a 3D torus (256 nodes) and the Stanford bunny (2503 nodes). Figure 2 shows (part of) E(\(n\))-GNCA trajectories as well as the loss value (cf. Equation 13) w.r.t. the coordinates at each time step shown. Remarkably, our model learns to converge to a stable attractor of the given geometric graph after any number of time steps \(t>15\). Furthermore, the model exhibits regeneration abilities by being robust against perturbations of the coordinates. More experimental details and results can be found in Appendix A.
### Graph autoencoding with Cellular Automata
In this section, we show how E(\(n\))-GNCAs can be deployed as performant Graph AutoEncoders (GAEs) (Kipf
& Welling, 2016), despite their single-layered architecture and recurrent computation. In graph autoencoding one has available a set of (possibly featureless) graphs \(\{\mathcal{G}_{n}\}\) and one wants to learn node representations that can be used to reconstruct the underlying ground-truth adjacency matrices (Satorras et al., 2021b; Liu et al., 2019). For this task, we report more details and additional results in Appendix B.
**Datasets.** We consider five datasets of featureless graphs of varying size, connectivity and properties: comm-s (100 graphs, 2 communities, 12-20 nodes) (Liu et al., 2019), planar-s (200 planar graphs, 12-20 nodes), planar-l (200 planar graphs, 32-64 nodes), sbm (200 stochastic block model graphs, 2-5 communities, 44-187 nodes) (Martinkus et al., 2022) and proteins (918 graphs, 100-500 nodes) (Dobson & Doig, 2003). Figure B.5 shows some examples of such graphs. We split all datasets into training (80%), validation (10%) and test (10%).
**Training.** For each training graph \(\mathcal{G}_{n}\) we create a small pool of \(K\) states \(\{[\mathbf{X}^{(n,k)},\mathbf{H}^{(n,k)}]\}\). Every \(\mathbf{H}^{(n,k)}\) is again initialised as \(\mathbf{1}\) whereas input node coordinates \(\mathbf{X}^{(n,k)}\) now follow an isotropic Gaussian \(\mathcal{N}(\mathbf{0},\sigma\mathbf{1})\).3 As such, the model can be viewed as a generative model _conditioned_ on \(\mathcal{G}\). A mini-batch is now created by first considering a random subset of training graphs and then sampling a random pool state for each. Every mini-batch state \(\mathbf{S}^{(n,k)}\) is then run by \(\tau_{\theta}\) for \(t\in[t_{1},t_{2}]\) random time steps, eventually reaching state \([\mathbf{X}^{\prime},\mathbf{H}^{\prime}]=\tau_{\theta}^{t}(\mathbf{S}^{(n,k)})\). Finally, we apply an E(\(n\))-invariant decoding scheme based on distances between nodes \(\mathbf{X}^{\prime}\), so that the reconstructed soft adjacency matrix \(\hat{A}\in[0,1]^{|\mathcal{V}|\times|\mathcal{V}|}\) is defined as:
Footnote 3: Injecting Gaussian noise as initial node features has originally been proposed by Liu et al. (2019), and then also used as a way of overcoming the symmetry problem and over-smoothing (Satorras et al., 2021b; Sato et al., 2021; Godwin et al., 2022).
\[\hat{A}_{ij}=\frac{1}{1+\exp(\delta_{2}(\|\mathbf{x}_{i}^{\prime}-\mathbf{x}_ {j}^{\prime}\|_{2}^{2}{-}\delta_{1}))}\in[0,1], \tag{14}\]
where \(\delta_{1}\) and \(\delta_{2}\) are learnable positive scalar parameters. The model is trained by minimising the binary cross-entropy (BCE) between the ground-truth adjacency \(A\) and the predicted soft one \(\hat{A}\), namely:
\[\mathcal{L}_{\texttt{BCE}}=-\sum_{ij}A_{ij}\ln(\hat{A}_{ij})+(1-A_{ij})\ln(1- \hat{A}_{ij}). \tag{15}\]
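A sketch of this decoder and loss; \(\delta_{1},\delta_{2}\) would be learnable parameters, with positivity enforceable e.g. via a softplus (our assumption):

```python
import torch
import torch.nn.functional as F

def soft_adjacency(x, delta1, delta2):
    """E(n)-invariant edge decoder of Equation 14."""
    sq_dist = torch.cdist(x, x).pow(2)
    return torch.sigmoid(-delta2 * (sq_dist - delta1))

def bce_loss(x, A, delta1, delta2):
    """Training signal of Equation 15 against the ground-truth adjacency A."""
    return F.binary_cross_entropy(soft_adjacency(x, delta1, delta2), A)
```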
Furthermore, we require our autoencoders to be persistent, which means autoencoding has to be possible from \(\mathbf{X}^{\prime}\) for any \(t>t_{1}\). To promote persistency, we use a _multi-target_ replay strategy. Specifically, after every optimisation step, we replace the reached state \([\mathbf{X}^{\prime},\mathbf{H}^{\prime}]\) with the pool state that originated it, and randomly re-initialise pool states after a given number of maximum replacements so as to avoid catastrophic forgetting.
Figure 2: E(\(n\))-GNCA convergence to a 2D grid (top), a 3D torus (middle) and the Stanford geometric bunny (bottom). The first 4 columns show E(\(n\))-GNCA states at different time steps. The second to last column show either a local or global damage of coordinates at \(t=24\). Finally, the last column aims to both show regeneration and persistency abilities by running the transition rule for 1000 extra time steps after perturbation has occurred. We report the loss value of Equation 13 for the state in each figure. The nearest-neighbor edges of the Stanford bunny are not shown so as to avoid clutter. We report complete trajectories in Appendix A. Best viewed digitally and zoomed in.
**A 3D demo.** In a first demo experiment, we use comm-s and planar-s and set \(n=3\) so as to visualise automaton trajectories in 3D. The experiment aims to show persistent autoencoding, _conditional_ generation of 3D point clouds and graph drawing abilities (Eades, 1984; Tamassia, 2013). We randomly sample \(t\) in \([15,25]\) at each optimisation step. We reach an average and _persistent_ F1 score of \(0.98\) and \(0.96\) for comm-s and planar-s respectively over 10 different runs. Figure 3 shows the learned dynamics of our autoencoder.
**Autoencoding Results.** E(\(n\))-GNCA autoencoders can scale to higher Euclidean spaces and significantly larger graphs, without increasing the size of the models or losing persistency. This time, we set the coordinate dimension to 8 for all datasets except sbm, where it is set to 24, and randomly sample \(t\) in \([25,35]\). Satorras et al. (2021b) already showed the autoencoding superiority of EGNNs compared to classical GNN variants, and therefore we only compare against layered EGNNs as a suitable baseline. For a fair comparison, we do _not_ allow underlying fully connected graphs for EGNNs, as opposed to Satorras et al. (2021b). Table 1 reports autoencoding results for E(\(n\))-GNCAs and 4-layered EGNNs having 4-times more parameters. Remarkably, E(\(n\))-GNCAs outperform layered EGNNs. Examples of graph reconstructions are available in Figure B.5. Furthermore, E(\(n\))-GNCA autoencoders exhibit persistent dynamics (e.g. Figure B.6) for all datasets except sbm, which, given its variable clustered topology, represents the most challenging dataset.
**E(\(n\))-GNCAs are multi-target.** E(\(n\))-GNCA autoencoders are multi-target as they can reach many target states, contrary to what is shown in our previous task and in previous work (Grattarola et al., 2021; Mordvintsev et al., 2020; 2022). We suppose this to be the consequence of a more relaxed training objective (cf. Equation 15) than the previous one (cf. Equation 13). In graph autoencoding, in fact, target states are _not_ explicitly given; rather, a _condition_ that they must satisfy is (cf. Equations 14 and 15). Therefore, since we are only interested in reconstructing the ground-truth \(A\) via Equation 14, E(\(n\))-GNCAs can converge to any possible configuration from which decoding is possible.
### Simulation of E(\(n\))-equivariant dynamical system
We here show the applicability of E(\(n\))-GNCAs as simulators of E(\(n\))-equivariant dynamical systems. The goal is to learn the transition rule underlying observed trajectories.
Figure 4: Boids simulation. First (Second) row shows a ground-truth (predicted) trajectory at different time steps. E(\(n\))-GNCA learns a flocking behaviour similar to the target system, although with smoother and less precise trajectories.
Figure 3: E(\(n\))-GNCA coordinates at different time steps for a test-set graph in comm-s. In each figure, we plot the ground-truth edges and report the binary cross-entropy (cf. Equation 15). Best viewed digitally and zoomed in.
Specifically, we train E(\(n\))-GNCAs to simulate the Boids Algorithm (Reynolds, 1987), a 1-step Markovian and distributed multi-agent system designed to simulate flocks of birds using a set of hand-crafted rules. The underlying graph \(\mathcal{G}\) is obtained as a fixed-radius nearest neighbourhood of the nodes at each time step, i.e. \(\mathcal{G}\) changes dynamically through time. We emphasise that such a dynamical system (i) can be formulated as a GCA (cf. Equation 1) and (ii) is E(\(n\))-equivariant. We report more details and show the results of the same experiment for the N-Body problem in Appendix C.
**Dataset.** We extend the 2D simulation of Grattarola et al. (2021) to a 3D space. We create a dataset of 500 trajectories using the ground-truth simulator. Each trajectory has a duration of 500 time steps and is obtained by evolving 100 boids initialised with random positions and velocities.
**Training.** We use attention weights (cf. Equation 8) and Equations 9 and 10 to explicitly account for velocities. We create a mini-batch of randomly sampled sub-trajectories of length \(L=20\). Then, for each mini-batch sub-trajectory \([\mathbf{X}^{(\ell)},\mathbf{V}^{(\ell)}]_{\ell=1}^{L}\) we input \(\tau_{\theta}\) with state \(\mathbf{S}^{(1)}=[\mathbf{X}^{(1)},\mathbf{V}^{(1)},\mathbf{H}^{(1)}]\) and run it for \(L-1\) steps, obtaining predicted states \([\mathbf{X}^{\prime(\ell)},\mathbf{V}^{\prime(\ell)},\mathbf{H}^{(\ell)}]_{\ell=2}^{L}\). Finally, we optimise the MSE between the estimated velocities and the ground-truth ones as follows: \(\mathcal{L}_{\text{MSE}}=\sum_{\ell=2}^{L}\lVert\mathbf{V}^{(\ell)}-\mathbf{V}^{\prime(\ell)}\rVert^{2}\). Similarly to Satorras et al. (2021), node features \(\mathbf{H}^{(1)}\) are initialised as the output of a linear layer taking \(\lVert\mathbf{V}^{(1)}\rVert\) as input.
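This rollout loss can be sketched as follows; `tau` stands for the (hypothetical) transition rule \(\tau_{\theta}\) returning the next state, and all inputs are assumed to be torch tensors.

```python
def rollout_loss(tau, x0, v0, h0, gt_velocities):
    """Sum of squared velocity errors over an L-step sub-trajectory (L_MSE)."""
    x, v, h = x0, v0, h0
    loss = 0.0
    for v_gt in gt_velocities:       # ground-truth V^(2) ... V^(L), i.e. L-1 steps
        x, v, h = tau(x, v, h)       # one transition-rule step
        loss = loss + ((v_gt - v) ** 2).sum()
    return loss
```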
**Results.** As already pointed out by Grattarola et al. (2021), one key aspect of simulating continuous (and chaotic) dynamical systems with GNCAs is that small errors in prediction quickly accumulate, making it almost impossible for the model to perfectly simulate the true dynamics. Therefore, despite reaching a small validation error, the model _cannot_ perfectly approximate the true trajectories. However, following Grattarola et al. (2021), we can quantitatively evaluate the quality of the learned transition rule by using the sample entropy (SE) (Richman and Moorman, 2000) and correlation dimension (CD) (Grassberger and Procaccia, 1983), two measures of complexity for real-valued time series. On average, ground-truth trajectories (of length 500) report an average SE and CD of \(0.04\pm 0.01\) and \(1.02\pm 0.22\) respectively, whereas E(\(n\))-GNCA trajectories report \(0.04\pm 0.02\) and \(1.08\pm 0.15\) for the same measures. The closeness of the measures indicates that E(\(n\))-GNCA trajectories generate an amount of information comparable to the ground-truth ones, therefore capturing the essence of the underlying rule. Figure 4 shows examples of ground-truth and predicted trajectories.
## 5 Discussion
We introduced E(\(n\))-GNCAs, isotropic automata that show promise across a wide range of applications. E(\(n\))-GNCA local interactions have been proven powerful enough to reach globally consistent target conditions (cf. subsections 4.1 and 4.2) and capture complex dynamics (cf. subsection 4.3). To the best of our knowledge, this is the first work proposing isotropic-by-design neural cellular automata.
**Limitations.** The recurrent training of E(\(n\))-GNCAs is challenging: we faced problems such as exploding/vanishing gradients, which we mitigated using weight decay and gradient clipping. Further, the 1-to-1 correspondence between initial nodes and target nodes is a strong and non-natural design choice, as it leads to abrupt dynamics, especially in the first time steps. In future work, we aim to drop this correspondence by adopting Optimal Transport (Peyre and Cuturi, 2019; Alvarez-Melis et al., 2019).
**Broader Impact.** The possible implications of our work are evident when considering that distributed and self-organizing systems are ubiquitous both in nature (e.g. collective motion, swarming) and technology (e.g. a cyber-physical system operating locally). Furthermore, isometries are very common in dynamical systems (e.g. swarming (Reynolds, 1987), particle simulations (Kipf et al., 2018)) and in many practical applications (e.g. point cloud processing, 3D molecular structures (Ramakrishnan et al., 2014)). Our framework and its inductive biases are particularly useful in all these scenarios, since they allow us to learn and discover--rather than hand-design--the transition rules underlying these systems, while accounting for symmetries.
Notably, one of the most remarkable demonstrations of self-organisation can be found in swarm robotics and active matter modeling (Brambilla et al., 2013; Vicsek et al., 1995). Nowadays, we can program tiny robots to locally interact and form a given (E(\(n\))-invariant) pattern, as demonstrated by work such as Mergeable Nervous Systems (Mathews et al., 2017) and Kilobots (Rubenstein et al., 2012). To the best of our knowledge, such programs are currently designed by humans. We hope our work could encourage a line of research in which GNNs further unlock the power of GCAs to implement a desired behavior through differentiable, distributed and emergent computation.
\begin{table}
\begin{tabular}{c|c c|c c} & \multicolumn{2}{c|}{E(\(n\))-GNCA} & \multicolumn{2}{c}{EGNN\({}_{4}\)} \\ & F1\(\uparrow\) & BCE\(\downarrow\) & F1\(\uparrow\) & BCE\(\downarrow\) \\ \hline comm-s & 1.00\(\pm\)0.00 & 0.05\(\pm\)0.01 & 0.91\(\pm\)0.03 & 0.44\(\pm\)0.11 \\ planar-s & 0.99\(\pm\)0.01 & 0.19\(\pm\)0.07 & 0.87\(\pm\)0.01 & 0.44\(\pm\)0.05 \\ planar-l & 0.98\(\pm\)0.01 & 0.07\(\pm\)0.03 & 0.77\(\pm\)0.35 & 0.34\(\pm\)0.03 \\ proteins & 0.95\(\pm\)0.04 & 0.03\(\pm\)0.02 & 0.84\(\pm\)0.01 & 0.08\(\pm\)0.02 \\ sbm & 0.92\(\pm\)0.02 & 0.20\(\pm\)0.02 & 0.76\(\pm\)0.01 & 0.34\(\pm\)0.02 \\ \end{tabular}
\end{table}
Table 1: Autoencoding results averaged over 10 different runs for 4-layered EGNNs and E(\(n\))-GNCAs evaluated at time step \(t=100\). |
2302.09496 | An extension to "A subsemigroup of the rook monoid" | A recent paper studied an inverse submonoid $M_n$ of the rook monoid, by
representing the nonzero elements of $M_n$ via certain triplets belonging to
$\mathbb{Z}^3$. In this short note, we allow the triplets to belong to
$\mathbb{R}^3$. We thus study a new inverse monoid $\overline{M}_n$, which is a
supermonoid of $M_n$. We point out similarities and find essential differences.
We show that $\overline{M}_n$ is a noncommutative, periodic, combinatorial,
fundamental, completely semisimple, and strongly $E^*$-unitary inverse monoid. | George Fikioris, Giannis Fikioris | 2023-02-19T07:12:12Z | http://arxiv.org/abs/2302.09496v2 | # An extension to "A subsemigroup of the rook monoid"
###### Abstract
In a recent paper, we defined an inverse submonoid \(M_{n}\) of the rook monoid and investigated its properties. That investigation was enabled by representing the nonzero elements of \(M_{n}\) (which are \(n\times n\) matrices) via certain triplets belonging to \(\mathbb{Z}^{3}\). In this short note, we allow the aforementioned triplets to belong to \(\mathbb{R}^{3}\). We thus study a new inverse monoid \(\overline{M}_{n}\), which is a supermonoid of \(M_{n}\). We prove that the elements of \(\overline{M}_{n}\) are either idempotent or nilpotent, compute nilpotent indexes, and discuss issues pertaining to \(j\)th roots. We also describe the ideals of \(\overline{M}_{n}\), determine Green's relations, show that \(\overline{M}_{n}\) is a supersemigroup of the Brandt semigroup, and prove that \(\overline{M}_{n}\) has infinite Sierpinski rank. While there are similarities between \(M_{n}\) and \(\overline{M}_{n}\), there are also essential differences. For example, \(M_{n}\) can be generated by only three elements, all ideals of \(M_{n}\) are principal ideals, and there exist \(x\in M_{n}\) that do not possess a square root in \(M_{n}\); but none of these statements is true in \(\overline{M}_{n}\).
**MSC codes.** 20M18, 20M12
**Acknowledgments.** The work of Giannis Fikioris was supported in part by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program and the Onassis Foundation - Scholarship ID: F ZS 068-1/2022-2023.
## 1 Introduction
The symmetric inverse semigroup \(\mathcal{IS}_{n}\), also known as the rook monoid, consists of the partial injective transformations of \(\{1,2,\ldots,n\}\)[1; 2]. Any element of \(\mathcal{IS}_{n}\) can be represented as an \(n\times n\) matrix whose entries are \(0\) or \(1\), with at most one \(1\) in every row and every column.
In a previous paper [3], we introduced a submonoid \(M_{n}\) of \(\mathcal{IS}_{n}\) and studied its properties. The monoid \(M_{n}\) consists of the zero matrix together with those matrices of \(\mathcal{IS}_{n}\) whose \(1\)s lie on a single diagonal and form an uninterrupted block (i.e., no \(0\) lies between any two \(1\)s). Let \(d\) be the said diagonal (\(d=-n+1,\ldots,n-1\), with \(d=0\) being the main diagonal), let \(k\) be the row of the northwestern \(1\), and let \(m\) be the row of the southeastern \(1\). The study of [3] was facilitated by representing the elements of \(M_{n}\) as triplets \(\langle d,k,m\rangle\in\mathbb{Z}^{3}\) (\(d\), \(k\), and \(m\) are appropriately restricted), and developing a closed-form expression representing the product of two elements.
This short note is an extension that allows \(\langle d,k,m\rangle\in\mathbb{R}^{3}\); the restrictions on the parameters \(d\), \(k\), \(m\), as well as the product formula, remain unaltered. We thus study a new monoid \(\overline{M}_{n}\), of which the \(M_{n}\) of [3] is a submonoid.
To facilitate comparisons with "the integer case," we maintain much of the notation of [3]; for example, we retain the symbol \(x^{T}\) for the semigroup inverse* of \(x\in\overline{M}_{n}\). As in [3], \(\mathbf{0}\) and \(\mathbf{1}\) denote monoid zero and identity, and ideal means two-sided ideal. We use the traditional notations for Green's relations, associated equivalence classes, and principal ideals. A \(j\)th root of \(x\in\overline{M}_{n}\) is a \(y\in\overline{M}_{n}\) such that \(x=y^{j}\) (\(j\in\mathbb{N}\)).
Footnote *: The underlying reason for the symbol is that inverting \(x\in M_{n}\) amounts to transposing the matrix represented by \(x\).
## 2 Definitions; the height function
Let \(n\in\mathbb{Z}\) with \(n\geq 2\). Our definition of \(\overline{M}_{n}\) is
\[\begin{split}\overline{M}_{n}=\{\mathbf{0}\}\cup\{& \langle d,k,m\rangle:\ d,k,m\in\mathbb{R};\\ & 1-\min(0,d)\leq k\leq m\leq n-\max(0,d)\}.\end{split} \tag{1}\]
Note that the restrictions in (1) further imply
\[-(n-1)\leq d\leq n-1\quad\text{and}\quad 1\leq k\leq m\leq n. \tag{2}\]
As in [3], the formula for the product of two nonzero elements is
\[\langle d,k,m\rangle\langle d^{\prime},k^{\prime},m^{\prime}\rangle=\left\{ \begin{array}{ll}\langle d^{\prime\prime},k^{\prime\prime},m^{\prime\prime} \rangle,&k^{\prime\prime}\leq m^{\prime\prime},\\ \mathbf{0},&k^{\prime\prime}>m^{\prime\prime},\end{array}\right. \tag{3}\]
in which the parameters \(d^{\prime\prime}\), \(k^{\prime\prime}\), and \(m^{\prime\prime}\) are
\[d^{\prime\prime}=d+d^{\prime},\quad k^{\prime\prime}=\max(k,k^{\prime}-d), \quad m^{\prime\prime}=\min(m,m^{\prime}-d). \tag{4}\]
We can use the definitions (1), (3), and (4) to show that \(\overline{M}_{n}\) is a noncommutative monoid, whose identity is \(\mathbf{1}=\langle 0,1,n\rangle\).
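For concreteness, the product rule (3)-(4) is easy to check numerically; the following Python sketch (our own, with hypothetical names) also verifies the identity and noncommutativity claims.

```python
ZERO = None  # stands for the monoid zero element

def product(a, b):
    """Product in the extended monoid, following Eqs. (3)-(4); elements are (d, k, m)."""
    if a is ZERO or b is ZERO:
        return ZERO
    (d, k, m), (dp, kp, mp) = a, b
    dpp, kpp, mpp = d + dp, max(k, kp - d), min(m, mp - d)
    return (dpp, kpp, mpp) if kpp <= mpp else ZERO

n = 4
one = (0.0, 1.0, float(n))                 # identity <0, 1, n>
x, y = (0.5, 1.0, 2.5), (-0.25, 1.5, 3.0)  # two valid triplets for n = 4
assert product(one, x) == x == product(x, one)
assert product(x, y) != product(y, x)      # (0.25, 1, 2.5) vs (0.25, 1.5, 2.75)
```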
We obtain the submonoid \(M_{n}\) if, in (1), we replace the condition \(d,k,m\in\mathbb{R}\) by the more restrictive one \(d,k,m\in\mathbb{Z}\). In \(M_{n}\), the triplet of integers represents an \(n\times n\) matrix. Analogously, we can interpret the triplets of \(\overline{M}_{n}\) as line segments that are contained within an \((n-1)\times(n-1)\) square and are parallel to the diagonal shown in Fig. 1. Segment endpoints are permitted to lie on the square boundary, while \(\mathbf{0}\) corresponds to the square being empty.
Since (1) allows \(m=k\), our line segments can reduce to points within the aforementioned closed square. We use \(\overline{P}_{n}\) to denote the set of points, viz.,
\[\overline{P}_{n}=\{\langle d,k,m\rangle\in\overline{M}_{n}\setminus\{\mathbf{0 }\}:\ \ m=k\}. \tag{5}\]
Let \(h(x)\) denote the height of the segment \(x\in\overline{M}_{n}\) (see Fig. 1), so that
\[h(x)=\left\{\begin{array}{ll}-1,\quad x=\mathbf{0},\\ m-k,\quad x=\langle d,k,m\rangle\in\overline{M}_{n}\setminus\{\mathbf{0}\}. \end{array}\right. \tag{6}\]
Figure 1: Any triplet \(\langle d,k,m\rangle\) of \(\overline{M}_{n}\setminus\{\mathbf{0}\}\) corresponds to a line segment similar to the depicted \(x\), whose height is \(h(x)\). The element \(\mathbf{0}\) corresponds to the empty square, with \(h(\mathbf{0})=-1\).
The height arises in a natural manner throughout this paper so we now give some of its properties. By (1) and (6),
\[0\leq h(x)\leq n-|d|-1\leq n-1,\quad x\neq\mathbf{0}, \tag{7}\]
while \(h\) assumes the particular values \(-1\), \(0\), and \(n-1\) according to
\[h(x)=-1\Leftrightarrow x=\mathbf{0},\quad h(x)=0\Leftrightarrow x\in\overline {P}_{n},\quad h(x)=n-1\Leftrightarrow x=\mathbf{1}. \tag{8}\]
It follows from (3), (4), and (6) that \(h(xy)\leq\min\left(h(x),h(y)\right)\). By induction, we then get
\[h(x_{1}x_{2}\ldots x_{j})\leq h(x_{i}),\text{ for all }i\in\{1,2,\ldots,j\}, \quad x_{i}\in\overline{M}_{n}. \tag{9}\]
Remark 1: Ref. [3] uses the symbol \(\mathrm{rnk}(x)\) for the rank of the partial transformation represented by \(x\in M_{n}\). Thus in the integer case we have
\[\mathrm{rnk}(x)=h(x)+1,\quad x\in M_{n}, \tag{10}\]
which shows why we chose the seemingly arbitrary value \(h(\mathbf{0})=-1\) in (6).
The first of the two lemmas that follow gives the principal ideals of \(\overline{M}_{n}\), as further explained in Section 6; the second will help us in Section 7, where we discuss the subsemigroup \(\{\mathbf{0}\}\cup\overline{P}_{n}\).
**Lemma 1**: _Let \(x,y\in\overline{M}_{n}\). Then \(h(y)\leq h(x)\) iff there exist \(z,w\in\overline{M}_{n}\) such that_
\[y=zxw. \tag{11}\]
_Furthermore, if \(0\leq h(y)\leq h(x)\) with \(x=\langle d,k,m\rangle\) and \(y=\langle d^{\prime},k^{\prime},m^{\prime}\rangle\), then (11) is satisfied by the nonzero elements \(z=\langle d_{z},k_{z},m_{z}\rangle\) and \(w=\langle d_{w},k_{w},m_{w}\rangle\) where_
\[d_{z}=k-k^{\prime},\quad k_{z}=k^{\prime},\quad m_{z}=m^{\prime}; \tag{12}\] \[d_{w}=k^{\prime}+d^{\prime}-k-d,\quad k_{w}=k+d,\quad m_{w}=k+d +m^{\prime}-k^{\prime}. \tag{13}\]
Proof: If (11) holds, then \(h(y)\leq h(x)\) by (9). Conversely, suppose that \(h(y)\leq h(x)\). If \(x=\mathbf{0}\) or \(y=\mathbf{0}\), (11) is trivial. We thus take \(x,y\in\overline{M}_{n}\setminus\{\mathbf{0}\}\); call \(x=\langle d,k,m\rangle\), \(y=\langle d^{\prime},k^{\prime},m^{\prime}\rangle\); and define \(z\), \(w\) by (12), (13). By (6), the assumption \(h(y)\leq h(x)\) amounts to
\[m^{\prime}-k^{\prime}\leq m-k. \tag{14}\]
Write the conditions in (1) for \(d\), \(k\), \(m\), and again for \(d^{\prime}\), \(k^{\prime}\), \(m^{\prime}\). Upon invoking (12)-(14), we can easily deduce identical conditions for \(d_{z}\), \(k_{z}\), \(m_{z}\) and for \(d_{w}\), \(k_{w}\), \(m_{w}\). Thus \(z\) and \(w\) are well-defined elements of \(\overline{M}_{n}\setminus\{\mathbf{0}\}\). Finally, a quick calculation based on the multiplication formula (3) verifies (11).
**Lemma 2**: _Let \(y\in\{\mathbf{0}\}\cup\overline{P}_{n}\). Let \(x\in\overline{M}_{n}\setminus\{\mathbf{0}\}\). Then there exist \(z,w\in\{\mathbf{0}\}\cup\overline{P}_{n}\) such that \(y=zxw\)._
Proof: If \(y=\mathbf{0}\), the statement is trivial. Otherwise \(y\in\overline{P}_{n}\), so \(0=h(y)\leq h(x)\) by (7) and (8). Thus (11) holds, where \(z,w\in\overline{M}_{n}\) are given by (12) and (13) with \(m^{\prime}=k^{\prime}\). It follows that \(k_{z}=m_{z}\) and \(k_{w}=m_{w}\), so that \(z,w\in\overline{P}_{n}\).
## 3 Basic results
This section discusses certain properties of \(\overline{M}_{n}\) that readily follow from the definitions of Section 2.
Our first result has no counterpart in the integer case. By means of an affine transformation \(\varphi\) (easily visualized by means of Fig. 1), we demonstrate that all \(\overline{M}_{n}\) are isomorphic:
**Proposition 1**: _Let \(n,q\) be integers \(\geq 2\). The map \(\varphi:\overline{M}_{n}\to\overline{M}_{q}\) given by_
\[\mathbf{0}\mapsto\mathbf{0},\quad x_{n}=\langle d_{n},k_{n},m_{n}\rangle \mapsto x_{q}=\langle d_{q},k_{q},m_{q}\rangle, \tag{15}\]
_where_
\[d_{q}=\frac{q-1}{n-1}d_{n},\quad k_{q}-1=\frac{q-1}{n-1}(k_{n}-1),\quad m_{q} -1=\frac{q-1}{n-1}(m_{n}-1), \tag{16}\]
_is a monoid isomorphism. In particular any \(\overline{M}_{n}\) is isomorphic to \(\overline{M}_{2}\)._
Proof: \(\varphi\) is bijective by (1), while \(\varphi(x_{n}y_{n})=\varphi(x_{n})\varphi(y_{n})\) and \(\varphi(\langle 0,1,n\rangle)=\langle 0,1,q\rangle\) follow from (3).
Remark 2: By Proposition 1, a stand-alone study of \(\overline{M}_{n}\) would be facilitated if one took \(n=2\), corresponding to segments lying in a \(1\times 1\) closed square. However, we retain the parameter \(n\) in order to draw upon and compare to results from [3].
Remark 3: Eqn. (6) and Proposition 1 imply that, for \(x_{n}\in\overline{M}_{n}\setminus\{\mathbf{0}\}\),
\[h(x_{q})=\frac{q-1}{n-1}h(x_{n}). \tag{17}\]
Eqn. (17) shows why, for \(\overline{M}_{n}\), we use the height \(h(x)\) instead of extending (to the non-integer case) the quantity \(\mathrm{rnk}(x)=m-k+1\) mentioned in Remark 1: In \(\overline{M}_{n}\), the latter quantity would scale in an unnatural manner.
The two propositions that follow resemble results of [3]. The first is a formula for powers which can be verified from (3) by induction.
**Proposition 2**: _For \(x=\langle d,k,m\rangle\in\overline{M}_{n}\setminus\{\mathbf{0}\}\) and \(j\in\mathbb{N}\) we have_
\[x^{j}=\left\{\begin{aligned} &\left\langle d^{(j)},k^{(j)},m^{(j)} \right\rangle,\quad\text{if}\quad k^{(j)}\leq m^{(j)},\\ &\mathbf{0},\quad\text{if}\quad k^{(j)}>m^{(j)},\end{aligned}\right. \tag{18}\]
_where_
\[d^{(j)}=jd,\quad k^{(j)}=k-(j-1)\min(0,d),\quad m^{(j)}=m-(j-1)\max(0,d). \tag{19}\]
The next proposition states that \(\overline{M}_{n}\) consists solely of idempotents and nilpotents and gives the nilpotent indexes \(i(x)\). In contrast to the integer case of \(M_{n}\) (and as expected from the aforementioned isomorphism, which leaves \(i(x)\) unaltered), \(i(x)\) can take on values larger than \(n\).
**Proposition 3**: _An element \(x=\langle d,k,m\rangle\in\overline{M}_{n}\setminus\{\mathbf{0}\}\) is idempotent if \(d=0\) and nilpotent if \(d\neq 0\). When \(d\neq 0\), the index \(i(x)\) of the nilpotent is given by_
\[i(x)=2+\left\lfloor\frac{m-k}{|d|}\right\rfloor=2+\left\lfloor\frac{h(x)}{|d| }\right\rfloor, \tag{20}\]
_where \(\lfloor\beta\rfloor\) denotes the floor of \(\beta\in\mathbb{R}\). In particular, \(i(x)=2\) when \(x\in\overline{P}_{n}\); and \(i(x)\to\infty\) as \(d\to 0\) (with \(h(x)=m-k\) held fixed and positive)._
Proof: Proposition 2 gives \(\langle 0,k,m\rangle^{2}=\langle 0,k,m\rangle\), so \(x\) is idempotent when \(d=0\). When \(d>0\), the \(m^{(j)}\) in (19) decreases linearly with \(j\), while \(k^{(j)}=k\) remains constant. Thus \(k^{(j)}>m^{(j)}\) for large enough \(j\), in which case \(x^{j}=\mathbf{0}\) by (18). The index \(i(x)\) is the smallest such \(j\) and is given by (20). The proof for \(d<0\) is similar.
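Continuing the hypothetical Python sketch from Section 2, Eq. (20) can be cross-checked by repeated multiplication:

```python
import math

def index(x):
    """Smallest j with x^j = 0, found by repeated multiplication."""
    j, p = 1, x
    while p is not ZERO:
        p = product(p, x)
        j += 1
    return j

x = (0.5, 1.0, 2.5)                            # h(x) = 1.5, |d| = 0.5
assert index(x) == 2 + math.floor(1.5 / 0.5)   # Eq. (20): i(x) = 5
```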
Remark 4: In the special case of integer parameters (\(x\in M_{n}\)) we can show that (20) reduces to formula (28) of [3] (which involves the ceiling rather than the floor function). However, (28) of [3] _does not hold_ for the more general case \(x\in\overline{M}_{n}\).
We close this section by giving a number of semigroup classes to which \(\overline{M}_{n}\) and \(\overline{S}_{n}=\overline{M}_{n}\setminus\{\mathbf{1}\}\) belong.
**Theorem 4**: _Both \(\overline{M}_{n}\) and_
\[\overline{S}_{n}=\overline{M}_{n}\setminus\{\mathbf{1}\}=\overline{M}_{n} \setminus\{\langle 0,1,n\rangle\} \tag{21}\]
_are noncommutative, inverse, periodic, combinatorial semigroups. In both \(\overline{M}_{n}\) and \(\overline{S}_{n}\), the unique inverse of \(x\) is_
\[x^{T}=\left\{\begin{array}{ll}\mathbf{0},&x=\mathbf{0},\\ \langle-d,k+d,m+d\rangle,&x=\langle d,k,m\rangle\neq\mathbf{0}.\end{array}\right. \tag{22}\]
Proof: We first prove the assertions for \(\overline{M}_{n}\). A semigroup is _periodic_ when all of its elements are of finite order, i.e., when the monogenic subsemigroup generated by any semigroup element has finite cardinality [4]. As the elements of \(\overline{M}_{n}\) are either idempotent or nilpotent (Proposition 3), \(\overline{M}_{n}\) is periodic. For \(x\in\overline{M}_{n}\) and for the \(x^{T}\) defined in (22), we can use (3) to show \(xx^{T}x=x\) and \(x^{T}xx^{T}=x^{T}\). Thus \(x^{T}\) is an inverse of \(x\) and \(\overline{M}_{n}\) is a regular semigroup. By Proposition 3 the idempotents are \(\mathbf{0}\) together with the elements \(\langle 0,k,m\rangle\); and by (3), all these idempotents commute. Accordingly [5], \(\overline{M}_{n}\) is an inverse semigroup and the inverse \(x^{T}\) is unique (in \(\overline{M}_{n}\)). As shown in Theorem 6 below, \(\mathcal{H}\) is the equality relation. Therefore [5], \(\overline{M}_{n}\) is a combinatorial semigroup.
If \(\mathbf{1}=xy\), then \(h(\mathbf{1})\leq h(x)\) by (9), so that \(x=\mathbf{1}\) by (7) and (8). Similarly, \(y=\mathbf{1}\). We have thus shown
\[xy=\mathbf{1}\implies x=y=\mathbf{1},\quad x,y\in\overline{M}_{n}. \tag{23}\]
This implies that \(\overline{M}_{n}\setminus\{\mathbf{1}\}=\overline{S}_{n}\) is a semigroup and that our theorem (already proved for \(\overline{M}_{n}\)) carries over to \(\overline{S}_{n}\).
Inverse semigroups are associated with a natural partial order [4; 5] which, for our nonzero elements, can be formulated in terms of triplet parameters:
**Corollary 1**: _Let \(\leq\) be the natural partial order in \(\overline{M}_{n}\setminus\{\mathbf{0}\}\). Then_
\[\langle d,k,m\rangle\leq\langle d^{\prime},k^{\prime},m^{\prime}\rangle \iff d=d^{\prime},\ k\geq k^{\prime},\ \mathrm{and}\ m\leq m^{\prime}. \tag{24}\]
_Proof_ As \(x\leq y\) iff \(x=xx^{T}y\) [5], the assertion follows easily from (3) and (22).
Therefore two segments are comparable iff one lies upon and is contained within the other, in which case the shorter segment is \(\leq\) the longer one.
## 4 \(j\)th roots
Theorem 6 of [3] discusses \(j\)th roots for the integer case: In \(M_{n}\), a nonzero element \(x=\langle d,k,m\rangle\) has a \(j\)th root iff \(d\) is an integer multiple of \(j\); and the \(j\)th root, when it exists, is unique. The theorem that follows shows that, in \(\overline{M}_{n}\), a unique root \(y\)_always_ exists. In other words (and in complete analogy to the case of \(\mathbb{R}_{>0}\) and its subset \(\mathbb{N}\)) any nonzero element \(x\in\overline{M}_{n}\) (\(x\in\mathbb{R}_{>0}\)) has a unique root \(y\in\overline{M}_{n}\) (\(y\in\mathbb{R}_{>0}\)); but in the special case \(x\in M_{n}\) (\(x\in\mathbb{N}\)), the said root \(y\) is not necessarily in \(M_{n}\) (in \(\mathbb{N}\)).
**Theorem 5**: _Let \(j\in\mathbb{N}\). The element \(x=\langle d,k,m\rangle\in\overline{M}_{n}\setminus\{\mathbf{0}\}\) has a unique \(j\)th root in \(\overline{M}_{n}\). It is given by \(y=\langle d^{\prime},k^{\prime},m^{\prime}\rangle\in\overline{M}_{n}\setminus \{\mathbf{0}\}\), where_
\[d^{\prime}=\frac{d}{j},\quad k^{\prime}=k+(j-1)\min(0,d^{\prime}),\quad m^{ \prime}=m+(j-1)\max(0,d^{\prime}). \tag{25}\]
_Proof_ Assume \(d\geq 0\), so that (1) implies
\[1\leq k\leq m\leq n-d. \tag{26}\]
We seek \(y\in\overline{M}_{n}\) such that \(x=y^{j}\). As \(y\neq\mathbf{0}\), we set \(y=\langle d^{\prime},k^{\prime},m^{\prime}\rangle\). By Proposition 2, \(d^{\prime}=d/j\geq 0\). Invoking (1), we thus require
\[1\leq k^{\prime}\leq m^{\prime}\leq n-d^{\prime}. \tag{27}\]
By Proposition 2, \(x=y^{j}\) is equivalent to the three equations
\[d=jd^{\prime},\quad k=k^{\prime},\quad m=m^{\prime}-(j-1)d^{\prime}.\]
These are uniquely solvable for \(d^{\prime}\), \(k^{\prime}\), \(m^{\prime}\) and the solution is given in (25). Eqns. (25) and (26) then imply (27), completing the proof for \(d\geq 0\). We can extend to \(d<0\) by taking the inverse.
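As a small numeric illustration (our own example, not taken from the source), let \(n=3\), \(x=\langle 1,1,2\rangle\in M_{3}\) and \(j=2\). Then (25) gives

\[y=\left\langle\tfrac{1}{2},\,1,\,\tfrac{5}{2}\right\rangle,\qquad y^{2}=\left\langle 2\cdot\tfrac{1}{2},\;1,\;\tfrac{5}{2}-\tfrac{1}{2}\right\rangle=\langle 1,1,2\rangle=x,\]

so \(x\) has a (unique) square root in \(\overline{M}_{3}\), although \(d=1\) is not a multiple of \(j=2\) and hence \(x\) has no square root in \(M_{3}\) by Theorem 6 of [3].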
_Remark 5_: One could also consider the submonoid \(A_{n}\) of \(\overline{M}_{n}\) in which \(d,k,m\in\mathbb{Q}\). For \(x\in A_{n}\setminus\{\mathbf{0}\}\), the unique \(j\)th root \(y\) given in (25) also belongs to \(A_{n}\setminus\{\mathbf{0}\}\). Thus in \(A_{n}\setminus\{\mathbf{0}\}\), a unique root \(y\) always exists. Consequently, despite the aforementioned analogy of \(\overline{M}_{n}\) to \(\mathbb{R}_{>0}\) and \(M_{n}\) to \(\mathbb{N}\), the submonoid \(A_{n}\) is not analogous to \(\mathbb{Q}_{>0}\).
## 5 Green's relations
The theorem below gives Green's relations on \(\overline{M}_{n}\), which turn out to be very similar to those in \(M_{n}\).
**Theorem 6**: _In the inverse monoid \(\overline{M}_{n}\), Green's relations for any two nonzero elements \(x=\langle d,k,m\rangle\) and \(y=\langle d^{\prime},k^{\prime},m^{\prime}\rangle\) are as follows._
\[x\mathcal{R}y\iff k=k^{\prime}\ \mathrm{and}\ m=m^{\prime}, \tag{28}\]
\[x\mathcal{L}y\iff k+d=k^{\prime}+d^{\prime}\ \mathrm{and}\ m+d=m^{\prime}+d^{ \prime}, \tag{29}\]
\[x\mathcal{H}y\iff x=y, \tag{30}\]
\[x\mathcal{D}y\iff x\mathcal{J}y\iff h(x)=h(y). \tag{31}\]
_In all cases, \(\mathbf{0}\) forms a class of its own,_
\[R_{\mathbf{0}}=L_{\mathbf{0}}=H_{\mathbf{0}}=D_{\mathbf{0}}=J_{\mathbf{0}}= \{\mathbf{0}\}. \tag{32}\]
Proof: The proof is identical to the proof of Theorem 12 of [3] with two exceptions: (i) The condition \(m-k=m^{\prime}-k^{\prime}\) translates to \(h(x)=h(y)\) (rather than \(\mathrm{rnk}(x)=\mathrm{rnk}(y)\), see Remark 3); (ii) The equality \(\mathcal{J}=\mathcal{D}\) holds by virtue of Theorem 4, because \(\mathcal{J}=\mathcal{D}\) in any semigroup that is periodic [4] (note that, unlike \(M_{n}\), \(\overline{M}_{n}\) is not finite).
Our theorem has simple graphical interpretations. The equivalence class \(R_{x}\) (\(L_{x}\)) consists of all horizontal (vertical) translations of the segment \(x\). Furthermore, the result that \(J_{x}\) consists of all segments of height \(h(x)\) means that elements whose heights are equal generate the same principal ideal. The next section goes beyond this observation and explicitly describes all ideals, whether principal or not.
## 6 Ideals of \(\overline{M}_{n}\)
As opposed to \(M_{n}\) (see Theorem 13 of [3]), \(\overline{M}_{n}\) has (two-sided) ideals that are not principal. In the theorem that follows, these non-principal ideals are denoted by \(K_{\mu}\).
**Theorem 7**: _The principal ideals of \(\overline{M}_{n}\) are precisely the following sets \(I_{\mu}\),_
\[I_{\mu}=\{y\in\overline{M}_{n}:h(y)\leq\mu\},\quad\mu\in\{-1\}\cup[0,n-1]. \tag{33}\]
_In particular,_
\[I_{-1}=\{\mathbf{0}\},\quad I_{0}=\{\mathbf{0}\}\cup\overline{P}_{n},\quad I _{n-1}=\overline{M}_{n}. \tag{34}\]
_The \(I_{\mu}\) defined in (33) are also given by_
\[I_{\mu}=\overline{M}_{n}x\overline{M}_{n}, \tag{35}\]
_in which \(x\) is any element of \(\overline{M}_{n}\) with \(h(x)=\mu\)._
_The non-principal ideals of \(\overline{M}_{n}\) are precisely the following sets \(K_{\mu}\),_
\[K_{\mu}=\{y\in\overline{M}_{n}:h(y)<\mu\},\quad\mu\in(0,n-1]. \tag{36}\]
_It follows that the collections \(\{I_{\mu}\}\) and \(\{K_{\mu}\}\) are both strictly totally ordered; that is, \(I_{\mu}\subset I_{\xi}\) and \(K_{\mu}\subset K_{\xi}\) whenever \(\mu<\xi\)._
Proof.: Define the sets \(I_{\mu}\) by (33) and choose an \(x\in\overline{M}_{n}\) such that \(h(x)=\mu\). The iff statement of Lemma 1 can then be rephrased as: \(y\in I_{\mu}\iff y\in\overline{M}_{n}x\overline{M}_{n}\). We have thus shown (35). Therefore all principal ideals are given in (33).
The special cases in (34) follow from (8) and (33).
We now let \(I\) be an _arbitrary_ ideal. From
\[I=\overline{M}_{n}I\overline{M}_{n}=\cup_{x\in I}\overline{M}_{n}x\overline{M }_{n}, \tag{37}\]
we see that \(I\) is a union of principal ideals. By (33), these are totally ordered sets. If \(I\) contains an element \(x\) such that \(h(y)\leq h(x)\) for all \(y\in I\), then the union in (37) equals \(I_{\mu}\), where \(\mu=h(x)=\max_{y\in I}\{h(y)\}\), so that \(I\) is itself a principal ideal. If there is no such element \(x\in I\)--i.e., if the subset \(\{h(y):y\in I\}\) of \(\mathbb{R}\) has no maximum--then the union in (37) is one of the totally ordered sets in (36), namely \(K_{\mu}\), where \(\mu=\sup_{y\in I}\{h(y)\}\).
It remains to show, conversely, that all the \(K_{\mu}\) defined in (36) are ideals. Let \(y\in\overline{M}_{n}K_{\mu}\), so that \(y=zw\) with \(z\in\overline{M}_{n}\) and \(w\in K_{\mu}\). It follows from (9) that \(h(y)\leq h(w)\). Since \(h(w)<\mu\), we have \(h(y)<\mu\), so that \(y\in K_{\mu}\). Hence \(\overline{M}_{n}K_{\mu}\subseteq K_{\mu}\), so \(K_{\mu}\) is a left ideal by definition. Similarly, \(K_{\mu}\) is a right ideal. Thus \(K_{\mu}\) is a two-sided ideal, completing our proof.
## 7 The Brandt semigroup as a subsemigroup of \(\overline{M}_{n}\)
By (1) and (5), the set \(\overline{B}_{n}=\{\mathbf{0}\}\cup\overline{P}_{n}\) is given by
\[\overline{B}_{n}=\{\mathbf{0}\}\cup\overline{P}_{n}=\{\mathbf{0}\}\cup\{ \langle d,k,k\rangle:\ 1-\min(0,d)\leq k\leq n-\max(0,d)\}. \tag{38}\]
Example 2 of [3] shows that, in the integer case, the subsemigroup \(B_{n}\) of \(\overline{B}_{n}\) is isomorphic to a certain Brandt semigroup of finite cardinality. The theorem that follows is a generalization that can be proved in a number of ways. We give a proof that builds upon previous results in the present paper, as well as concepts and results on inverse semigroups that can be found in [5].
**Theorem 8**: \(\overline{B}_{n}\) _is a Brandt semigroup._
Proof: By (3), (22), and (38), \(\overline{B}_{n}\) is an inverse subsemigroup of \(\overline{M}_{n}\). Therefore \(\overline{B}_{n}\) inherits its natural partial order \(\leq\) from \(\overline{M}_{n}\). By (38) and Corollary 1, \(x\leq y\) iff \(x=y\) (\(x,y\in\overline{P}_{n}\)), meaning that in \(\overline{P}_{n}=\overline{B}_{n}\setminus\{\mathbf{0}\}\), the \(\leq\) reduces to an equality. Equivalently [5], all idempotents of \(\overline{B}_{n}\setminus\{\mathbf{0}\}\) are primitive.
Now let \(I\subseteq\overline{B}_{n}\) be an ideal of \(\overline{B}_{n}\). Assume \(I\neq\{\mathbf{0}\}\), so that some nonzero \(x\) belongs to \(I\). Choose any \(y\) in \(\overline{B}_{n}\). By Lemma 2, this \(y\) belongs to the principal ideal \(\overline{B}_{n}x\overline{B}_{n}\), so that \(\overline{B}_{n}\subseteq\overline{B}_{n}x\overline{B}_{n}\). As \(\overline{B}_{n}x\overline{B}_{n}\subseteq I\), we further have \(\overline{B}_{n}\subseteq I\), so \(I=\overline{B}_{n}\). Therefore the only ideals of \(\overline{B}_{n}\) are \(\{\mathbf{0}\}\) and \(\overline{B}_{n}\) itself, meaning that \(\overline{B}_{n}\) is \(0\)-simple.
Inverse, \(0\)-simple semigroups with at least one primitive idempotent are Brandt semigroups [5], completing our proof.
## 8 Sierpinski rank of \(\overline{M}_{n}\)
Corollary 6 of [3] determines a minimal generating set for \(M_{n}\) that, for any \(n\), consists of only three elements. Thus the rank of \(M_{n}\) (integer case) is \(3\). Since \(\overline{M}_{n}\) is uncountable, the situation is very different. In what follows, we prove that \(\overline{M}_{n}\) has infinite Sierpinski rank [6; 7; 8], meaning that there are countable subsets of \(\overline{M}_{n}\) that cannot be generated by finitely many elements of \(\overline{M}_{n}\).
**Theorem 9**: _The Sierpinski rank of \(\overline{M}_{n}\) is infinite._
Proof: It suffices to prove that the Sierpinski rank of \(\overline{S}_{n}=\overline{M}_{n}\setminus\{\mathbf{1}\}\) is infinite, see (21) or (23). By (1), the countable set \(A_{n}=\{y_{i}:i\in\mathbb{N}\}\) with elements
\[y_{i}=\langle 2^{-i},1,n-2^{-i}\rangle,\]
is a well-defined subset of \(\overline{S}_{n}\). By (6), the sequence of heights \(h(y_{i})\) increases, with
\[\sup_{i\in\mathbb{N}}h(y_{i})=\lim_{i\to\infty}\Big{(}n-2^{-i}-1\Big{)}=n-1. \tag{39}\]
Assume that \(A_{n}\) is generated by a finite set with \(r\) elements \(G_{n}=\{g_{1},\ldots,g_{r}\}\). For every \(i\in\mathbb{N}\) this implies that \(y_{i}=g_{i_{1}}g_{i_{2}}\ldots g_{i_{s}}\) for some \(i_{1},i_{2},\ldots,i_{s}\in\{1,\ldots,r\}\). By (9) this means that \(h(y_{i})\leq\min\{h(g_{i_{1}}),h(g_{i_{2}}),\ldots,h(g_{i_{s}})\}\leq h_{\max}\), where \(h_{\max}=\max\{h(g_{j}):j\in\{1,\ldots,r\}\}\). Since \(\mathbf{1}\notin G_{n}\subset\overline{S}_{n}\), (7) and (8) give \(h_{\max}<n-1\), which contradicts (39).
|
2306.15548 | Geometric Ultrasound Localization Microscopy | Contrast-Enhanced Ultra-Sound (CEUS) has become a viable method for
non-invasive, dynamic visualization in medical diagnostics, yet Ultrasound
Localization Microscopy (ULM) has enabled a revolutionary breakthrough by
offering ten times higher resolution. To date, Delay-And-Sum (DAS) beamformers
are used to render ULM frames, ultimately determining the image resolution
capability. To take full advantage of ULM, this study questions whether
beamforming is the most effective processing step for ULM, suggesting an
alternative approach that relies solely on Time-Difference-of-Arrival (TDoA)
information. To this end, a novel geometric framework for micro bubble
localization via ellipse intersections is proposed to overcome existing
beamforming limitations. We present a benchmark comparison based on a public
dataset for which our geometric ULM outperforms existing baseline methods in
terms of accuracy and robustness while only utilizing a portion of the
available transducer data. | Christopher Hahne, Raphael Sznitman | 2023-06-27T15:18:52Z | http://arxiv.org/abs/2306.15548v3 | # Geometric Ultrasound Localization Microscopy
###### Abstract
Contrast-Enhanced Ultra-Sound (CEUS) has become a viable method for non-invasive, dynamic visualization in medical diagnostics, yet Ultrasound Localization Microscopy (ULM) has enabled a revolutionary breakthrough by offering ten times higher resolution. To date, Delay-And-Sum (DAS) beamformers are used to render ULM frames, ultimately determining the image resolution capability. To take full advantage of ULM, this study questions whether beamforming is the most effective processing step for ULM, suggesting an alternative approach that relies solely on Time-Difference-of-Arrival (TDoA) information. To this end, a novel geometric framework for microbubble localization via ellipse intersections is proposed to overcome existing beamforming limitations. We present a benchmark comparison based on a public dataset for which our geometric ULM outperforms existing baseline methods in terms of accuracy and robustness while only utilizing a portion of the available transducer data.
Keywords:Ultrasound Microbubble Localization Microscopy Geometry Parallax Triangulation Trilateration Multilateration Time-of-Arrival
## 1 Introduction
Ultrasound Localization Microscopy (ULM) has revolutionized medical imaging by enabling sub-wavelength resolution from images acquired by piezo-electric transducers and computational beamforming. However, the necessity of beamforming for ULM remains questionable. Our work challenges the conventional assumption that beamforming is the ideal processing step for ULM and presents an alternative approach based on geometric reconstruction from Time-of-Arrival (ToA) information.
The discovery of ULM has recently surpassed the diffraction-limited spatial resolution and enabled highly detailed visualization of the vascularity [8]. ULM borrows concepts from super-resolution fluorescence microscopy techniques to precisely locate individual particles with sub-pixel accuracy over multiple frames. By the accumulation of all localizations over time, ULM can produce a super-resolved image, providing researchers and clinicians with highly detailed representation of the vascular structure.
While Contrast-Enhanced Ultra-Sound (CEUS) is used in the identification of musculoskeletal soft tissue tumours [5], the far higher resolution capability offered by ULM has great potential for clinical translation to improve the reliability of cancer diagnosis (_i.e._, enable differentiation of tumour types in kidney cancer [7] or detect breast cancer tissue [1]). Moreover, ULM has shown promise in imaging neurovascular activity after visual stimulation (functional ULM) [14]. The pioneering study by Errico _et
al._[8] initially demonstrated the potential of ULM by successfully localizing contrast agent particles (microbubbles) using a 2D point-spread-function model. In general, the accuracy in MicroBubble (MB) localization is the key to achieving sub-wavelength resolution [4], for which classical imaging methods [17, 11], as well as deep neural networks [16, 1], have recently been reported.
However, the conventional approach for ULM involves using computational beamformers, which may not be ideal for MB localization. For example, a recent study has shown that ultrasound image segmentation can be learned from radio-frequency data and thus without beamforming [13]. Beamforming techniques have been developed to render irregular topologies, whereas MBs exhibit a uniform geometric structure, for which ULM only requires information about its spatial position. Although the impact of adaptive beamforming has been studied for ULM to investigate its potential to refine MB localization [3], optimization of the Point-Spread Function (PSF) poses high demands on the transducer array, data storage, and algorithm complexity.
To this end, we propose an alternative approach for ULM, outlined in Fig. 1, that entirely relies on Time-Difference-of-Arrival (TDoA) information, omitting beamforming from the processing pipeline for the first time. We demonstrate a novel geometry framework for MB localization through ellipse intersections to overcome limitations inherent to beamforming. This approach provides a finer distinction between overlapping and clustered spots, improving localization precision, reliability, and computation efficiency. In conclusion, we challenge the conventional wisdom that beamforming is necessary for ULM and propose a novel approach that entirely relies on TDoA information for MB localization. Our proposed approach demonstrates promising results and indicates a considerable trade-off between precision, computation, and memory.
Figure 1: Comparison of ULM processing pipelines: Classical ULM (top) employs computational beamforming from \(N\) channels and image filters to localize microbubbles. Our geometric ULM (bottom) consists of a cross-channel phase-consistent Time-of-Arrival detection (left) to form ellipses that intersect at a microbubble position (middle). As a refinement step, ellipse intersections are fused via clustering (right).
Method
Geometric modeling is a useful approach for locating landmarks in space. One common method involves using a Time-of-Flight (ToF) round-trip setup that includes a transmitter and multiple receivers [10]. This setup is analogous to the parallax concept in visual imaging, where a triangle is formed between the target, emitter, and receivers, as illustrated in Figure 2. The target's location can be accurately estimated using trilateration by analyzing the time delay between the transmitted and received signals. However, the triangle's side lengths are unknown in the single receiver case, and all possible travel path candidates form triangles with equal circumferences fixed at the axis connecting the receiver and the source. These candidates reside on an elliptical shape. By adding a second receiver, its respective ellipse intersects with the first one resolving the target's 2-D position. Thus, the localization accuracy depends on the ellipse model, which is parameterized by the known transducer positions and the time delays we seek to estimate. This section describes a precise echo feature extraction, which is essential for building the subsequent ellipse intersection model. Finally, we demonstrate our localization refinement through clustering.
### Feature Extraction
Feature extraction of acoustic signals has been thoroughly researched [18, 9]. To leverage the geometric ULM localization, we wish to extract Time-of-Arrival (ToA) information (instead of beamforming) at sub-wavelength precision. Despite the popularity of deep neural networks, which have been studied for ToA detection [18], we employ an energy-based model [9] for echo feature extraction to demonstrate the feasibility of our geometric ULM at the initial stage. Ultimately, future studies can combine our proposed localization with a supervised network. Here, echoes \(f(\mathbf{m}_{k};t)\) are modeled as Multimodal Exponentially-Modified Gaussian Oscillators (MEMGO) [9],
\[f(\mathbf{m}_{k};t)=\alpha_{k}\,\exp\left(-\frac{\left(t-\mu_{k} \right)^{2}}{2\sigma_{k}^{2}}\right)\left(1+\text{erf}\left(\eta_{k}\frac{t- \mu_{k}}{\sigma_{k}\sqrt{2}}\right)\right)\cos\left(\omega_{k}\left(t-\mu_{k} \right)+\phi_{k}\right), \tag{1}\]
where \(t\in\mathbb{R}^{T}\) denotes the time domain with a total number of \(T\) samples and \(\mathbf{m}_{k}=[\alpha_{k},\mu_{k},\sigma_{k},\eta_{k},\omega_{k},\phi_{k}]^{ \intercal}\in\mathbb{R}^{6}\) contains the amplitude \(\alpha_{k}\), mean \(\mu_{k}\), spread \(\sigma_{k}\), skew \(\eta_{k}\), angular frequency \(\omega_{k}\) and phase \(\phi_{k}\) for each echo \(k\). Note that \(\text{erf}(\cdot)\) is the error function. To estimate these parameters iteratively, the cost function is given by,
\[\mathcal{L}_{\text{E}}\left(\mathbf{\hat{m}}_{n}\right)=\left\|y _{n}(t)-\sum_{k=1}^{K}f\left(\mathbf{m}_{k};t\right)\right\|_{2}^{2}, \tag{2}\]
where \(y_{n}(t)\) is the measured signal from waveform channel \(n\in\{1,2,\ldots,N\}\) and the sum over \(k\) accumulates all echo components \(\mathbf{\hat{m}}_{n}=[\mathbf{m}_{1}^{\intercal},\mathbf{m}_{2}^{\intercal}, \ldots,\mathbf{m}_{K}^{\intercal}]^{\intercal}\). We get
the best echo feature set \(\hat{\mathbf{m}}_{n}^{\star}\) over all iterations \(j\) via,
\[\hat{\mathbf{m}}_{n}^{\star}=\operatorname*{arg\,min}_{\hat{\mathbf{m}}_{n}^{(j)}}\,\mathcal{L}_{\text{E}}\big{(}\hat{\mathbf{m}}_{n}^{(j)}\big{)}, \tag{3}\]
for which we use the Levenberg-Marquardt solver. Model-based optimization requires initial estimates near the solution space. For this, we detect initial ToAs via gradient-based analysis of the Hilbert-transformed signal to set \(\hat{\mathbf{m}}_{n}^{(1)}\), as in [9].
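For illustration, a compact NumPy/SciPy sketch of the MEMGO model (1) and the least-squares fit (2)-(3) follows; the function names and the flat parameter layout are assumptions made here, not the reference implementation of [9].

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def memgo(m, t):
    """One MEMGO echo of Eq. (1); m = (alpha, mu, sigma, eta, omega, phi)."""
    alpha, mu, sigma, eta, omega, phi = m
    env = alpha * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))
    skew = 1.0 + erf(eta * (t - mu) / (sigma * np.sqrt(2)))
    return env * skew * np.cos(omega * (t - mu) + phi)

def fit_echoes(y, t, m_init):
    """Levenberg-Marquardt fit of Eq. (2); m_init stacks K initial echo estimates."""
    def residual(m_flat):
        model = sum(memgo(mk, t) for mk in m_flat.reshape(-1, 6))
        return y - model
    sol = least_squares(residual, np.ravel(m_init), method='lm')
    return sol.x.reshape(-1, 6)
```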
Before geometric localization, one must ensure that detected echo components correspond to the same MB. In this work, echo matching is accomplished in a heuristic brute-force fashion. Given an echo component \(\mathbf{m}_{n,k}^{\star}\) from a reference channel index \(n\), a matching echo component from an adjacent channel index \(n\pm g\) with gap \(g\in\mathbb{N}\) is found by \(k+h\) in the neighborhood of \(h\in\{-1,0,1\}\). A corresponding phase-precise ToA \(t_{n,k}^{\star}\) is obtained by \(t_{n\pm g,k}^{\star}=\mu_{n\pm g,k+h}^{\star}+\phi_{n,k}^{\star}-\Delta\), which takes \(\mu_{n,k}^{\star}\) and \(\phi_{n,k}^{\star}\) from \(\hat{\mathbf{m}}_{n}^{\star}\) for phase-precise alignment across transducer channels after upsampling. Here, \(\Delta\) is a fixed offset to accurately capture the onset of the MB locations [2]. We validate echo correspondence through a re-projection error in adjacent channels and reject those with weak alignment.
### Ellipse Intersection
While ellipse intersections can be approximated iteratively, we employ Eberly's closed-form solution [6] owing to its fast computation property. Although one might expect that the intersection of arbitrarily placed ellipses is straightforward, it involves advanced mathematical modelling due to the degrees of freedom in the ellipse positioning. An ellipse is drawn by radii \((r_{a},r_{b})\) of the major and minor axes with,
\[r_{a}=\frac{t_{n,k}^{\star}}{2},\quad\text{and}\quad r_{b}=\frac{1}{2}\,\sqrt{ \left(t_{n,k}^{\star}\right)^{2}-\|\hat{\mathbf{u}}_{s}-\mathbf{u}_{n}\|_{2}^ {2}}, \tag{4}\]
where the virtual transmitter \(\hat{\mathbf{u}}_{s}\in\mathbb{R}^{2}\) and each receiver \(\mathbf{u}_{n}\in\mathbb{R}^{2}\) with channel index \(n\) represent the focal points of an ellipse, respectively. For the intersection, we begin with the ellipse standard equation. Let any point \(\mathbf{s}\in\mathbb{R}^{2}\) located on an ellipse and displaced by its center \(\mathbf{c}_{n}\in\mathbb{R}^{2}\) such that,
\[(\mathbf{s}-\mathbf{c}_{n})^{\intercal}\mathbf{M}\,(\mathbf{s}-\mathbf{c}_{n})=1,\quad\text{with}\quad\mathbf{M}=\frac{\mathbf{v}_{n}\mathbf{v}_{n}^{\intercal}}{r_{0}^{2}\,|\mathbf{v}_{n}|^{2}}+\frac{\mathbf{v}_{n}^{\perp}\mathbf{v}_{n}^{\perp\intercal}}{r_{1}^{2}\,|\mathbf{v}_{n}^{\perp}|^{2}}, \tag{5}\]
Figure 2: Transducer geometry used for the ellipse intersection and localization of a MB position \(\mathbf{s}^{\star}\) from virtual source \(\hat{\mathbf{u}}_{s}\) and receiver positions \(\mathbf{u}_{n}\), which span ellipses rotated by \(\mathbf{v}_{n}\) around their centers \(\mathbf{c}_{n}\).
where \(\mathbf{M}\) contains the ellipse equation with \(\mathbf{v}_{n}\) and \(\mathbf{v}_{n}^{\perp}\) as a pair of orthogonal ellipse direction vectors, corresponding to their radial extents \((r_{0},r_{1})\), as well as the squared norm \(\|\cdot\|_{2}^{2}\) and vector norm \(|\cdot|\). For subsequent root-finding, the goal is to convert the standard Eq. (5) to a quadratic polynomial with coefficients \(b_{j}\) given by, \(B(x,y)=b_{0}+b_{1}x+b_{2}y+b_{3}x^{2}+b_{4}xy+b_{5}y^{2}=0\), which, when written in vector-matrix form reads,
\[0=\begin{bmatrix}x&y\end{bmatrix}\begin{bmatrix}b_{3}&b_{4}/2\\ b_{4}/2&b_{5}\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}+\begin{bmatrix}b_{1}&b_{2}\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}+b_{0}=\mathbf{s}^{\intercal}\mathbf{B}\mathbf{s}+\mathbf{b}^{ \intercal}\mathbf{s}+b_{0}, \tag{6}\]
where \(\mathbf{B}\) and \(\mathbf{b}\) carry high-order polynomial coefficients \(b_{j}\) found via matrix factorization [6]. An elaborated version of this is found in the supplementary material.
Let two intersecting ellipses be given as quadratic equations \(A(x,y)\) and \(B(x,y)\) with coefficients \(a_{j}\) and \(b_{j}\), respectively. Their intersection is found via polynomial root-finding of the equation,
\[D(x,y)=d_{0}+d_{1}x+d_{2}y+d_{3}x^{2}+d_{4}xy=0, \tag{7}\]
where \(\forall j\), \(d_{j}=a_{j}-b_{j}\). When defining \(y=w-(a_{2}+a_{4}x)/2\) to substitute \(y\), we get \(A(x,w)=w^{2}+(a_{0}+a_{1}x+a_{3}x^{2})-(a_{2}+a_{4}x)^{2}/4=0\), which after rearranging is plugged into (7) to yield an intersection point \(\mathbf{s}_{i}^{\star}=[x_{i},w_{i}]^{\intercal}\). We refer the interested reader to the insightful descriptions in [6] for further implementation details.
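The closed-form intersection of [6] requires careful case handling; as a simple numerical alternative (for illustration only, not the method used in this paper), one can sample one ellipse parametrically and bisect the sign changes of the other conic along it:

```python
import numpy as np

def conic(b, p):
    """Evaluate B(x, y) = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2."""
    x, y = p
    return b[0] + b[1]*x + b[2]*y + b[3]*x**2 + b[4]*x*y + b[5]*y**2

def ellipse_point(c, v, r0, r1, theta):
    """Point on the ellipse with centre c, major direction v and radii (r0, r1)."""
    v = v / np.linalg.norm(v)
    v_perp = np.array([-v[1], v[0]])
    return c + r0 * np.cos(theta) * v + r1 * np.sin(theta) * v_perp

def intersections(c, v, r0, r1, b, samples=720, iters=60):
    """Roots of the second conic along the first ellipse via bisection."""
    th = np.linspace(0.0, 2.0 * np.pi, samples)
    f = np.array([conic(b, ellipse_point(c, v, r0, r1, t)) for t in th])
    points = []
    for i in np.flatnonzero(f[:-1] * f[1:] < 0):         # bracketed sign changes
        lo, hi = th[i], th[i + 1]
        f_lo = f[i]
        for _ in range(iters):                           # bisection refinement
            mid = 0.5 * (lo + hi)
            f_mid = conic(b, ellipse_point(c, v, r0, r1, mid))
            if f_lo * f_mid <= 0:
                hi = mid
            else:
                lo, f_lo = mid, f_mid
        points.append(ellipse_point(c, v, r0, r1, 0.5 * (lo + hi)))
    return points
```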
### Clustering
Micro bubble reflections are dispersed across multiple waveform channels yielding groups of location candidates for the same target bubble. Localization deviations result from ToA variations, which can occur due to atmospheric conditions, receiver clock errors, and system noise. Due to the random distribution of corresponding ToA errors (Kang and Liu, 2016), we regard these candidates as clusters. Thus, we aim to find a centroid \(\mathbf{p}^{\star}\) of each cluster using multiple bi-variate probability density functions of varying sample sizes by,
\[m(\mathbf{p}^{(j)})=\frac{\sum_{\mathbf{s}_{i}^{\star}\in\mathbf{\Omega}^{(j)}}\exp\left(-\|\mathbf{s}_{i}^{\star}-\mathbf{p}^{(j)}\|_{2}^{2}\right)\mathbf{s}_{i}^{\star}}{\sum_{\mathbf{s}_{i}^{\star}\in\mathbf{\Omega}^{(j)}}\exp\left(-\|\mathbf{s}_{i}^{\star}-\mathbf{p}^{(j)}\|_{2}^{2}\right)} \tag{8}\]
Here, the bandwidth of the kernel is set to \(\lambda/4\). The Mean Shift algorithm updates the estimate \(\mathbf{p}^{(j)}\) by setting it to the weighted mean density on each iteration \(j\) until convergence. In this way, we obtain the position of the target bubble.
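A minimal mean-shift sketch consistent with Eq. (8) is shown below; writing the kernel with an explicit bandwidth \(h=\lambda/4\) is our reading of the text, and the function names are our own.

```python
import numpy as np

def mean_shift(points, p0, bandwidth, iters=100, tol=1e-9):
    """Gaussian-kernel mean shift; `points` holds the ellipse intersections."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        w = np.exp(-np.sum((points - p) ** 2, axis=1) / bandwidth ** 2)
        p_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(p_new - p) < tol:              # converged
            break
        p = p_new
    return p
```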
## 3 Experiments
**Dataset:** We demonstrate the feasibility of our geometric ULM and present benchmark comparison outcomes based on the PALA dataset [11]. This dataset is chosen as it is publicly available, allowing easy access and reproducibility of our results. To date, it is the only public ULM dataset featuring Radio Frequency (RF) data as required by our method. Its third-party simulation data makes it possible to perform a numerical quantification and direct comparison of different baseline benchmarks for the first time, which is necessary to validate the effectiveness of our proposed approach.
**Metrics:** For MB localization assessment, the minimum Root Mean Squared Error (RMSE) between the estimated \(\mathbf{p}^{\star}\) and the nearest ground truth position is computed. To align with the PALA study [11], only RMSEs less than \(\lambda/4\) are considered true positives and contribute to the total RMSE of all frames. In cases where the RMSE distance is greater than \(\lambda/4\), the estimated \(\mathbf{p}^{\star}\) is a false positive. Consequently, ground truth locations without an estimate within the \(\lambda/4\) neighbourhood are false negatives. We use the Jaccard Index to measure the MB detection capability, which considers both true positives and false negatives and provides a robust measure of each algorithm's performance. The Structural Similarity Index Measure (SSIM) is used for image assessment.
For a realistic analysis, we employ the noise model used in [11], which is given by,
\[n(t)\sim\mathcal{N}(0,\sigma_{p}^{2})\times\max(y_{n}(t))\times 10^{(L_{A}+L_{C})/ 20}\pm\max(y_{n}(t))\times 10^{L_{C}/20}, \tag{9}\]
where \(\sigma_{p}=\sqrt{B\times 10^{P/10}}\) and \(\mathcal{N}(0,\sigma_{p}^{2})\) is a normal distribution with mean 0 and variance \(\sigma_{p}^{2}\). Here, \(L_{C}\) and \(L_{A}\) are noise levels in dB, and \(n(t)\) is the array of length \(T\) containing the random values drawn from this distribution. The additive noise model is then used to simulate a waveform channel \(y_{n}^{\prime}(t)=y_{n}(t)+n(t)\circledast g(t,\sigma_{f})\) suffering from noise, where \(\circledast\) represents the convolution operator, and \(g(t,\sigma_{f})\) is the one-dimensional Gaussian kernel with standard deviation \(\sigma_{f}=1.5\). To mimic the noise reduction achieved through the use of sub-aperture beamforming with 16 transducer channels [11], we multiplied the RF data noise by a factor of 4 for an equitable comparison.
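A possible NumPy realisation of this noise model is sketched below; interpreting the \(\pm\) term of Eq. (9) as a random sign is our own assumption, as are the function and argument names.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def noisy_channel(y, l_c, l_a, sigma_p, sigma_f=1.5, rng=None):
    """Add the noise of Eq. (9) to one waveform channel and smooth it."""
    rng = np.random.default_rng() if rng is None else rng
    amp = np.max(y)
    n = rng.normal(0.0, sigma_p, y.shape) * amp * 10 ** ((l_a + l_c) / 20)
    n += np.where(rng.random(y.shape) < 0.5, -1, 1) * amp * 10 ** (l_c / 20)
    return y + gaussian_filter1d(n, sigma_f)   # y'_n(t) = y_n(t) + n(t) (*) g
```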
**Baselines:** We compare our approach against state-of-the-art methods that utilize beamforming together with classical image filterings [8], Spline interpolation [17], Radial Symmetry (RS) [11] and a deep-learning-based U-Net [16] for MB localization. To only focus on the localization performance of each algorithm, we conduct the experimental analysis without temporal tracking. We obtain the results for classical image processing approaches directly from the open-source code provided by the authors of the PALA dataset [11]. As there is no publicly available implementation of [16] to date, we model and train the U-Net [15] according to the paper description, including loss design, layer architecture, and the incorporation of dropout. Since the U-Net-based localization is a supervised learning approach, we split the PALA dataset into sequences 1-15 for testing and 16-20 for training and validation, with a split ratio of 0.9, providing a sufficient number of 4500 training frames.
**Results:** Table 1 provides the benchmark comparison results with state-of-the-art methods. Our proposed geometric inference indicates the best localization performance represented by an average RMSE of around one-tenth of a wavelength. Also, the Jaccard Index reflects an outperforming balance of true positive and false negative MB detections by our approach. These results support the hypothesis that our proposed geometric localization inference is a considerable alternative to existing beamforming-based methods. Upon closer examination of the channels column in Table 1, it becomes apparent that our geometric ULM achieves reasonable localization performance with
only a fraction of the 128 channels available in the transducer probe. Using more than 32 channels improves the Jaccard Index, but at the expense of computational resources. This finding supports the assumption that many transducer channels are redundant for MB tracking. The slight discrepancy in SSIM scores between our 128-channel results and the 64-channel example may be attributed to the higher number of false positives in the former, which decreases the overall SSIM value.
We provide rendered ULM image regions for visual inspection in Fig. 3 with full frames in the supplementary material. To enhance visibility, all images are processed with sRGB and additional gamma correction using an exponent of 0.9. The presence of noisy points in Figs. 2(b) to 2(d) is attributed to the excessive false positive localizations, resulting in poorer SSIM scores. Overall, these visual observations align with the numerical results presented in Table 1. An NVIDIA RTX2080 GPU was used for all computations and time measurements. To improve performance, signal processing chains are often pipelined, allowing for the simultaneous computation of subsequent processes. Table 1 lists the most time-consuming process for each method, which acts as the bottleneck. For our approach, the MEMGO feature extraction is the computationally most expensive process, followed by clustering. However, our method contributes to an overall efficient computation and acquisition time, as it skips beamforming and coherent compounding [12] with the latter reducing the capture interval by two-thirds.
Table 2 presents the results for the best-of-3 algorithms at various noise levels \(L_{C}\). As the amount of noise from (9) increases, there is a decline in the Jaccard Index, which suggests that each method is more susceptible to false detections from noise clutter. Although our method is exposed to higher noise in the RF domain, it is seen that \(L_{C}\) has a comparable impact on our method. However, it is important to note that the U-Net yields the most steady and consistent results for different noise levels.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & Channels [\#] & RMSE [\(\lambda/10\)] & Jaccard [\%] & Time [s] & SSIM [\%] \\ \hline Weighted Avg. [11] & 128 & \(1.287\pm 0.162\) & 44.253 & 0.080 & 69.49 \\ Lanczos [11] & 128 & \(1.524\pm 0.175\) & 38.688 & 0.382 & 75.87 \\ RS [11] & 128 & \(1.179\pm 0.172\) & 50.330 & 0.099 & 72.17 \\ Spline [17] & 128 & \(1.504\pm 0.174\) & 39.370 & 0.277 & 75.72 \\
2-D Gauss Fit [17] & 128 & \(1.240\pm 0.162\) & 51.342 & 3.782 & 73.93 \\ U-Net [16] & 128 & \(1.561\pm 0.154\) & 52.078 & **0.004** & 90.07 \\ \hline & 8 & \(1.116\pm 0.206\) & 38.113 & 0.268 & 79.74 \\ G-ULM & 16 & \(1.077\pm 0.136\) & 66.414 & 0.485 & 87.10 \\ (proposed) & 32 & \(1.042\pm 0.125\) & 72.956 & 0.945 & 92.18 \\ & 64 & \(1.036\pm 0.124\) & 73.175 & 1.317 & **93.70** \\ & 128 & \(\textbf{0.967}\pm\textbf{0.109}\) & **78.618** & 3.747 & 92.02 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of localization results using \(15\)k frames of the PALA dataset [11]. The RMSE is reported as mean\(\pm\)std, best scores are bold and units are given in brackets.
## 4 Summary
This study explored whether a geometric reconstruction may serve as an alternative to beamforming in ULM. We employed an energy-based model for feature extraction in conjunction with ellipse intersections and clustering to pinpoint contrast agent positions from RF data available in the PALA dataset. We carried out a benchmark comparison with state-of-the-art methods, demonstrating that our geometric model provides
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Noise & \multicolumn{3}{c}{RMSE [\(\lambda/10\)]} & \multicolumn{3}{c}{Jaccard Index [\(\%\)]} \\ \hline \(L_{C}\) [dB] & RS [11] & U-Net [16] & Ours & RS [11] & U-Net [16] & Ours \\ \hline -30 & \(1.245\pm 0.171\) & \(1.564\pm 0.151\) & \(1.076\pm 0.136\) & 54.036 & 51.032 & 65.811 \\ -20 & \(1.496\pm 0.223\) & \(1.459\pm 0.165\) & \(1.262\pm 0.249\) & 27.037 & 45.647 & 27.962 \\ -10 & \(1.634\pm 0.542\) & \(1.517\pm 0.238\) & \(1.459\pm 0.564\) & 2.510 & 18.162 & 3.045 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance under noise variations from 128 (others) vs 16 (ours) transducers.
Figure 3: Rendered ULM regions from Table 1 in (a) to (d) and a rat brain result in (e) without temporal tracking. Numbers in curly brackets indicate the transducer number.
enhanced resolution and detection reliability with fewer transducers. This capability will be a stepping stone for 3-D ULM reconstruction, where matrix transducer probes typically consist of only 32 transducers per row. Follow-up studies are essential to evaluate the potential of our approach extensively before entering a pre-clinical phase. The promising results from this study motivate us to expand our research to more RF data scenarios. We believe our findings will inspire further research in this exciting and rapidly evolving field.
**Acknowledgments:** This research is supported by the Hasler Foundation under project number 22027.
|
2306.02034 | Does Microservices Adoption Impact the Development Velocity? A Cohort
Study. A Registered Report | [Context] Microservices enable the decomposition of applications into small
and independent services connected together. The independence between services
could positively affect the development velocity of a project, which is
considered an important metric measuring the time taken to implement features
and fix bugs. However, no studies have investigated the connection between
microservices and development velocity. [Objective and Method] The goal of this
study plan is to investigate the effect microservices have on development
velocity. The study compares GitHub projects adopting microservices from the
beginning and similar projects using monolithic architectures. We designed this
study using a cohort study method, to enable obtaining a high level of
evidence. [Results] The result of this work enables the confirmation of the
effective improvement of the development velocity of microservices. Moreover,
this study will contribute to the body of knowledge of empirical methods being
among the first works adopting the cohort study methodology. | Nyyti Saarimaki, Mikel Robredo, Sira Vegas, Natalia Juristo, David Taibi, Valentina Lenarduzzi | 2023-06-03T07:27:01Z | http://arxiv.org/abs/2306.02034v2 | # Does Microservices Adoption Impact the Development Velocity?
###### Abstract.
[Context] Microservices enable the decomposition of applications into small and independent services connected together. The independence between services could positively affect the development velocity of a project, which is considered an important metric measuring the time taken to implement features and fix bugs. However, no studies have investigated the connection between microservices and development velocity.
[Objective and Method] The goal of this study plan is to investigate the effect microservices have on development velocity. The study compares GitHub projects adopting microservices from the beginning and similar projects using monolithic architectures. We designed this study using a cohort study method, to enable obtaining a high level of evidence.
[Results] The results of this work will enable confirming whether microservices effectively improve the development velocity. Moreover, this study will contribute to the body of knowledge of empirical methods, being among the first works to adopt the cohort study methodology.
Empirical Software Engineering, Cohort Study, Microservices, Development velocity
### Microservices

Microservice-based systems are composed of a large number of small independent services that communicate through different lightweight mechanisms (Kumar et al., 2017).
Microservices are relatively small and autonomous services deployed independently, with a single and clearly defined purpose (Ball et al., 2016). Microservices enable vertically decomposing applications into a subset of business-driven independent services. Each service can be developed, deployed, and tested independently by different development teams using different technology stacks. Microservices have a variety of advantages (Kumar et al., 2017). They can be developed in different programming languages, can scale independently from other services, and can be deployed on the hardware that best suits their needs. Moreover, because of their size, they are easier to maintain and more fault-tolerant since the failure of one service will not disrupt the whole system, which could happen in a monolithic system (Ball et al., 2016).
### Docker
Docker1 is a platform for developing, shipping, and running applications in loosely isolated environments called containers. A Docker container includes everything needed to run an application and, therefore, it enables developers to develop on their platform of choice without having to worry about where the program is deployed (Docker, 2018). This is one of the several reasons why Docker is a popular tool among developers, and it was voted the most important tool in StackOverflow's developer survey in 20222. The tool is commercial, but it has a free version for open-source communities and individual developers.
Footnote 1: Docker: [https://www.docker.com/](https://www.docker.com/)
Footnote 2: [https://survey.stackoverflow.eu/2022/#most-popular-technologies-tools-tech-prof](https://survey.stackoverflow.eu/2022/#most-popular-technologies-tools-tech-prof)
A dockerized project can consist of one or several containers. The containers can communicate with each other, but the implementation of each container is otherwise independent of the other containers. Therefore, Docker is one way of creating microservices. In practice, each container has a file called Dockerfile, which is "a text document that contains all the commands a user could call on the command line to assemble an image" (Docker, 2018).
### Development velocity
Development velocity estimates the amount of productive work a developer team can complete in a given time frame.
The influence of microservices on velocity can vary depending on different factors. Here are some ways in which the adoption of microservices can affect velocity:
_Independent Development and Deployment_: Microservices facilitate the autonomous development and deployment of individual services. This enables teams to concurrently work on different services, reducing dependencies and bottlenecks. Consequently, development cycles can be expedited, allowing for quick iteration and delivery of new features or improvements (Kumar et al., 2017)(Ball et al., 2016).
_Enhanced Scalability:_ Microservices architecture empowers the independent scaling of individual services based on their specific requirements. This flexibility enables teams to optimize performance and responsiveness, ensuring efficient utilization of resources. Fine-grained scaling aligned with demand can bolster velocity by effectively handling the increased workload (Ball et al., 2016).
_Concurrent Development and Testing:_ Microservices architecture facilitates parallel development and testing as services can be developed and tested in isolation. Teams can independently work on different services, enabling concurrent progress. This significantly speeds up the development lifecycle, as changes and updates can be implemented in parallel, reducing overall development time (Kumar et al., 2017).
_Reusability and Modularity:_ Microservices encourage the creation of small, reusable components that can be shared across services. This promotes reusability and modularity, accelerating development by leveraging existing services, libraries, and frameworks. Developers can build upon existing functionalities, reducing redundant efforts and hastening the development process (Ball et al., 2016).
Ultimately, the influence of microservices on velocity depends on the effectiveness of the architecture design, the maturity level of microservices adoption, the skills and experience of the development teams, and the efficiency of supporting processes and tools (Kumar et al., 2017)(Kumar et al., 2017).
To the best of our knowledge, this is the first study empirically investigating the impact of microservices on velocity.
## 3. The Empirical Study: Design and Execution Plan
In this Section, we describe our empirical study focusing on the design and the execution plan.
### Goal and Research Questions
The goal of the study is to understand the effect of Docker-based microservice architecture on the development velocity during the early stages of the evolution of a software project. Our goal is answered by the following research question:
**RQ.**_Do projects adopting microservices from the beginning have higher development velocity than monolithic projects?_
The hypothesis is that using Docker-based microservice architecture may help to have a higher development velocity.
Decomposing software projects into independent dockerized microservices is expected to accelerate the development process, as each service can be independently deployed and optimized according to its needs (Kumar et al., 2017). Despite Docker being a popular tool among developers, there are no studies exploring the tool's effect on velocity. Therefore, this is the first study that empirically investigates the impact of microservices on velocity.
### Study Design
The research question of this paper is causal in nature and the most suitable methodology for studying causality is a controlled experiment. However, the data is observational and historical which prevents us from conducting one. Thus, the study was designed as a _retrospective cohort study_ which is an analytical observational study methodology capable of obtaining high-level evidence from such data. The overall design of our study is presented in Figure 1.
A cohort study (Ball et al., 2016; Docker, 2018) investigates whether an exposure (independent variable) causes an outcome (dependent variable) by comparing the outcomes of two or more groups of study subjects with different levels of exposure. It carefully selects a study population
and follows it over a defined time period to see what naturally happens. The subjects are selected from the source population (the source of data) using unambiguous eligibility criteria. These criteria are needed to ensure the study population is capable of answering the research questions in the context of the population of interest.
As the methodology is observational, the results can also be influenced by factors other than the exposure, such as project age or size. Therefore, in cohort studies, it is crucial to identify these factors from the literature or using domain knowledge, measure them, and control for their effect.
The exposure, confounders, and outcome are measured for all study subjects at the start of the follow-up. The outcome is measured at the end of the follow-up period. The gathered data is then used to analyze whether there is a relationship between the exposure and the outcome. Note that the outcome is measured also at the start of the study to control for the starting point. This provides a temporal framework that makes it possible to assess causality.
### Setting
The study investigates open-source projects from GitHub which are created between 2020-2021. The cases of the study are projects which have adopted a microservice architecture within six months of their creation while the controls of the study have a monolithic structure.
The projects are tracked for 18 months starting from when they are half a year old until they reach two years of age. Therefore, the study subjects have different data measurement dates, but the same fixed length of follow-up time (Figure 1).
### Subjects
The subjects of the study are open-source software projects. To ensure the study subjects can be used to answer the study goals, eligibility criteria are needed.
The **inclusion criteria** determine what is required from each subject in order to be considered part of the study.
* Open-source project created in GitHub between 2020-2021
* Uses GitHub to track issues
**Exclusion criteria** define subjects which are excluded from the study at the start of the follow-up period. We applied the following criteria:
* _Project has one or two Dockerfiles_. Projects having only one or two Dockerfiles are neither purely controls nor purely cases. We consider projects having fewer than three Dockerfiles either not to have fully adopted microservices or not to be large enough to be considered in this study. We do not consider files like dockerized databases or volumes in this count.
In addition to selection criteria, we define a **loss to follow-up criteria**. The criteria define the subjects which are excluded from the study based on their activity during the follow-up period. Subjects not meeting the criteria at the end of the follow-up period are considered drop-outs and excluded from the final study subjects.
* _The overall monthly trend of commits is decreasing during the follow-up_. Excluding such projects ensures the project is actively developed during the follow-up, which is required to be able to detect the outcome. Inspecting the trend, rather than absolute counts, keeps projects with different amounts of resources in the data set.
* _The overall monthly trend of issues is decreasing during the follow-up_. This ensures that issue tracking is a well-integrated part of the development process.
* _Use of Docker is interrupted during the follow-up period_. This excludes projects which, at any point before the end of the follow-up period, introduced Docker and then removed it. It ensures the projects maintain the exposure throughout the follow-up period and are free of any potential effects of previous Docker usage.
### Variables
The variables included in the study are described below. The hypothesized relationships between the variables which are included in the study are visualized in Figure 3.
The **independent variable** is _adoption of microservice architecture_. It is a boolean variable indicating whether a project uses the microservice architecture or not. We focus on projects implementing microservices using Docker and define a project to have adopted the microservice architecture if the repository contains at least three Dockerfiles at the age of six months. Three Dockerfiles are considered the minimum threshold for a system to be considered distributed.
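A minimal sketch of this exposure measurement on a checked-out repository might look as follows; the recursive name matching and the `threshold` parameter are our assumptions, and the database/volume exclusion from the exclusion criteria would need additional project-specific filtering.

```python
from pathlib import Path

def adopts_microservices(repo_root: str, threshold: int = 3) -> bool:
    """Count Dockerfiles in the repository tree and compare to the threshold."""
    dockerfiles = [p for p in Path(repo_root).rglob("*")
                   if p.is_file() and p.name.lower().startswith("dockerfile")]
    return len(dockerfiles) >= threshold
```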
We consider the **dependent variable** to be the _development velocity of a project_ at the end of the follow-up period. The velocity of a project is the average time taken to close issues within a three-month period. Therefore, a lower velocity value indicates faster issue fixing. Figure 2 presents an example of the calculation, while the measurement period is visualized in Figure 1.
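As an illustration, the velocity metric described above could be computed as sketched below; representing issues as `(created, closed)` timestamp pairs and selecting the window by closing date are our assumptions about how the plan would be operationalized.

```python
from datetime import datetime, timedelta

def velocity(issues, window_end: datetime,
             window: timedelta = timedelta(days=90)) -> float:
    """Average time (in days) to close issues closed within a ~three-month window."""
    start = window_end - window
    durations = [(closed - created).days
                 for created, closed in issues
                 if closed is not None and start <= closed <= window_end]
    return sum(durations) / len(durations) if durations else float("nan")
```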
Based on our domain knowledge, the following preliminary set of **confounding variables** is considered for the study. Even though collecting the planned variables should be possible, the set of considered confounders might be altered during data collection. All confounding variables are measured at the start of the follow-up period.
* _Start velocity_. The velocity of the project before the follow-up period.
* _Size of the project_: The number of lines of code in the repository.
Figure 1. The design of the observational study.
* _Development language:_ The main development language reported by GitHub. Different languages might have a different impact on developers' productivity, and therefore on velocity. High-level languages might enable faster development (i.e. higher velocity) while lower-level languages might require a longer time between deployments (e.g. C or C++).
* _Number of programming languages_: a higher number of different programming languages might fragment the development community, and therefore reduce velocity in case no developers are able anymore to modify a service written in a specific language.
* _Age_: The number of days from the first commit to the start of the follow-up.
* _Number of commits_: Number of commits to the main branch of the repository by the start of the follow-up reported by GitHub. Each commit might create or fix an issue, and therefore, it could affect the velocity.
* _Number of issues_: The number of created issues (open and closed) at the start of the follow-up. The number of issues reported for a project affects its velocity.
* _Number of developers_: The number of persons contributing to the project according to GitHub. In theory, more developers should mean higher development velocity.
In addition to confounders, we have identified the following potential relationships between them which have potential effects on the outcome variable. These relationships will not be included in the study as additional variables.
* _Size_ / _#Developers_: A proxy for the amount of code a single developer produces.
### Data sources and measurement
The source population (the list of potential projects) will be extracted from the World of Code (WoC) version U (updated 10/2021) or a later version, based on availability when conducting the study. WoC is a Free/Libre Open Source Software (FLOSS) ecosystem and a computational and statistical infrastructure that aims to provide an operational, research-ready, updatable, and expandable dataset.
Due to the strict eligibility criteria, the number of cases in the obtained data set could be lower than expected for a 1:1 case-to-control ratio. Having up to four controls for each case can increase the statistical power of a study (Krishnam et al., 2017). Therefore, we calculated the required sample sizes for several case-to-control ratios. The analysis was conducted using SPSS's (version 29) power analysis for the "Independent-samples T Test" functionality, and the results are presented in Table 1.
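As a cross-check, the same calculation can be reproduced outside SPSS; the sketch below uses statsmodels and should closely match the figures in Table 1, up to minor rounding differences.

```python
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for ratio in (1, 2, 3, 4):  # number of controls per case
    # Solve for the number of cases given effect size 0.2, alpha 0.05, power 0.8.
    cases = math.ceil(analysis.solve_power(effect_size=0.2, alpha=0.05,
                                           power=0.8, ratio=ratio))
    print(f"1:{ratio} -> cases={cases}, controls={cases * ratio}")
```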
As the potential controls for this study are projects in GitHub using a monolithic structure, the number of potential controls is expected to be more than quadruple compared to the potential cases. In such case, we will consider performing random sampling for the selection of the control projects.
### Statistical analysis
The analysis will be conducted using SPSS (version 29) and R (version 4.1.1).
The connections between exposure, outcome, and confounder are visualized in Figure 4. The data analysis of the study investigates several aspects of the figure.
#### 3.8.1. Crude analysis
First, we determine the unadjusted relationship between the cases and controls (arrow 2). The analysis does not include any external variables, and it compares the velocity between cases and controls. The results serve as base knowledge and justification for further analysis including the external variables. The groups are compared using the independent-samples t-test and Cohen's d effect size, which both require (approximate) normality and homogeneity of variances. If these requirements are not fulfilled, a data transformation is performed in order to fulfill them. In case the data transformation does not provide approximate normality, the comparison is conducted using the non-parametric Mann-Whitney test and Cliff's delta effect size.
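The decision logic above can be sketched, for example, with scipy; the Shapiro-Wilk normality check and the derivation of Cliff's delta from the Mann-Whitney U statistic are our assumptions about how the plan would be operationalized.

```python
import numpy as np
from scipy import stats

def crude_comparison(cases, controls, alpha=0.05):
    """Compare velocities of cases vs. controls; return (test result, effect size)."""
    normal = (stats.shapiro(cases).pvalue > alpha and
              stats.shapiro(controls).pvalue > alpha)
    if normal:
        test = stats.ttest_ind(cases, controls)
        # Cohen's d with pooled standard deviation.
        n1, n2 = len(cases), len(controls)
        pooled = np.sqrt(((n1 - 1) * np.var(cases, ddof=1) +
                          (n2 - 1) * np.var(controls, ddof=1)) / (n1 + n2 - 2))
        effect = (np.mean(cases) - np.mean(controls)) / pooled
    else:
        test = stats.mannwhitneyu(cases, controls)
        # Cliff's delta derived from the Mann-Whitney U statistic.
        effect = 2 * test.statistic / (len(cases) * len(controls)) - 1
    return test, effect
```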
#### 3.8.2. Treating confounders
We investigate if the included variables are actually confounders. A confounder needs to have a relationship to both exposure and outcome. First, we analyze if there is a relationship between the confounders and the exposure (arrow 1) by comparing the values of potential confounders between the case and control groups.
The treatment method for the included confounders is chosen individually. The choices are made based on the characteristics of the collected data. Below we present the common techniques for treating confounding (Krishnam et al., 2017), however, as the methods cannot be used in all circumstances, the used techniques depend on the observed scenario.
* _Restriction_ removes the effect of a confounding variable by ensuring all subjects are exposed to the same level of confounding. It is the simplest of the methods as in practice this is done by adding exclusion criteria and no further analysis is needed. However, after restricting a variable, its effect cannot be assessed.
* _Matching_ ensures the study population contains \(n\) similar controls for each case and is done in the design phase of the study. This ensures the groups are exposed to the effect confounders similarly. The matching criteria can consist of one or a combination of several variables and the matching can be conducted using several different algorithms (Krishnam et al., 2017). However, matching prevents studying the variables used in the process.
* _Stratification_ ensures in the analysis phase of the study that subjects with different levels of confounder have similar effects between the cases and controls. The study subjects are divided into groups (or strata) based on the level of confounding they are exposed to. The analysis is conducted separately for each group and if the results differ, an adjusted value is calculated, for example, using meta-analysis (Bauer et al., 2017).
* _Statistical adjustment_ uses mathematical models to determine the relationship of interest while controlling for the effect of confounders. The confounders are added to the considered models as additional independent covariates. To determine the covariates' impact on the results for the outcome variable, we will consider potential regression-analysis methodologies based on the distributional assumptions derived from the data. Similarly, based on the assumed distribution, we will analyze the variance of the dependent variable subject to the independent variable while accounting for the rest of the covariates. To assess the significance of the considered models, we utilize existing information criteria such as the _Akaike Information Criterion_ (AIC) or the _Bayesian Information Criterion_ (BIC). Moreover, we consider performing a _Backward Selection_ (BS) procedure based on the results from the mentioned information criteria to achieve the model that best describes the relationship between the dependent and independent variables (a minimal sketch of this option follows below).
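As an illustration of the statistical-adjustment option, a regression with the confounders as covariates could be compared against the crude model via AIC as sketched below; the formula, the column names, and the use of OLS are illustrative assumptions, not the study's final analysis.

```python
import statsmodels.formula.api as smf

def fit_adjusted(df):
    """Fit crude and confounder-adjusted models; keep the one with lower AIC."""
    crude = smf.ols("velocity ~ microservices", data=df).fit()
    adjusted = smf.ols("velocity ~ microservices + start_velocity + size"
                       " + n_commits + n_issues + n_developers", data=df).fit()
    # Lower AIC indicates the model that better balances fit and complexity.
    return adjusted if adjusted.aic < crude.aic else crude
```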
## 4. Threats to Validity
**Construct Validity.** The measurement of microservice adoption is a threat in the study, as version control does not provide data for validating its actual usage. Thus, we will rely on the number of Dockerfiles in the repository. We try to mitigate this threat by requiring at least three Dockerfiles and excluding volumes, etc., from the count. However, this does not guarantee active microservice adoption.
**Internal Validity.** The projects are collected from a version control system that automatically collects and logs the data. This ensures the similarity of the data collection between cases and
\begin{table}
\begin{tabular}{l|c c c c} & \multicolumn{4}{c}{**Case to control ratio**} \\ & **1:1** & **1:2** & **1:3** & **1:4** \\ \hline \hline
**\#Cases** & 394 & 295 & 263 & 246 \\
**\#Controls** & 394 & 590 & 787 & 983 \\ \hline
**Total** & 788 & 885 & 1,050 & 1,229 \\ \hline Power=0.8, p=0.05, Effect size=0.2 & & & \\ \end{tabular}
\end{table}
Table 1. Results from the power calculation for the different case-to-control ratios.
Figure 4. The relationships between exposure, outcome, and confounder.
controls. Additionally, GitHub has a large number of projects and we should be able to gather a data set that matches the minimum sample size determined in Section 3.7. However, the exclusion and loss to follow-up criteria we set for this study might lead to a sharp reduction of analyzable projects. To mitigate this, we would consider collecting projects from additional data sources, such as GitLab, as well as Jira or Bugzilla for issue tracking.
Cohort studies are sensitive to case and control groups not being comparable. We plan to ensure their similarity by suitable eligibility criteria and controlling for confounders using suitable methods such as restriction or matching.
The projects are followed from their creation until they are two years old and, therefore, they are in the same phase of their evolution. This could make the data collection process difficult as data measurement dates vary between the subjects. However, this is considered only a minor threat as GitHub automatically records the data.
The velocity of the project might differ between projects for several reasons. This is addressed by including external variables we have identified as potentially having an effect on it. To further mitigate this, we measure the development velocity also at the start of the follow-up and include the value in the analysis. The velocity of the project is calculated using a three month time window which might be inadequate for some projects, especially if the window is during holidays. The projects can also use some other issue tracking system than GitHub. However, we mitigate this threat by requiring a constant or growing trend of commits and issues during the follow-up.
**External Validity.** The included projects are open-source projects from GitHub which reach a certain level of maturity within two years of their creation. Therefore, the results are generalizable to young and active open-source projects, that is, this study is not applicable to older projects with greater maturity. However, the real generalizability of the results will depend on the collected data.
**Conclusion Validity.** The planned data analysis follows the structure generally adopted in cohort studies. As we do not have the actual data yet, we do not know what are the most suitable data analysis methods. However, we have presented the general directions and alternatives for the planned analysis.
## 5. Conclusion
The objective of this study plan is to examine the impact of microservices on development velocity. The study will involve a comparison between GitHub projects that initially implemented microservices and similar projects that utilized monolithic architectures. We have structured this study using a cohort study methodology to ensure a robust level of evidence.
The outcome of this research will validate the potential enhancement in development velocity achieved through the utilization of microservices. Additionally, this study will make a valuable contribution to the existing body of knowledge on empirical methods, as it will be one of the pioneering works adopting the cohort study methodology.
|
2302.08788 | MixNeRF: Modeling a Ray with Mixture Density for Novel View Synthesis
from Sparse Inputs | Neural Radiance Field (NeRF) has broken new ground in the novel view
synthesis due to its simple concept and state-of-the-art quality. However, it
suffers from severe performance degradation unless trained with a dense set of
images with different camera poses, which hinders its practical applications.
Although previous methods addressing this problem achieved promising results,
they relied heavily on the additional training resources, which goes against
the philosophy of sparse-input novel-view synthesis pursuing the training
efficiency. In this work, we propose MixNeRF, an effective training strategy
for novel view synthesis from sparse inputs by modeling a ray with a mixture
density model. Our MixNeRF estimates the joint distribution of RGB colors along
the ray samples by modeling it with mixture of distributions. We also propose a
new task of ray depth estimation as a useful training objective, which is
highly correlated with 3D scene geometry. Moreover, we remodel the colors with
regenerated blending weights based on the estimated ray depth and further
improves the robustness for colors and viewpoints. Our MixNeRF outperforms
other state-of-the-art methods in various standard benchmarks with superior
efficiency of training and inference. | Seunghyeon Seo, Donghoon Han, Yeonjin Chang, Nojun Kwak | 2023-02-17T10:07:35Z | http://arxiv.org/abs/2302.08788v2 | # MixNeRF: Modeling a Ray with Mixture Density
###### Abstract
Neural Radiance Field (NeRF) has broken new ground in novel view synthesis due to its simple concept and state-of-the-art quality. However, it suffers from severe performance degradation unless trained with a dense set of images with different camera poses, which hinders its practical applications. Although previous methods addressing this problem achieved promising results, they relied heavily on additional training resources, which goes against the philosophy of sparse-input novel-view synthesis and its pursuit of training efficiency. In this work, we propose MixNeRF, an effective training strategy for novel view synthesis from sparse inputs by modeling a ray with a mixture density model. Our MixNeRF estimates the joint distribution of RGB colors along the ray samples by modeling it with a mixture of distributions. We also propose a new task of ray depth estimation as a useful training objective, which is highly correlated with 3D scene geometry. Moreover, we remodel the colors with regenerated blending weights based on the estimated ray depth, further improving the robustness to shifts of colors and viewpoints. Our MixNeRF outperforms other state-of-the-art methods on various standard benchmarks with superior training and inference efficiency.
## 1 Introduction
Photo-realistic view synthesis is one of the major research topics in computer vision. Recently, coordinate-based neural representations [26, 27, 31, 6] have gained much popularity for the novel view synthesis task. Among them, Neural Radiance Field (NeRF) [29], which models a 3D scene by learning from a dense set of 2D images, enabled high-quality view synthesis with a simple concept and has become the prevailing mainstream. However, NeRF suffers from severe performance degradation in real-world applications such as AR/VR and autonomous driving, where only a sparse set of views is available due to the burdensome task of collecting dense training images.
One of the key factors for a model's high-quality rendering with limited input views is its robustness in 3D geometry learning, i.e., accurate depth estimation for a scene. Several works address this problem, and they can be classified into two major paradigms: _pre-training_ and _regularization_ approaches. For the pre-training approach [32, 33, 35, 38, 21, 23, 7, 16, 5, 43], a general 3D geometry is learned from the multi-view images of a large-scale dataset, and per-scene fine-tuning is optionally conducted at test time. Although it has achieved promising results, it still requires the expensive cost of collecting a large-scale dataset across different scenes for pre-training and does not generalize well to a novel domain at test time.
Another line of research, the regularization approach [30, 33, 40, 12, 18, 33, 16], performs per-scene optimization from scratch by applying regularization to prevent overfitting to the limited training views. Most existing methods of this kind depend heavily on extra training resources to compensate for the lack of supervisory signals, such as depth-map generation by running SfM [33],
Figure 1: **Comparison with the vanilla mip-NeRF [1] and other regularization methods.** Given the same training batch size and number of iterations, our MixNeRF outperforms mip-NeRF and DietNeRF [12] by a large margin with comparable or shorter training time. Compared to RegNeRF [30], ours achieves superior performance with about 42% shorter training time. The sizes of the circles are proportional to the number of input views, indicating 3/6/9 views, respectively. More details are provided in Sec. 4.2.
ray generation with arbitrary camera poses [18, 30], leveraging external modules to exploit additional features [12, 30, 40], and so on. However, the additional training data might not always be available, and the external modules need to be pre-trained with a large-scale dataset. This goes against the philosophy of novel view synthesis from sparse inputs, which pursues training efficiency.
In this work, we propose MixNeRF, an effective regularization approach for novel view synthesis from sparse inputs, modeling the colors along a ray with a mixture density model which represents a complex distribution with a mixture of component distributions. By exploiting the blending weights as mixing coefficients for our mixture model, we are able to regularize effectively both the colors and the densities of the samples along a ray. Furthermore, we propose a new auxiliary task of ray depth estimation for learning the 3D geometry which is crucial for the rendering quality. Since the estimated 3D geometry is highly correlated with the scene depth estimation, our proposed training objective acts as a useful supervisory signal. Finally, we regenerate the blending weights based on the estimated ray depth and remodel a ray. Since the estimated depth is not exactly the same, but nearly identical to the ground truth, it can play a role of pseudo geometry for adjacent points of the sample, like an unseen viewpoint. By remodeling the samples with the mixing coefficients based on the regenerated blending weights, we can further improve the robustness for shift of colors and viewpoints. Our main contributions are summarized as follows:
* Our method estimates the joint distribution of RGB color values along the ray samples by a mixture of distributions, learning the 3D geometry successfully with sparse views.
* We propose a ray depth estimation as an effective auxiliary task for few-shot novel view synthesis, playing a role of useful training objective.
* We use the regenerated blending weights based on the estimated ray depths for improving the robustness with negligible extra training cost.
* Our MixNeRF outperforms other state-of-the-art methods in the different standard benchmarks, showing much improved training and inference efficiency.
## 2 Related Works
### Neural Scene Representations
Recently, coordinate-based neural representations [6, 26, 27, 31] have gained a lot of popularity in the field of neural scene rendering [17, 22, 24, 1, 15, 1, 29, 1]. Among them, Neural Radiance Fields (NeRF) [29] have broken new ground in novel view synthesis research due to their simple concept, wide applicability, and state-of-the-art quality. Since NeRF, several works have followed to ameliorate its drawbacks and improve its performance. Mip-NeRF [1] tackled the problem of aliasing in NeRF by introducing a cone tracing method. Ref-NeRF [37] reparameterized NeRF from the view-dependent outgoing radiance to reflected radiance, leading to significant improvement for specular reflections.
However, these methods suffer from severe performance degradation unless trained with a set of dense images with different camera poses, which hinders their practical applications. In this work, we address the sparse input scenario which is closer to the real-world condition. We are able to perform high-quality view synthesis from sparse inputs by modeling a ray with a mixture density model and improve both the training and the inference efficiency.
### Sparse Input Novel View Synthesis
One of the fundamental causes of performance degradation is the lack of 3D geometry information from training images, resulting in an inaccurate depth estimation. There are two major paradigms to tackle this problem in the novel view synthesis from sparse inputs: _pre-training_ and _regularization_ approaches. The former approach [21, 21, 32, 33, 35, 38, 43, 5, 7] provides prior knowledges to conditional models through pre-training. The image features extracted by a CNN feature extractor [43, 7] or a 3D cost volume obtained by image warping [21, 5] are used for training a generalizable model. Although they achieved promising performances under the sparse input setting, a large-scale dataset of multi-view images with different scenes is required for pre-training, which is burdensome to collect. Furthermore, despite the lengthy pre-training phase, most of these methods require additional test-time fine-tuning and are apt to suffer from quality degradation on different data domains.
The regularization approach [12, 33, 30, 18, 30, 40] introduces extra supervision to regularize the color and the geometry without an expensive pre-training process. Additional training resources, external modules such as CLIP [12] or a pre-trained normalizing flow model [10], extra depth inputs obtained by running structure-from-motion (SfM), and additional rays of unseen viewpoints, are often used to provide abundant supervisory signals. However, the existing methods are overly dependent on the extra training resources which might not always be available, hampering data/time efficiency. Moreover, it goes against the philosophy of the sparse-input novel-view synthesis which pursues the training efficiency.
Our proposed method requires neither an external module nor an additional inference of extra supervisory signals, such as additional depth inputs or pre-generated rays from unobserved viewpoints, resulting in a more efficient training framework.
### Mixture Density Model
There exists a line of research utilizing a mixture density model in different tasks of computer vision [8, 20, 34, 36, 41, 42]. Among 3D vision tasks, Tosi _et al_. [34] proposed a novel stereo-matching framework, SMD-Nets, tackling the over-smoothing problem of output representations by leveraging a mixture density network [3]. Choi _et al_. [8] reformulated 3D bounding box regression as a density estimation problem using a Gaussian Mixture Model (GMM), achieving a more efficient 3D object detection framework with few heuristic design factors.
Although the mixture density model shows a great potential in 3D vision tasks, there has not been an attempt to utilize it in the NeRF framework for novel view synthesis. Our MixNeRF is able to learn the 3D geometry successfully, which is a critical factor for rendering quality under the sparse input setting, by modeling a ray with a mixture of distributions.
## 3 Method
In this work, we propose a novel training framework of neural radiance fields for novel view synthesis from sparse inputs. We build our MixNeRF upon mip-NeRF [1] which uses a multiscale scene representation (Sec. 3.1). Moreover, we leverage the mixture density model framework to learn 3D geometry efficiently. More specifically, we model the colors of samples along a ray by a mixture of Laplace distributions with the predicted weights as mixing coefficients, which contributes to learning a scene's geometry effectively with limited input views (Sec. 3.2). Furthermore, we estimate the depths of input rays as an auxiliary task and reuse it for producing blending weights once again as supplemental training resources, which enables robust rendering from unseen viewpoints with little additional burden for training (Sec. 3.3). In the training phase, our MixNeRF is not only trained to minimize the mean squared error (MSE) between predictions and GT colors, but also to maximize the likelihood of colors and depths for each ray (Sec. 3.4). Fig. 2 demonstrates an overview of our MixNeRF.
### Preliminary: Neural Radiance Field
NeRF [29] represents a 3D scene with a continuous function, where a neural network \(f(\cdot,\cdot)\) consisting of an MLP maps a 3D location \(\mathbf{x}=(x,y,z)\) and viewing direction \((\theta,\phi)\), which is expressed as a 3D Cartesian unit vector \(\mathbf{\bar{d}}\) in practice, along rays to colors \(\mathbf{c}=(r,g,b)\) and volume density \(\sigma\):
\[f(\gamma(\mathbf{x}),\gamma(\mathbf{\bar{d}}))\rightarrow(\mathbf{c},\sigma), \tag{1}\]
where \(\gamma(\cdot)\) indicates the positional encoding applied to the inputs \((\mathbf{x},\mathbf{\bar{d}})\). Following the volume rendering theory [25], a pixel on an image is rendered by alpha compositing the colors and densities along the ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\) cast from the camera origin \(\mathbf{o}\), where \(\mathbf{d}\) is the unnormalized direction vector, _i.e_. \(\mathbf{d}=\|\mathbf{d}\|_{2}\cdot\mathbf{\bar{d}}\). The volume rendering integrals are approximated by the quadrature rule in practice [29] as follows:
\[\mathbf{\hat{c}}(\mathbf{r}) =\sum_{i=1}^{N}T_{i}(1-\exp(-\sigma_{i}\delta_{i}))\mathbf{c}_{i}, \tag{2}\] \[\text{where}\quad T_{i}=\exp(-\sum_{j=1}^{i-1}\sigma_{j}\delta_{ j}).\]
Note that \(N\) and \(\delta_{i}=\|\mathbf{d}\|_{2}\cdot(t_{i+1}-t_{i})\) denote the number of samples and the interval between the \(i\)-th sample and its adjacent one, respectively. To improve rendering efficiency, the two-stage hierarchical sampling is performed: _coarse_ and _fine_ stage. The points are sampled uniformly along a ray in the coarse stage, and then more informed samples are generated in the fine stage based on the density estimated from the coarse stage. Finally, the radiance field is optimized by minimizing the MSE between the rendered color and ground truth color over the input images:
\[\mathcal{L}_{\text{MSE}}=\sum_{\mathbf{r}\in\mathcal{R}}||\mathbf{\hat{c}}( \mathbf{r})-\mathbf{c}^{\text{GT}}(\mathbf{r})||_{2}^{2}\,, \tag{3}\]
where \(\mathcal{R}\) indicates a set of input rays.
Figure 2: **Overview of MixNeRF. Our method models the color and depth of a ray with a mixture of distributions, and remodels the color based on the estimated ray depth \(\mu^{d}\). The dotted rays indicate the imaginary views corresponding to \(\mu^{d}\). See Sec. 3 for more details.**
Following RegNeRF [30], we adopt the mip-NeRF [1] representation for our MixNeRF. Mip-NeRF effectively alleviates the aliasing problem of NeRF by introducing a cone tracing method and an integrated positional encoding.
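To make the preliminaries concrete, the discretized rendering of Eq. (2) can be sketched as follows; this is a minimal NumPy version with assumed variable names and shapes, not the authors' code.

```python
import numpy as np

def composite(sigma, colors, deltas):
    """Alpha-composite per-sample colors (N, 3) with densities sigma (N,)
    and scaled sample intervals deltas (N,), following Eq. (2)."""
    alpha = 1.0 - np.exp(-sigma * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance T_i
    weights = trans * alpha                                        # blending weights w_i
    return (weights[:, None] * colors).sum(axis=0), weights
```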
### Modeling a Ray with Mixture Density Model
Given a set of input rays \(\mathcal{R}=\{\mathbf{r}_{1},\cdots,\mathbf{r}_{K}\}\) on training images with the ground truths \(G=\{G_{1},\cdots,G_{K}\}\) for each of \(K\) pixels, the \(i\)-th ground truth \(G_{i}\) consists of the RGB color values \(\mathbf{c}_{i}^{\text{GT}}=\{r_{i},g_{i},b_{i}\}\) and the unnormalized 3D ray vector \(\mathbf{d}_{i}^{\text{GT}}\), _i.e_. \(G_{i}\triangleq\{\mathbf{c}_{i}^{\text{GT}},\mathbf{d}_{i}^{\text{GT}}\}\). Note that \(\mathbf{d}^{\text{GT}}\) is the direction vector corresponding to \(t=1\) from the camera center. First, our MixNeRF estimates the distribution of the RGB color values \(\mathbf{c}_{i}\) along the samples of the ray \(\mathbf{r}_{i}\) on a pixel with a mixture model, which is derived from a weighted combination of component distributions. As shown in Fig. 2, in our model, \((\mathbf{c},\sigma)\), the conventional outputs of NeRF for each sampled point \(\mathbf{r}(t)\), are used as a location parameter \(\mu^{\mathbf{c}}\) and to compute a mixing coefficient \(\pi\), respectively. In addition to these, a scale parameter \(\beta=\{\beta^{r},\beta^{g},\beta^{b}\}\) is also estimated in our model.
We assume that every element of \(\mathbf{c}_{i}\) is independent of each other to simplify our mixture model formulation. Therefore, the \(j\)-th component's probability density function (pdf) corresponding to the \(j\)-th sampled point for the \(i\)-th ray \(\mathbf{r}_{i}\) is as follows:
\[\begin{split}\mathcal{F}(\mathbf{c};\mu_{ij}^{\mathbf{c}},\beta _{ij})&=\prod_{c\in\{r,g,b\}}\mathcal{F}(c;\mu_{ij}^{c},\beta_{ ij}^{c})\\ &=\prod_{c\in\{r,g,b\}}\frac{1}{2\beta_{ij}^{c}}\exp\left(-\frac {|c-\mu_{ij}^{c}|}{\beta_{ij}^{c}}\right),\end{split} \tag{4}\]
where \(\mathcal{F}\) denotes the Laplacian pdf. The pdf of our mixture model formed by the component distributions above is defined as:
\[p(\mathbf{c}|\mathbf{r}_{i})=\sum_{j=1}^{M}\pi_{ij}\mathcal{F}(\mathbf{c};\mu _{ij}^{\mathbf{c}},\beta_{ij}), \tag{5}\]
where \(M\) denotes the number of mixture components which is the same as the number of samples along a ray. The mixture coefficient \(\pi_{ij}\) is derived from the density output \(\sigma_{ij}\) as follows:
\[\pi_{j}=\frac{w_{j}}{\sum_{m=1}^{M}w_{m}}=\frac{T_{j}(1-\exp(-\sigma_{j}\delta _{j}))}{\sum_{m=1}^{M}T_{m}(1-\exp(-\sigma_{m}\delta_{m}))}. \tag{6}\]
Note that we omitted ray index \(i\) for simplicity. Here, \(w_{j}\) and \(\delta_{j}\) indicate the weight for the alpha compositing and the sample interval, respectively. Since the mixture components corresponding to the samples with higher weights, which contribute more to the alpha composition of the color than other samples, are likely to have higher \(\pi\), we use the normalized weight as a mixing coefficient \(\pi\) so that \(\sum_{j=1}^{M}\pi_{j}=1\).
The concept of a mixture model corresponds to that of alpha compositing in that a complex multimodal distribution is able to be represented by the weighted combination of component distributions with mixing coefficients \(\pi\), like a pixel value derived from the weighted combination of estimated RGB values along ray samples with blending weights \(w\). Motivated by this conceptual similarity, we are able to model a ray with a mixture of distributions successfully without any heuristic factors. The mixing coefficients derived from the blending weights provide effective supervisory signals toward the densities, which are the core factor for successfully learning 3D scene geometry with limited input views.
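As an illustration of Eqs. (4)-(6), the per-ray negative log-likelihood under this Laplace mixture could be computed as sketched below; this is a PyTorch-style sketch with assumed shapes, and the authors' implementation may differ.

```python
import torch

def laplace_mixture_nll(weights, mu_c, beta, c_gt, eps=1e-8):
    """weights: (M,) blending weights; mu_c, beta: (M, 3); c_gt: (3,)."""
    pi = weights / (weights.sum() + eps)          # mixing coefficients, Eq. (6)
    # Per-component Laplace log-density, factorized over r, g, b (Eq. (4)).
    log_comp = (-torch.log(2 * beta) - (c_gt - mu_c).abs() / beta).sum(dim=-1)
    # Negative log of the mixture density (Eq. (5)), computed stably with logsumexp.
    return -torch.logsumexp(torch.log(pi + eps) + log_comp, dim=0)
```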
### Depth Estimation by Mixture Density Model
We propose a scene's depth estimation as an effective auxiliary task for training our MixNeRF with sparse inputs. As demonstrated in Fig. 2, our MixNeRF estimates \(d\), the ray's depth, which is defined as the length of the unnormalized ray direction vector \(\mathbf{d}\), _i.e_. \(d\triangleq\|\mathbf{d}\|\), along the ray samples. The ground truth \(G_{i}\) contains the ray direction values \(\mathbf{d}_{i}\) as well, which are used in the form of 3D Cartesian unit vectors \(\mathbf{\bar{d}}_{i}=\mathbf{d}_{i}/\|\mathbf{d}_{i}\|_{2}\) as an input viewing direction in practice. Like the RGB color values, the depths for each ray are modeled by our mixture model consisting of the Laplace distributions with the same scale parameters \(\beta\) and mixing coefficients \(\pi\) used above. The pdf of our mixture model for the depth of the \(i\)-th ray is as follows:
\[p(d|\mathbf{r}_{i})=\sum_{j=1}^{M}\pi_{ij}\mathcal{F}(d;\mu_{ij}^{d},\beta_{ij}). \tag{7}\]
Since the mixing coefficient \(\pi\) and parameter \(\beta\) are optimized through the supervision of the depth as well as the color values, it improves the robustness of our MixNeRF for slight changes of geometry. Also, considering that the successful depth estimation is crucial to the rendered images' quality in a NeRF model [30, 33, 9, 18], our direct estimation of the scene's depth benefits a lot.
Blending weight regeneration.In addition, we exploit the estimated depth to regenerate the blending weights along the samples and model the RGB color values by a mixture of distributions once again. Since the estimated depth of each sample is trained to be nearly identical to the ground truth depth, but not exactly the same, it can play a role of pseudo geometry for adjacent points of the sample without any additional pre-generation process of extra training data, _e.g_. depth inputs made by SfM or rays from unobserved viewpoints. The new blending weight \(\hat{w}_{j}\) of the \(j\)-th sample along a ray based on the estimated depth \(\mu_{j}^{d}\) are
defined as follow:
\[\hat{w}_{j}=\hat{T}_{j}(1-\exp(-\sigma_{j}\hat{\delta}_{j})),\ \hat{\delta}_{j}=\mu_{j}^{ d}(t_{j+1}-t_{j}), \tag{8}\]
in which we replace \(\|\mathbf{d}\|_{2}\) in \(\delta_{j}\) formulation with \(\mu_{j}^{d}\). Finally, we model the color values along a ray based on the new mixing coefficients \(\hat{\pi}\) derived from \(\hat{w}\) and the corresponding pdf is as follows:
\[\hat{p}(\mathbf{c}|\mathbf{r}_{i})=\sum_{j=1}^{M}\hat{\pi}_{ij}\mathcal{F}( \mathbf{c};\mu_{ij}^{\mathbf{c}},\beta_{ij}). \tag{9}\]
Since the estimated ray depths are likely to be close enough to those of the ground truths, we use the same GT color values of the input rays for modeling the mixture distribution based on the newly generated \(\hat{\pi}\). This further improves the robustness to shifts of colors and ray viewpoints by simply modeling a ray once again with the regenerated blending weights, eliminating the pre-generation and extra inference of unseen views and adding little computational overhead.
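A minimal sketch of the weight regeneration in Eq. (8) is given below; the variable names and shapes (per-sample depth estimates `mu_d` and sample boundaries `t`) are our assumptions. The resulting weights would then be normalized into \(\hat{\pi}\) and reused in the same mixture NLL as above, realizing Eq. (9).

```python
import torch

def regenerate_weights(sigma, mu_d, t):
    """sigma, mu_d: (M,); t: (M + 1,) sample boundaries along the ray."""
    delta_hat = mu_d * (t[1:] - t[:-1])   # estimated depth replaces ||d||_2 in delta
    alpha = 1.0 - torch.exp(-sigma * delta_hat)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    return trans * alpha                   # regenerated weights \hat{w}_j, Eq. (8)
```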
### Total Loss
Our MixNeRF is trained to maximize the likelihood of \(\mathbf{c}^{\text{GT}}\) and \(d^{\text{GT}}\) for a set of input rays \(\mathcal{R}\) as well as to minimize the \(\mathcal{L}_{\text{MSE}}\). Therefore, the loss functions can be simply defined to minimize the negative log-likelihood (NLL) of the ground truths as follows:
\[\begin{array}{l}\mathcal{L}_{\text{NLL}}^{C}=-\sum_{\mathbf{r}\in\mathcal{ R}}\log p(\mathbf{c}^{\text{GT}}|\mathbf{r}),\\ \mathcal{L}_{\text{NLL}}^{D}=-\sum_{\mathbf{r}\in\mathcal{R}}\log p(d^{\text{GT}}|\mathbf{r}),\\ \hat{\mathcal{L}}_{\text{NLL}}^{C}=-\sum_{\mathbf{r}\in\mathcal{R}}\log\hat{p}( \mathbf{c}^{\text{GT}}|\mathbf{r}),\end{array} \tag{10}\]
each of which corresponds to the NLL form of Eq. (5), Eq. (7) and Eq. (9), respectively. As a result, we define our total loss as:
\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{MSE}}+\lambda_{C}\mathcal{L}_{ \text{NLL}}^{C}+\lambda_{D}\mathcal{L}_{\text{NLL}}^{D}+\hat{\lambda}_{C} \hat{\mathcal{L}}_{\text{NLL}}^{C}, \tag{11}\]
where \(\lambda_{C}\), \(\lambda_{D}\) and \(\hat{\lambda}_{C}\) are balancing terms for the losses. More details about training and implementation are provided in the supplementary material.
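Combining Eqs. (10) and (11), the training objective can be sketched as below; the per-ray NLL terms are assumed to be precomputed (e.g., as in the sketch after Eq. (7)), and the default balancing weights are placeholders, since the actual values of \(\lambda_C\), \(\lambda_D\), and \(\hat{\lambda}_C\) are deferred to the supplementary material.

```python
import numpy as np

def mixnerf_total_loss(mse, nll_c, nll_d, nll_c_hat,
                       lam_c=1.0, lam_d=1.0, lam_c_hat=1.0):
    # Eq. (10): sum the per-ray NLL terms over the batch of input rays R;
    # Eq. (11): combine them with the reconstruction MSE.
    L_c, L_d, L_c_hat = np.sum(nll_c), np.sum(nll_d), np.sum(nll_c_hat)
    return mse + lam_c * L_c + lam_d * L_d + lam_c_hat * L_c_hat
```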
## 4 Experiments
### Experimental Details
Datasets and metrics.We evaluate MixNeRF on the multiple standard benchmarks: DTU [14], LLFF [28] and Realistic Synthetic 360\({}^{\circ}\)[29]. DTU consists of images containing objects located on a white table with a black background. LLFF contains real forward-facing scenes and is usually used as an out-of-distribution test set for pre-training methods. We also compare our MixNeRF against other regularization methods on the Realistic Synthetic 360\({}^{\circ}\), which provides 8 synthetic scenes each consisting of 400 images rendered from inward-facing cameras with various viewpoints. We follow the overall experimental protocols of [12, 18, 43, 29] for these datasets.
For the evaluation metrics, we adopt the mean of PSNR, structural similarity index (SSIM) [39], the LPIPS perceptual metric [44], and the geometric average [1]. Please refer to the supplementary material for more details about datasets and metrics.
Baselines.We compare our method against several representative pre-training and regularization approaches [5, 7, 12, 30, 43] as well as the vanilla mip-NeRF [1]. We report the evaluation results from [30] for DTU and LLFF, which are superior to those from their original papers due to the improved training curriculum. The DTU dataset is used as a pre-training resource for PixelNeRF [43], MVS-NeRF [5], and SRF [7], and the LLFF dataset serves as an out-of-domain test set. The regularization approaches and mip-NeRF are trained for each scene without pre-training. For Realistic Synthetic 360\({}^{\circ}\), we train the other regularization approaches [12, 18, 30] by their training schemes. Note that the pre-trained RealNVP [10] for training RegNeRF [30] is not publicly available, and we report the results of RegNeRF trained without it on the Realistic Synthetic 360\({}^{\circ}\). For the analysis of MixNeRF (Sec. 4.2), all models including ours are trained with the same batch size and iterations.
### Analysis of MixNeRF
Benefit of mixture density model. We leverage a mixture density model, which represents a complex multimodal distribution with a weighted combination of component distributions, to learn the distribution of density and colors along the ray samples effectively. Fig. 3 demonstrates the comparison of the blending weight distributions of cast rays on the LLFF fern scene in the 3-view scenario. We compare ours against mip-NeRF and RegNeRF, with mip-NeRF trained from all training views serving as the ideal distribution. For the unimodal distribution in blue, mip-NeRF does not estimate the mode well and yields degenerate geometry. In contrast, RegNeRF and our MixNeRF show unimodal weight distributions leading to higher-quality novel views; in particular, our MixNeRF achieves a distribution with a sharper mode than RegNeRF, which is more similar to that of mip-NeRF (All-view). In the case of the bimodal distribution in red, our MixNeRF estimates the weight distribution successfully while both mip-NeRF and RegNeRF fail to estimate the accurate modes. Since the predicted 3D geometry is directly correlated with how well the density is estimated, our MixNeRF is able to learn the geometry more efficiently with limited input views through mixture density modeling.
Figure 3: **Comparison of blending weight distributions.** Compared to the baselines, ours estimates the modes of weight distributions more accurately, leading to precise 3D geometry. The ideal distributions on the blue and red points are unimodal and bimodal, respectively.
**Depth map estimation.** We compare our MixNeRF against RegNeRF, which utilizes the prior of depth smoothness to learn 3D geometry, on the multiple benchmarks. As illustrated in Fig. 4, our MixNeRF estimates more accurate depth maps with distinct edges while RegNeRF generates both RGB images and depth maps with blurry fine details. Especially for Realistic Synthetic 360\({}^{\circ}\), we observe that RegNeRF fails to learn the geometry with its smoothing strategy and achieves degenerate results due to the overly strong prior of depth smoothness. However, since our MixNeRF learns the depth of a ray by leveraging a mixture density model without smoothing from additional unseen rays, the depth maps are predicted much more efficiently and precisely.
**Efficiency in training and inference.** Our MixNeRF improves the efficiency of both the training and the inference phases by learning the 3D geometry effectively without burdensome extra training resources. Fig. 1 illustrates that MixNeRF achieves superior performance with reduced training time compared to the vanilla mip-NeRF and two representative regularization methods on LLFF. For a fair comparison, we compare the methods based on the identical JAX codebase [4] using the same batch size and iterations on 2 NVIDIA TITAN RTX. Although it takes a similar amount of time to train DietNeRF as MixNeRF, its performance is significantly inferior to ours in the 3 and 6-view scenarios. Compared to RegNeRF, ours outperforms it with about 42% shorter training time per scene under the same input-view setting, resulting from the elimination of extra inference for additional unseen rays. Furthermore, we also observe that our MixNeRF shows better data efficiency, requiring up to about 60% fewer inputs than mip-NeRF to achieve comparable results, and outperforms mip-NeRF consistently in scenarios with more than 9 views. This indicates that our proposed training strategy is effective in general scenarios as well as the sparse input setting. The related experimental results are provided in the suppl. material. For the inference efficiency, Tab. 1 demonstrates the SSIM results by the number of samples along a ray on LLFF under the 3-view scenario. Our MixNeRF with 32 samples outperforms RegNeRF with the default 128 samples, and still achieves comparable results with only 16 samples thanks to the capacity of our mixture model for representing the blending weight distributions successfully.
### Ablation Study
We report the quantitative and qualitative results of our ablation study in Fig. 5. We observe that modeling a ray with a mixture of distributions is helpful for improving performance under the sparse view setting ((1) \(\rightarrow\) (2)). Also, our proposed ray depth estimation task contributes to further improving the rendering quality by generating more accurate depth maps ((2) \(\rightarrow\) (3)). However, despite the well-estimated depth map, the RGB image suffers from foggy artifacts upon the objects as shown in (3). By remodeling a ray through the weight regeneration process, our MixNeRF
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \# of samples & 128 & 64 & 32 & 16 & 8 \\ \hline mip-NeRF [1] & 0.332 & 0.331 & 0.329 & 0.308 & 0.251 \\ RegNeRF [30] & 0.587 & 0.585 & 0.576 & 0.522 & 0.379 \\ MixNeRF (Ours) & **0.629** & **0.629** & **0.620** & **0.580** & **0.468** \\ \hline \end{tabular}
\end{table}
Table 1: **Comparison with baselines by the number of ray samples. Our MixNeRF with 75% fewer samples (32-sample) outperforms RegNeRF with default 128-sample, and still achieves comparable results with only 16-sample (\(\times\)8 reduction).**
Figure 4: **Comparison of estimated depth map. We compare MixNeRF against RegNeRF, the state-of-the-art regularization approach. Our MixNeRF estimates more accurate depth maps and captures fine details better, leading to high-quality rendering with more distinct edges and less artifacts.**
achieves high quality in both the RGB image and the depth map ((3) \(\rightarrow\) (5)). Since the regenerated weights are not helpful without supervision of the ray depth estimation task ((2) \(\rightarrow\) (4)), our proposed auxiliary task of ray depth estimation is useful for learning 3D geometry and, through the on-the-fly weight regeneration process, plays the role of additional training resources.
### Comparison with other SOTA methods
LLFF. As shown in Tab. 2, the pre-training approaches except MVSNeRF are not able to achieve comparable results without fine-tuning in the 3-view scenario. The regularization approaches and the vanilla mip-NeRF outperform the pre-training approaches in the 6 and 9-view settings. In particular, RegNeRF improves the rendering quality by a large margin compared to mip-NeRF and DietNeRF, thanks to its regularization strategy using pre-generated rays from unseen viewpoints. Our MixNeRF achieves state-of-the-art results across all scenarios and metrics without any pre-training on a large-scale dataset or extra inference for pre-generated training resources. It also improves training efficiency, requiring about 42% shorter training time than RegNeRF in the same input view setting (see Fig. 1). Furthermore, as demonstrated in Fig. 6, our method achieves more realistic results with finer details than RegNeRF, since MixNeRF learns the 3D geometry successfully without an explicit regularization for smoothing the depth.
DTU. We report the quantitative results with and without fine-tuning as demonstrated in Tab. 2. Our MixNeRF achieves comparable or better results than the other baselines. Compared to the PSNR, our MixNeRF achieves worse quantitative results than RegNeRF for SSIM. We conjecture that since RegNeRF uses an explicit smoothing term for regularizing the depth, it can achieve slightly better quantitative results for SSIM, which is a patch-wise evaluation metric. However, as shown in Fig. 6, our MixNeRF renders clearer images, capturing fine details better than the other baselines despite slightly worse quantitative results on some metrics. More qualitative results on LLFF and DTU are provided in the suppl. material.
**Realistic Synthetic 360\({}^{\circ}\).** As shown in Tab. 3, our MixNeRF outperforms other regularization baselines across all settings and metrics by a large margin. Fig. 6(e) illustrates that other methods suffer from severe floating artifacts and degenerate colors in the 4-view scenario. In particular, the depth smoothing strategy of RegNeRF rather brings about a significant performance degradation. This implies that smoothing is not a universally effective solution across different datasets. Compared to the baselines, our MixNeRF achieves superior rendering quality with far fewer artifacts and more accurate geometry in both the 4 and 8-view (see the suppl. material) scenarios.
## 5 Conclusion
We have introduced MixNeRF, a novel regularization approach for training NeRF in the limited data scenario. Previous approaches heavily depend on extra training resources, which goes against the philosophy of sparse-input novel-view synthesis pursuing the efficiency of training. To overcome this bottleneck, we propose modeling a ray with a mixture density, which enables effective learning of 3D geometry with sparse inputs. Furthermore, our novel
Figure 6: **Qualitative results on LLFF (a,b), DTU (c,d), and Realistic Synthetic 360\({}^{\circ}\) (e). More results are provided in suppl. material.**
training strategy, consisting of an auxiliary ray depth estimation task and the subsequent weight regeneration, further improves the rendering quality and reconstructs 3D geometry better through more accurate depth estimation, without any extra training resources that would have to be prepared in advance. Our proposed MixNeRF outperforms both pre-training and regularization approaches across multiple benchmarks with an enhanced efficiency of training and inference.
|
2301.11789 | A radiation and propagation problem for a Helmholtz equation with a
compactly supported nonlinearity | The present work describes some extensions of an approach, originally
developed by V.V. Yatsyk and the author, for the theoretical and numerical
analysis of scattering and radiation effects on infinite plates with cubically
polarized layers. The new aspects lie on the transition to more generally
shaped, two- or three-dimensional objects, which no longer necessarily have to
be represented in terms of a Cartesian product of real intervals, to more general
nonlinearities (including saturation) and the possibility of an efficient
numerical approximation of the electromagnetic fields and derived quantities
(such as energy, transmission coefficient, etc.). The paper advocates an
approach that consists in transforming the original full-space problem for a
nonlinear Helmholtz equation (as the simplest model) into an equivalent
boundary-value problem on a bounded domain by means of a nonlocal
Dirichlet-to-Neumann (DtN) operator. It is shown that the transformed problem
is equivalent to the original one and can be solved uniquely under suitable
conditions. Moreover, the impact of the truncation of the DtN operator on the
resulting solution is investigated, so that the way to the numerical solution
by appropriate finite element methods is available. | Lutz Angermann | 2023-01-27T15:43:52Z | http://arxiv.org/abs/2301.11789v2 | A radiation and propagation problem for a Helmholtz equation with a compactly supported nonlinearity
###### Abstract
The present work describes some extensions of an approach, originally developed by V.V. Yatsyk and the author, for the theoretical and numerical analysis of scattering and radiation effects on infinite plates with cubically polarized layers. The new aspects lie on the transition to more generally shaped, two- or three-dimensional objects, which no longer necessarily have to be represented in terms of a Cartesian product of real intervals, to more general nonlinearities (including saturation) and the possibility of an efficient numerical approximation of the electromagnetic fields and derived quantities (such as energy, transmission coefficient, etc.). The paper advocates an approach that consists in transforming the original full-space problem for a nonlinear Helmholtz equation (as the simplest model) into an equivalent boundary-value problem on a bounded domain by means of a nonlocal Dirichlet-to-Neumann (DtN) operator. It is shown that the transformed problem is equivalent to the original one and can be solved uniquely under suitable conditions. Moreover, the impact of the truncation of the DtN operator on the resulting solution is investigated, so that the way to the numerical solution by appropriate finite element methods is available.
Keywords: Scattering, radiation, nonlinear Helmholtz equation, nonlinearly polarizable medium, DtN operator, truncation
AMS Subject Classification (2022): 35J05, 35Q60, 78A45
## 1 Introduction
The present work deals with the mathematical modeling of the response of a penetrable two- or three-dimensional object (obstacle), represented by a bounded domain, to the excitation
by an external electromagnetic field. A special aspect of the paper is that, in contrast to many other, thematically comparable works, nonlinear constitutive laws of this object are in the foreground.
A standard example is the class of so-called Kerr nonlinearities. It is physically known, but mathematically only little investigated, that sufficiently strong incident fields can, under certain conditions, cause effects such as frequency multiplication, which cannot occur in the linear models frequently considered in the literature. On the other hand, such effects are interesting in applications, which is why a targeted exploitation, for example from a numerical or optimization point of view, first requires a thorough theoretical investigation.
A relatively simple mathematical model for this is a nonlinear Helmholtz equation, which results from the transition from the time-space formulation of Maxwell's equations to the frequency-space formulation together with further simplifications. Although some interesting nonlinear effects cannot be modeled by means of a single scalar equation alone, its investigation is of independent importance, for example from the aspect of variable coefficients, and its understanding is also the basis for further developments, for example for systems of nonlinear Helmholtz equations, see, e.g., [1]. The latter is also the reason why we consider a split nonlinearity rather than concentrating it in a single term, as would be the obvious choice.
The Helmholtz equation with nonlinearities has only recently become the focus of mathematical investigations. However, the problems treated there mainly involve nonlinearities that are globally smooth, while here a formulation as a transmission problem is used that allows less smooth transitions at the object boundary. In addition, we allow more general nonlinearities than the Kerr nonlinearities mentioned; in particular, saturation effects can be taken into account.
Starting from a physically oriented problem description as a full-space problem, we derive a weak formulation on a bounded domain using the well-known technique of DtN operators, and show its equivalence to the weakly formulated original problem. Since the influence of the external field only occurs indirectly in the weak formulation, we also give a second variant of the weak formulation that better clarifies this influence and which we call the input-output formulation.
Since the DtN operators are non-local, their practical application (numerics) causes problems, which is why a well-known truncation technique is used. This raises the problem of proving the well-posedness of the reduced problem and establishing a connection (error estimate) between the solution of the reduced problem and that of the original problem. Although these questions have been discussed in the literature for a relatively long time in the linear case, even there they seem to have been treated only selectively and sometimes only very vaguely. The latter concerns in particular the question of the independence of the stability constant from the truncation parameter. In this work, both stability and error estimates are given for the two- and three-dimensional case, whereby a formula-based relationship between the discrete and the continuous stability constant is established.
Another difference to many existing, especially older works is that the present paper works with variational (weak) formulations but not with integral equations. Unfortunately, the complete tracking of the dependence of the occurring parameters on the wave number (so-called wavenumber-independent bounds) has not yet been included.
It has already been mentioned that, for the linear situation, in connection with scattering problems or with problems that are formulated from the very beginning in bounded domains
(e.g., with impedance boundary conditions), there is an extensive and multi-threaded body of literature that is beyond the scope of this article to list. Transmission problems of the type considered here are rarely found in the literature.
Nevertheless, without claiming completeness, a few works should be mentioned here that had an influence on the present results and whose bibliographies may be of help. A frequently cited work that deals with linear scattering problems in two dimensions and also served as the motivation for the present work is [10], which, however, does not discuss the dependence of the stability constant on the truncation parameter. A number of later works by other authors quote this work, but sometimes assume results that cannot be found in the original. In this context, the papers [11] and [12] should also be mentioned, which take up and improve various aspects of [10], e.g., the convergence order (exponential convergence of the truncated solution to the original one). However, they are also restricted to linear two-dimensional scattering or transmission problems, respectively.
It is also worth noting that, in addition to the DtN-type methods, there are other methods for reducing full-space problems to problems in bounded domains, too. These include, above all, the so-called PML methods. Among the works related to the present work, [13] should be mentioned, which considers Kerr nonlinearities and, using a linearization approach, can show exponential convergence of the PML problem in relation to the PML parameters. The work that comes closest to our intentions is [14], where the exterior Dirichlet boundary-value problem for the linear Helmholtz equation is considered. In this paper, no separate, parameter-uniform stability estimate of the truncated problem is given, but the truncation error is included in the error estimate of a finite element approximation. A similar work is [14], but in which another boundary condition at the boundary of the auxiliary domain is considered, the so-called modified DtN condition.
Among the more recent papers, works by Mandel [17], Chen, Evequoz & Weth [10], and Maier & Verfurth [15] should be mentioned, especially because of the cited sources. In his cumulative habilitation thesis, which contains further references, Mandel examines existence and uniqueness questions for solutions of systems of nonlinear Helmholtz equations in the full-space case. Scattering or transmission problems are not considered. Using integral operators, Chen et al. study the scattering problem under comparatively high regularity assumptions on the superlinear nonlinearities by means of topological fixed point and global bifurcation theory to prove the existence of bounded solutions, avoiding truncation approaches. In this context, the paper [10] is also worth mentioning, in which the existence of real-valued solutions (which satisfy certain asymptotic conditions) of a nonlinear Helmholtz equation with a compactly supported nonlinearity is investigated.
Maier & Verfurth, who focus mainly on multiscale aspects for a nonlinear Helmholtz equation over a bounded domain with impedance boundary conditions, give an instructive review of the literature on nonlinear Helmholtz equations. Further works on nonlinear Helmholtz equations in bounded domains are [15] and [16], which deal with Kerr nonlinearities and use explicit iterative arguments.
A number of papers on inverse problems also deal with nonlinear Helmholtz equations, even if the questions answered there are not very closely related to ours. Typically, such treatises also include statements about the direct problem, and so we mention here [11], [12], and [13], where the latter work considers \(C^{\infty}\)-bounded domains and real solutions.
The structure of the present work is based on the program outlined above. After the problem formulation in Section 2, the exterior auxiliary problem required for truncation is discussed,
after which the weak formulation and equivalence statement follow in Section 4. Section 5 is dedicated to the existence and uniqueness of the weak solution, where in particular the assumptions on the nonlinear terms are discussed. The final section then deals with the properties of the truncated problem - uniform (with respect to the truncation parameter) well-posedness and estimate of the truncation error.
## 2 Problem formulation
Let \(\Omega\subset\mathbb{R}^{d}\) be a bounded domain with a Lipschitz boundary \(\partial\Omega\). It represents a medium with a nonlinear behaviour with respect to electromagnetic fields. Since \(\Omega\) is bounded, we can choose an open Euclidean \(d\)-ball \(B_{R}\subset\mathbb{R}^{d}\) of radius \(R>\sup_{\mathbf{x}\in\Omega}|\mathbf{x}|\) with center in the origin such that \(\Omega\subset B_{R}\). The complements of \(\Omega\) and \(B_{R}\) are denoted by \(\Omega^{c}:=\mathbb{R}^{d}\setminus\Omega\) and \(B_{R}^{c}:=\mathbb{R}^{d}\setminus B_{R}\), resp., the open complement of \(B_{R}\) is denoted by \(B_{R}^{+}:=\mathbb{R}^{d}\setminus\overline{B_{R}}\) (the overbar over sets denotes their closure in \(\mathbb{R}^{d}\)), and the boundary of \(B_{R}\), the sphere, by \(S_{R}:=\partial B_{R}\) (cf. Fig. 1). The open complement of \(\Omega\) is denoted by \(\Omega^{+}:=\mathbb{R}^{d}\setminus\overline{\Omega}\). By \(\mathbf{\nu}\) we denote the outward-pointing (w.r.t. either \(\Omega\) or \(B_{R}\)) unit normal vector on \(\partial\Omega\) or \(S_{R}\), respectively.
Trace operators will be denoted by one and the same symbol \(\gamma\); the concrete meaning (e.g., traces on the common interface of an interior and exterior domain) will be clear from the context.
With regard to the function spaces used, we refer to the relevant literature, e.g., [1, Ch. 1], [13, Ch. 3]. The corresponding norms are denoted by \(\|\cdot\|_{0,p,\Omega}\) for the \(L_{p}(\Omega)\)-norm and \(\|\cdot\|_{s,2,\Omega}\) for the \(H^{s}(\Omega)\)-norm (as representative examples; other domains of definition may also occur).
The classical direct problem of radiation and propagation of an electromagnetic field - actually just one component of it - by/in the penetrable obstacle \(\Omega\) is governed by a nonlinear Helmholtz equation with a variable complex-valued wave coefficient:
\[-\Delta u(\mathbf{x})-\kappa^{2}c(\mathbf{x},u)\,u=f(\mathbf{x},u)\quad\text{for (almost) all }\mathbf{x}\in\mathbb{R}^{d}, \tag{1}\]
where the wavenumber \(\kappa>0\) is fixed. The physical properties of the obstacle \(\Omega\) are described by the coefficient \(c:\ \mathbb{R}^{d}\times\mathbb{C}\to\mathbb{C}\) (physically the square of the _refractive index_) and the right-hand side \(f:\ \mathbb{R}^{d}\times\mathbb{C}\to\mathbb{C}\). In general, both functions are nonlinear and have the following properties:
\[\operatorname{supp}(1-c(\cdot,w))=\overline{\Omega}\quad\text{and}\quad \operatorname{supp}f(\cdot,w)\subset\overline{\Omega}\quad\text{for all }w\in\mathbb{C}. \tag{2}\]
Figure 1: The nonlinear medium \(\Omega\) is excited by an incident field \(u^{\text{inc}}\) (\(d=2\))
The function \(1-c\) is often called the _contrast function_. Basically we assume that \(c\) and \(f\) are Caratheodory functions, i.e. the mapping \(\mathbf{x}\mapsto c(\mathbf{x},v)\) is (Lebesgue-)measurable for all \(v\in\mathbb{C}\), and the mapping \(v\mapsto c(\mathbf{x},v)\) is continuous for almost all \(\mathbf{x}\in\mathbb{R}^{d}\). These two conditions imply that \(\mathbf{x}\mapsto c(\mathbf{x},v(\mathbf{x}))\) is measurable for any measurable \(v\). The same applies to \(f\).
The unknown _total field_\(u:\,\mathbb{R}^{d}\to\mathbb{C}\) should have the following structure:
\[u=\begin{cases}u^{\mathrm{rad}}+u^{\mathrm{inc}}&\text{in }\Omega^{\mathrm{c}},\\ u^{\mathrm{trans}}&\text{in }\Omega,\end{cases} \tag{3}\]
where \(u^{\mathrm{rad}}:\,\,\Omega^{\mathrm{c}}\to\mathbb{C}\) is the unknown radiated/scattered field, \(u^{\mathrm{trans}}:\,\,\Omega\to\mathbb{C}\) denotes the unknown transmitted field, and the incident field \(u^{\mathrm{inc}}\in H^{1}_{\mathrm{loc}}(\Omega^{+})\) is given. The incident field is usually a (weak) solution of either the homogeneous or inhomogeneous Helmholtz equation (even in the whole space). Typically it is generated either by concentrated sources located in a bounded region of \(\Omega^{+}\) or by sources at infinity, e.g. travelling waves.
**Example 1** (\(d=2\)).: _The incident plane wave, whose transmission and scattering is investigated, is given by_
\[u^{\mathrm{inc}}(\mathbf{x}):=\alpha^{\mathrm{inc}}\exp(i(\Phi x_{1}-\Gamma x_{2} )),\mathbf{x}=(x_{1},x_{2})^{\top}\in B_{R}^{+}\]
_with amplitude \(\alpha^{\mathrm{inc}}\) and angle of incidence \(\varphi^{\mathrm{inc}}\), \(|\varphi^{\mathrm{inc}}|<\pi\), where \(\Phi:=\kappa\sin\varphi^{\mathrm{inc}}\) is the longitudinal wave number and \(\Gamma:=\sqrt{\kappa^{2}-\Phi^{2}}=\kappa\cos\varphi^{\mathrm{inc}}\) the transverse wave number. In polar coordinates this reads_
\[u^{\mathrm{inc}}(r,\varphi) =\alpha^{\mathrm{inc}}\exp(i(\Phi r\cos\varphi-\Gamma r\sin \varphi))\] \[=\alpha^{\mathrm{inc}}\exp(i\kappa r(\sin\varphi^{\mathrm{inc}} \cos\varphi-\cos\varphi^{\mathrm{inc}}\sin\varphi))\] \[=\alpha^{\mathrm{inc}}\exp(i\kappa r\sin(\varphi^{\mathrm{inc}} -\varphi)),\quad(r,\varphi)\in B_{R}^{+}.\]
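The polar rewriting in Example 1 is easy to check numerically; the following sketch uses arbitrarily chosen parameter values for illustration:

```python
import numpy as np

# Check that the Cartesian and polar forms of the incident plane wave agree.
kappa, alpha_inc, phi_inc = 2.0, 1.0, np.pi / 6  # arbitrary illustrative values
Phi, Gamma = kappa * np.sin(phi_inc), kappa * np.cos(phi_inc)

r, phi = 3.0, 0.8                                # some point with r > R
x1, x2 = r * np.cos(phi), r * np.sin(phi)

u_cart = alpha_inc * np.exp(1j * (Phi * x1 - Gamma * x2))
u_polar = alpha_inc * np.exp(1j * kappa * r * np.sin(phi_inc - phi))
assert np.isclose(u_cart, u_polar)
```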
The radiated/scattered field \(u^{\mathrm{rad}}\) should satisfy an additional condition, the so-called _Sommerfeld radiation condition_:
\[\lim_{|\mathbf{x}|\to\infty}|\mathbf{x}|^{(d-1)/2}\left(\hat{\mathbf{x}}\cdot\nabla u^{ \mathrm{rad}}-i\kappa u^{\mathrm{rad}}\right)=0 \tag{4}\]
uniformly for all directions \(\hat{\mathbf{x}}:=\mathbf{x}/|\mathbf{x}|\), where \(\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{rad}}\) denotes the derivative of \(u^{\mathrm{rad}}\) in radial direction \(\hat{\mathbf{x}}\), cf. [11, eq. (3.7) for \(d=3\), eq. (3.96) for \(d=2\)]. Physically, the condition (4) allows only _outgoing_ waves at infinity; mathematically it guarantees the uniqueness of the solution \(u^{\mathrm{scat}}:\,B_{R}^{+}\to\mathbb{C}\) of the following exterior Dirichlet problem
\[-\Delta u^{\mathrm{scat}}-\kappa^{2}u^{\mathrm{scat}}=0\quad \text{in }B_{R}^{+}, \tag{5}\] \[u^{\mathrm{scat}}=f_{S_{R}}\quad\text{on }S_{R},\] \[\lim_{|\mathbf{x}|\to\infty}|\mathbf{x}|^{(d-1)/2}\left(\hat{\mathbf{x}}\cdot \nabla u^{\mathrm{scat}}-i\kappa u^{\mathrm{scat}}\right)=0,\]
where \(f_{S_{R}}:\,\,S_{R}\to\mathbb{C}\) is given. We mention that, in the context of classical solutions (i.e. \(u^{\mathrm{scat}}\in C^{2}(B_{R}^{+})\)) to problem (5), Rellich [10] has shown that the condition (4) can be weakened to the following integral version:
\[\lim_{|\mathbf{x}|\to\infty}\int_{S_{R}}\left|\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{ scat}}-i\kappa u^{\mathrm{scat}}\right|^{2}ds(\mathbf{x})=0.\]
In the context of weak solutions (i.e. \(u^{\mathrm{scat}}\in H^{1}_{\mathrm{loc}}(B_{R}^{+})\)), an analogous equivalence statement can be found in [10, Thm. 9.6].
## 3 The exterior problem in \(B_{R}^{\rm c}\)
For a given \(f_{S_{R}}\in C(S_{R})\) and \(d=3\), the unique solvability of problem (5) in \(C^{2}(B_{R}^{+})\cap C(B_{R}^{\rm c})\) is proved, for example, in [13, Thm. 3.21]. In addition, if \(f_{S_{R}}\) is smoother, say \(f_{S_{R}}\in C^{\infty}(S_{R})\), then the normal derivative of \(u^{\rm scat}\) on the boundary \(S_{R}\) is a well-defined continuous function [13, Thm. 3.27]. These assertions remain valid in the case \(d=2\), see [13, Sect. 3.10].
Therefore, by solving (5) for given \(f_{S_{R}}\in C^{\infty}(S_{R})\), a mapping can be introduced that takes the Dirichlet data on \(S_{R}\) to the corresponding Neumann data on \(S_{R}\), i.e.
\[f_{S_{R}}\mapsto T_{\kappa}f_{S_{R}}:=\left.\hat{\mathbf{x}}\cdot\nabla u^{\rm scat }\right|_{S_{R}},\]
see, e.g., [13, Sect. 3.2].
Furthermore, it is well-known that the mapping \(T_{\kappa}\) can be extended to a bounded linear operator \(T_{\kappa}:\;H^{s+1/2}(S_{R})\to H^{s-1/2}(S_{R})\) for any \(|s|\leq 1/2\)[14, Thm. 2.31] (we keep the notation already introduced for this continued operator). This operator is called the _Dirichlet-to-Neumann operator_, in short _DtN operator_, or _capacity operator_.
Since the problem (5) is considered in a spherical exterior domain, an explicit series representation of the solution is available using standard separation techniques in polar or spherical coordinates, respectively. The term-by-term differentiation of this series thus also provides a series representation of the image of \(T_{\kappa}\). We first give a formal description of the approach and then comment on its mathematical correctness at the end.
The solution of the problem (5) in the two-dimensional case (here with \(u^{\rm scat}\) replaced by \(u\)) is given by [16, Proposition 2.1], [13, eq. (30)]:
\[\begin{split} u(\mathbf{x})=u(r\hat{\mathbf{x}})=u(r,\varphi)=\sum_{n\in \mathbb{Z}}\frac{H_{n}^{(1)}(\kappa r)}{H_{n}^{(1)}(\kappa R)}\,f_{n}(R)Y_{n}( \hat{\mathbf{x}})=\sum_{n\in\mathbb{Z}}\frac{H_{n}^{(1)}(\kappa r)}{H_{n}^{(1)}( \kappa R)}\,f_{n}(R)Y_{n}(\varphi),\\ \mathbf{x}=r\hat{\mathbf{x}}\in S_{r},\ r>R,\ \varphi\in[0,2\pi]\end{split} \tag{6}\]
(identifying \(u(\mathbf{x})\) with \(u(r,\varphi)\) and \(Y_{n}(\hat{\mathbf{x}})\) with \(Y_{n}(\varphi)\) for \(\mathbf{x}=r\hat{\mathbf{x}}=r(\cos\varphi,\sin\varphi)^{\top}\)), where \((r,\varphi)\) are the polar coordinates, \(H_{n}^{(1)}\) are the cylindrical Hankel functions of the first kind of order \(n\) [12, Sect. 10.2]\({}^{1}\), \(Y_{n}\) are the circular harmonics defined by
Footnote 1: Instead of (4) [16] considered the ingoing Sommerfeld condition and thus obtained a representation in terms of the cylindrical Hankel functions of the second kind. Note that \(H_{n}^{(2)}(-\xi)=-(-1)^{n}H_{n}^{(1)}(\xi)\)[12, (10.11.5)].
\[Y_{n}(\varphi)=\frac{e^{in\varphi}}{\sqrt{2\pi}},\quad n\in\mathbb{Z},\]
\(f_{n}(R)\) are the Fourier coefficients of \(f_{S_{R}}\) defined by
\[f_{n}(R):=(f_{S_{R}}(R\cdot),Y_{n})_{S_{1}}=\int_{S_{1}}f_{S_{R}}(R\hat{\mathbf{x }})\overline{Y_{n}}(\hat{\mathbf{x}})ds(\hat{\mathbf{x}})=\int_{0}^{2\pi}f_{S_{R}}(R, \varphi)\overline{Y_{n}}(\varphi)d\varphi, \tag{7}\]
and \(ds(\hat{\mathbf{x}})\) is the Lebesgue arc length element. The terms of the series (6) are well-defined, since the main branch of the Hankel functions \(H_{\nu}^{(1)}\) of real order \(\nu\geq 0\) is free of real zeros [1, p. 62].
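For illustration, the series (6) can be evaluated directly once (finitely many) Fourier coefficients \(f_n(R)\) of the boundary data are available; the truncation and the sketch below are our addition, not part of the paper's analysis:

```python
import numpy as np
from scipy.special import hankel1

def exterior_solution_2d(r, phi, f_coeffs, kappa, R):
    # Evaluate the series (6), truncated to the modes present in f_coeffs,
    # at a point (r, phi) with r > R. f_coeffs: dict {n: f_n(R)}.
    Y = lambda n, ph: np.exp(1j * n * ph) / np.sqrt(2.0 * np.pi)  # circular harmonics
    return sum(hankel1(n, kappa * r) / hankel1(n, kappa * R) * fn * Y(n, phi)
               for n, fn in f_coeffs.items())

# Example: boundary data consisting of a single mode n = 2.
u_val = exterior_solution_2d(3.0, 0.7, {2: 1.0 + 0.0j}, kappa=2.0, R=1.0)
```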
Now we formally differentiate the representation (6) with respect to \(r\) to obtain the outward normal derivative of \(u\):
\[\hat{\mathbf{x}}\cdot\nabla u(\mathbf{x})=\frac{\partial u}{\partial r}(r\hat{\mathbf{x}})= \kappa\sum_{n\in\mathbb{Z}}\frac{H_{n}^{(1)^{\prime}}(\kappa r)}{H_{n}^{(1)}( \kappa R)}\,f_{n}(R)Y_{n}(\hat{\mathbf{x}}),\quad\mathbf{x}=r\hat{\mathbf{x}}\in S_{r},\ r>R.\]
Setting \(f_{R}:=u|_{S_{R}}\) and letting \(\mathbf{x}\) in this representation approach the boundary \(S_{R}\), we can formally define the (extended) DtN operator by
\[T_{\kappa}u(\mathbf{x}):=\frac{1}{R}\sum_{n\in\mathbb{Z}}Z_{n}(\kappa R)u_{n}(R)Y_ {n}(\hat{\mathbf{x}}),\quad\mathbf{x}=R\hat{\mathbf{x}}\in S_{R}, \tag{8}\]
where
\[Z_{n}(\xi):=\xi\,\frac{H_{n}^{(1)^{\prime}}(\xi)}{H_{n}^{(1)}(\xi)}\,,\]
and \(u_{n}(R)\) are the Fourier coefficients of \(u|_{S_{R}}\) analogously to (7). The admissibility of this procedure has been proven in many sources in the classical context, for example [11, Sect. 3.5]. For the present case, in the paper [12, Thm. 1] it was shown that the operator \(T_{\kappa}:\ H^{s+1/2}(S_{R})\to H^{s-1/2}(S_{R})\) is bounded for any \(s\in\mathbb{N}_{0}\). Ernst's result was extended to all \(s\geq 0\) in [13, Thm. 3.1].
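In computations, the representation (8) suggests applying \(T_{\kappa}\) as a Fourier multiplier on boundary data. The following sketch does this for equispaced point samples on \(S_{R}\); the discretization and the FFT normalization are our assumptions (the paper defines \(T_{\kappa}\) between Sobolev spaces, not on point samples):

```python
import numpy as np
from scipy.special import hankel1, h1vp

def Z(n, xi):
    # Z_n(xi) = xi * H_n^{(1)'}(xi) / H_n^{(1)}(xi); note Z_{-n} = Z_n, since
    # H_{-n}^{(1)} = (-1)^n H_n^{(1)} cancels in the quotient.
    return xi * h1vp(abs(n), xi) / hankel1(abs(n), xi)

def apply_dtn_2d(u_samples, kappa, R, N):
    # Apply the DtN map (8), truncated at order N, to equispaced samples of
    # u on S_R, acting as a multiplier on the discrete Fourier modes.
    M = u_samples.size
    c = np.fft.fft(u_samples)                          # discrete Fourier data
    modes = np.fft.fftfreq(M, d=1.0 / M).astype(int)   # mode indices 0, 1, ..., -1
    mult = np.array([Z(k, kappa * R) / R if abs(k) <= N else 0.0 for k in modes])
    return np.fft.ifft(mult * c)                       # samples of T_kappa u on S_R
```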
In the case \(d=3\), the solution of the problem (5) is given by [11, eq. (33)]:
\[\begin{split} u(\mathbf{x})=u(r\hat{\mathbf{x}})=u(r,\varphi,\theta)& =\sum_{n\in\mathbb{N}_{0}}\sum_{|m|\leq n}\frac{h_{n}^{(1)}(\kappa r )}{h_{n}^{(1)}(\kappa R)}\,f_{n}^{m}(R)Y_{n}^{m}(\hat{\mathbf{x}})\\ &=\sum_{n\in\mathbb{N}_{0}}\sum_{|m|\leq n}\frac{h_{n}^{(1)}( \kappa r)}{h_{n}^{(1)}(\kappa R)}\,f_{n}^{m}(R)Y_{n}^{m}(\varphi,\theta),\\ &\mathbf{x}\in S_{r},\ r>R,\ (\varphi,\theta)\in[0,2\pi]\times[0, \pi]\end{split} \tag{9}\]
(identifying \(u(\mathbf{x})\) with \(u(r,\varphi,\theta)\) and \(Y_{n}^{m}(\hat{\mathbf{x}})\) with \(Y_{n}^{m}(\varphi,\theta)\) for \(\mathbf{x}=r\hat{\mathbf{x}}=r(\cos\varphi\sin\theta,\sin\varphi\sin\theta,\cos\theta)^{\top}\)), where \((r,\varphi,\theta)\) are the spherical coordinates, \(h_{n}^{(1)}\) are the spherical Hankel functions of the first kind of order \(n\) [10, Sect. 10.47], \(Y_{n}^{m}\) are the spherical harmonics defined by
\[Y_{n}^{m}(\varphi,\theta)=\sqrt{\frac{2n+1}{4\pi}\,\frac{(n-|m|)!}{(n+|m|)!}} \,P_{n}^{|m|}(\cos\theta)e^{im\varphi},\quad n\in\mathbb{N}_{0},\ |m|\leq n,\]
(identifying \(Y_{n}^{m}(\hat{\mathbf{x}})\) with \(Y_{n}^{m}(\varphi,\theta)\) for \(\hat{\mathbf{x}}=(\cos\varphi\sin\theta,\sin\varphi\sin\theta,\cos\theta)^{\top}\)), where \(P_{n}^{m}\) are the associated Legendre functions of the first kind [10, Sect. 14.21], \(f_{n}^{m}(R)\) are the Fourier coefficients defined by
\[\begin{split} f_{n}^{m}(R)=(f_{S_{R}}(R\cdot),Y_{n}^{m})_{S_{1}}& =\int_{S_{1}}f_{S_{R}}(R\hat{\mathbf{x}})\overline{Y_{n}^{m}}(\hat{ \mathbf{x}})ds(\hat{\mathbf{x}})\\ &=\int_{0}^{2\pi}\int_{0}^{\pi}f_{S_{R}}(R,\varphi,\theta) \overline{Y_{n}^{m}}(\varphi,\theta)\sin\theta d\theta d\varphi,\end{split} \tag{10}\]
and \(ds(\hat{\mathbf{x}})\) is the Lebesgue surface area element. The relationship
\[h_{n}^{(1)}(x)=\sqrt{\frac{\pi}{2x}}\,H_{n+1/2}^{(1)}(x),\quad n\in\mathbb{N}_{0},\]
(see, e.g., [10, Sect. 10.47]) and the above remark about the lack of real zeros of the Hankel functions \(H_{\nu}^{(1)}\) of real order \(\nu\geq 0\) imply that the terms of the series (9) are well-defined.
Proceeding as in the two-dimensional case, we get
\[\hat{\mathbf{x}}\cdot\nabla u(\mathbf{x})=\frac{\partial u}{\partial r}(r\hat{\mathbf{x}}) =\kappa\sum_{n\in\mathbb{N}_{0}}\sum_{|m|\leq n}\frac{h_{n}^{(1)}( \kappa r)}{h_{n}^{(1)}(\kappa R)}\,f_{n}^{m}(R)Y_{n}^{m}(\hat{\mathbf{x}}),\quad \mathbf{x}=r\hat{\mathbf{x}}\in S_{r},\ r>R.\]
Setting \(f_{R}:=u|_{S_{R}}\) and letting \(r\to R\), we can define the (extended) DtN operator by
\[T_{\kappa}u(\mathbf{x})=\frac{1}{R}\sum_{n\in\mathbb{N}_{0}}\sum_{|m|\leq n}z_{n} (\kappa R)u_{n}^{m}(R)Y_{n}^{m}(\hat{\mathbf{x}}),\quad\mathbf{x}=R\hat{\mathbf{x}}\in S_{ R}, \tag{11}\]
where
\[z_{n}(\xi):=\xi\,\frac{h_{n}^{(1)^{\prime}}(\xi)}{h_{n}^{(1)}(\xi)}\,,\]
and \(u_{n}^{m}(R)\) are the Fourier coefficients of \(u|_{S_{R}}\) analogously to (10). The admissibility of this procedure is proved in [10, Thm. 2.15] or [12, Thm. 2.6.2], for example. For the present situation there is a boundedness result for \(d=3\) analogous to [12, Thm. 3.1] in [12, Thm. 2.6.4]. In summary, the following statement applies to both dimensions.
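Numerically, the coefficients \(z_{n}\) can be assembled from the spherical Bessel functions via the identity \(h_{n}^{(1)}=j_{n}+iy_{n}\); a sketch (the closed form \(z_{0}(\xi)=-1+i\xi\) used as a check is consistent with Lemma 4 below):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def z(n, xi):
    # z_n(xi) = xi * h_n^{(1)'}(xi) / h_n^{(1)}(xi) with h_n^{(1)} = j_n + i*y_n.
    h = spherical_jn(n, xi) + 1j * spherical_yn(n, xi)
    dh = spherical_jn(n, xi, derivative=True) + 1j * spherical_yn(n, xi, derivative=True)
    return xi * dh / h

xi = 3.0
assert np.isclose(z(0, xi), -1.0 + 1j * xi)  # z_0(xi) = -1 + i*xi
```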
**Theorem 2**.: _The DtN operator \(T_{\kappa}:\ H^{s+1/2}(S_{R})\to H^{s-1/2}(S_{R})\) is bounded for any \(s\geq 0\)._
**Remark 3**.: _A more refined analysis of the DtN operator in the case \(s=0\) results in a sharp estimate of its norm w.r.t. the wavenumber [1, Thm. 1.4]: Given \(\kappa_{0}>0\), there exists a constant \(C>0\) independent of \(\kappa\) such that_
\[\|T_{\kappa}v\|_{-1/2,2,S_{R}}\leq C\kappa\|v\|_{1/2,2,S_{R}}\quad\text{for all }v\in H_{\rm loc}^{1}(B_{R}^{+})\quad\text{and}\quad\kappa\geq\kappa_{0}.\]
The result from [1, Thm. 1.4] applies to more general domains, for the present situation it already follows from the proof of Lemma 23 (see the estimates (36), (37) for \(s=0\), where the bounds do not depend on \(N\)).
At the end of this section we give a collection of some properties of the coefficient functions in the representations (8), (11) which will be used in some of the subsequent proofs.
**Lemma 4**.: _For all \(\xi>0\), the following holds:_
\[-n\leq\operatorname{Re}Z_{n}(\xi)\leq-\frac{1}{2},\quad 0< \operatorname{Im}Z_{n}(\xi)<\xi\quad\text{for all }|n|\in\mathbb{N},\] \[-\frac{1}{2}\leq\operatorname{Re}Z_{0}(\xi)<0,\quad\xi< \operatorname{Im}Z_{0}(\xi),\] \[-(n+1)\leq\operatorname{Re}z_{n}(\xi)\leq-1,\quad 0< \operatorname{Im}z_{n}(\xi)\leq\xi\quad\text{for all }n\in\mathbb{N},\] \[\operatorname{Re}z_{0}(\xi)=-1,\quad\operatorname{Im}z_{0}(\xi)=\xi.\]
Proof.: For the case \(d=2\), the estimates can be found in [22, eq. (2.34)]. The other estimates can be found in [22, Thm. 2.6.1], see also [22, eqs. (2.22), (2.23)]. Although only \(0\leq\operatorname{Im}z_{n}(\xi)\) is specified in the formulation of the cited theorem, the strict positivity follows from the positivity of the function \(q_{\ell}\) in [22, eq. (2.6.34)], as has been mentioned in [22].
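The bounds of Lemma 4 are easy to probe numerically; the following spot-check (illustrative only, not a substitute for the proof) reuses the quotients \(Z_{n}\) and \(z_{n}\) in the form of the sketches from Section 3:

```python
import numpy as np
from scipy.special import hankel1, h1vp, spherical_jn, spherical_yn

def Z(n, xi):
    # Z_n(xi) = xi * H_n^{(1)'}(xi) / H_n^{(1)}(xi), for n >= 0.
    return xi * h1vp(n, xi) / hankel1(n, xi)

def z(n, xi):
    # z_n(xi) = xi * h_n^{(1)'}(xi) / h_n^{(1)}(xi) with h_n^{(1)} = j_n + i*y_n.
    h = spherical_jn(n, xi) + 1j * spherical_yn(n, xi)
    dh = spherical_jn(n, xi, derivative=True) + 1j * spherical_yn(n, xi, derivative=True)
    return xi * dh / h

# Spot-check the inequalities of Lemma 4 for a few n and xi.
for xi in (0.7, 2.0, 10.0):
    for n in range(1, 11):
        Zn, zn = Z(n, xi), z(n, xi)
        assert -n <= Zn.real <= -0.5 and 0.0 < Zn.imag < xi
        assert -(n + 1) <= zn.real <= -1.0 and 0.0 < zn.imag <= xi
```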
**Corollary 5**.: _For all \(\xi>0\), the following holds:_
\[|Z_{n}(\xi)|^{2} \leq(1+n^{2})(1+|\xi|^{2})\quad\text{for all }|n|\in\mathbb{N},\] \[|z_{n}(\xi)|^{2} \leq(1+n^{2})(2+|\xi|^{2})\quad\text{for all }n\in\mathbb{N}_{0}.\]
Proof.: The estimates of the real and imaginary parts of \(Z_{n}\) from Lemma 4 immediately imply that
\[\frac{1}{1+n^{2}}|Z_{n}(\xi)|^{2} =\frac{1}{1+n^{2}}\left[|\operatorname{Re}Z_{n}(\xi)|^{2}+| \operatorname{Im}Z_{n}(\xi)|^{2}\right]\] \[\leq\frac{1}{1+n^{2}}\left[n^{2}+|\xi|^{2}\right]\leq 1+\frac{| \xi|^{2}}{1+n^{2}}\leq 1+|\xi|^{2},\quad n\in\mathbb{N}.\]
Since \(H^{(1)}_{-n}(\xi)=(-1)^{n}H^{(1)}_{n}(\xi)\), \(n\in\mathbb{N}\)[17, eq. (10.4.2)], the estimate is also valid for \(n\) such that \(-n\in\mathbb{N}\).
Analogously we obtain from Lemma 4 that
\[\frac{1}{1+n^{2}}|z_{n}(\xi)|^{2} =\frac{1}{1+n^{2}}\left[|\operatorname{Re}z_{n}(\xi)|^{2}+| \operatorname{Im}z_{n}(\xi)|^{2}\right]\] \[\leq\frac{1}{1+n^{2}}\left[(1+n)^{2}+|\xi|^{2}\right]\leq 2+ \frac{|\xi|^{2}}{1+n^{2}}\leq 2+|\xi|^{2}.\]
## 4 Weak formulations of the interior problem
Now we turn to the consideration of the problem (1)-(4).
In the classical setting it can be formulated as follows: Given \(u^{\text{inc}}\in H^{1}_{\text{loc}}(\Omega^{+})\), determine the transmitted field \(u^{\text{trans}}:\;\Omega\to\mathbb{C}\) and the radiated/scattered field \(u^{\text{rad}}:\;\Omega^{\text{c}}\to\mathbb{C}\) satisfying
\[-\Delta u^{\text{trans}}-\kappa^{2}c(\cdot,u^{\text{trans}})\,u^{ \text{trans}} =f(\cdot,u^{\text{trans}})\] in \[\Omega, \tag{12}\] \[-\Delta u^{\text{rad}}-\kappa^{2}u^{\text{rad}} =0\] in \[\Omega^{+},\] \[u^{\text{trans}} =u^{\text{rad}}+u^{\text{inc}}\] on \[\partial\Omega,\] \[\boldsymbol{\nu}\cdot\nabla u^{\text{trans}} =\boldsymbol{\nu}\cdot\nabla u^{\text{rad}}+\boldsymbol{\nu} \cdot\nabla u^{\text{inc}}\] on \[\partial\Omega\]
and the radiation condition (4). Note that the incident field is usually a (weak) solution of either the homogeneous or inhomogeneous Helmholtz equation in \(\Omega^{+}\), i.e. the second equation in (12) can be replaced by
\[-\Delta u-\kappa^{2}u=f^{\text{inc}}\quad\text{in }\Omega^{+}, \tag{13}\]
where \(f^{\rm inc}:\ \Omega^{+}\to\mathbb{C}\) is a possible source density. For simplicity we do not include the case of a nontrivial source density in our investigation, but the subsequent theory can easily be extended by adding an appropriate linear functional, say \(\ell^{\rm src}\), on the right-hand side of the obtained weak formulations (see (14) or (16) later).
In order to give a weak formulation of (12) with the modification (13) in the case \(f^{\rm inc}=0\), we introduce the (complex) linear function spaces
\[H^{1}_{\rm comp}(\Omega^{+}) :=\left\{v\in H^{1}(\Omega^{+}):\ {\rm supp}\,v\ \mbox{ is compact}\right\},\] \[V_{\mathbb{R}^{d}} :=\{v\in L_{2}(\mathbb{R}^{d}):\ v|_{\Omega}\in H^{1}(\Omega), \ v|_{\Omega^{+}}\in H^{1}_{\rm loc}(\Omega^{+}):\ \gamma v|_{\Omega}=\gamma v|_{\Omega^{+}}\mbox{ on } \partial\Omega\},\] \[V^{\circ}_{\mathbb{R}^{d}} :=\{v\in L_{2}(\mathbb{R}^{d}):\ v|_{\Omega}\in H^{1}(\Omega), \ v|_{\Omega^{+}}\in H^{1}_{\rm comp}(\Omega^{+}):\ \gamma v|_{\Omega}=\gamma v|_{\Omega^{+}}\mbox{ on } \partial\Omega\}\]
(note the comment at the beginning of Section 2 on the notation for trace operators) and multiply the first equation of (12) by the restriction \(v|_{\Omega}\) of an arbitrary element \(v\in V_{\mathbb{R}^{d}}\) and (13) by the restriction \(v|_{\Omega^{+}}\) of \(v\in V_{\mathbb{R}^{d}}\), respectively, and integrate by parts:
\[(\nabla u^{\rm trans},\nabla v)_{\Omega}-(\boldsymbol{\nu}\cdot \nabla u^{\rm trans},v)_{\partial\Omega}-\kappa^{2}(c(\cdot,u^{\rm trans })u^{\rm trans},v)_{\Omega} =(f(\cdot,u^{\rm trans}),v)_{\Omega},\] \[(\nabla u,\nabla v)_{\Omega^{+}}-(\boldsymbol{\nu}\cdot\nabla u,v)_{\partial\Omega^{+}}-\kappa^{2}(u,v)_{\Omega^{+}} =0.\]
Here we use the notation, for any domain \(M\subset\mathbb{R}^{d}\) with boundary \(\partial M\) and appropriately defined functions on \(M\) or \(\partial M\),
\[(\nabla w,\nabla v)_{M} :=\int_{M}\nabla w\cdot\nabla\overline{v}d\boldsymbol{x},\] \[(w,v)_{M} :=\int_{M}w\overline{v}d\boldsymbol{x},\] \[(w,v)_{\partial M} :=\int_{\partial M}w\overline{v}ds(\boldsymbol{x})\]
(the overbar over functions denotes complex conjugation). Taking into consideration the last transmission condition in (12), the relationship \(\boldsymbol{\nu}|_{\Omega}=-\boldsymbol{\nu}|_{\Omega^{+}}\), and the fact that the last but one transmission condition in (12) is included in the definition of the space \(V_{\mathbb{R}^{d}}\), we define a bivariate nonlinear form on \(V_{\mathbb{R}^{d}}\times V^{\circ}_{\mathbb{R}^{d}}\) by
\[a_{\mathbb{R}^{d}}(w,v):=(\nabla w,\nabla v)_{\Omega}+(\nabla w,\nabla v)_{ \Omega^{+}}-\kappa^{2}(c(\cdot,w)w,v)_{\mathbb{R}^{d}},\]
cf., e.g., [10, Example 21.8].
**Definition 6**.: _Given \(u^{\rm inc}\in H^{1}_{\rm loc}(\Omega^{+})\), a weak solution to the problem (1)-(4) is defined as an element \(u\in V_{\mathbb{R}^{d}}\) that has the structure (3), satisfies the variational equation_
\[a_{\mathbb{R}^{d}}(u,v)=(f(\cdot,u),v)_{\mathbb{R}^{d}}\quad\mbox{for all }v\in V^{\circ}_{\mathbb{R}^{d}} \tag{14}\]
_and the Sommerfeld radiation condition (4)._
A second weak formulation can be obtained if we do not replace the second Helmholtz equation in (12) by (13). Then the first step in the derivation of the weak formulation reads as
\[(\nabla u^{\rm trans},\nabla v)_{\Omega}-(\boldsymbol{\nu}\cdot \nabla u^{\rm trans},v)_{\partial\Omega}-\kappa^{2}(c(\cdot,u^{\rm trans })u^{\rm trans},v)_{\Omega} =(f(\cdot,u^{\rm trans}),v)_{\Omega},\] \[(\nabla u^{\rm rad},\nabla v)_{\Omega^{+}}-(\boldsymbol{\nu} \cdot\nabla u^{\rm rad},v)_{\partial\Omega^{+}}-\kappa^{2}(u^{\rm rad}, v)_{\Omega^{+}} =0.\]
The last transmission condition in (12) allows to rewrite the first equation as
\[(\nabla u^{\rm trans},\nabla v)_{\Omega}-(\mathbf{\nu}\cdot\nabla u^{\rm rad },v)_{\partial\Omega}-\kappa^{2}(c(\cdot,u^{\rm trans})u^{\rm trans},v)_{\Omega}\] \[\qquad\qquad=(f(\cdot,u^{\rm trans}),v)_{\Omega}+(\mathbf{\nu}\cdot \nabla u^{\rm inc},v)_{\partial\Omega},\]
leading to the weak formulation
\[(\nabla u_{0},\nabla v)_{\Omega}+(\nabla u_{0},\nabla v)_{\Omega^{+}}-\kappa^{ 2}(c(\cdot,u_{0})u_{0},v)_{\mathbb{R}^{d}}=(f(\cdot,u_{0}),v)_{\mathbb{R}^{d}}+ (\mathbf{\nu}\cdot\nabla u^{\rm inc},v)_{\partial\Omega}\quad\text{for all }v\in V_{ \mathbb{R}^{d}}^{\circ}\]
with respect to the structure
\[u_{0}:=\begin{cases}u^{\rm rad}&\text{in }\Omega^{c},\\ u^{\rm trans}&\text{in }\Omega,\end{cases}\]
where \(u^{\rm rad}\in H^{1}_{\rm loc}(\Omega^{+})\), \(u^{\rm trans}\in H^{1}(\Omega)\).
The advantage of this formulation is that it clearly separates the unknown and the known parts of the fields, so we call this formulation the _input-output formulation_. The disadvantage is that the natural function space of the solution \(u_{0}\) is not a linear space due to the last but one transmission condition in (12).
Instead of the problem (1)-(4) we want to solve an equivalent problem in the bounded domain \(B_{R}\), that is, we define
\[V:=\{v\in L_{2}(B_{R}):\;v|_{\Omega}\in H^{1}(\Omega),\;v|_{B_{R}\setminus \overline{\Omega}}\in H^{1}(B_{R}\setminus\overline{\Omega}):\;\gamma v|_{ \Omega}=\gamma v|_{B_{R}\setminus\overline{\Omega}}\text{ on }\partial\Omega\}\]
and look for an element \(u\in V\) such that
\[-\Delta u^{\rm trans}-\kappa^{2}c(\cdot,u^{\rm trans})\,u^{\rm trans} =f(\cdot,u^{\rm trans}) \text{in }\Omega,\] \[-\Delta u-\kappa^{2}u =0 \text{in }B_{R}\setminus\overline{\Omega},\] \[u^{\rm trans} =u^{\rm rad}+u^{\rm inc} \text{on }\partial\Omega, \tag{15}\] \[\mathbf{\nu}\cdot\nabla u^{\rm trans} =\mathbf{\nu}\cdot\nabla u^{\rm rad}+\mathbf{\nu} \cdot\nabla u^{\rm inc} \text{on }\partial\Omega,\] \[\hat{\mathbf{x}}\cdot\nabla u^{\rm rad} =T_{\kappa}u^{\rm rad} \text{on }S_{R}\]
formally holds. Now the weak formulation of problem (15) reads as follows:
Find \(u\in V\) such that
\[(\nabla u,\nabla v)_{\Omega}+(\nabla u,\nabla v)_{B_{R}\setminus \overline{\Omega}}-\kappa^{2}(c(\cdot,u)u,v)_{B_{R}}-(T_{\kappa}u,v)_{S_{R}} \tag{16}\] \[=(f(\cdot,u),v)_{B_{R}}-(T_{\kappa}u^{\rm inc},v)_{S_{R}}+(\hat{ \mathbf{x}}\cdot\nabla u^{\rm inc},v)_{S_{R}}\]
for all \(v\in V\) holds.
**Lemma 7**.: _The weak formulations (14) and (16) of the problems (1)-(4) and (15), resp., are equivalent._
Proof.: First let \(u\in V_{\mathbb{R}^{d}}\) be a weak solution to (1)-(4), i.e. it satisfies (14). Then its restriction to \(B_{R}\) belongs to \(V\).
To demonstrate that this restriction satisfies the weak formulation (16), we construct the radiating solution \(u_{B_{R^{\prime}}^{\rm c}}\) of the homogeneous Helmholtz equation outside of a smaller ball \(B_{R^{\prime}}\) such that \(\overline{\Omega}\subset B_{R^{\prime}}\subset B_{R}\) and \(\left.u_{B_{R^{\prime}}^{\rm c}}\right|_{S_{R^{\prime}}}=(u-u^{\rm inc})|_{S_{ R^{\prime}}}\). This solution can be constructed
in the form of a series expansion in terms of Hankel functions as explained in the previous section. By elliptic regularity (see, e.g., [13, Thm. 4.16], [14, Sect. 6.3.1]), the solution of this problem satisfies the Helmholtz equation in \(B_{R^{\prime}}^{\mathrm{c}}\). Moreover, by uniqueness [12, Thm. 2.6.5], it coincides with \(u-u^{\mathrm{inc}}=u^{\mathrm{rad}}\) in \(B_{R^{\prime}}^{\mathrm{c}}\).
Now we choose a finite partition of unity covering \(\overline{B}_{R}\), denoted by \(\{\varphi_{j}\}_{J}\)[15, Sect. 1.2], such that its index set \(J\) can be decomposed into two disjoint subsets \(J_{1},J_{2}\) as follows:
\[\overline{B}_{R^{\prime}}\subset\mathrm{int}\,\Big{(}\bigcup_{j\in J_{1}} \mathrm{supp}\,\varphi_{j}\Big{)},\quad\bigcup_{j\in J_{1}}\mathrm{supp}\, \varphi_{j}\subset B_{R},\quad\bigcup_{j\in J_{2}}\mathrm{supp}\,\varphi_{j} \subset B_{R^{\prime}}^{\mathrm{c}}.\]
For example, we can choose \(\{\varphi_{j}\}_{J_{1}}\) to consist of one element, say \(\varphi_{1}\), namely the usual mollifier function with support \(B^{\prime}\), where the open ball \(B^{\prime}\) (centered at the origin) lies between \(B_{R^{\prime}}\) and \(B_{R}\), i.e. \(\overline{B}_{R^{\prime}}\subset B^{\prime}=\mathrm{int}\,(\mathrm{supp}\, \varphi_{1})\), \(\mathrm{supp}\,\varphi_{1}\subset B_{R}\). Then the second part consists of a finite open covering of the spherical shell \(\overline{B}_{R}\setminus B^{\prime}\).
Then we take, for any \(v\in V\), the product \(v_{1}:=v\sum_{j\in J_{1}}\varphi_{j}\). This is an element of \(V\), too, with support in \(B_{R}\), and it can be continued by zero to an element of \(V_{\mathbb{R}^{d}}^{\circ}\) (keeping the notation). Hence we can take it as a test function in the weak formulation (14) and obtain
\[a_{\mathbb{R}^{d}}(u,v_{1})=(f(\cdot,u),v_{1})_{\mathbb{R}^{d}}.\]
This is equal to
\[(\nabla u,\nabla v_{1})_{\Omega}+(\nabla u,\nabla v_{1})_{B_{R} \setminus\overline{\Omega}}-\kappa^{2}(c(\cdot,u)u,v_{1})_{B_{R}}-(T_{\kappa }u,v_{1})_{S_{R}}\] \[=(f(\cdot,u),v_{1})_{B_{R}}-(T_{\kappa}u^{\mathrm{inc}},v_{1})_{ S_{R}}+(\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc}},v_{1})_{S_{R}}\]
due to the properties of the support of \(v_{1}\) (in particular, all terms "living" on \(S_{R}\) are equal to zero).
Since the homogeneous Helmholtz equation is satisfied in \(\bigcup_{j\in J_{2}}\mathrm{supp}\,\varphi_{j}\subset B_{R^{\prime}}^{\mathrm{ c}}\), we can proceed as follows. We continue the test function \(v_{2}:=v\sum_{j\in J_{2}}\varphi_{j}\) by zero into the complete ball \(B_{R}\) and have
\[(f(\cdot,u),v_{2})_{B_{R}\setminus\overline{\Omega}} =0=(-\Delta u-\kappa^{2}u,v_{2})_{B_{R}\setminus\overline{B}_{R^{ \prime}}}\] \[=(\nabla u,\nabla v_{2})_{B_{R}\setminus\overline{B}_{R^{\prime} }}-\kappa^{2}(u,v_{2})_{B_{R}\setminus\overline{B}_{R^{\prime}}}-(\mathbf{\nu} \cdot\nabla u,v_{2})_{\partial(B_{R}\setminus\overline{B}_{R^{\prime}})}\] \[=(\nabla u,\nabla v_{2})_{B_{R}\setminus\overline{B}_{R^{\prime} }}-\kappa^{2}(u,v_{2})_{B_{R}\setminus\overline{B}_{R^{\prime}}}-(\hat{\mathbf{x}} \cdot\nabla u,v_{2})_{S_{R}}.\]
Now, taking into consideration the properties of the support of \(v_{2}\), we easily obtain the following relations:
\[(\nabla u,\nabla v_{2})_{B_{R}\setminus\overline{B}_{R^{\prime} }} =(\nabla u,\nabla v_{2})_{\Omega}+(\nabla u,\nabla v_{2})_{B_{R} \setminus\overline{\Omega}},\] \[(u,v_{2})_{B_{R}\setminus\overline{B}_{R^{\prime}}} =(c(\cdot,u)u,v_{2})_{B_{R}},\] \[(\hat{\mathbf{x}}\cdot\nabla u,v_{2})_{S_{R}} =(\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{rad}},v_{2})_{S_{R}}+(\hat{ \mathbf{x}}\cdot\nabla u^{\mathrm{inc}},v_{2})_{S_{R}}\] \[=(T_{\kappa}u^{\mathrm{rad}},v_{2})_{S_{R}}+(\hat{\mathbf{x}}\cdot \nabla u^{\mathrm{inc}},v_{2})_{S_{R}}\] \[=(T_{\kappa}u,v_{2})_{S_{R}}-(T_{\kappa}u^{\mathrm{inc}},v_{2})_{S_ {R}}+(\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc}},v_{2})_{S_{R}},\]
where the treatment of the last term makes use of the construction of the Dirichlet-to-Neumann map \(T_{\kappa}\).
Adding both relations and observing that \(v=v_{1}+v_{2}\), we arrive at the variational formulation (16).
Conversely, let \(u\in V\) be a solution to (16). To continue it into \(B_{R}^{c}\), similar to the first part of the proof we construct the radiating solution \(u_{B_{R}^{c}}\) of the Helmholtz equation outside \(B_{R}\) such that \(u_{B_{R}^{c}}\big{|}_{S_{R}}=(u-u^{\mathrm{inc}})|_{S_{R}}\) and set \(u:=u_{B_{R}^{c}}+u^{\mathrm{inc}}\) in \(B_{R}^{+}\). Hence we have that \(T_{\kappa}u=\hat{\mathbf{x}}\cdot\nabla u_{B_{R}^{c}}\big{|}_{S_{R}}+T_{\kappa}u^{\mathrm{inc}}\).
Now we take an element \(v\in V_{\mathbb{R}^{d}}^{\circ}\). Its restriction to \(B_{R}\) is an element of \(V\) and thus can be taken as a test function in (16):
\[\begin{split}&(\nabla u,\nabla v)_{\Omega}+(\nabla u,\nabla v)_{B _{R}\setminus\overline{\Omega}}-\kappa^{2}(c(\cdot,u)u,v)_{B_{R}}-(T_{\kappa }u,v)_{S_{R}}\\ &=(f(\cdot,u),v)_{B_{R}}-(T_{\kappa}u^{\mathrm{inc}},v)_{S_{R}}+ (\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc}},v)_{S_{R}}.\end{split} \tag{17}\]
Since \(v\) has a compact support, we can choose a ball \(B\subset\mathbb{R}^{d}\) centered at the origin such that \(\overline{B}_{R}\cup\operatorname{supp}v\subset B\). The homogeneous Helmholtz equation is obviously satisfied in the spherical shell \(B\setminus\overline{B}_{R}\):
\[-\Delta u_{B_{R}^{c}}-\kappa^{2}u_{B_{R}^{c}}=0.\]
We multiply this equation by the complex conjugate of the test function \(v\in V\), then integrate over the shell, and apply the first Green's formula:
\[(\nabla u_{B_{R}^{c}},\nabla v)_{B\setminus\overline{B}_{R}}-\kappa^{2}(u_{B_ {R}^{c}},v)_{B\setminus\overline{B}_{R}}-(\mathbf{\nu}\cdot\nabla u_{B_{R}^{c}},v )_{\partial(B\setminus\overline{B}_{R})}=0.\]
Now we observe that
\[(\nabla u_{B_{R}^{c}},\nabla v)_{B\setminus\overline{B}_{R}} =(\nabla u_{B_{R}^{c}},\nabla v)_{B_{R}^{+}},\] \[(u_{B_{R}^{c}},v)_{B\setminus\overline{B}_{R}} =(u_{B_{R}^{c}},v)_{B_{R}^{+}},\] \[(\mathbf{\nu}\cdot\nabla u_{B_{R}^{c}},v)_{\partial(B\setminus \overline{B}_{R})} =-(\hat{\mathbf{x}}\cdot\nabla u_{B_{R}^{c}},v)_{S_{R}}=-(T_{\kappa }u-T_{\kappa}u^{\mathrm{inc}},v)_{S_{R}}\]
where the minus sign in the last line results from the change in the orientation of the outer normal (once w.r.t. the shell, once w.r.t. \(B_{R}\)) and the construction of \(u_{B_{R}^{c}}\). So we arrive at
\[(\nabla u_{B_{R}^{c}},\nabla v)_{B_{R}^{+}}-\kappa^{2}(u_{B_{R}^{c}},v)_{B_{R }^{+}}+(T_{\kappa}u,v)_{S_{R}}=(T_{\kappa}u^{\mathrm{inc}},v)_{S_{R}}.\]
Finally, since the incident field satisfies the homogeneous Helmholtz equation in the spherical shell, too, we see by an analogous argument that the variational equation
\[(\nabla u^{\mathrm{inc}},\nabla v)_{B_{R}^{+}}-\kappa^{2}(u^{\mathrm{inc}},v )_{B_{R}^{+}}=-(\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc}},v)_{S_{R}} \tag{18}\]
holds.
Adding the variational equations (17) - (18), we arrive at the variational formulation (14).
## 5 Existence and uniqueness of a weak solution
In this section we investigate the existence and uniqueness of the weak solution of the interior problem (15). We define the sesquilinear form
\[a(w,v):=(\nabla w,\nabla v)_{\Omega}+(\nabla w,\nabla v)_{B_{R}\setminus \overline{\Omega}}-\kappa^{2}(w,v)_{B_{R}}-(T_{\kappa}w,v)_{S_{R}}\quad \text{for all }w,v\in V, \tag{19}\]
the nonlinear form
\[\begin{split} n(w,v)&:=\kappa^{2}((c(\cdot,w)-1)w,v)_{ B_{R}}+(f(\cdot,w),v)_{B_{R}}\\ &\qquad-(T_{\kappa}u^{\rm inc},v)_{S_{R}}+(\hat{\mathbf{x}}\cdot\nabla u ^{\rm inc},v)_{S_{R}}\end{split} \tag{20}\]
and reformulate (16) as follows: Find \(u\in V\) such that
\[a(u,v)=n(u,v)\quad\text{for all }v\in V. \tag{21}\]
On the space \(V\), we use the standard seminorm and norm:
\[|v|_{V}:=\left(\|\nabla v\|_{0,2,\Omega}^{2}+\|\nabla v\|_{0,2,B_{R}\backslash \overline{\Omega}}^{2}\right)^{1/2},\quad\|v\|_{V}:=\left(|v|_{V}^{2}+\|v\|_{0,2,B_{R}}^{2}\right)^{1/2}.\]
For \(\kappa>0\), the following so-called _wavenumber dependent norm_ on \(V\) is also common:
\[\|v\|_{V,\kappa}:=\left(|v|_{V}^{2}+\kappa^{2}\|v\|_{0,2,B_{R}}^{2}\right)^{1 /2}.\]
It is not difficult to verify that the standard norm and the wavenumber dependent norm are equivalent on \(V\), i.e. it holds

\[C_{-}\|v\|_{V}\leq\|v\|_{V,\kappa}\leq C_{+}\|v\|_{V}\quad\text{for all }v\in V, \tag{22}\]

where the equivalence constants depend on \(\kappa\) in the following way: \(C_{-}:=\min\{1;\kappa\}\) and \(C_{+}:=\max\{1;\kappa\}\). Indeed, (22) follows by taking square roots in the elementary bound \(\min\{1,\kappa^{2}\}\|v\|_{V}^{2}\leq|v|_{V}^{2}+\kappa^{2}\|v\|_{0,2,B_{R}}^{2}\leq\max\{1,\kappa^{2}\}\|v\|_{V}^{2}\). We now proceed to examine the linear aspects of the problem (21).
**Lemma 8**.: _The sesquilinear form \(a\) is bounded on \(V\)._
Proof.: Applying to each addend in the definition of \(a\) the appropriate Cauchy-Bunyakovsky-Schwarz inequality, we obtain
\[\begin{split}|a(w,v)|&\leq|w|_{V}|v|_{V}+\kappa^{2} \|w\|_{0,2,B_{R}}\|v\|_{0,2,B_{R}}\\ &\quad+\|T_{\kappa}w\|_{-1/2,2,S_{R}}\|v\|_{1/2,2,S_{R}}\quad \text{for all }w,v\in V.\end{split}\]
According to Theorem 2 the DtN operator \(T_{\kappa}\) is bounded, i.e. there exists a constant \(C_{T_{\kappa}}>0\) such that
\[\|T_{\kappa}w\|_{-1/2,2,S_{R}}\leq C_{T_{\kappa}}\|w\|_{1/2,2,S_{R}}\quad \text{for all }w\in V.\]
It remains to apply a trace theorem [10, Thm. 3.37]:
\[\begin{split}|a(w,v)|&\leq|w|_{V}|v|_{V}+\kappa^{2} \|w\|_{0,2,B_{R}}\|v\|_{0,2,B_{R}}+C_{T_{\kappa}}C_{\rm tr}^{2}\|w\|_{1,2,B_{R }\backslash\overline{\Omega}}\|v\|_{1,2,B_{R}\backslash\overline{\Omega}}\\ &\leq|w|_{V}|v|_{V}+\kappa^{2}\|w\|_{0,2,B_{R}}\|v\|_{0,2,B_{R}}+ C_{T_{\kappa}}C_{\rm tr}^{2}\|w\|_{V}\|v\|_{V}\\ &\leq\min\{(\max\{1,\kappa^{2}\}+C_{T_{\kappa}}C_{\rm tr}^{2})\|w \|_{V}\|v\|_{V},(1+C_{T_{\kappa}}C_{\rm tr}^{2})\|w\|_{V,\kappa}\|v\|_{V,\kappa}\} \\ &\qquad\text{for all }w,v\in V.\end{split}\]
**Lemma 9**.: _Given \(\kappa_{0}>0\) and \(R_{0}>0\), assume that \(\kappa\geq\kappa_{0}\) (cf. Rem. 3) and \(R\geq R_{0}\). In addition, \(\kappa_{0}\geq 1\) is required for \(d=2\). Then the sesquilinear form a satisfies a Garding's inequality of the form_
\[\operatorname{Re}a(v,v)\geq\|v\|_{V,\kappa}^{2}-2\kappa^{2}\|v\|_{0,2,B_{R}} ^{2}\quad\text{for all }v\in V.\]
Proof.: From the definitions of \(a\) and the wavenumber dependent norm it follows immediately that
\[\operatorname{Re}a(v,v) =\|v\|_{V,\kappa}^{2}-2\kappa^{2}\|v\|_{0,2,B_{R}}^{2}-\operatorname {Re}\left(T_{\kappa}v,v\right)_{S_{R}}\] \[\geq\|v\|_{V,\kappa}^{2}-2\kappa^{2}\|v\|_{0,2,B_{R}}^{2}+CR^{-1} \|v\|_{0,2,S_{R}}^{2}\] \[\geq\|v\|_{V,\kappa}^{2}-2\kappa^{2}\|v\|_{0,2,B_{R}}^{2},\]
where the first estimate follows from [10, Lemma 3.3] with a constant \(C>0\) depending solely on \(\kappa_{0}>0\) and \(R_{0}>0\).
Next we discuss the solvability and stability of the problem (21) for the case that the right-hand side is just an antilinear continuous functional \(\ell:\ V\to\mathbb{C}\). The linear problem of finding \(u\in V\) such that
\[a(u,v)=\ell(v)\quad\text{for all }v\in V \tag{23}\]
holds can be formulated equivalently as an operator equation in the dual space \(V^{*}\) of \(V\) consisting of all continuous antilinear functionals from \(V\) to \(\mathbb{C}\). Namely, if we define the linear operator \(\mathcal{A}:\ V\to V^{*}\) by
\[\mathcal{A}w(v):=a(w,v)\quad\text{for all }w,v\in V, \tag{24}\]
problem (23) is equivalent to solving the operator equation
\[\mathcal{A}u=\ell \tag{25}\]
for \(u\in V\).
Note that \(\mathcal{A}\) is a bounded operator by Lemma 8.
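Although the subsequent analysis is purely functional-analytic, the operator equation (25) is also the natural starting point for numerical approximation. The following minimal Galerkin sketch (in Python) only illustrates how a finite-dimensional version of (25) would be solved; the callables `a_form` and `ell` are placeholders for an actual finite element assembly and are not specified here.

```python
import numpy as np

def galerkin_solve(a_form, ell, basis):
    """Solve a finite-dimensional Galerkin version of (25).

    a_form(w, v) evaluates the sesquilinear form a, ell(v) the functional
    ell; `basis` is a list of basis functions of a subspace of V.  All
    three are assumed to be supplied by a discretization.
    """
    n = len(basis)
    A = np.array([[a_form(basis[j], basis[i]) for j in range(n)]
                  for i in range(n)], dtype=complex)  # A_ij = a(phi_j, phi_i)
    b = np.array([ell(basis[i]) for i in range(n)], dtype=complex)
    return np.linalg.solve(A, b)  # coefficients of u_h = sum_j x_j phi_j
```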
**Theorem 10**.: _Under the assumptions of Lemma 9, the problem (25) is uniquely solvable for any \(\ell\in V^{*}\)._
Proof.: The basic ideas of the proof are taken from the proof of [10, Thm 3.8]. Since the embedding of \(V\) into \(L_{2}(B_{R})\) is compact by the compactness theorem of Rellich-Kondrachov [11, Thm. 3.27] together with Tikhonov's product theorem [10, Thm. 4.1], the compact perturbation theorem [10, Thm. 2.34] together with Lemma 9 imply that the Fredholm alternative [10, Thm. 2.27] holds for the equation (25).
Hence it is sufficient to demonstrate that the homogeneous adjoint problem (cf. [10, p. 43]) of finding \(u\in V\) such that \(\overline{a(v,u)}=0\) holds for all \(v\in V\) only allows for the trivial solution.
So suppose \(u\in V\) is a solution of the homogeneous adjoint problem. We take \(v:=u\) and consider the imaginary part of the resulting equation:
\[0=\operatorname{Im}\overline{a(u,u)}=-\operatorname{Im}\overline{(T_{\kappa} u,u)_{S_{R}}}=\operatorname{Im}\left(T_{\kappa}u,u\right)_{S_{R}}.\]
Then [10, Lemma 3.3] implies \(u=0\) on \(S_{R}\). Then \(u\) satisfies the variational equation
\[(\nabla u,\nabla v)_{\Omega}+(\nabla u,\nabla v)_{B_{R}\setminus\overline{ \Omega}}-\kappa^{2}(u,v)_{B_{R}}=0\quad\text{for all }v\in V,\]
i.e. it is a weak solution of the homogeneous interior transmission Neumann problem for the Helmholtz equation on \(B_{R}\). On the other hand, \(u\) can be extended to the whole space \(\mathbb{R}^{d}\) by zero to an element \(\tilde{u}\in V_{\mathbb{R}^{d}}\), and this element can be interpreted as a weak solution of a homogeneous full-space transmission problem, for instance in the sense of [10, Problem (P)]. Then it follows from [10, Lemma 7.1] that \(\tilde{u}=0\) and thus \(u=0\).
Since a Fredholm operator has a closed image [13, p. 33], it follows from the Open Mapping Theorem and Theorem 10 (cf. [13, Cor. 2.2]) that the inverse operator \(\mathcal{A}^{-1}\) is bounded, i.e. there exists a constant \(C(R,\kappa)>0\) such that
\[\|u\|_{V,\kappa}=\|\mathcal{A}^{-1}\ell\|_{V,\kappa}\leq C(R,\kappa)\|\ell\|_{V ^{*}}\quad\text{for all }\ell\in V^{*}.\]
Then it holds
\[\frac{1}{C(R,\kappa)}\leq\frac{\|\ell\|_{V^{*}}}{\|u\|_{V,\kappa}}=\sup_{v\in V \setminus\{0\}}\frac{|\ell(v)|}{\|u\|_{V,\kappa}\|v\|_{V,\kappa}}=\sup_{v\in V \setminus\{0\}}\frac{|a(u,v)|}{\|u\|_{V,\kappa}\|v\|_{V,\kappa}}\,.\]
This estimate proves the following result.
**Lemma 11**.: _Under the assumptions of Lemma 9, the sesquilinear form \(a\) satisfies an \(\inf\)-\(\sup\) condition:_
\[\beta(R,\kappa):=\inf_{w\in V\setminus\{0\}}\sup_{v\in V\setminus\{0\}}\frac {|a(w,v)|}{\|w\|_{V,\kappa}\|v\|_{V,\kappa}}>0.\]
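On a finite-dimensional subspace, the inf-sup constant has a well-known algebraic characterization: if \(A\) denotes a Galerkin matrix of \(a\) and \(M\) the Hermitian positive definite Gram matrix of the \(\|\cdot\|_{V,\kappa}\) inner product with Cholesky factorization \(M=LL^{H}\), then the discrete inf-sup constant is the smallest singular value of \(L^{-1}AL^{-H}\). A minimal sketch, assuming such matrices are supplied by a discretization:

```python
from scipy.linalg import cholesky, solve_triangular, svdvals

def discrete_inf_sup(A, M):
    """Smallest singular value of L^{-1} A L^{-H}, where M = L L^H."""
    L = cholesky(M, lower=True)
    B = solve_triangular(L, A, lower=True)                    # L^{-1} A
    B = solve_triangular(L, B.conj().T, lower=True).conj().T  # (L^{-1} A) L^{-H}
    return svdvals(B)[-1]  # svdvals returns singular values in decreasing order
```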
Now we turn to the nonlinear situation and concretize the assumptions regarding the Caratheodory functions \(c\) and \(f\).
**Lemma 12**.: _Let \(p_{f}\in\begin{cases}[2,\infty),&d=2,\\ [2,6],&d=3,\end{cases}\) and assume there exist nonnegative functions \(m_{f},g_{f}\in L_{\infty}(\Omega)\) such that_
\[|f(\mathbf{x},\xi)|\leq m_{f}(\mathbf{x})|\xi|^{p_{f}-1}+g_{f}(\mathbf{x})\quad\text{for all }(\mathbf{x},\xi)\in\Omega\times\mathbb{C}.\]
_Then \(vf(\cdot,w)\in L_{1}(\Omega)\) for all \(w,v\in V\)._
Proof.: Since \(f\) is a Caratheodory function, the composition \(f(\cdot,w)\) is measurable and it suffices to estimate the integral of \(|vf(\cdot,w)|\). Moreover, it suffices to consider the term \(m_{f}v|w|^{p_{f}-1}\) in more detail. By Holder's inequality for three functions, it holds that
\[\|vf(\cdot,w)\|_{0,1,\Omega}\leq\|m_{f}\|_{0,\infty,\Omega}\|v\|_{0,p_{f}, \Omega}\|w^{p_{f}-1}\|_{0,q,\Omega}\quad\text{with }\frac{1}{p_{f}}+\frac{1}{q}=1.\]
The \(L_{p_{f}}\)-norm of \(v\) is bounded thanks to the embedding \(V|_{\Omega}\subset L_{p_{f}}(\Omega)\) for the allowed values of \(p_{f}\)[1, Thm. 4.12]. Since \(|w^{p_{f}-1}|^{q}=|w|^{p_{f}}\), the \(L_{q}\)-norm of \(w^{p_{f}-1}\) is bounded by the same reasoning.
**Lemma 13**.: _Let \(p_{c}\in\begin{cases}[2,\infty),&d=2,\\ [2,6],&d=3,\end{cases}\) and assume there exist nonnegative functions \(m_{c},g_{c}\in L_{\infty}(\Omega)\) such that_
\[|c(\mathbf{x},\xi)-1|\leq m_{c}(\mathbf{x})|\xi|^{p_{c}-2}+g_{c}(\mathbf{x})\quad\text{for all }(\mathbf{x},\xi)\in\Omega\times\mathbb{C}.\]
_Then \(zv(c(\cdot,w)-1)\in L_{1}(\Omega)\) for all \(z,w,v\in V\)._
Proof.: Similar to the proof of Lemma 12 it is sufficient to consider the term \(m_{c}zv|w|^{p_{c}-2}\) in more detail. By Holder's inequality for four functions, it holds that
\[\|zv(c(\cdot,w)-1)\|_{0,1,\Omega}\leq\|m_{c}\|_{0,\infty,\Omega}\|z\|_{0,p_{c}, \Omega}\|v\|_{0,p_{c},\Omega}\|w^{p_{c}-2}\|_{0,q,\Omega}\quad\text{with }\frac{2}{p_{c}}+\frac{1}{q}=1.\]
The \(L_{p_{c}}\)-norms of \(z,v\) are bounded thanks to the embedding theorem [1, Thm. 4.12]. Since \(|w^{p_{c}-2}|^{q}=|w|^{p_{c}}\), the \(L_{q}\)-norm of \(w^{p_{c}-2}\) is bounded by the same reasoning.
**Corollary 14**.: _Under the assumptions of Lemma 12 and Lemma 13, resp., the following estimates hold for all \(z,w,v\in V\):_
\[|(f(\cdot,w),v)_{\Omega}| \leq C_{\mathrm{emb}}^{p_{f}}\|m_{f}\|_{0,\infty,\Omega}\|w\|_{1, 2,\Omega}^{p_{f}-1}\|v\|_{1,2,\Omega}\] \[\quad+\sqrt{|\Omega|_{d}}\,\|g_{f}\|_{0,\infty,\Omega}\|v\|_{0, 2,\Omega},\] \[|((c(\cdot,w)-1)z,v)_{\Omega}| \leq C_{\mathrm{emb}}^{p_{c}}\|m_{c}\|_{0,\infty,\Omega}\|w\|_{1, 2,\Omega}^{p_{c}-2}\|z\|_{1,2,\Omega}\|v\|_{1,2,\Omega}\] \[\quad+\|g_{c}\|_{0,\infty,\Omega}\|z\|_{0,2,\Omega}\|v\|_{0,2, \Omega},\]
_where \(|\Omega|_{d}\) is the \(d\)-volume of \(\Omega\)._
Proof.: Replace \(v\) by \(\overline{v}\) in Lemmata 12, 13 to get the first addend of the bounds. The estimate of the second addend is trivial.
**Example 15**.: _An important example for the nonlinearities is_
\[c(\mathbf{x},\xi):=\begin{cases}1,&(\mathbf{x},\xi)\in\Omega^{+}\times\mathbb{C},\\ \varepsilon^{(L)}(\mathbf{x})+\alpha(\mathbf{x})|\xi|^{2},&(\mathbf{x},\xi)\in\Omega\times \mathbb{C},\end{cases}\]
_with given \(\varepsilon^{(L)},\alpha\in L_{\infty}(\Omega)\), and \(f=0\). Here \(p_{c}=4\), which is within the range of validity of Lemma 13, and \(m_{c}=|\alpha|\), \(g_{c}=|\varepsilon^{(L)}-1|\)._
The estimates from Corollary 14 show that the first two terms on the right-hand side of the variational equation (21) can be considered as values of nonlinear mappings from \(V\) to \(V^{*}\), i.e. we can define
\[\ell^{\mathrm{contr}}:\;V\to V^{*}\quad\text{ by } \langle\ell^{\mathrm{contr}}(w),v\rangle:=\kappa^{2}((c(\cdot,w)-1)w,v)_{\Omega},\] \[\ell^{\mathrm{src}}:\;V\to V^{*}\quad\text{ by } \langle\ell^{\mathrm{src}}(w),v\rangle:=(f(\cdot,w),v)_{\Omega} \quad\text{ for all }w,v\in V.\]
Furthermore, if \(u^{\mathrm{inc}}\in H^{1}_{\mathrm{loc}}(\Omega^{+})\) is such that additionally \(\Delta u^{\mathrm{inc}}\) belongs to \(L_{2,\mathrm{loc}}(\Omega^{+})\) (where \(\Delta u^{\mathrm{inc}}\) is understood in the distributional sense), the last two terms on the right-hand side of (20) form an antilinear continuous functional \(\ell^{\mathrm{inc}}\in V^{*}\):
\[\langle\ell^{\mathrm{inc}},v\rangle:=(\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc} }-T_{\kappa}u^{\mathrm{inc}},v)_{S_{R}}\quad\quad\text{for all }v\in V.\]
This is a consequence of Theorem 2 and the estimates before the trace theorem [1, Thm. 6.13]. Hence
\[\|\ell^{\mathrm{inc}}\|_{V^{*}}\leq\tilde{C}_{\mathrm{tr}}[\|\Delta u^{ \mathrm{inc}}\|_{0,2,B_{R}\setminus\overline{\Omega}}+\|u^{\mathrm{inc}}\|_{0,2,B_{R}\setminus\overline{\Omega}}]+C_{T_{\kappa}}C_{\mathrm{tr}}^{2}\|u^{ \mathrm{inc}}\|_{1,2,B_{R}\setminus\overline{\Omega}},\]
where \(\tilde{C}_{\mathrm{tr}}\) is the norm of the trace operator defined in [1, eq. (6.39)].
However, it is more intuitive to utilize the estimate
\[\|\ell^{\text{inc}}\|_{V^{*}}\leq C_{\text{tr}}\|\hat{\mathbf{x}}\cdot\nabla u^{ \text{inc}}-T_{\kappa}u^{\text{inc}}\|_{-1/2,2,S_{R}}.\]
The reason for this is that the bound can be interpreted as a measure of the deviation of the function \(u^{\text{inc}}\) from a radiating solution of the corresponding Helmholtz equation. In other words: if the function \(u^{\text{inc}}\) satisfies the boundary value problem (5) with \(f_{S_{R}}:=u^{\text{inc}}|_{S_{R}}\), then the functional \(\ell^{\text{inc}}\) vanishes.
Consequently, setting
\[\mathcal{F}(w):=\ell^{\text{contr}}(w)+\ell^{\text{src}}(w)+\ell^{\text{inc}} \qquad\text{for all }w\in V,\]
we obtain a nonlinear operator \(\mathcal{F}:\;V\to V^{*}\), and the problem (21) is then equivalent to the operator equation
\[\mathcal{A}u=\mathcal{F}(u)\quad\text{in }V^{*},\]
and further, by Lemma 11, equivalent to the fixed-point problem
\[u=\mathcal{A}^{-1}\mathcal{F}(u)\quad\text{in }V. \tag{26}\]
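The fixed-point form (26) directly suggests the Banach iteration \(u^{(k+1)}:=\mathcal{A}^{-1}\mathcal{F}(u^{(k)})\), which converges under the contraction assumptions established in the next theorem. A minimal sketch, assuming discrete realizations `apply_Ainv` of \(\mathcal{A}^{-1}\) and `F` of \(\mathcal{F}\) are available:

```python
import numpy as np

def solve_fixed_point(apply_Ainv, F, u0, tol=1e-10, max_iter=200):
    """Banach iteration for (26); apply_Ainv, F and u0 are assumed inputs."""
    u = u0
    for _ in range(max_iter):
        u_new = apply_Ainv(F(u))  # one step u_{k+1} = A^{-1} F(u_k)
        if np.linalg.norm(u_new - u) <= tol * max(1.0, np.linalg.norm(u_new)):
            return u_new
        u = u_new
    raise RuntimeError("fixed-point iteration did not converge")
```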
In order to prove the subsequent existence and uniqueness theorem, we specify some additional properties of the nonlinearities \(c\) and \(f\).
**Definition 16**.: _The functions \(c\) and \(f\) are said to generate locally Lipschitz continuous Nemycki operators in \(V\) if the following holds: For some parameters \(p_{c},p_{f}\in\begin{cases}[2,\infty),\quad d=2,\\ [2,6],\quad\quad d=3,\end{cases}\) there exist Caratheodory functions \(L_{c}:\;\Omega\times\mathbb{C}\times\mathbb{C}\to(0,\infty)\) and \(L_{f}:\;\Omega\times\mathbb{C}\times\mathbb{C}\to(0,\infty)\) such that the composition operators \(V\times V\to L_{q_{c}}(\Omega):\;(w,v)\mapsto L_{c}(\cdot,w,v)\), \(V\times V\to L_{q_{f}}(\Omega):\;(w,v)\mapsto L_{f}(\cdot,w,v)\) are bounded for \(q_{c},q_{f}>0\) with \(\frac{3}{p_{c}}+\frac{1}{q_{c}}=\frac{2}{p_{f}}+\frac{1}{q_{f}}=1\), and_
\[|c(\mathbf{x},\xi)-c(\mathbf{x},\eta)|\leq L_{c}(\mathbf{x},\xi,\eta)|\xi-\eta|,\quad|f( \mathbf{x},\xi)-f(\mathbf{x},\eta)|\leq L_{f}(\mathbf{x},\xi,\eta)|\xi-\eta|\]
_for all \((\mathbf{x},\xi,\eta)\in\Omega\times\mathbb{C}\times\mathbb{C}\)._
**Remark 17**.: _If the nonlinearities \(c\) and \(f\) generate locally Lipschitz continuous Nemycki operators in the sense of the above Definition 16, the assumptions of Lemmata 12, 13 can be replaced by the requirement that there exist functions \(w_{f},w_{c}\in V\) such that \(f(\cdot,w_{f})\in L_{p_{f}/(p_{f}-1)}(\Omega)\) and \(c(\cdot,w_{c})\in L_{p_{c}/(p_{c}-2)}(\Omega)\), respectively._
Proof.: Indeed, similar to the proofs of the two lemmata mentioned, we have that
\[\|vf(\cdot,w)\|_{0,1,\Omega} \leq\|vf(\cdot,w_{f})\|_{0,1,\Omega}+\|v(f(\cdot,w)-f(\cdot,w_{f} ))\|_{0,1,\Omega}\] \[\leq\|vf(\cdot,w_{f})\|_{0,1,\Omega}+\|vL_{f}(\cdot,w,w_{f})|w-w _{f}|\|_{0,1,\Omega}\] \[\leq\|v\|_{0,p_{f},\Omega}\|f(\cdot,w_{f})\|_{0,\bar{q}_{f}, \Omega}+\|v\|_{0,p_{f},\Omega}\|L_{f}(\cdot,w,w_{f})\|_{0,q_{f},\Omega}\|w-w _{f}\|_{0,p_{f},\Omega}\] \[\leq\left[\|f(\cdot,w_{f})\|_{0,\bar{q}_{f},\Omega}+\|L_{f}(\cdot,w,w_{f})\|_{0,q_{f},\Omega}(\|w\|_{V}+\|w_{f}\|_{V})\right]\|v\|_{V},\] \[\|zvc(\cdot,w)\|_{0,1,\Omega} \leq\|zvc(\cdot,w_{c})\|_{0,1,\Omega}+\|zv(c(\cdot,w)-c(\cdot,w_{c }))\|_{0,1,\Omega}\] \[\leq\|zvc(\cdot,w_{c})\|_{0,1,\Omega}+\|zvL_{c}(\cdot,w,w_{c})|w- w_{c}|\|_{0,1,\Omega}\] \[\leq\|z\|_{0,p_{c},\Omega}\|v\|_{0,p_{c},\Omega}\|c(\cdot,w_{c}) \|_{0,\bar{q}_{c},\Omega}\] \[\quad+\|z\|_{0,p_{c},\Omega}\|v\|_{0,p_{c},\Omega}\|L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega}\|w-w_{c}\|_{0,p_{c},\Omega}\] \[\leq\left[\|c(\cdot,w_{c})\|_{0,\bar{q}_{c},\Omega}+\|L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega}(\|w\|_{V}+\|w_{c}\|_{V})\right]\|z\|_{V}\|v\|_{V}\]
with \(\frac{1}{p_{f}}+\frac{1}{\bar{q}_{f}}=1\) and \(\frac{2}{p_{c}}+\frac{1}{\bar{q}_{c}}=1\).
**Theorem 18**.: _Under the assumptions of Lemma 9, let the functions \(c\) and \(f\) generate locally Lipschitz continuous Nemycki operators in \(V\) and assume that there exist functions \(w_{f},w_{c}\in V\) such that \(f(\cdot,w_{f})\in L_{p_{f}/(p_{f}-1)}(\Omega)\) and \(c(\cdot,w_{c})\in L_{p_{c}/(p_{c}-2)}(\Omega)\), respectively. Furthermore let \(u^{\mathrm{inc}}\in H^{1}_{\mathrm{loc}}(\Omega^{+})\) be such that additionally \(\Delta u^{\mathrm{inc}}\in L_{2,\mathrm{loc}}(\Omega^{+})\) holds. If there exist numbers \(\varrho>0\) and \(L_{\mathcal{F}}\in(0,\beta(R,\kappa))\) such that the following two conditions_
\[\kappa^{2} \left[\|c(\cdot,w_{c})-1\|_{0,\bar{q}_{c},\Omega}+\|L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega}(\varrho+\|w_{c}\|_{V})\right]\varrho\] \[+\left[\|f(\cdot,w_{f})\|_{0,\bar{q}_{f},\Omega}+\|L_{f}(\cdot,w,w_{f})\|_{0,q_{f},\Omega}(\varrho+\|w_{f}\|_{V})\right] \tag{27}\] \[+C_{\mathrm{tr}}\|\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc}}-T_{ \kappa}u^{\mathrm{inc}}\|_{-1/2,2,S_{R}}\leq\varrho\beta(R,\kappa),\] \[\kappa^{2} \left[\|L_{c}(\cdot,w,v)\|_{0,q_{c},\Omega}\varrho+\|c(\cdot,w_{ c})-1\|_{0,\bar{q}_{c},\Omega}+\|L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega}( \varrho+\|w_{c}\|_{V})\right]\] \[+\|L_{f}(\cdot,w,v)\|_{0,q_{f},\Omega}\leq L_{\mathcal{F}} \tag{28}\]
_are satisfied for all \(w,v\in K^{\mathrm{cl}}_{\varrho}:=\{v\in V:\ \|v\|_{V}\leq\varrho\}\), then the problem (26) has a unique solution \(u\in K^{\mathrm{cl}}_{\varrho}\)._
Proof.: First we mention that \(K^{\mathrm{cl}}_{\varrho}\) is a closed nonempty subset of \(V\).
Next we show that \(\mathcal{A}^{-1}\mathcal{F}(K^{\mathrm{cl}}_{\varrho})\subset K^{\mathrm{cl}}_ {\varrho}\). To this end we make use of the estimates given in the proof of Remark 17 and obtain
\[\|\mathcal{F}(w)\|_{V^{*}} \leq\|\ell^{\mathrm{contr}}(w)\|_{V^{*}}+\|\ell^{\mathrm{src}}(w )\|_{V^{*}}+\|\ell^{\mathrm{inc}}\|_{V^{*}}\] \[\leq\kappa^{2}\left[\|c(\cdot,w_{c})-1\|_{0,\bar{q}_{c},\Omega}+ \|L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega}(\|w\|_{V}+\|w_{c}\|_{V})\right]\|w\|_ {V}\] \[\quad+\left[\|f(\cdot,w_{f})\|_{0,\bar{q}_{f},\Omega}+\|L_{f}( \cdot,w,w_{f})\|_{0,q_{f},\Omega}(\|w\|_{V}+\|w_{f}\|_{V})\right]+\|\ell^{ \mathrm{inc}}\|_{V^{*}}\] \[\leq\kappa^{2}\left[\|c(\cdot,w_{c})-1\|_{0,\bar{q}_{c},\Omega}+ \|L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega}(\varrho+\|w_{c}\|_{V})\right]\varrho\] \[\quad+\left[\|f(\cdot,w_{f})\|_{0,\bar{q}_{f},\Omega}+\|L_{f}( \cdot,w,w_{f})\|_{0,q_{f},\Omega}(\varrho+\|w_{f}\|_{V})\right]\] \[\quad+C_{\mathrm{tr}}\|\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc}} -T_{\kappa}u^{\mathrm{inc}}\|_{-1/2,2,S_{R}}.\]
Hence the assumption (27) implies \(\|\mathcal{A}^{-1}\mathcal{F}(w)\|_{V}\leq\varrho\).
It remains to show that the mapping \(\mathcal{A}^{-1}\mathcal{F}\) is a contraction.
We start with the consideration of the contrast term. From the elementary decomposition
\[(c(\cdot,w)-1)w-(c(\cdot,v)-1)v=(c(\cdot,w)-c(\cdot,v))w+(c(\cdot,v)-1)(w-v)\]
we see that
\[\|\ell^{\mathrm{contr}}(w)-\ell^{\mathrm{contr}}(v)\|_{V^{*}}\] \[\leq\kappa^{2}\|L_{c}(\cdot,w,v)\|_{0,q_{c},\Omega}\|w-v\|_{V}\|w \|_{V}\] \[\quad+\kappa^{2}\left[\|c(\cdot,w_{c})-1\|_{0,\bar{q}_{c},\Omega} +\|L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega}\|w-w_{c}\|_{V}\right]\|w-v\|_{V}\] \[\leq\kappa^{2}\|L_{c}(\cdot,w,v)\|_{0,q_{c},\Omega}\|w-v\|_{V}\varrho\] \[\quad+\kappa^{2}\left[\|c(\cdot,w_{c})-1\|_{0,\bar{q}_{c},\Omega} +\|L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega}(\varrho+\|w_{c}\|_{V})\right]\|w-v\|_ {V}\] \[\leq\kappa^{2}\left[\|L_{c}(\cdot,w,v)\|_{0,q_{c},\Omega}\varrho+ \|c(\cdot,w_{c})-1\|_{0,\bar{q}_{c},\Omega}+\|L_{c}(\cdot,w,w_{c})\|_{0,q_{c}, \Omega}(\varrho+\|w_{c}\|_{V})\right]\|w-v\|_{V}.\]
The estimate of the source term follows immediately from the properties of \(f\):
\[\|\ell^{\mathrm{src}}(w)-\ell^{\mathrm{src}}(v)\|_{V^{*}}\leq\|L_{f}(\cdot,w, v)\|_{0,q_{f},\Omega}\|w-v\|_{V}.\]
From
\[\|\mathcal{F}(w)-\mathcal{F}(v)\|_{V^{*}}\leq\|\ell^{\mathrm{contr}}(w)-\ell^{ \mathrm{contr}}(v)\|_{V^{*}}+\|\ell^{\mathrm{src}}(w)-\ell^{\mathrm{src}}(v)\|_{V ^{*}}\]
and assumption (28) we thus obtain
\[\|\mathcal{F}(w)-\mathcal{F}(v)\|_{V^{*}}\leq L_{\mathcal{F}}\|w-v\|_{V}.\]
In summary, Banach's fixed point theorem can be applied (see e.g. [14, Sect. 9.2.1]) and we conclude that the problem (26) has a unique solution \(u\in K_{\varrho}^{\mathrm{cl}}\).
If we introduce the function space
\[\tilde{V}:=\{v\in L_{2}(B_{R}):\ v|_{\Omega}\in H^{1}(\Omega),\ v|_{B_{R} \setminus\overline{\Omega}}\in H^{1}(B_{R}\setminus\overline{\Omega})\}\]
equipped with the norm
\[\|v\|_{\tilde{V}}:=\left(\|v\|_{1,2,\Omega}^{2}+\|v\|_{1,2,B_{R}\setminus \overline{\Omega}}^{2}\right)^{1/2}\quad\text{for all }v\in\tilde{V},\]
the ball \(K_{\varrho}^{\mathrm{cl}}\) appearing in the above theorem can be interpreted as a ball in \(\tilde{V}\) of radius \(\varrho\) with center in
\[u_{0}:=\begin{cases}0&\text{in }\Omega,\\ -u^{\mathrm{inc}}&\text{in }B_{R}\setminus\overline{\Omega}.\end{cases}\]
Indeed, for \(u\) of the form (3), it holds that
\[\|u-u_{0}\|_{\tilde{V}}^{2}=\|u^{\mathrm{trans}}\|_{1,2,\Omega}^{2}+\|u^{ \mathrm{rad}}+u^{\mathrm{inc}}\|_{1,2,B_{R}\setminus\overline{\Omega}}^{2}=\| u\|_{V}^{2}.\]
This means that the influence of the incident field \(u^{\mathrm{inc}}\) on the radius \(\varrho\) in Theorem 18 depends only on the deviation of \(u^{\mathrm{inc}}\) from a radiating field measured by \(\|\ell^{\mathrm{inc}}\|_{V^{*}}\), but not directly on the intensity of \(u^{\mathrm{inc}}\). In other words, if the incident field \(u^{\mathrm{inc}}\) is radiating (i.e., it also satisfies the Sommerfeld radiation condition (4) and thus \(\ell^{\mathrm{inc}}=0\)), the radius \(\varrho\) does not depend on \(u^{\mathrm{inc}}\). In particular, \(u^{\mathrm{inc}}\) can be a strong field, which is important for the occurrence of generation effects of higher harmonics [1].
**Example 19** (Example 15 continued).: _The identity_
\[c(\cdot,\xi)-c(\cdot,\eta)=\alpha\left(|\xi|^{2}-|\eta|^{2}\right)=\alpha \left(|\xi|+|\eta|\right)(|\xi|-|\eta|)\]
_for all \(\xi,\eta\in\mathbb{C}\) and the inequality \(||\xi|-|\eta||\leq|\xi-\eta|\) show that_
\[|c(\cdot,\xi)-c(\cdot,\eta)|\leq|\alpha|(|\xi|+|\eta|)|\xi-\eta|\]
_holds, hence we can set \(L_{c}(\cdot,\xi,\eta):=|\alpha|(|\xi|+|\eta|)\). With \(p_{c}=q_{c}=4\), \(c\) generates a locally Lipschitz continuous Nemycki operator in \(V\). Furthermore we may choose \(w_{c}=0\). Then:_
\[\|c(\cdot,w_{c})-1\|_{0,\bar{q}_{c},\Omega} =\|\varepsilon^{(L)}-1\|_{0,2,\Omega},\] \[\|L_{c}(\cdot,w,v)\|_{0,q_{c},\Omega} =\|\alpha(|w|+|v|)\|_{0,4,\Omega}\leq\|\alpha w\|_{0,4,\Omega}+ \|\alpha v\|_{0,4,\Omega}\] \[\leq\|\alpha\|_{0,\infty,\Omega}\left[\|w\|_{0,4,\Omega}+\|v\|_{ 0,4,\Omega}\right]\leq C_{\mathrm{emb}}\|\alpha\|_{0,\infty,\Omega}\left[\|w \|_{V}+\|v\|_{V}\right],\] \[\|L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega} =\|\alpha w\|_{0,4,\Omega}\leq C_{\mathrm{emb}}\|\alpha\|_{0, \infty,\Omega}\|w\|_{V}.\]
_Hence the validity of the following conditions is sufficient for (27), (28):_
\[\kappa^{2}\left[\|\varepsilon^{(L)}-1\|_{0,2,\Omega}+C_{\rm emb}\| \alpha\|_{0,\infty,\Omega}\varrho^{2}\right]\varrho\] \[\qquad+\,C_{\rm tr}\|\hat{\mathbf{x}}\cdot\nabla u^{\rm inc}-T_{\kappa} u^{\rm inc}\|_{-1/2,2,S_{R}} \leq\varrho\beta(R,\kappa),\] \[\kappa^{2}\left[\|\varepsilon^{(L)}-1\|_{0,2,\Omega}+3C_{\rm emb} \|\alpha\|_{0,\infty,\Omega}\varrho^{2}\right] \leq L_{\mathcal{F}}.\]
_A consideration of these conditions shows that there can be different scenarios in which they can be fulfilled. In particular, one of the smallness requirements concerns the product \(\|\alpha\|_{0,\infty,\Omega}\varrho^{3}\)._
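For concrete data, the two sufficient conditions above can simply be checked numerically. A small sketch; all inputs (the inf-sup constant `beta`, the constants `C_emb` and `C_tr`, the norms `eps_norm` of \(\varepsilon^{(L)}-1\) and `alpha_norm` of \(\alpha\), and the incident-field defect `inc_defect`) are assumptions to be supplied from elsewhere:

```python
def kerr_conditions_hold(kappa, rho, L_F, beta, eps_norm, alpha_norm,
                         C_emb, C_tr, inc_defect):
    """Check the two sufficient conditions of Example 19 (all inputs assumed).

    eps_norm   : ||eps^(L) - 1||_{0,2,Omega}
    alpha_norm : ||alpha||_{0,inf,Omega}
    inc_defect : ||x_hat . grad u_inc - T_kappa u_inc||_{-1/2,2,S_R}
    """
    lhs1 = (kappa**2 * (eps_norm + C_emb * alpha_norm * rho**2) * rho
            + C_tr * inc_defect)
    lhs2 = kappa**2 * (eps_norm + 3 * C_emb * alpha_norm * rho**2)
    return lhs1 <= rho * beta and lhs2 <= L_F
```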
**Example 20** (saturated Kerr nonlinearity).: _Another important example for the nonlinearities is [1]_
\[c(\mathbf{x},\xi):=\begin{cases}1,&(\mathbf{x},\xi)\in\Omega^{+}\times\mathbb{C},\\ \varepsilon^{(L)}(\mathbf{x})+\alpha(\mathbf{x})|\xi|^{2}/(1+\gamma|\xi|^{2}),&(\mathbf{x },\xi)\in\Omega\times\mathbb{C},\end{cases}\]
_with given \(\varepsilon^{(L)},\alpha\in L_{\infty}(\Omega)\), saturation parameter \(\gamma>0\), and \(f=0\). Based on the identity_
\[\frac{|\xi|^{2}}{1+\gamma|\xi|^{2}}-\frac{|\eta|^{2}}{1+\gamma|\eta|^{2}}= \frac{(1+\gamma|\eta|^{2})|\xi|^{2}-(1+\gamma|\xi|^{2})|\eta|^{2}}{(1+\gamma| \xi|^{2})(1+\gamma|\eta|^{2})}=\frac{|\xi|^{2}-|\eta|^{2}}{(1+\gamma|\xi|^{2}) (1+\gamma|\eta|^{2})}\]
_for all \(\xi,\eta\in\mathbb{C}\) we obtain_
\[\left|\frac{|\xi|^{2}}{1+\gamma|\xi|^{2}}-\frac{|\eta|^{2}}{1+\gamma|\eta|^{2 }}\right|=\frac{(|\xi|+|\eta|)\,||\xi|-|\eta||}{(1+\gamma|\xi|^{2})(1+\gamma| \eta|^{2})}\leq(|\xi|+|\eta|)|\xi-\eta|.\]
_Hence on \(\Omega\) we arrive at the same Lipschitz function as in the previous Example 19, that is_
\[L_{c}(\mathbf{x},\xi,\eta):=\begin{cases}0,&(\mathbf{x},\xi,\eta)\in\Omega^{+}\times \mathbb{C}\times\mathbb{C},\\ |\alpha|(|\xi|+|\eta|),&(\mathbf{x},\xi,\eta)\in\Omega\times\mathbb{C}\times \mathbb{C}.\end{cases}\]
_Moreover, since_
\[c(\mathbf{x},w_{c})=c(\mathbf{x},0)=\begin{cases}1,&\mathbf{x}\in\Omega^{+},\\ \varepsilon^{(L)}(\mathbf{x}),&\mathbf{x}\in\Omega,\end{cases}\]
_we get the same sufficient conditions._
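A short numerical spot-check of the Lipschitz bound derived above; the sample values \(\varepsilon^{(L)}=2.25\), \(\alpha=0.01\), \(\gamma=0.1\) are arbitrary choices for illustration only:

```python
import numpy as np

EPS_L, ALPHA, GAMMA = 2.25, 0.01, 0.1  # arbitrary sample parameters

def c_saturated(xi):
    """Saturated Kerr coefficient on Omega, cf. Example 20."""
    a2 = abs(xi) ** 2
    return EPS_L + ALPHA * a2 / (1.0 + GAMMA * a2)

rng = np.random.default_rng(0)
xi, eta = rng.standard_normal(2) + 1j * rng.standard_normal(2)
lhs = abs(c_saturated(xi) - c_saturated(eta))
rhs = ALPHA * (abs(xi) + abs(eta)) * abs(xi - eta)  # L_c(.,xi,eta)|xi - eta|
assert lhs <= rhs + 1e-12  # |c(xi) - c(eta)| <= |alpha|(|xi|+|eta|)|xi-eta|
```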
## 6 The modified boundary value problem
Since the exact DtN operator is represented as an infinite series (see (8), (11)), it is practically necessary to truncate this nonlocal operator and consider only finite sums
\[T_{\kappa,N}u(\mathbf{x}):=\frac{1}{R}\sum_{|n|\leq N}Z_{n}(\kappa R)u_{n}(R)Y_{n }(\hat{\mathbf{x}}),\quad\mathbf{x}=R\hat{\mathbf{x}}\in S_{R}\subset\mathbb{R}^{2}, \tag{29}\]
\[T_{\kappa,N}u(\mathbf{x})=\frac{1}{R}\sum_{n=0}^{N}\sum_{|m|\leq n}z_{n}(\kappa R) u_{n}^{m}(R)Y_{n}^{m}(\hat{\mathbf{x}}),\quad\mathbf{x}=R\hat{\mathbf{x}}\in S_{R}\subset \mathbb{R}^{3} \tag{30}\]
for some \(N\in\mathbb{N}_{0}\). The map \(T_{\kappa,N}\) is called the _truncated DtN operator_, and \(N\) is the _truncation order_ of the DtN operator.
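For \(d=2\), the truncated operator (29) is straightforward to realize numerically: the Fourier coefficients of the boundary trace are obtained by FFT, and every retained mode is multiplied by \(Z_{n}(\kappa R)/R\) with \(Z_{n}(z)=z\,H_{n}^{(1)\prime}(z)/H_{n}^{(1)}(z)\). A minimal sketch; the normalization of the circular harmonics cancels because \(T_{\kappa,N}\) acts diagonally on the Fourier modes, and it is assumed that the number of samples exceeds \(2N\):

```python
import numpy as np
from scipy.special import hankel1, h1vp

def apply_truncated_dtn_2d(u_boundary, kappa, R, N):
    """Apply T_{kappa,N} from (29) to equispaced samples of u on S_R (d=2)."""
    M = u_boundary.size
    c = np.fft.fft(u_boundary)                 # unnormalized Fourier modes
    idx = np.arange(M)
    n = np.where(idx <= M // 2, idx, idx - M)  # signed mode numbers
    z = kappa * R
    # Z_{-n} = Z_n for integer n, so abs(k) is sufficient:
    Z = np.array([z * h1vp(abs(k), z) / hankel1(abs(k), z) for k in n])
    Z[np.abs(n) > N] = 0.0                     # truncation: keep |n| <= N
    return np.fft.ifft(Z * c) / R
```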
The replacement of the exact DtN operator \(T_{\kappa}\) in the problem (15) by the truncated DtN operator \(T_{\kappa,N}\) introduces a perturbation, hence we have to answer the question of existence and uniqueness of a solution to the following problem:
Find \(u_{N}\in V\) such that
\[a_{N}(u_{N},v)=n_{N}(u_{N},v)\quad\text{for all }v\in V \tag{31}\]
holds, where \(a_{N}\) and \(n_{N}\) are the forms defined by (19), (20) with \(T_{\kappa}\) replaced by \(T_{\kappa,N}\).
The next result is the counterpart to Lemmata 8, 9. Here we formulate a different version of Garding's inequality compared to the case \(d=2\) considered in [11, Thm. 4.4].
**Lemma 21**.: _The sesquilinear form \(a_{N}\)_
1. _is bounded, i.e. there exists a constant_ \(C>0\) _independent of_ \(N\) _such that_ \[|a_{N}(w,v)|\leq C\|w\|_{V}\|v\|_{V}\quad\text{for all $w,v\in V,$}\] _and_
2. _satisfies a Garding's inequality in the form_ \[\operatorname{Re}a_{N}(v,v)\geq\|v\|_{V,\kappa}^{2}-2\kappa^{2}\|v\|_{0,2,B_{ R}}^{2}\quad\text{for all $v\in V.$}\]
Proof.: (i) If the proof of [10, eq. (3.4a)] is carried out with finitely many terms of the expansion of \(T_{\kappa}\) only, the statement follows easily. Alternatively, Lemma 23 with \(s=0\) can also be used.
(ii) As in the proof of Lemma 9, the definitions of \(a_{N}\) and the wavenumber dependent norm yield
\[\operatorname{Re}a_{N}(v,v)=\|v\|_{V,\kappa}^{2}-2\kappa^{2}\|v\|_{0,2,B_{R}} ^{2}-\operatorname{Re}\left(T_{\kappa,N}v,v\right)_{S_{R}}.\]
Hence it remains to estimate the last term. In the case \(d=2\), we have (see (29))
\[T_{\kappa,N}v(\boldsymbol{x}):=\frac{1}{R}\sum_{|n|\leq N}Z_{n}(\kappa R)v_{ n}(R)Y_{n}(\hat{\boldsymbol{x}}),\quad\boldsymbol{x}=R\hat{\boldsymbol{x}} \in S_{R}.\]
Then, using the \(L_{2}(S_{1})\)-orthonormality of the circular harmonics [11, Prop. 3.2.1], we get
\[-(T_{\kappa,N}v,v)_{S_{R}} =-\frac{1}{R}\sum_{|n|\leq N}Z_{n}(\kappa R)(v_{n}(R)Y_{n},v_{n}( R)Y_{n})_{S_{R}}\] \[=-\frac{1}{R}\sum_{|n|\leq N}Z_{n}(\kappa R)|v_{n}(R)|^{2}(Y_{n},Y _{n})_{S_{R}}\] \[=-\sum_{|n|\leq N}Z_{n}(\kappa R)|v_{n}(R)|^{2}(Y_{n},Y_{n})_{S_{1}}\] \[=-\sum_{|n|\leq N}Z_{n}(\kappa R)|v_{n}(R)|^{2}.\]
Hence, by Lemma 4,
\[-\operatorname{Re}\left(T_{\kappa,N}v,v\right)_{S_{R}} =\sum_{0<|n|\leq N}\underbrace{(-\operatorname{Re}\,Z_{n}(\kappa R))}_{\geq 1/2}|v_{n}(R)|^{2}+\underbrace{(-\operatorname{Re}\,Z_{0}(\kappa R))}_{>0}|v_{0}(R)|^{2}\] \[\geq\frac{1}{2}\sum_{0<|n|\leq N}|v_{n}(R)|^{2}\geq 0.\]
The case \(d=3\) can be treated similarly. From
\[T_{\kappa,N}v(\boldsymbol{x})=\frac{1}{R}\sum_{n=0}^{N}\sum_{|m|\leq n}z_{n}( \kappa R)v_{n}^{m}(R)Y_{n}^{m}(\hat{\boldsymbol{x}})\]
(see (30)), we immediately obtain, using the \(L_{2}(S_{1})\)-orthonormality of the spherical harmonics [11, Thm. 2.8] that
\[-(T_{\kappa,N}v,v)_{S_{R}} =-\frac{1}{R}\sum_{n=0}^{N}\sum_{|m|\leq n}z_{n}(\kappa R)(v_{n}^ {m}(R)Y_{n}^{m},v_{n}^{m}(R)Y_{n}^{m})_{S_{R}}\] \[=-\frac{1}{R}\sum_{n=0}^{N}\sum_{|m|\leq n}z_{n}(\kappa R)|v_{n}^ {m}(R)|^{2}(Y_{n}^{m},Y_{n}^{m})_{S_{R}}\] \[=-R\sum_{n=0}^{N}\sum_{|m|\leq n}z_{n}(\kappa R)|v_{n}^{m}(R)|^{2 }(Y_{n}^{m},Y_{n}^{m})_{S_{1}}\] \[=-R\sum_{n=0}^{N}\sum_{|m|\leq n}z_{n}(\kappa R)|v_{n}^{m}(R)|^{2 },\]
and Lemma 4 implies
\[-\operatorname{Re}\left(T_{\kappa,N}v,v\right)_{S_{R}}=R\sum_{n=0}^{N}\sum_{| m|\leq n}\underbrace{(-\operatorname{Re}\,z_{n}(\kappa R))}_{\geq 1}|v_{n}^{m}(R)|^{2 }\geq R\sum_{n=0}^{N}\sum_{|m|\leq n}|v_{n}^{m}(R)|^{2}\geq 0.\]
In both cases we obtain the same Garding's inequality as for the original (untruncated) problem (Lemma 9).
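The sign conditions on the DtN symbols taken from Lemma 4 can also be inspected numerically; the following spot-check prints \(-\operatorname{Re}Z_{n}(\kappa R)\) for \(d=2\) and an arbitrary sample value of \(\kappa R\):

```python
from scipy.special import hankel1, h1vp

z = 2.7  # arbitrary sample value of kappa * R
for n in range(9):
    Zn = z * h1vp(n, z) / hankel1(n, z)
    print(n, -Zn.real)  # expected: > 0 for n = 0 and >= 1/2 for n >= 1
```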
The next result is the variational version of the truncation error estimate. It closely follows the lines of the proof of [10, Thm. 3.3], where an estimate of \(\|(T_{\kappa}-T_{\kappa,N})v\|_{s-1/2,2,S_{R}}\), \(s\in\mathbb{R}\), was proved in the case \(d=2\).
**Lemma 22**.: _For given \(w,v\in H^{1/2}(S_{R})\) it holds that_
\[\Big{|}((T_{\kappa}-T_{\kappa,N})w,v)_{S_{R}}\Big{|}\leq c(N,w,v)\|w\|_{1/2,2,S_{R}}\|v\|_{1/2,2,S_{R}},\]
_where \(c(N,w,v)\geq 0\) and \(\lim_{N\to\infty}c(N,w,v)=0\)._
Proof.: We start with the two-dimensional situation. So let
\[\begin{split}& w(\mathbf{x})=w(R\hat{\mathbf{x}})=\sum_{|n|\in\mathbb{N}_{0 }}w_{n}(R)Y_{n}(\hat{\mathbf{x}}),\\ & v(\mathbf{x})=v(R\hat{\mathbf{x}})=\sum_{|k|\in\mathbb{N}_{0}}v_{k}(R)Y _{k}(\hat{\mathbf{x}}),\quad\mathbf{x}\in S_{R},\end{split} \tag{32}\]
be series representations of \(w|_{S_{R}},v|_{S_{R}}\) with the Fourier coefficients
\[\begin{split}& w_{n}(R)=(w(R\cdot),Y_{n})_{S_{1}}=\int_{S_{1}}w(R \hat{\mathbf{x}})\overline{Y_{n}}(\hat{\mathbf{x}})ds(\hat{\mathbf{x}}),\\ & v_{k}(R)=(v(R\cdot),Y_{k})_{S_{1}}=\int_{S_{1}}v(R\hat{\mathbf{x}}) \overline{Y_{k}}(\hat{\mathbf{x}})ds(\hat{\mathbf{x}}).\end{split}\]
The norm on the Sobolev space \(H^{s}(S_{R})\), \(s\geq 0\), can be defined as follows [13, Ch. 1, Rem. 7.6]:
\[\|v\|_{s,2,S_{R}}^{2}:=R\sum_{n\in\mathbb{Z}}(1+n^{2})^{s}|v_{n}(R)|^{2}. \tag{33}\]
Then, by (29), the orthonormality of the circular harmonics [11, Prop. 3.2.1] and (33),
\[\begin{split}\left|\left((T_{\kappa}-T_{\kappa,N})w,v\right)_{S _{R}}\right|&=\frac{1}{R}\left|\sum_{|n|,|k|>N}\left(Z_{n}( \kappa R)w_{n}(R)Y_{n}(R^{-1}\cdot),v_{k}(R)Y_{k}(R^{-1}\cdot)\right)_{S_{R}} \right|\\ &=\left|\sum_{|n|,|k|>N}Z_{n}(\kappa R)\left(w_{n}(R)Y_{n},v_{k}( R)Y_{k}\right)_{S_{1}}\right|\\ &=\left|\sum_{|n|>N}Z_{n}(\kappa R)w_{n}(R)\overline{v_{n}}(R) \right|\\ &=\left|\sum_{|n|>N}\frac{Z_{n}(\kappa R)}{(1+n^{2})^{1/2}}(1+n^{ 2})^{1/4}w_{n}(R)(1+n^{2})^{1/4}\overline{v_{n}}(R)\right|\\ &\leq\max_{|n|>N}\left|\frac{Z_{n}(\kappa R)}{(1+n^{2})^{1/2}} \right|\sum_{|n|>N}\left|(1+n^{2})^{1/4}w_{n}(R)(1+n^{2})^{1/4}\overline{v_{ n}}(R)\right|\\ &\leq\max_{|n|>N}\left|\frac{Z_{n}(\kappa R)}{(1+n^{2})^{1/2}} \right|\left(\sum_{|n|>N}(1+n^{2})^{1/2}\left|w_{n}(R)\right|^{2}\right)^{1/2} \\ &\quad\times\left(\sum_{|n|>N}(1+n^{2})^{1/2}\left|v_{n}(R)\right| ^{2}\right)^{1/2}\\ &\leq\frac{1}{R}\max_{|n|>N}\left|\frac{Z_{n}(\kappa R)}{(1+n^{2} )^{1/2}}\right|\tilde{c}(N,w,v)\|w\|_{1/2,2,S_{R}}\|v\|_{1/2,2,S_{R}},\end{split}\]
where
\[\tilde{c}(N,w,v)^{2}:=\frac{\sum_{|n|>N}(1+n^{2})^{1/2}|w_{n}(R)|^{2}}{\sum_{|n| \in\mathbb{N}_{0}}(1+n^{2})^{1/2}|w_{n}(R)|^{2}}\,\frac{\sum_{|n|>N}(1+n^{2})^{1 /2}|v_{n}(R)|^{2}}{\sum_{|n|\in\mathbb{N}_{0}}(1+n^{2})^{1/2}|v_{n}(R)|^{2}}.\]
The coefficient \(\tilde{c}(N,w,v)\) tends to zero as \(N\to\infty\) thanks to (33). Corollary 5 implies the estimate
\[\frac{1}{1+n^{2}}|Z_{n}(\kappa R)|^{2}\leq\max\{|Z_{0}(\kappa R)|^{2},1+|\kappa R |^{2}\},\quad|n|\in\mathbb{N}_{0},\]
hence we can set
\[c(N,w,v):=\frac{\tilde{c}(N,w,v)}{R}\max\{|Z_{0}(\kappa R)|,(1+|\kappa R|^{2}) ^{1/2}\}.\]
The investigation of the case \(d=3\) runs similarly. So let
\[\begin{split} w(\mathbf{x})&=w(R\hat{\mathbf{x}})=\sum_{n \in\mathbb{N}_{0}}\sum_{|m|\leq n}w_{n}^{m}(R)Y_{n}^{m}(\hat{\mathbf{x}}),\\ v(\mathbf{x})&=v(R\hat{\mathbf{x}})=\sum_{k\in\mathbb{N}_{0 }}\sum_{|l|\leq k}v_{k}^{l}(R)Y_{k}^{l}(\hat{\mathbf{x}}),\quad\mathbf{x}\in S_{R}, \end{split} \tag{34}\]
be series representations of \(w|_{S_{R}},v|_{S_{R}}\) with the Fourier coefficients
\[w_{n}^{m}(R) =(w(R\cdot),Y_{n}^{m})_{S_{1}}=\int_{S_{1}}w(R\hat{\mathbf{x}}) \overline{Y_{n}^{m}}(\hat{\mathbf{x}})ds(\hat{\mathbf{x}}),\] \[v_{k}^{l}(R) =(v(R\cdot),Y_{k}^{l})_{S_{1}}=\int_{S_{1}}v(R\hat{\mathbf{x}}) \overline{Y_{k}^{l}}(\hat{\mathbf{x}})ds(\hat{\mathbf{x}}).\]
The norm on the Sobolev space \(H^{s}(S_{R})\), \(s\geq 0\), can be defined as follows [13, Ch. 1, Rem. 7.6]:
\[\|v\|_{s,2,S_{R}}^{2}:=R^{2}\sum_{n\in\mathbb{N}_{0}}\sum_{|m|\leq n}(1+n^{2} )^{s}|v_{n}^{m}(R)|^{2}. \tag{35}\]
Then, by (30), the orthonormality of the spherical harmonics [16, Thm. 2.8] and (35),
\[\left|\left((T_{\kappa}-T_{\kappa,N})w,v\right)_{S_{R}}\right| =\frac{1}{R}\left|\sum_{n,k>N}\sum_{|m|\leq n,|l|\leq k}\left(z_{n}(\kappa R)w_{n}^{m}(R)Y_{n}^{m}(R^{-1}\cdot),v_{k}^{l}(R)Y_{k}^{l}(R^{-1}\cdot)\right)_{S_{R}}\right|\] \[=R\left|\sum_{n,k>N}\sum_{|m|\leq n,|l|\leq k}z_{n}(\kappa R)\left(w_{n}^{m}(R)Y_{n}^{m},v_{k}^{l}(R)Y_{k}^{l}\right)_{S_{1}}\right|\] \[=R\left|\sum_{n>N}\sum_{|m|\leq n}z_{n}(\kappa R)w_{n}^{m}(R)\overline{v_{n}^{m}}(R)\right|\] \[=R\left|\sum_{n>N}\sum_{|m|\leq n}\frac{z_{n}(\kappa R)}{(1+n^{2})^{1/2}}(1+n^{2})^{1/4}w_{n}^{m}(R)(1+n^{2})^{1/4}\overline{v_{n}^{m}}(R)\right|\] \[\leq R\max_{n>N}\left|\frac{z_{n}(\kappa R)}{(1+n^{2})^{1/2}}\right|\sum_{n>N}\sum_{|m|\leq n}\left|(1+n^{2})^{1/4}w_{n}^{m}(R)(1+n^{2})^{1/4}\overline{v_{n}^{m}}(R)\right|\] \[\leq R\max_{n>N}\left|\frac{z_{n}(\kappa R)}{(1+n^{2})^{1/2}}\right|\left(\sum_{n>N}\sum_{|m|\leq n}(1+n^{2})^{1/2}\left|w_{n}^{m}(R)\right|^{2}\right)^{1/2}\] \[\quad\times\left(\sum_{n>N}\sum_{|m|\leq n}(1+n^{2})^{1/2}\left|v_{n}^{m}(R)\right|^{2}\right)^{1/2}\] \[\leq\frac{1}{R}\max_{n>N}\left|\frac{z_{n}(\kappa R)}{(1+n^{2})^{1/2}}\right|\tilde{c}(N,w,v)\|w\|_{1/2,2,S_{R}}\|v\|_{1/2,2,S_{R}},\]
where
\[\tilde{c}(N,w,v)^{2}:=\frac{\sum_{n>N}\sum_{|m|\leq n}(1+n^{2})^{1/2}\left|w_{n}^{m}(R)\right|^{2}}{\sum_{n\in\mathbb{N}_{0}}\sum_{|m|\leq n}(1+n^{2})^{1/2}\left|w_{n}^{m}(R)\right|^{2}}\,\frac{\sum_{n>N}\sum_{|m|\leq n}(1+n^{2})^{1/2}\left|v_{n}^{m}(R)\right|^{2}}{\sum_{n\in\mathbb{N}_{0}}\sum_{|m|\leq n}(1+n^{2})^{1/2}\left|v_{n}^{m}(R)\right|^{2}}.\]
Thanks to Corollary 5 we can define
\[c(N,w,v):=\frac{\tilde{c}(N,w,v)}{R}\left(2+\left|\kappa R\right|^{2}\right)^{ 1/2}.\]
**Lemma 23**.: _For \(s\in[0,1/2)\) and \(w\in H^{1-s}(B_{R}\setminus\overline{\Omega})\), \(v\in H^{1+s}(B_{R}\setminus\overline{\Omega})\) it holds that_
\[\left|(T_{\kappa,N}w,v)_{S_{R}}\right|\leq C_{\mathrm{bl}}\|w\|_{1-s,2,B_{R} \setminus\overline{\Omega}}\|v\|_{1+s,2,B_{R}\setminus\overline{\Omega}},\]
_where the constant \(C_{\mathrm{bl}}\geq 0\) does not depend on \(N\)._
Proof.: We start with the two-dimensional situation as in the proof of Lemma 22. If \(w,v\) have the representations (32), then, by (29), the orthonormality of the circular harmonics [29, Prop. 3.2.1] and (33),
\[\left|(T_{\kappa,N}w,v)_{S_{R}}\right| =\frac{1}{R}\left|\sum_{|n|,|k|\leq N}\left(Z_{n}(\kappa R)w_{n}( R)Y_{n}(R^{-1}\cdot),v_{k}(R)Y_{k}(R^{-1}\cdot)\right)_{S_{R}}\right|\] \[=\left|\sum_{|n|,|k|\leq N}Z_{n}(\kappa R)\left(w_{n}(R)Y_{n},v_{ k}(R)Y_{k}\right)_{S_{1}}\right|\] \[=\left|\sum_{|n|\leq N}Z_{n}(\kappa R)w_{n}(R)\overline{v_{n}}(R)\right|\] \[=\left|\sum_{|n|\leq N}\frac{Z_{n}(\kappa R)}{(1+n^{2})^{1/2}}(1+ n^{2})^{(1/2-s)/2}w_{n}(R)(1+n^{2})^{(1/2+s)/2}\overline{v_{n}}(R)\right|\] \[\leq\max_{|n|\leq N}\left|\frac{Z_{n}(\kappa R)}{(1+n^{2})^{1/2}} \right|\sum_{|n|\leq N}\left|(1+n^{2})^{(1/2-s)/2}w_{n}(R)(1+n^{2})^{(1/2+s)/ 2}\overline{v_{n}}(R)\right|\]
\[\leq\max_{|n|\leq N}\left|\frac{Z_{n}(\kappa R)}{(1+n^{2})^{1/2}} \right|\left(\sum_{|n|\leq N}(1+n^{2})^{1/2-s}\left|w_{n}(R)\right|^{2}\right)^ {1/2}\] \[\quad\times\left(\sum_{|n|\leq N}(1+n^{2})^{1/2+s}\left|v_{n}(R) \right|^{2}\right)^{1/2}\] \[\leq\frac{1}{R}\max_{|n|\leq N}\left|\frac{Z_{n}(\kappa R)}{(1+n ^{2})^{1/2}}\right|\|w\|_{1/2-s,2,S_{R}}\|v\|_{1/2+s,2,S_{R}}.\]
Corollary 5 implies the estimate
\[\frac{1}{1+n^{2}}|Z_{n}(\kappa R)|^{2}\leq\max\{|Z_{0}(\kappa R)|^{2},1+| \kappa R|^{2}\},\quad|n|\in\mathbb{N}_{0},\]
hence
\[|(T_{\kappa,N}w,v)_{S_{R}}|\leq\frac{1}{R}\max\{|Z_{0}(\kappa R)|,(1+|\kappa R| ^{2})^{1/2}\}\|w\|_{1/2-s,2,S_{R}}\|v\|_{1/2+s,2,S_{R}}. \tag{36}\]
By the trace theorem [13, Thm. 3.38], we finally arrive at
\[|(T_{\kappa,N}w,v)_{S_{R}}|\leq\frac{C_{tr}^{2}}{R}\max\{|Z_{0}(\kappa R)|,(1+| \kappa R|^{2})^{1/2}\}\|w\|_{1-s,2,B_{R}\backslash\overline{\Omega}}\|v\|_{1+ s,2,B_{R}\backslash\overline{\Omega}}.\]
The investigation of the case \(d=3\) runs similarly. So let \(w,v\) have the representations (34), then, by (30), the orthonormality of the spherical harmonics [12, Thm. 2.8] and (35),
\[|(T_{\kappa,N}w,v)_{S_{R}}| =\frac{1}{R}\left|\sum_{n,k=0}^{N}\sum_{|m|\leq n,|l|\leq k}\left(z_{n}(\kappa R)w_{n}^{m}(R)Y_{n}^{m}(R^{-1}\cdot),v_{k}^{l}(R)Y_{k}^{l}(R^{-1}\cdot)\right)_{S_{R}}\right|\] \[=R\left|\sum_{n,k=0}^{N}\sum_{|m|\leq n,|l|\leq k}z_{n}(\kappa R)\left(w_{n}^{m}(R)Y_{n}^{m},v_{k}^{l}(R)Y_{k}^{l}\right)_{S_{1}}\right|\] \[=R\left|\sum_{n=0}^{N}\sum_{|m|\leq n}z_{n}(\kappa R)w_{n}^{m}(R)\overline{v_{n}^{m}}(R)\right|\] \[=R\left|\sum_{n=0}^{N}\sum_{|m|\leq n}\frac{z_{n}(\kappa R)}{(1+n^{2})^{1/2}}(1+n^{2})^{(1/2-s)/2}w_{n}^{m}(R)(1+n^{2})^{(1/2+s)/2}\overline{v_{n}^{m}}(R)\right|\] \[\leq R\max_{n\in\mathbb{N}_{0}}\left|\frac{z_{n}(\kappa R)}{(1+n^{2})^{1/2}}\right|\sum_{n=0}^{N}\sum_{|m|\leq n}\left|(1+n^{2})^{(1/2-s)/2}w_{n}^{m}(R)(1+n^{2})^{(1/2+s)/2}\overline{v_{n}^{m}}(R)\right|\] \[\leq R\max_{n\in\mathbb{N}_{0}}\left|\frac{z_{n}(\kappa R)}{(1+n^{2})^{1/2}}\right|\left(\sum_{n=0}^{N}\sum_{|m|\leq n}(1+n^{2})^{1/2-s}\left|w_{n}^{m}(R)\right|^{2}\right)^{1/2}\] \[\quad\times\left(\sum_{n=0}^{N}\sum_{|m|\leq n}(1+n^{2})^{1/2+s}\left|v_{n}^{m}(R)\right|^{2}\right)^{1/2}\]
\[\leq\frac{1}{R}\max_{n\in\mathbb{N}_{0}}\left|\frac{z_{n}(\kappa R)}{(1+n^{2})^{1/ 2}}\right|\|w\|_{1/2-s,2,S_{R}}\|v\|_{1/2+s,2,S_{R}}.\]
Corollary 5 yields
\[|(T_{\kappa,N}w,v)_{S_{R}}|\leq\frac{1}{R}\left(2+|\kappa R|^{2}\right)^{1/2} \|w\|_{1/2-s,2,S_{R}}\|v\|_{1/2+s,2,S_{R}}. \tag{37}\]
By the trace theorem [10, Thm. 3.38], we finally arrive at
\[|(T_{\kappa,N}w,v)_{S_{R}}|\leq\frac{C_{tr}^{2}}{R}\left(2+|\kappa R|^{2} \right)^{1/2}\|w\|_{1-s,2,B_{R}\setminus\overline{\Omega}}\|v\|_{1+s,2,B_{R} \setminus\overline{\Omega}}.\]
**Theorem 24**.: _Under the assumptions of Lemma 9, given an antilinear continuous functional \(\ell:\,V\to\mathbb{C}\), there exists a constant \(N^{*}>0\) such that for \(N\geq N^{*}\) the problem Find \(u_{N}\in V\) such that_
\[a_{N}(u_{N},v)=\ell(v)\quad\text{for all }v\in V \tag{38}\]
_is uniquely solvable._
Proof.: First we show that the problem (38) has at most one solution. We start as in the proof of [11, Thm. 4.5] and argue by contradiction, i.e. we suppose the following:
\[\begin{array}{rl}\forall N^{*}\in\mathbb{N}&\exists N=N(N^{*})\geq N^{*} \quad\text{and}\quad u_{N}=u_{N(N^{*})}\in V\quad\text{such that}\\ &a_{N}(u_{N},v)=0\quad\text{for all }v\in V\quad\text{and }\|u_{N}\|_{V}=1. \end{array} \tag{39}\]
However, the subsequent discussion differs significantly from the proof of [11, Thm. 4.5]. We apply an argument the idea of which goes back to Schatz [10].
First we _assume_ there exists a solution \(u_{N}\in V\) of (38) and derive an a priori estimate of the error \(\|u-u_{N}\|_{V}\), where \(u\in V\) is the solution of (23), see Theorem 10. Since \(a_{N}\) satisfies a Garding's inequality (Lemma 21(ii)), we have, making use of (22),
\[C_{-}^{2}\|u-u_{N}\|_{V}^{2}-2\kappa^{2}\|u-u_{N}\|_{0,2,B_{R}}^{2}\leq\operatorname {Re}a_{N}(u-u_{N},u-u_{N}).\]
Since
\[\begin{split} a_{N}(u-u_{N},v)&=a_{N}(u,v)-a_{N}(u_{ N},v)\\ &=\underbrace{a(u,v)}_{=\ell(v)}+a_{N}(u,v)-a(u,v)-\underbrace{a_ {N}(u_{N},v)}_{=\ell(v)}\\ &=\left((T_{\kappa}-T_{\kappa,N})u,v\right)_{S_{R}},\end{split}\]
we obtain
\[C_{-}^{2}\|u-u_{N}\|_{V}^{2}-2\kappa^{2}\|u-u_{N}\|_{0,2,B_{R}}^{2}\leq\eta_{1 }\|u-u_{N}\|_{V} \tag{40}\]
with
\[\eta_{1}:=\sup_{v\in V}\frac{\operatorname{Re}\left((T_{\kappa}-T_{\kappa,N})u,v\right)_{S_{R}}}{\|v\|_{V}}.\]
Now we consider the following auxiliary adjoint problem (cf. [10, p. 43]):
Find \(w_{N}\in V\) such that
\[\overline{a(v,w_{N})}=(v,u-u_{N})_{B_{R}}\quad\text{for all }v\in V.\]
Since \(\mathcal{A}\) is a Fredholm operator (see the proof of Theorem 10), the adjoint problem possesses a unique solution \(w_{N}\in V\). Then
\[\|u-u_{N}\|_{0,2,B_{R}}^{2} =\overline{a(u-u_{N},w_{N})}=\overline{a(u,w_{N})}-\overline{a(u_ {N},w_{N})}\] \[=\underbrace{\overline{a(u,w_{N})}-\overline{a_{N}(u_{N},w_{N})} }_{=\overline{\ell(w_{N})}-\overline{\ell(w_{N})}=0}+\overline{a_{N}(u_{N},w_ {N})}-\overline{a(u_{N},w_{N})}\] \[=\overline{((T_{\kappa}-T_{\kappa,N})u_{N},w_{N})}_{S_{R}}.\]
In particular, this relation shows that \(((T_{\kappa}-T_{\kappa,N})u_{N},w_{N})_{S_{R}}\) is real. With
\[\eta_{2}:=\sup_{v\in V}\frac{\left((T_{\kappa}-T_{\kappa,N})u_{N},v\right)_{S _{R}}}{\|v\|_{V}}\]
we obtain
\[\|u-u_{N}\|_{0,2,B_{R}}^{2}\leq\eta_{2}\|w_{N}\|_{V}\leq\eta_{2}C_{-}^{-1}C(R,\kappa)\|u-u_{N}\|_{V^{*}}.\]
The continuous embedding \(V\subset V^{*}\) yields
\[\|u-u_{N}\|_{0,2,B_{R}}^{2}\leq\eta_{2}C_{-}^{-1}C(R,\kappa)C_{\rm emb}\|u-u_{ N}\|_{V}.\]
Applying this estimate in (40), we get
\[C_{-}^{2}\|u-u_{N}\|_{V}^{2}-2\kappa^{2}\eta_{2}C_{-}^{-1}C(R,\kappa)C_{\rm emb }\|u-u_{N}\|_{V}\leq\eta_{1}\|u-u_{N}\|_{V}.\]
Now, if \(\|u-u_{N}\|_{V}\neq 0\), we finally arrive at
\[C_{-}^{2}\|u-u_{N}\|_{V}\leq\eta_{1}+2\kappa^{2}\eta_{2}C_{-}^{-1}C(R,\kappa) C_{\rm emb}. \tag{41}\]
Clearly this inequality is true also for \(\|u-u_{N}\|_{V}=0\) so that we can remove this interim assumption.
Thanks to Lemma 22 we have that
\[\Big{|}((T_{\kappa}-T_{\kappa,N})u,v)_{S_{R}}\Big{|}\leq c(N,u,v)\|u\|_{1/2,2,S_{R}}\|v\|_{1/2,2,S_{R}}\leq c(N,u,v)C_{\rm tr}^{2}\|u\|_{V}\|v\|_{V},\]
hence
\[\eta_{1}\leq c_{+}(N,u)C_{\rm tr}^{2}\|u\|_{V}\quad\text{with}\quad c_{+}(N,u ):=\sup_{v\in V}c(N,u,v), \tag{42}\]
where \(\lim_{N\to\infty}c_{+}(N,u)=0\). Note that, as can be seen from the proof of Lemma 22, the second fractional factor in the representation of \(\tilde{c}(N,w,v)\) can be estimated from above by one without losing the limit behaviour for \(N\to\infty\). Consequently, \(\eta_{1}\) can be made arbitrarily small provided \(N\) is large enough.
In order to estimate \(\eta_{2}\) we cannot apply Lemma 22 directly since the second argument in the factor \(c(N,u_{N},v)\) depends on \(N\), too. Therefore we give a more direct estimate.
Namely, let \(v\in V\) have the representation (32) or (34), respectively. Then we define
\[V_{N}|_{S_{R}}:=\begin{cases}\operatorname{span}_{|n|\leq N}\{Y_{n}(R^{-1}\cdot) \},&d=2,\\ \operatorname{span}_{n=0\dots N,|m|\leq n}\{Y_{n}^{m}(R^{-1}\cdot)\},&d=3,\end{cases}\]
and introduce an orthogonal projector
\[P_{N}:\;V|_{S_{R}}\to V_{N}|_{S_{R}}:\;v\mapsto P_{N}v:=\begin{cases} \sum_{|n|\leq N}v_{n}(R)Y_{n}(R^{-1}\cdot),&d=2,\\ \sum_{n=0}^{N}\sum_{|m|\leq n}v_{n}^{m}(R)Y_{n}^{m}(R^{-1}\cdot),&d=3.\end{cases}\]
Then it holds that \(T_{\kappa}P_{N}=T_{\kappa,N}\) on \(V|_{S_{R}}\); in particular, \(V_{N}|_{S_{R}}\subset\ker(T_{\kappa}P_{N}-T_{\kappa,N})\). Indeed, if \(d=2\) and \(v\in V|_{S_{R}}\), then \(P_{N}v=\sum_{|n|\leq N}v_{n}(R)Y_{n}(R^{-1}\cdot)\) and

\[T_{\kappa}P_{N}v=\frac{1}{R}\sum_{|n|\leq N}Z_{n}(\kappa R)v_{n}(R)Y_{n}(R^{-1}\cdot)=T_{\kappa,N}v.\]
An analogous argument applies in the case \(d=3\).
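Numerically, \(P_{N}\) is plain Fourier truncation of the boundary trace. A sketch for \(d=2\) and equispaced samples:

```python
import numpy as np

def project_modes_2d(u_boundary, N):
    """Spectral projector P_N (d=2): keep the Fourier modes with |n| <= N."""
    M = u_boundary.size
    c = np.fft.fft(u_boundary)
    idx = np.arange(M)
    keep = np.minimum(idx, M - idx) <= N  # |n| <= N in FFT ordering
    return np.fft.ifft(c * keep)
```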
Now we return to the estimate of \(\eta_{2}\) and write, for \(u_{N}\in V\),
\[(T_{\kappa}-T_{\kappa,N})u_{N}=(T_{\kappa}-T_{\kappa}P_{N})u_{N}+(T_{\kappa}P _{N}-T_{\kappa,N})u_{N}=T_{\kappa}(\operatorname{id}-P_{N})u_{N},\]
where we have used the above property. The advantage of this approach is that we can apply a well-known estimate of the projection error. The proof of this estimate runs similarly to the proof of Lemma 22, but without the coefficients \(Z_{n}\) or \(z_{n}\), respectively:
\[\left|((\operatorname{id}-P_{N})w,v)_{S_{R}}\right| =\left|\sum_{|n|,|k|>N}\left(w_{n}(R)Y_{n}(R^{-1}\cdot),v_{k}(R)Y_ {k}(R^{-1}\cdot)\right)_{S_{R}}\right|\] \[=R\left|\sum_{|n|,|k|>N}\left(w_{n}(R)Y_{n},v_{k}(R)Y_{k}\right)_ {S_{1}}\right|\] \[=R\left|\sum_{|n|>N}w_{n}(R)\overline{v_{n}}(R)\right|\] \[=R\left|\sum_{|n|>N}\frac{1}{(1+n^{2})^{1/2}}(1+n^{2})^{1/4}w_{n }(R)(1+n^{2})^{1/4}\overline{v_{n}}(R)\right|\] \[\leq\max_{|n|>N}\frac{R}{(1+n^{2})^{1/2}}\sum_{|n|>N}\left|(1+n^ {2})^{1/4}w_{n}(R)(1+n^{2})^{1/4}\overline{v_{n}}(R)\right|\] \[\leq\frac{R}{(1+N^{2})^{1/2}}\left(\sum_{|n|>N}(1+n^{2})^{1/2} \left|w_{n}(R)\right|^{2}\right)^{1/2}\] \[\quad\times\left(\sum_{|n|>N}(1+n^{2})^{1/2}\left|v_{n}(R)\right| ^{2}\right)^{1/2}\] \[\leq\frac{1}{(1+N^{2})^{1/2}}\|w\|_{1/2,2,S_{R}}\|v\|_{1/2,2,S_{R}}.\]
The same estimate holds true for \(d=3\). Then we get, by Remark 3 (or Lemma 23),
\[\left|\left((T_{\kappa}-T_{\kappa,N})u_{N},v\right)_{S_{R}}\right| =\left|(T_{\kappa}(\operatorname{id}-P_{N})u_{N},v)_{S_{R}}\right|\] \[\leq\frac{C\kappa}{(1+N^{2})^{1/2}}\|u_{N}\|_{1/2,2,S_{R}}\|v\|_{ 1/2,2,S_{R}}\] \[\leq\frac{CC_{\operatorname{tr}}^{2}\kappa}{(1+N^{2})^{1/2}}\|u_ {N}\|_{V}\|v\|_{V},\]
thus
\[\eta_{2}\leq\frac{CC_{\operatorname{tr}}^{2}\kappa}{(1+N^{2})^{1/2}}\|u_{N}\|_ {V}.\]
Using this estimate and (42) in (41), we obtain
\[C_{-}^{2}\|u-u_{N}\|_{V}\leq c_{+}(N,u)C_{\operatorname{tr}}^{2}\|u\|_{V}+2 \kappa^{2}C_{-}^{-1}C(R,\kappa)C_{\operatorname{emb}}\frac{CC_{\operatorname{ tr}}^{2}\kappa}{(1+N^{2})^{1/2}}\|u_{N}\|_{V}. \tag{43}\]
Now we apply this estimate to the solutions \(u_{N}\) of the homogeneous truncated problems in (39). By Theorem 10, the homogeneous linear interior problem (23) (i.e. \(\ell=0\)) has the solution \(u=0\), and the above estimate implies
\[C_{-}^{2}\|u_{N}\|_{V}\leq 2\kappa^{2}C_{-}^{-1}C(R,\kappa)C_{\operatorname{emb }}\frac{CC_{\operatorname{tr}}^{2}\kappa}{(1+N^{2})^{1/2}}\|u_{N}\|_{V},\]
which is a contradiction to \(\|u_{N}\|_{V}=1\) for all \(N\). Hence, for all sufficiently large \(N\), the homogeneous problem only admits the trivial solution, and since \(a_{N}\) satisfies a Garding's inequality by Lemma 21, existence follows from the Fredholm alternative exactly as in the proof of Theorem 10.
Although the proof of Theorem 24 allows an analogous conclusion as in Lemma 11, namely that the truncated sesquilinear form \(a_{N}\) satisfies an inf-sup condition, such a conclusion is not fully satisfactory since the question remains whether and how the inf-sup constant depends on \(N\). However, at least for sufficiently large \(N\), a positive answer can be given.
**Lemma 25**.: _Under the assumptions of Lemma 9, there exists a number \(N^{*}\in\mathbb{N}\) such that_
\[\beta_{N^{*}}(R,\kappa):=\inf_{w\in V\setminus\{0\}}\sup_{v\in V\setminus\{0 \}}\frac{|a_{N}(w,v)|}{\|w\|_{V,\kappa}\|v\|_{V,\kappa}}>0\]
_is independent of \(N\geq N^{*}\)._
In the proof a formula is given that expresses \(\beta_{N^{*}}(R,\kappa)\) in terms of \(\beta(R,\kappa)\).
Proof.: We return to the proof of Theorem 24 and mention that the estimate (43) is valid for solutions \(u,u_{N}\) of the general linear problems (23) (or, equally, (25)) and (38), respectively. By the triangle inequality,
\[\|u_{N}\|_{V} \leq\|u\|_{V}+\|u-u_{N}\|_{V}\] \[\leq\|u\|_{V}+c_{+}(N,u)C_{-}^{-2}C_{\operatorname{tr}}^{2}\|u\|_ {V}+2\kappa^{2}C_{-}^{-3}C(R,\kappa)C_{\operatorname{emb}}\frac{CC_{ \operatorname{tr}}^{2}\kappa}{(1+N^{2})^{1/2}}\|u_{N}\|_{V}.\]
If \(N^{*}\) is sufficiently large such that
\[\kappa^{2}C_{-}^{-3}C(R,\kappa)C_{\mathrm{emb}}\frac{CC_{\mathrm{tr}}^{2}\kappa}{( 1+N^{2})^{1/2}}\leq\frac{1}{4}\quad\text{and}\quad c_{+}(N,u)C_{-}^{-2}C_{ \mathrm{tr}}^{2}\leq 1\quad\text{for all }N\geq N^{*},\]
then, by Lemma 11,
\[\|u_{N}\|_{V}\leq 4\|u\|_{V}\leq\frac{4}{C_{-}}\|u\|_{V,\kappa}\leq\frac{4}{C_{-}\beta(R,\kappa)}\|\ell\|_{V^{*}}.\]
That is, the sesquilinear form \(a_{N}\) satisfies an _inf-sup condition_
\[\beta_{N^{*}}(R,\kappa):=\inf_{w\in V\setminus\{0\}}\sup_{v\in V\setminus\{0 \}}\frac{|a_{N}(w,v)|}{\|w\|_{V,\kappa}\|v\|_{V,\kappa}}>0\]
with \(\beta_{N^{*}}(R,\kappa):=\dfrac{C_{-}\beta(R,\kappa)}{4C_{+}}\) independent of \(N\geq N^{*}\).
Analogously to (24) we introduce the truncated linear operator \(\mathcal{A}_{N}:\ V\to V^{*}\) by
\[\mathcal{A}_{N}w(v):=a_{N}(w,v)\quad\text{for all }w,v\in V.\]
By Lemma 21, \(\mathcal{A}_{N}\) is a bounded operator, and Lemma 25 implies that \(\mathcal{A}_{N}\) has a bounded inverse:
\[\|w\|_{V,\kappa}\leq\beta_{N^{*}}(R,\kappa)^{-1}\|\mathcal{A}_{N}w\|_{V^{*}}\quad\text{for all }w\in V.\]
Furthermore, we define a nonlinear operator \(\mathcal{F}_{N}:\ V\to V^{*}\) by
\[\mathcal{F}_{N}(w):=\ell^{\mathrm{contr}}(w)+\ell^{\mathrm{src}}(w)+\ell_{N}^{\mathrm{inc}}\qquad\text{for all }w\in V,\]
where
\[\langle\ell_{N}^{\mathrm{inc}},v\rangle:=(\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{ inc}}-T_{\kappa,N}u^{\mathrm{inc}},v)_{S_{R}}.\]
The problem (31) is then equivalent to the operator equation
\[\mathcal{A}_{N}u=\mathcal{F}_{N}(u)\quad\text{in }V^{*},\]
and further to the fixed-point problem
\[u=\mathcal{A}_{N}^{-1}\mathcal{F}_{N}(u)\quad\text{in }V. \tag{44}\]
**Theorem 26**.: _Under the assumptions of Lemma 9, let the functions \(c\) and \(f\) generate locally Lipschitz continuous Nemycki operators in \(V\) and assume that there exist functions \(w_{f},w_{c}\in V\) such that \(f(\cdot,w_{f})\in L_{p_{f}/(p_{f}-1)}(\Omega)\) and \(c(\cdot,w_{c})\in L_{p_{c}/(p_{c}-2)}(\Omega)\), respectively. Furthermore let \(u^{\mathrm{inc}}\in H^{1}_{\mathrm{loc}}(\Omega^{+})\) be such that additionally \(\Delta u^{\mathrm{inc}}\in L_{2,\mathrm{loc}}(\Omega^{+})\) holds. If there exist numbers \(\varrho>0\) and \(L_{\mathcal{F}}\in(0,\beta_{N^{*}}(R,\kappa))\) (where \(N^{*}\) and \(\beta_{N^{*}}(R,\kappa)\) are from Lemma 25) such that the following two conditions_
\[\kappa^{2}\left[\|c(\cdot,w_{c})-1\|_{0,\tilde{q}_{c},\Omega}+\| L_{c}(\cdot,w,w_{c})\|_{0,q_{c},\Omega}(\varrho+\|w_{c}\|_{V})\right]\varrho\] \[\quad+\left[\|f(\cdot,w_{f})\|_{0,\tilde{q}_{f},\Omega}+\|L_{f}( \cdot,w,w_{f})\|_{0,q_{f},\Omega}(\varrho+\|w_{f}\|_{V})\right]\] \[\quad+C_{\mathrm{tr}}\|\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc}}- T_{\kappa,N}u^{\mathrm{inc}}\|_{-1/2,2,S_{R}}\leq\varrho\beta_{N^{*}}(R,\kappa),\] \[\kappa^{2}\left[\|L_{c}(\cdot,w,v)\|_{0,q_{c},\Omega}\varrho+\| c(\cdot,w_{c})-1\|_{0,\tilde{q}_{c},\Omega}+\|L_{c}(\cdot,w,w_{c})\|_{0,q_{c}, \Omega}(\varrho+\|w_{c}\|_{V})\right]\] \[\quad+\|L_{f}(\cdot,w,v)\|_{0,q_{f},\Omega}\leq L_{\mathcal{F}}\]
_are satisfied for all \(w,v\in K^{\mathrm{cl}}_{\varrho}\), then the problem (44) has a unique solution \(u_{N}\in K^{\mathrm{cl}}_{\varrho}\) for all \(N\geq N^{*}\)._
Proof.: Analogously to the proof of Theorem 18.
The next result is devoted to an estimate of the truncation error \(\|u-u_{N}\|_{V}\). We start from the proof of Theorem 24 but have in mind the nonlinear problems (21) and (31). So let \(u,u_{N}\in V\) be the solutions of (21) and (31), respectively. Since \(a_{N}\) satisfies a Garding's inequality by Lemma 21, we have that
\[C_{-}^{2}\|u-u_{N}\|_{V}^{2}-2\kappa^{2}\|u-u_{N}\|_{0,2,B_{R}}^{2}\leq\operatorname {Re}a_{N}(u-u_{N},u-u_{N}), \tag{45}\]
where \(C_{-}:=\min\{1;\kappa\}\). In order to estimate the right-hand side, we write
\[a_{N}(u-u_{N},v) =a_{N}(u,v)-a_{N}(u_{N},v)\] \[=a(u,v)+a_{N}(u,v)-a(u,v)-a_{N}(u_{N},v)\] \[=a_{N}(u,v)-a(u,v)+n(u,v)-n_{N}(u_{N},v).\]
Now, the first difference term is equal to \(\left((T_{\kappa}-T_{\kappa,N})u,v\right)_{S_{R}}\), and for the second one we have
\[n(u,v)-n_{N}(u_{N},v)\] \[=\kappa^{2}((c(\cdot,u)-1)u,v)_{B_{R}}+(f(\cdot,u),v)_{B_{R}}-(T_{\kappa}u^{\mathrm{inc}},v)_{S_{R}}+(\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc}},v)_{S_{R}}\] \[\quad-\kappa^{2}((c(\cdot,u_{N})-1)u_{N},v)_{B_{R}}-(f(\cdot,u_{N}),v)_{B_{R}}+(T_{\kappa,N}u^{\mathrm{inc}},v)_{S_{R}}-(\hat{\mathbf{x}}\cdot\nabla u^{\mathrm{inc}},v)_{S_{R}}\] \[=\kappa^{2}((c(\cdot,u)-1)u,v)_{B_{R}}-\kappa^{2}((c(\cdot,u_{N})-1)u_{N},v)_{B_{R}}+(f(\cdot,u),v)_{B_{R}}-(f(\cdot,u_{N}),v)_{B_{R}}\] \[\quad-\left((T_{\kappa}-T_{\kappa,N})u^{\mathrm{inc}},v\right)_{S_{R}}.\]
As in the proof of Theorem 18 we see that
\[|n(u,v)-n_{N}(u_{N},v)|\] \[\leq\|\ell^{\mathrm{contr}}(u)-\ell^{\mathrm{contr}}(u_{N})\|_{V^ {*}}\|v\|_{V}+\|\ell^{\mathrm{src}}(u)-\ell^{\mathrm{src}}(u_{N})\|_{V^{*}} \|v\|_{V}+\left|\left((T_{\kappa}-T_{\kappa,N})u^{\mathrm{inc}},v\right)_{S_{R }}\right|\] \[\leq L_{\mathcal{F}}\|u-u_{N}\|_{V}\|v\|_{V}+\eta^{\mathrm{inc}}\| v\|_{V}\]
with
\[\eta^{\mathrm{inc}}:=\sup_{v\in V}\frac{\left|\left((T_{\kappa}-T_{\kappa,N}) u^{\mathrm{inc}},v\right)_{S_{R}}\right|}{\|v\|_{V}}.\]
Hence we obtain from (45)
\[C_{-}^{2}\|u-u_{N}\|_{V}^{2}-2\kappa^{2}\|u-u_{N}\|_{0,2,B_{R}}^ {2} \leq\eta_{1}\|u-u_{N}\|_{V}+L_{\mathcal{F}}\|u-u_{N}\|_{V}^{2}+ \eta^{\mathrm{inc}}\|u-u_{N}\|_{V} \tag{46}\] \[=\left(\eta_{1}+\eta^{\mathrm{inc}}+L_{\mathcal{F}}\|u-u_{N}\|_{V }\right)\|u-u_{N}\|_{V}\]
with
\[\eta_{1}:=\sup_{v\in V}\frac{\operatorname{Re}\left((T_{\kappa}-T_{\kappa,N}) u,v\right)_{S_{R}}}{\|v\|_{V}}.\]
Now we consider the following auxiliary adjoint problem (cf. [14, p. 43]):
Find \(w_{N}\in V\) such that
\[\overline{a(v,w_{N})}=(v,u-u_{N})_{B_{R}}\quad\text{for all }v\in V.\]
Since \(\mathcal{A}\) is a Fredholm operator (see the proof of Theorem 10), the adjoint problem possesses a unique solution \(w_{N}\in V\). Then
\[\begin{split}\|u-u_{N}\|_{0,2,B_{R}}^{2}&=\overline{ a(u-u_{N},w_{N})}=\overline{a(u,w_{N})}-\overline{a(u_{N},w_{N})}\\ &=\overline{a(u,w_{N})}-\overline{a_{N}(u_{N},w_{N})}+\overline{ a_{N}(u_{N},w_{N})}-\overline{a(u_{N},w_{N})}\\ &=\overline{n(u,w_{N})}-\overline{n_{N}(u_{N},w_{N})}+\overline{ a_{N}(u_{N},w_{N})}-\overline{a(u_{N},w_{N})}.\end{split} \tag{47}\]
The first difference term can be estimated as above, that is,
\[\left|\overline{n(u,w_{N})}-\overline{n_{N}(u_{N},w_{N})}\right|\leq L_{ \mathcal{F}}\|u-u_{N}\|_{V}\|w_{N}\|_{V}+\eta^{\rm inc}\|w_{N}\|_{V}.\]
Setting
\[\eta_{2}:=\sup_{v\in V}\frac{\left|\left((T_{\kappa}-T_{\kappa,N})u_{N},v \right)_{S_{R}}\right|}{\|v\|_{V}}\,,\]
we obtain from (47)
\[\begin{split}\|u-u_{N}\|_{0,2,B_{R}}^{2}&\leq\left( L_{\mathcal{F}}\|u-u_{N}\|_{V}+\eta^{\rm inc}+\eta_{2}\right)\|w_{N}\|_{V}\\ &\leq\left(L_{\mathcal{F}}\|u-u_{N}\|_{V}+\eta^{\rm inc}+\eta_{2} \right)C_{-}^{-1}C(R,\kappa)\|u-u_{N}\|_{V^{*}},\end{split}\]
and the continuous embedding \(V\subset V^{*}\) yields
\[\|u-u_{N}\|_{0,2,B_{R}}^{2}\leq\left(L_{\mathcal{F}}\|u-u_{N}\|_{V}+\eta^{\rm inc }+\eta_{2}\right)C_{-}^{-1}C(R,\kappa)C_{\rm emb}\|u-u_{N}\|_{V}.\]
Applying this estimate in (46), we get
\[\begin{split} C_{-}^{2}\|u-u_{N}\|_{V}^{2}&-2\kappa ^{2}\left(L_{\mathcal{F}}\|u-u_{N}\|_{V}+\eta^{\rm inc}+\eta_{2}\right)C_{-}^ {-1}C(R,\kappa)C_{\rm emb}\|u-u_{N}\|_{V}\\ &\leq\left(\eta_{1}+\eta^{\rm inc}+L_{\mathcal{F}}\|u-u_{N}\|_{V} \right)\|u-u_{N}\|_{V}.\end{split}\]
Now, if \(\|u-u_{N}\|_{V}\neq 0\), we finally arrive at
\[\begin{split} C_{-}^{2}\|u-u_{N}\|_{V}\\ \leq\eta_{1}+\eta^{\rm inc}+L_{\mathcal{F}}\|u-u_{N}\|_{V}+2\kappa ^{2}\left(L_{\mathcal{F}}\|u-u_{N}\|_{V}+\eta^{\rm inc}+\eta_{2}\right)C_{-}^ {-1}C(R,\kappa)C_{\rm emb}.\end{split} \tag{48}\]
Clearly this inequality is true also for \(\|u-u_{N}\|_{V}=0\) so that we can remove this interim assumption.
Thanks to Lemma 22 we have that
\[\left|\left((T_{\kappa}-T_{\kappa,N})u,v\right)_{S_{R}}\right|\leq c_{+}(N,u) \|u\|_{1/2,2,S_{R}}\|v\|_{1/2,2,S_{R}}\]
with \(\lim_{N\to\infty}c_{+}(N,u)=0\), hence
\[\eta_{1}\leq c_{+}(N,u)C_{\rm tr}^{2}\|u\|_{V}. \tag{49}\]
Consequently, \(\eta_{1}\) can be made arbitrarily small provided \(N\) is large enough. Analogously, \(\eta^{\rm inc}\) can be made arbitrarily small for sufficiently large \(N\).
For \(\eta_{2}\), we have the following estimate (see the proof of Theorem 24):
\[\eta_{2}\leq\frac{CC_{\mathrm{tr}}^{2}\kappa}{(1+N^{2})^{1/2}}\|u_{N}\|_{V}.\]
Using this estimate, the inequality (49), and the analogous estimate for \(\eta^{\mathrm{inc}}\) in (48), we obtain
\[C_{-}^{2}\|u-u_{N}\|_{V} \leq c_{+}(N,u)C_{\mathrm{tr}}^{2}\|u\|_{V}+c_{+}(N,u^{\mathrm{inc} })C_{\mathrm{tr}}^{2}\|u^{\mathrm{inc}}\|_{V}+L_{\mathcal{F}}\|u-u_{N}\|_{V}\] \[\quad+2\kappa^{2}C_{-}^{-1}C(R,\kappa)C_{\mathrm{emb}}\Big{(}L_{ \mathcal{F}}\|u-u_{N}\|_{V}+c_{+}(N,u^{\mathrm{inc}})C_{\mathrm{tr}}^{2}\|u^{ \mathrm{inc}}\|_{V}\] \[\quad+\frac{CC_{\mathrm{tr}}^{2}\kappa}{(1+N^{2})^{1/2}}\|u_{N}\| _{V}\Big{)}.\]
Now, if \(L_{\mathcal{F}}\) also satisfies
\[\tilde{\varrho}\big{(}1+2\kappa^{2}C_{-}^{-1}C(R,\kappa)C_{\mathrm{emb}}\big{)} L_{\mathcal{F}}\leq\frac{C_{-}^{2}}{4},\]
where we have used the (pessimistic) estimate
\[\|u-u_{N}\|_{V}\leq\|u\|_{V}+\|u_{N}\|_{V}\leq 2\tilde{\varrho}\]
with \(\tilde{\varrho}\) being the maximum value of the radii \(\varrho\) from the nonlinear existence and uniqueness theorems for \(u\) and \(u_{N}\), respectively (cf. Theorems 18, 26), we conclude
\[C_{-}^{2}\|u-u_{N}\|_{V} \leq 2c_{+}(N,u)C_{\mathrm{tr}}^{2}\|u\|_{V}+2c_{+}(N,u^{\mathrm{ inc}})C_{\mathrm{tr}}^{2}\|u^{\mathrm{inc}}\|_{V} \tag{50}\] \[\quad+4\kappa^{2}C_{-}^{-1}C(R,\kappa)C_{\mathrm{emb}}\Big{(}c_{ +}(N,u^{\mathrm{inc}})C_{\mathrm{tr}}^{2}\|u^{\mathrm{inc}}\|_{V}+\frac{CC_{ \mathrm{tr}}^{2}\kappa}{(1+N^{2})^{1/2}}\|u_{N}\|_{V}\Big{)}.\]
We have proved the following result.
**Theorem 27**.: _Let the assumptions of Theorem 26 with respect to \(R\), \(\kappa\), \(c\), and \(f\) be satisfied. Then, if the Lipschitz constant \(L_{\mathcal{F}}\) is sufficiently small, i.e., satisfies_
\[L_{\mathcal{F}}\leq\min\left\{\beta(R,\kappa),\beta_{N^{\star}}(R,\kappa), \frac{C_{-}^{2}}{4\tilde{\varrho}\big{(}1+2\kappa^{2}C_{-}^{-1}C(R,\kappa)C_ {\mathrm{emb}}\big{)}}\right\},\]
_there exists a constant \(c>0\) independent of \(N\geq N^{\star}\) (the structure of the constant can be seen from (50)) such that the following estimate holds:_
\[c\|u-u_{N}\|_{V}\leq c_{+}(N,u)\|u\|_{V}+c_{+}(N,u^{\mathrm{inc}})\|u^{ \mathrm{inc}}\|_{V}+\frac{1}{(1+N^{2})^{1/2}}\|u_{N}\|_{V}.\]
## 7 Conclusion
A mathematical model of radiation and propagation effects for compactly supported nonlinearities is presented, together with an investigation of the existence and uniqueness of its solution. The full-space problem is reduced to an equivalent truncated local problem, and in particular the dependence of the solution on the truncation parameter is studied with regard to the stability and error of the truncated solution. The results form the basis for the use of numerical methods, e.g., the FEM, for the approximate solution of the original problem with controllable accuracy.
2308.02598 | Sensitivity in Nanomechanical Pedestal MEMS Cantilever | Nanomechanical resonator-based sensing devices are used in medical
diagnostics based on their high-frequency dynamic behavior. Cantilevers fall
into the category of Nanomechanical resonators. It also resembles a resonator
whose shape is like that of a nanowire clamped at one end. As the
surface-to-volume ratio of a nanowire resonator increases due to scaling down,
surface stress plays a crucial role in the mechanical behavior of a resonator.
Piezoresistive MEMS cantilevers are used for vapor phase analysis of volatile
compounds and gas. Studies were done to address the mass sensitivity issues and
fractures associated with bioceramic and nanocomposite coatings-based
cantilever resonators. The studies show how the sensing performance can be
determined or tuned. Nanomechanical studies of thin films of SiCN on silicon
were performed. The sharpness of the tip was found to have an influence on the
tip-sample conduction mechanism useful for MEMS applications | Abhay K. Rajak, Ritambhara Dash, Ashwini Kumari, A. S. Bhattacharyya | 2023-08-04T04:01:08Z | http://arxiv.org/abs/2308.02598v1 | # Sensitivity in Nanomechanical Pedestal MEMS Cantilever
###### Abstract
Nanomechanical resonator-based sensing devices are used in medical diagnostics based on their high-frequency dynamic behavior. Cantilevers fall into the category of Nanomechanical resonators. It also resembles a resonator whose shape is like that of a nanowire clamped at one end. As the surface-to-volume ratio of a nanowire resonator increases due to scaling down, surface stress plays a crucial role in the mechanical behavior of a resonator. Piezoresistive MEMS cantilevers are used for vapor phase analysis of volatile compounds and gas. Studies were done to address the mass sensitivity issues and fractures associated with bioceramic and nanocomposite coatings-based cantilever resonators. The studies show how the sensing performance can be determined or tuned. Nanomechanical studies of thin films of SiCN on silicon were performed. The sharpness of the tip was found to have an influence on the tip-sample conduction mechanism useful for MEMS applications
**Keywords:** Nanomechanical resonators, piezoresistive, MEMS, cantilevers, SiCN
## 1 Introduction
Due to several benefits, such as their simple geometry (e.g., a cantilever beam structure), the ability to use batch fabrication, their suitability for extreme miniaturization, even at the nanoscale, and their high mass sensitivity, MEMS-based resonant sensors have been widely used as biological, physical, and chemical sensors [1, 2]. As the sensor's resonant frequency \(\omega_{0}\) is inversely proportional to the square root of its total mass (\(\omega_{0}\sim 1/\sqrt{m}\)), observation of the resonant frequency shift between the system with and without the target mass yields the target entity's mass. Utilizing this feature of the MEMS-based resonator, measurements of mass, stiffness, viscosity, and other physical properties have been made.
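To make this mass-sensing relation concrete, the following minimal Python sketch (all numerical values are hypothetical placeholders, not data from this work) estimates the captured mass from a measured frequency shift using the first-order expansion of \(\omega_{0}\sim 1/\sqrt{m}\):

```python
import numpy as np

# Hypothetical resonator parameters (illustrative only)
m0 = 1.0e-12   # effective resonator mass [kg]
f0 = 1.0e6     # unloaded resonant frequency [Hz]

def added_mass(f_loaded):
    """First-order mass estimate from the resonance shift.

    From f ~ 1/sqrt(m): delta_m ~ -2 * m0 * delta_f / f0,
    valid for small relative shifts |delta_f| / f0 << 1.
    """
    return -2.0 * m0 * (f_loaded - f0) / f0

# A 50 Hz downward shift corresponds to ~1e-16 kg of captured mass
print(added_mass(f0 - 50.0))
```

For small shifts the sensitivity scales as \(\partial f/\partial m=-f_{0}/(2m_{0})\), which is why scaling the resonator mass down improves the mass resolution.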
Research into cell biology, tissue engineering, cancer, and diseases may benefit greatly from the characterization of the physical characteristics of living cells, such as their mass and stiffness [3]. The ability to measure the physical characteristics of cells provides the chance to solve unanswered issues about the development of biological systems. For instance, the mechanisms underpinning cell cycle
progression can be clarified by examining the direct relationship between cell growth rate and cell mass for individual adherent human cells [4, 5]. Among the numerous techniques available to measure these parameters, the MEMS-based resonator is one of the most widely used [2]. Thin-film memristors based on hBN and SiC show good binary resistive memory switching, which is beneficial for memory devices. Silicon-based piezoelectric nano/microelectromechanical systems (N/MEMS) are being integrated with memristors. The resistive switching properties at the nanoscale can be studied with the help of nanoindentation, as reported for amorphous perovskite (a-SrTiO\({}_{3}\)) memristors [6].
In this communication, Si-based MEMS cantilevers were fabricated using lithographic techniques, and the issue of mass sensitivity associated with cantilever MEMS was addressed; a finite element analysis of a novel design is provided. As silicon-based MEMS sensors are limited in their ability to operate at high temperatures, high-temperature-resistant multicomponent hard coatings were deposited. The nanomechanical characterization of these coatings using nanoindentation techniques gave insights into their mechanical strength and toughness.
## 2 Experimental Procedures
The fabrication process of Si-based Cantilever MEMS was done in a clean room with proper aprons, head covers, face masks, and foot covers following the safety protocols. RCA cleaning of a silicon wafer took place in _Wet Etch Bay_. RCA 1 cleaning is used for the organic contaminants whereas RCA 2 is done for the metallic contaminants. The RCA 1 cleaning is equivalent to piranha (H\({}_{2}\)SO\({}_{4}\) and H\({}_{2}\)O\({}_{2}\)) cleaning for removing organic contaminants. The samples were also rinsed in DI water and blow-dried with nitrogen gas.
Silicon Nitride (Si\({}_{3}\)N\({}_{4}\)) was deposited on the silicon substrate by Low-Pressure Chemical Vapour Deposition (LPCVD) in the Diffusion Bay. The equipment had three chambers. _Loading_: the samples were loaded in a quartz boat and heated to 500 \({}^{\mathrm{o}}\)C; nitrogen gas was passed and the temperature was further raised to 850 \({}^{\mathrm{o}}\)C. In the _process_ chamber, 120 sccm of dichlorosilane (DCS) and ammonia (NH\({}_{3}\)) at 300 mTorr were supplied to deposit silicon nitride. After the deposition, the sample was transferred to the _unloading_ chamber and the temperature was lowered again to 500 \({}^{\mathrm{o}}\)C under flowing nitrogen gas. The Si\({}_{3}\)N\({}_{4}\) thin film formed in this way is called _low-stress_ silicon nitride: it is Si-rich and mainly used for cantilevers where stiction needs to be avoided. An ammonia-rich, high-stress silicon nitride can also be prepared by taking 10 sccm of DCS and 70 sccm of ammonia. The LPCVD can operate between 200 and 500 mTorr and the temperature can go up to 950 \({}^{\mathrm{o}}\)C. There are four tubes in the LPCVD system: in Tube 1, a reaction of dichlorosilane (SiCl\({}_{2}\)H\({}_{2}\)) with
ammonia takes place to form silicon nitride, as discussed above. In Tube 2, thin films of Si, Ge, SiGe, a-Si, and Si nanowires can be deposited; some metal contamination is allowed in this tube. Tube 3 deposits the same materials as Tube 2 but allows no metal contamination. Low-temperature oxide deposition of average 100 nm thickness, using silane and oxygen as precursors, takes place in Tube 4. Oxidation and diffusion furnaces were also present, which can perform dry oxidation (5 nm - 150 nm) and pyrogenic oxidation (250 nm - 1 \(\upmu\)m). Phosphorus and boron diffusion in Si for doping purposes can also be done.
The thickness of the Si\({}_{3}\)N\({}_{4}\) thin film was estimated by ellipsometry to be around 231 nm; the thickness measurement is based on the principle of polarization. The film was also found to be absorbing, with a refractive index of 2. Lithography was then performed as per the following procedure: substrate cleaning (in acetone and IPA) followed by a dehydration bake; a positive photoresist was then spin-coated, followed by a soft bake; UV exposure was done using a mask designed for cantilevers. The etching can be isotropic or anisotropic. Anisotropic etching corresponds to different etch rates at different crystallographic planes. Chemical wet etching is usually isotropic, whereas dry etching is usually anisotropic. Si\({}_{3}\)N\({}_{4}\) films were preferred over SiO\({}_{2}\) films because, unlike Si\({}_{3}\)N\({}_{4}\), SiO\({}_{2}\) is not a good mask for longer etch durations.
The mask used in lithography is made up of soda-lime glass, chrome, and photoresist. When labeling the masks, more than one label should be used; otherwise any rotation may go undetected. Dry etching was performed using SF\({}_{6}\), which is an isotropic etch since both ion bombardment and chemical reaction take place. The resist was then removed by O\({}_{2}\) plasma - a process called PR ashing. Dry etching comprises plasma etching (isotropic), sputtering, reactive ion etching (RIE), and deep reactive ion etching (DRIE); RIE and DRIE are anisotropic etching processes. DRIE is done with C\({}_{4}\)F\({}_{8}\) (scallop process) and can etch up to 400 - 500 \(\upmu\)m. E-beam lithography, which uses a PMMA photoresist, was also demonstrated. The apparatus includes a laser interferometer. The aperture size varies between 7.5 \(\upmu\)m and 120 \(\upmu\)m. It uses an accelerating voltage of 30 kV, producing high-energy electrons which undergo less scattering. Au marks are used for identification. Microscopic inspection confirmed the _release_ of the cantilevers. An image of the fabricated cantilever is shown in Fig 1.
Silicon is not able to withstand high temperatures and is therefore coated with hard nanocomposite coatings like SiCN, which itself has potential applications in MEMS. The parameters of SiCN useful for piezoresistive sensing applications are given in **Table 1**. The deposition was carried out by rf magnetron sputtering of a sintered SiC target in an evacuated chamber with a combination of Ar/N\({}_{2}\) gas; the details, along with extensive structural and mechanical characterization, have been published previously [7-11]. Nanoindentation of the SiCN films was performed with an MTS (USA) nanoindenter equipped with a 3-sided pyramidal Berkovich tip, operated in continuous stiffness mode (CSM) [14]. The mechanism of nanoindentation is described in detail in refs. [12, 13].
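For context, hardness and modulus in such Berkovich CSM tests are commonly extracted with the Oliver-Pharr analysis. The Python sketch below illustrates the procedure for a single hypothetical unloading point; the input values are illustrative assumptions chosen only to be of the order of the results reported in the next section, not measured data:

```python
import numpy as np

# Hypothetical single unloading point (illustrative values, not from this work)
P_max = 10.8e-3     # peak load [N]
h_max = 250e-9      # depth at peak load [m]
S     = 1.62e5      # unloading (harmonic) contact stiffness dP/dh [N/m]

eps, beta = 0.75, 1.034   # Berkovich geometry factors (Oliver-Pharr)

# Contact depth and ideal Berkovich area function A = 24.5 * h_c^2
h_c = h_max - eps * P_max / S
A   = 24.5 * h_c**2

H   = P_max / A                                        # hardness [Pa]
E_r = (np.sqrt(np.pi) / (2.0 * beta)) * S / np.sqrt(A) # reduced modulus [Pa]

print(f"H = {H/1e9:.1f} GPa, E_r = {E_r/1e9:.0f} GPa")  # ~11 GPa, ~140 GPa
```

Note that \(E_{r}\) is the reduced modulus combining the film and the diamond tip; CSM simply repeats this analysis continuously along the loading curve.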
\begin{table}
\begin{tabular}{|l|l|} \hline Parameter & Value \\ \hline Bandgap & 2.3–3.0 eV \\ \hline Breakdown voltage & 29 V at RT (leakage current density \(1.2\times 10^{-4}\) A/cm\({}^{2}\)) \\ & 5 V at 200 \({}^{\mathrm{o}}\)C (leakage current density \(1.47\times 10^{-4}\) A/cm\({}^{2}\)) \\ \hline Modulus & 240 GPa \\ \hline Chemical inertness & Excellent \\ \hline MEMS compatibility & Excellent \\ \hline \end{tabular}
\end{table}
Table 1: Parameters of SiCN useful for piezoresistive applications
Figure 1: Fabricated Si-based Cantilever structure (at CENSE, IISc Bangalore)
## 3 Results & Discussions
### MEMS cantilever structure analysis
Cantilever-based MEMS resonator sensors suffer from the problem of non-uniform mass sensitivity. Since the vibration amplitude is highest at the free end of the cantilever, the mass sensitivity is highest there and reduces as one moves towards the fixed end. The resonant frequency of operation is inversely proportional to the mass, and impulse plays a very significant role in these devices. The schema of spatially non-uniform mass sensitivity, varying by more than 100% from the free end of the cantilever to its middle, can be found in Fig 2(a). A novel design with pedestal geometry has been proposed which can solve this problem [4, 15]. The resonator consists of a square pedestal suspended by four beam springs, as shown in Fig 2(b). The pedestal oscillates in the vertical direction while vibrating, and the ends of the four beam springs are fixed to the substrate. Fig 2(b) also shows that the spatial non-uniformity of the mass sensitivity of the pedestal design is less than 4% from the center to the edge of the platform. The difference arises because the sensitivity varies radially rather than linearly as in the previous case. The color distribution indicates the sensitivity, with red being the most sensitive and blue the least sensitive. Fig 2(c) shows the arrangement of the pedestal cantilevers as an array for use as a sensor [4].
Finite element analysis was performed (ANSYS 2022 R12, ANSYS Inc.) on this novel design using the geometry and mesh distribution, and the directional deformation, total deformation, and minimum principal strain values were determined (Fig 3). The mesh distribution is given in Fig 3(a). The directional deformation in Fig 3(b) and
Figure 2: Schema of (a) spatially non-uniform mass sensitivity of greater than 100% from the free end of the cantilever to the middle of the cantilever.(b)The Pedestal design with spatial non-uniformity of mass sensitivity to be less than 4% from the center to the edge of the platform. The color distribution indicates the sensitivity with red being the most sensitive and blue being the least sensitive (c) Pedestal sensor array[4]
the total deformation in Fig 3(c) are responses of the pedestal structure to the stresses imposed on it by the capture of biological molecules; the spatial non-uniformity is linear in the first case and diagonal in the second. The pedestal structure is, however, arranged in an array for sensing, as shown above, which nullifies this effect since one pedestal cantilever compensates for the other. The minimum principal strain was found to be homogeneous as well as of higher magnitude (red region throughout the surface), indicating the superiority of these structures compared to conventional cantilevers with one fixed end.
### Nanomechanical studies
A hardness of 11 GPa and a modulus of 140 GPa indicated the good mechanical properties of the deposited films (Fig 4 b). Hysteresis was observed during unloading due to pressure-induced phase transformation occurring in Si-C-N/Si films, which can be related to switching properties in memristors (Fig 4 a). As the films were about 50 nm thick, the influence of the Si substrate was prominent; thicker coatings usually do not show this hysteresis effect. An increase in load caused an increase in conduction due to increased defect densities and diffusion of oxygen vacancies. The formation of conductive channels takes place by extension of defect structures. The hysteresis effect has also been found in 2D materials and transparent conductive multilayer films due to imperfect elastic behavior. Evidence of thermally-induced
Figure 3: a) Geometry of pedestal novel design with mesh distribution b) Directional deformation c) Total deformation d) Minimum principal strain
transformations has been observed through partial indent recoveries at the nanoscale. Nanoindentation has also been used for magnetoelectric memory devices [16-19].
Recent studies have also shown the influence of an electric field on nanoindentation, with the maximum indentation depth and final penetration depth increasing with increasing electric field. The influence is due to competition between the mechanical load and the electric field in the domain switching process [20]. Fracture studies of materials used in Li batteries have also been done through nanoindentation, and recent studies have shown an effect of charge states on the
Figure 4: Nanoindentation a) Load-depth (P-h) and Time on the sample b) Hardness and Elastic Modulus plots c) impulse w.r.t load and time on sample
yield and elastic modulus obtained through stress-strain plots based on nanoindentation using the power-law hardening model for Li\({}_{x}\)Sn alloys [21].
Nanoindentation performed with high precision can be correlated to surface electrical properties. Current can pass through the contact region between the conducting indenter and the surface. The resistance offered to this current varies with the applied forces, which cause plastic deformation. Plastic deformation is a major contributor to resistivity, as is well established, since dislocations can act as scattering centers for the electrons.
When two conductors come into contact, the resistance R of the contact region is made up of two parts: \(R=R_{c}+R_{f}\), where \(R_{c}\) is the constriction resistance that depends on the material's bulk properties and \(R_{f}\) is the contact resistance brought on by the characteristics of surface layers. The contact region's constriction resistance is given by **eq 1[22, 23].**
\(R_{c}=\frac{\rho_{1}+\rho_{2}}{2}\sqrt{\frac{\pi}{A}}\) (1)
where \(\rho_{1}\) is the resistivity of the indenter material, \(\rho_{2}\) is the resistivity of the sample material, and A is the contact area. If V is the voltage drop and I is the current in the contact region, then for an indentation depth \(h\), we can have the following relation (**eq 2**) [23].**
\(h\frac{\nu}{I}=\sqrt{\frac{\pi}{24.5}}\left(\frac{\rho_{1}+\rho_{2}}{2}\right)\) (2)
The resistivity of diamond, the indenter material, is 0.1 \(\Omega\) m. SiCN used for MEMS has been reported to have a room-temperature resistivity of 5.5 \(\Omega\) m, which leads to the expression h (V/I) = 1.89. V/I is nothing but the resistance, which is thus found to vary with indentation depth.
If the shape of the contact region is considered as a circle of radius \(a\), the constriction resistance takes the form

\(R_{c}=\frac{\rho_{1}+\rho_{2}}{2a}\)

where \(\rho_{1}\) is the resistivity of the tip and \(\rho_{2}\) that of the sample. Expressing the contact radius through the hardness \(H\) and the applied force \(F\), this becomes [24]

\[R_{c}=\frac{\rho_{1}+\rho_{2}}{2}\sqrt{\frac{\pi H}{F}}\]

For a Berkovich indenter, the contact region is no longer circular; taking \(24.5\,h^{2}\) as the contact area modifies the resistance equation to **eq 3**.
\[R_{c}=\frac{\rho_{1}+\ \rho_{2}}{2h}\sqrt{\frac{\pi}{24.5}}=0.\ 18\,\frac{\rho_{1}+\ \rho_{2}}{h}.\ \ \ \ (3)\]
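A minimal numerical sketch of eq 3 is given below (Python), using the resistivity values quoted above for the diamond tip and SiCN. It assumes the ideal Berkovich area function \(24.5\,h^{2}\) and neglects the surface-layer term \(R_{f}\) and any plastic-deformation contribution, so the numbers are indicative only:

```python
import numpy as np

rho_tip  = 0.1   # diamond indenter resistivity [ohm m] (quoted above)
rho_SiCN = 5.5   # SiCN room-temperature resistivity [ohm m] (quoted above)

def constriction_resistance(h, rho1=rho_tip, rho2=rho_SiCN):
    """Eq 3: R_c = (rho1 + rho2)/(2h) * sqrt(pi/24.5) for a Berkovich contact."""
    return 0.5 * (rho1 + rho2) * np.sqrt(np.pi / 24.5) / h

for h in (6e-9, 20e-9, 100e-9):   # indentation depths [m]
    print(f"h = {h*1e9:5.0f} nm -> R_c = {constriction_resistance(h)/1e6:7.1f} Mohm")
```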
The fractured or heavily plastically deformed regions should also influence the electrical properties, as they indicate high strain fields surrounding the indentations, which interfere with the motion of free electrons. The force during nanoindentation and the current passing through the contact region vary with time [24]. A higher value of impulse causes a lower current value: a higher rate of force application leads to increased dislocation jamming and hence stronger scattering of the conducting electrons. The impulse obtained from load and time-on-sample is given in Fig 4(c). The importance of impulse in nanoindentation has been reported [25].
Piezoresistive MEMS used in pacemakers are based on the impulse received from the heart's vibration for energy production, giving the device a longer lifetime. A silicon-based MEMS cantilever is used with CMOS-compatible AlN as the piezoelectric layer, which produces energy from shock-induced vibration. SiCN, having piezoelectric properties, can replace AlN [26]. The mechanical response of N/MEMS under impulse loading is a major criterion for device fabrication [27]. As discussed in section 3.1, the mass sensitivity of conventional cantilever resonators is spatially non-uniform and the resonant frequency is inversely proportional to the mass, so impulse plays a very significant role in these devices. Nanoindentation and stress analysis on cantilever nanobeams have been performed and reported [28].
High-precision indentation at the nanoscale is a means of studying surface features, especially for materials used in MEMS applications. The capacitively controlled load-displacement measurement in a nanoindenter brings into consideration the conduction phenomenon occurring at the contact point between the probe and the surface. Zero-point tip defects also influence the electrical properties, since the strain differs between the contact of a sharp, factory-default 3-sided pyramidal Berkovich tip and an effective Hertzian contact arising from tip blunting [29]. This strain is known to affect the
resistivity as per Matthiessen's rule, which identifies impurity scattering as one of the deciding factors and correlates with the strain development at the tip-surface contact. The strain field gradients were determined to study the structural inhomogeneities and microstructure. The strain gradients have been found to be crucial for the proper analysis of the values obtained from nanoindentation testing [29, 30, 31, 32, 33]. The elastic-plastic strain gradients are given as **eq 4 (a, b, c, d)**, where _P'_ represents the total strain gradient, _S'_ the elastic total strain gradient, _E'_ the elastic normal strain gradient, and _H'_ the plastic total strain gradient.
\[P^{\prime}=\frac{h}{P}\frac{dP}{dh} \tag{4a}\] \[S^{\prime}=\frac{h}{S}\frac{dS}{dh}\] (4b) \[H^{\prime}=\frac{h}{H}\frac{dH}{dh}\] (4c) \[E^{\prime}=\frac{h}{E}\frac{dE}{dh} \tag{4d}\]
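Numerically, each gradient in eq 4 is a logarithmic derivative, \((h/X)\,\mathrm{d}X/\mathrm{d}h=\mathrm{d}\ln X/\mathrm{d}\ln h\), and can be evaluated directly on discrete CSM data. A minimal Python sketch on synthetic power-law data (illustrative, not measured) is:

```python
import numpy as np

# Synthetic CSM-style data (illustrative): power-law load-depth curve P ~ h^m
h = np.linspace(10e-9, 200e-9, 200)   # depth [m]
P = 2.0e4 * h**1.8                    # load [N], hypothetical exponent m = 1.8

def strain_gradient(X, h):
    """Eq 4: (h/X) dX/dh, computed as d(ln X)/d(ln h) on discrete data."""
    return np.gradient(np.log(X), np.log(h))

print(strain_gradient(P, h)[:3])   # ~1.8 everywhere for an ideal power law
```

For an ideal power law the gradient is a constant equal to the exponent; deviations from a smooth, constant gradient at shallow depth are precisely the inhomogeneities discussed below.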
We determined the strain field gradients for a nanoindentation performed on SiCN/Si substrates for the load (P), harmonic stiffness (S), hardness (H), and modulus (E), as given in Fig 5 (a, b, c, d) respectively. The films showed a maximum hardness of 20 GPa and a modulus of 220 GPa under optimized conditions [7, 8]. The plots show linear and smooth characteristics of the strain gradients, indicating the homogeneity of the microstructure. However, looking at the S' strain gradient at shallow depths, some inhomogeneities can be observed, which have been magnified and shown in Fig 5e. This deviation in the elastic strain gradient occurs because at shallow depth a transition takes place from the elastic Hertzian spherical contact of the blunt tip to its conical sides. A clear demarcation is visible in the figure when the tip starts to penetrate the substrate. The increased sharpness leads to higher plastic deformation, strain hardening the material in a localized zone surrounding the indentation region as dislocations pile up.
An external bias is applied between the indenter tip and the sample surface to allow conduction to occur, as shown in Fig 5f together with the current direction. The indentation and any related processes affect the constriction resistance (R\({}_{\text{C}}\)) and the contact resistance R\({}_{\text{f}}\) [34, 35]. The red dotted lines represent current flows that constrict at the contact and are modified by indentation according to Matthiessen's rule, which accounts for the contribution of plastic deformation to conduction. In addition to R\({}_{\text{c}}\), the resistance component affected by the mechanical stimulus provided by the indenter is the resistance at the coating/substrate interface R\({}_{\text{intf}}\), which may not be active at shallow indentation depths (less than 10% of the coating thickness). When the Berkovich indenter is employed as a conducting tip, or in a MEMS nanoindenter with an integrated AFM cantilever gripper for nanomechanical characterizations such as transducers for biosensing, the tip voltage will be substantially greater [36].
Figure 5: Strain Gradients corresponding to a) load-depth, b) Stiffness, c) hardness, and d) modulus. (e) Elastic Strain gradient at shallow depths (f) Electrical conduction between indenter tip and sample with different resistances [34]
From the substrate effect, it can be inferred that the coating thickness was about 200 nm, as the substrate effect starts after 20 nm, i.e., 10% penetration (Fig 5 c, d). For a depth of about 6 nm, the resistance is 0.315 M\(\Omega\). According to the expression, R decreases inversely with h. However, other factors such as plastic deformation and the effect of the substrate need to be considered before coming to any concrete statement; the strain gradient can be taken into consideration for this. For a harmonic stiffness change of 7500 N/m, the strain gradient shows a variation of 0.2 units at initial contact. The sharpness of the tip therefore has a role to play in the tip-sample conduction mechanism. A Berkovich tip will have less strain gradient fluctuation at shallow depth and will provide a path of enhanced conduction.
### MEMS (\(\mu\))-cantilevers
A MEMS nanoindenter with an integrated AFM cantilever gripper has been made for nanomechanical characterizations [37]. These micro (\(\mu\))-cantilevers act as transducers for biosensing. The cantilever response to biomolecule sensing depends on its mechanical properties, determined by the spring constant and resonance frequency, which are in turn functions of the cantilever material and geometry. The change in stress occurring at the cantilever surface, resulting in bending, is expressed as **eq 5** [38].
\[\Delta\sigma=\frac{Et^{2}}{3(1-\nu)L^{2}}\Delta z\quad\quad(5)\]
where \(E\) is Young's modulus and \(\nu\) is the Poisson's ratio; \(L\) and \(t\) are the cantilever length and thickness, respectively. Using the directional and total deformation values from Fig 3(b, c) and the dimensions (Fig 6a) of the pedestal cantilever, the surface stress values for directional as well as total deformation were determined (**Table 2**). Both compressive and tensile stress may act on the structure, creating changes in the curvature as shown in Fig 6(b). The silicon chip was 1 mm thick, with standard values of E = 140 GPa and \(\nu\) = 0.25. The deformations are given over 10 zones, each of length 1.4 mm (\(=L\)), as the total dimension of one side of the structure is 14 mm. If the system changes to SiCN/Si with E = 220 GPa and \(\nu\) = 0.25 and we want to find the stress on the SiCN/Si coating, the thickness of the system should be kept constant, since the deformation acts on the SiCN/Si system; the values then simply increase by a factor of 1.57 (220/140).
\begin{table}
\begin{tabular}{|l|c|l|l|l|l|} \hline
**S. No** & **\(\Delta z\) (dir)** & **\(\Delta\sigma\) (dir) - Si** & **\(\Delta z\) (T)** & **\(\Delta\sigma\) (T) - Si** & **\(\Delta\sigma\) (T)** \\ & **(\(\mu\)m)** & **(MPa m)** & **(\(\mu\)m)** & **(MPa m)** & **SiCN/Si (MPa m)** \\ \hline
1 & 80.067 & 2.54 & 1.66 & 0.052 & 0.08164 \\ \hline
2 & 60.675 & 1.92 & 1.47 & 0.046 & 0.07222 \\ \hline
3 & 41.284 & 1.31 & 1.29 & 0.040 & 0.0628 \\ \hline
4 & 21.892 & 0.69 & 1.10 & 0.034 & 0.05338 \\ \hline
5 & 2.5005 & 0.08 & 0.92 & 0.028 & 0.04396 \\ \hline
6 & -16.891 & -0.54 & 0.74 & 0.023 & 0.03611 \\ \hline
7 & -36.283 & -1.15 & 0.55 & 0.017 & 0.02669 \\ \hline
8 & -55.674 & -1.76 & 0.37 & 0.012 & 0.01884 \\ \hline
9 & -75.006 & -2.38 & 0.18 & 0.006 & 0.00942 \\ \hline
10 & -94.452 & -2.99 & - & - & - \\ \hline \end{tabular}
\end{table}
Table 2: The deformations in the pedestal cantilever structure and corresponding surface stress. (The positive values denote compressive stress and the negative values denote tensile stress)
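As a cross-check of eq 5 against Table 2, the short Python sketch below recomputes the surface stress of zone 1 from its directional deformation, using the values stated above (E = 140 GPa, \(\nu\) = 0.25, t = 1 mm, L = 1.4 mm):

```python
# Surface stress from cantilever deflection, eq 5:
# delta_sigma = E * t**2 / (3 * (1 - nu) * L**2) * delta_z
E, nu = 140e9, 0.25    # Si Young's modulus [Pa] and Poisson's ratio
t, L  = 1e-3, 1.4e-3   # chip thickness and zone length [m]

def surface_stress(delta_z):
    """Surface stress change [Pa m] for a deflection delta_z [m]."""
    return E * t**2 / (3 * (1 - nu) * L**2) * delta_z

# Zone 1 directional deformation from Table 2: 80.067 um -> ~2.54 MPa m
print(surface_stress(80.067e-6) / 1e6)
# SiCN/Si values scale by 220/140 ~ 1.57, as noted above
```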
Figure 6: (a) The dimensions of the proposed pedestal cantilever structure and (b) vibrations associated with tensile and compressive stress
For a coating-substrate system, a modified form of the Stoney equation can be used, as given in **eq 6** [39, 40]:
\[\sigma t=p\,\ln\left(\frac{kEn^{2}}{\delta p(1-\nu)}+1\right)\quad\text{where}\quad p=\frac{h^{3}}{b^{2}}\,E\left(\frac{13.5}{1-\nu^{2}}-10.3\right)\tag{6}\]
## Declarations
### Compliance with Ethical Standards
The manuscript has not been submitted in parallel either in full or partially to any other journal.
### Conflict of interest
There is no conflict of interest among the authors
### Research Data Policy and Data Availability Statements
Data shall be provided on request
### Author Contribution
All the authors have contributed equally to the paper
### Funding
No funding was received for conducting the research
|
2307.05472 | An effective density matrix approach for intersubband plasmons coupled
to a cavity field: electrical extraction/injection of intersubband polaritons | The main technological obstacle hampering the dissemination of modern
optoelectronic devices operating with large light-matter coupling strength
${\Omega}$ is an in-depth comprehension of the carrier current extraction and
injection from and into strongly coupled light-matter states, the so-called
polaritonic states. The main challenge lies in modeling the interaction between
excitations of different nature, namely bosonic excitations (the plasmonic ISB
excitations) with fermionic excitations (the electrons within the extraction or
injection subband). In this work, we introduce a comprehensive quantum
framework that encompasses both the ISB plasmonic mode and the
extractor/injector mode, with a specific emphasis on accurately describing the
coherent nature of transport. This reveals inherent selection rules dictating
the interaction between the ISB plasmon and the extraction/injection subband.
To incorporate the dynamics of the system, this framework is combined to a
density matrix model and a quantum master equation which have the key property
to distinguish intra and intersubband mechanisms. These theoretical
developments are confronted to experimental photocurrent measurements from
midinfrared quantum cascade detectors (${\lambda}$ = 10 ${\mu}$m) embedded in
metal-semiconductor-metal microcavities, operating at the onset of the strong
light-matter coupling regime (2${\Omega}$ = 9.3 meV). We are able to reproduce
quantitatively the different features of the photocurrent spectra, notably the
relative amplitude evolution of the polaritonic peaks with respect to the
voltage bias applied to the structure. These results on extraction allow us to
elucidate the possibility to effectively inject electronic excitations into ISB
plasmonic states, and thus polaritonic states. | M. Lagrée, M. Jeannin, G. Quinchard, S. Pes, A. Evirgen, A. Delga, V. Trinité, R. Colombelli | 2023-07-10T14:46:36Z | http://arxiv.org/abs/2307.05472v1 | An effective density matrix approach for intersubband plasmons coupled to a cavity field: electrical extraction/injection of intersubband polaritons
###### Abstract
The main technological obstacle hampering the dissemination of modern optoelectronic devices operating with large light-matter coupling strength \(\Omega\) is an in-depth comprehension of the carrier current extraction and injection from and into strongly coupled light-matter states, the so-called polaritonic states. The main challenge lies in modeling the interaction between excitations of different nature, namely bosonic excitations (the plasmonic ISB excitations) with fermionic excitations (the electrons within the extraction or injection subband). In this work, we introduce a comprehensive quantum framework that encompasses both the ISB plasmonic mode and the extractor/injector mode, with a specific emphasis on accurately describing the coherent nature of transport. This reveals inherent selection rules dictating the interaction between the ISB plasmon and the extraction/injection subband. To incorporate the dynamics of the system, this framework is combined to a density matrix model and a quantum master equation which have the key property to distinguish intra and intersubband mechanisms. These theoretical developments are confronted to experimental photocurrent measurements from midinfrared quantum cascade detectors (\(\lambda=10\) um) embedded in metal-semiconductor-metal microcavities, operating at the onset of the strong light-matter coupling regime (\(2\Omega=9.3\) meV). We are able to reproduce quantitatively the different features of the photocurrent spectra, notably the relative amplitude evolution of the polaritonic peaks with respect to the voltage bias applied to the structure. These results on extraction allow us to elucidate the possibility to effectively inject electronic excitations into ISB plasmonic states, and thus polaritonic states.
## I Introduction
The use of electromagnetic resonators like antennas or cavities is an established tool to tailor and improve the properties of optoelectronic devices, whether by increasing the sensitivity, reducing the electronic noise, improving the wall-plug efficiency. In general, the strategy is to engineer, and typically increase, the interaction strength between light and an electronic transition in matter. However, the interaction strength in practical devices is always limited to a small fraction of the photon or electronic transition lifetimes, which places the device in the so-called _weak coupling_ regime. On the contrary, when the light-matter interaction strength overcomes the losses in the system, the latter enters the _strong coupling_ regime. The new constituents of this system are mixed light-matter states called _polaritons_, which can be formed by hybridizing any polarization-carrying matter excitation and a photon field.
Polariton physics thus emerged as a transverse research field studying the fundamental properties of strongly coupled systems. It revealed a plethora of phenomena, the most recognized being the out-of-equilibrium Bose-Einstein condensation of exciton-polaritons [1; 2; 3]. How
ever, most experiments on polaritons are performed by optical means, whereas practical devices require electrical injection or extraction of charge carriers. Recent experiments sparked new interest in electrical transport in systems under strong light-matter coupling conditions, with the report of increased conductivity in organic molecules [4], or the breakdown of topological protection in quantum Hall systems [5; 6]. Intense research effort is thus currently devoted to provide an accurate description of transport in systems strongly coupled to a cavity field.
In this context, intersubband (ISB) polaritons, that originate from the coupling between an intersubband transition in doped semiconductor quantum wells (QW) and a cavity mode, are of particular interest. They were first reported in 2003 [7] with absorption experiments, and that same year electronic detection of the signature of strong coupling was also reported [8]. However, proposals for electrical injection and electroluminescence of ISB polariton devices [9; 10], that were quickly followed by experimental work [11; 12], faced the problem of inefficient electrical injection in a polaritonic state. That issue proved insurmountable in the following years [12; 13; 14; 15]. To circumvent the problem, the study of the "reverse" process (photo-detection) was proposed to elucidate transport mechanisms in polaritonic ISB electronic devices, with experiments on quantum well infrared photodetectors (QWIP) operating in the strong light-matter coupling regime [16].
In this context, we have recently presented a semi-empirical model to describe the electronic photoresponse of quantum cascade detectors (QCD) operating in the strong light-matter coupling regime [17]. Based solely on classical oscillators, it allowed us to shine new light on the polariton-to-electron process, and in particular to conjecture that a direct polariton-to-electron tunnel mechanism may play a major role in such devices. This result was obtained at the expense of great simplifications. In particular, because the model is based on classical theory, it cannot include any consideration on the _coherence_ of the involved processes.
Nevertheless, coherence is of paramount importance when dealing with systems operating in the strong-coupling regime, and even more so for ISB polaritons, that originate from the coupling between a cavity mode and a collective excitation. ISB transitions, that are more rigorously defined as ISB plasmons[18; 19; 20], are collective matter excitations originating from the electronic plasma inside a semiconductor quantum well, subject to its own Coulomb interaction. This is in stark contrast to, for instance, exciton-polaritons that result from an ensemble of single-particle transitions. The main consequence is the presence of _dark states_, that do not couple to the electromagnetic field, but do participate in electronic transport. This has important consequences on the behavior of ISB polariton systems under electrical injection.
In this paper, we propose a quantum description of QCDs based on a density matrix formalism, that we compare to a complete set of experimental data. Crucially, this approach allows us to describe (de)coherence and dissipation in the system. Our goal is to develop a theoretical description that permits to explain the _electronic extraction_ process (photo-detection), and that - at the same time - provides a more suitable vantage point to elucidate the more complex _electronic injection_ process leading to light emission. We note that a very recent work reports experimental results and proposes an alternative transport model for similar QCD structures operating in the strong coupling regime [21]. It works explicitly with the Fermionic approach, without performing the bozonisation steps. While similar conclusions are drawn in the photo-detection case, the work we present raises fundamental open questions and presents ways forward to the case of electrically pumped polaritonic light emitters.
In the first part, we develop the model and derive the main observable quantities, notably the photocurrent generated by an exciting external photon field.
In the second part, we validate the theoretical results by studying the photoresponse of quantum cascade detectors operating in the strong coupling regime as a function of the applied bias. We compare the values obtained in our model with an in-house code based on Ref. [22] that models the electronic transport in a more rigorous way, but does not incorporate the cavity effects [23; 24].
In the last part, we discuss the implications of the main assumption at the basis of our new model, and extend them to the case of electrical injection.
The system under study is sketched in the central part of Fig. 1. It consists of two electronic subbands confined inside a QW, here represented in momentum space. The second subband is tunnel-coupled to the fundamental state of an adjacent QW, and the whole system is embedded inside a cavity. The system can operate as a detector, acting as a QCD (top sketch), when it is excited by a photon that generates a photocurrent. This path is represented by blue arrows. It is also possible to inject electrons in the system (red arrows and bottom sketch), when an electric bias is applied, that can eventually lead to photon emission. In this case the device behaves as a polaritonic LED.
## II An effective density matrix approach for electronic transport in cavity-coupled QCDs
### Bosonization of the active optical transition
We start by defining the annihilation and creation operators \(c_{\lambda\mathbf{k}}\) and \(c_{\lambda\mathbf{k}}^{\dagger}\), the fermionic operators related to the creation and annihilation of electrons in subbands \(\lambda=\{0,1,2\}\) (see Fig. 1). We impose \(T=0\)K and we assume that all \(N\) electrons are contained inside the 0-subband without external excitation. The one-particle quantum state \(|1,\mathbf{k}\rangle\) of electronic wave vector \(\mathbf{k}\), representing a state where one electron is in subband \(\lambda=1\) is:
\[|1,\mathbf{k}\rangle=c_{1\mathbf{k}}^{\dagger}c_{0\mathbf{k}}|F\rangle \tag{1}\]
where \(|F\rangle\) denotes the fundamental Fermi state (equilibrium state, where all the electrons are contained in subband \(\lambda=0\)). For now, we restrain the problem to the \(\lambda=0,1\) subbands, that form the intersubband optical transition. This transition will be denoted as \(\alpha\). Following the developments of Ref. [25], to describe the
Figure 1: (center) Schematic representation of the system in momentum space. Bright and dark states are represented for both \(\alpha\) (0\(\rightarrow\)1) and \(\beta\) (0\(\rightarrow\)2) transitions. Note that the \(\beta\)-bright state is degenerate with the \(\beta\)-dark states. The different important operators and their effect on the transport of the excitations are represented. The blue path represents a detection process, whereas the red path represents an injection process. (Top) Typical bandstructure for a quantum cascade _detector_. The main _extraction_ pathway is represented in blue. (Bottom) Typical bandstructure for a quantum cascade _emitter_. The main _injection_ pathway is represented in red. The cavity electric field \(E_{z}\) is also schematically superimposed on the figure.
photo-excitation of an electron in the \(\alpha\)-transition, it is relevant to switch from the fermionic basis formed by the \(|1,{\bf k}\rangle\) states to a new basis of states \(\{|B_{i}^{\alpha}\rangle\}_{i=[1:N]}\).
We have:
\[|B_{i}^{\alpha}\rangle=\sum_{|{\bf k}|<{\bf k}_{F}}w_{i{\bf k}}^{\alpha}\,|1,{ \bf k}\rangle \tag{2}\]
Since the system is considered at \(T=0\) K, only \(|{\bf k}|<{\bf k}_{F}\) states are occupied, with \({\bf k}_{F}\) the modulus of the wavevector corresponding to the Fermi level of the 0-subband. The \(\{|B_{i}^{\alpha}\rangle\}_{i=[1:N]}\) basis only covers the single-excitation subspace (only one photo-excited electron per subband), which is sufficient in the case of a weak excitation regime. The coefficients \(w_{i{\bf k}}^{\alpha}\) are defined as:
\[w_{1{\bf k}}^{\alpha} = \frac{1}{\sqrt{N}}\quad\forall{\bf k} \tag{3}\] \[\sum_{{\bf k}}w_{i{\bf k}}^{\alpha} = 0\quad\forall i\neq 1 \tag{4}\]
The \(|B_{1}^{\alpha}\rangle\) state, of eigenenergy equal to the ISB transition energy \(\omega_{\alpha}=\omega_{1}-\omega_{0}\) (assuming parabolic dispersion), has the remarkable property of holding _the entire oscillator strength_ of the \(\alpha\) transition:
\[\langle F|\hat{d}|B_{1}^{\alpha}\rangle=z_{\alpha}\sqrt{N} \tag{5}\]
where \(\hat{d}\) denotes the dipole operator and \(z_{\alpha}\) the dipole strength of one electronic transition. The \(|B_{1}^{\alpha}\rangle\) state is called the _bright_ state: it is formed by the _coherent_ superposition of the one-particle fermionic states \(|1,{\bf k}\rangle\) of the \(\alpha\)-transition and it holds the entire capacity of light-matter interaction. The \(\{|B_{i}^{\alpha}\rangle\}_{i=[2:N]}\) are called the _dark_ states since they can not interact with the light:
\[\langle F|\hat{d}|B_{i}^{\alpha}\rangle=0\quad i\neq 1 \tag{6}\]
From these developments, one can define bright state destruction and creation operators \(b_{\alpha}\) and \(b_{\alpha}^{\dagger}\) which describe the collective excitation of the \(\alpha\)-transition:
\[b_{\alpha}^{\dagger}=\frac{1}{\sqrt{N}}\sum_{{\bf k}}c_{1{\bf k}}^{\dagger}c_ {0{\bf k}} \tag{7}\]
In a weak excitation regime and for a large number of electrons \(N\), \(b_{\alpha}\) can be approximated as a bosonic operator. \(b_{\alpha}\) and \(b_{\alpha}^{\dagger}\) respectively demote and promote excitations inside the bright state \(|B_{1}^{\alpha}\rangle\).
The final step in this development is to include the plasmonic effect \(\omega_{P}\) of the electronic polarizations. The diagonalization of the plasmonic Hamiltonian leads to the emergence of new operators of eigen-energy \(\tilde{\omega}_{\alpha}=\sqrt{\omega_{\alpha}^{2}+\omega_{P}^{2}}\) and a plasmonic bright state that is still orthogonal to the dark states [25]. Mathematically, this new state is essentially the same as the previous bright state, except that it is no longer degenerated with the dark states: for simplicity, we will keep the notation \(|B_{1}^{\alpha}\rangle\) and \(b_{\alpha}\) for respectively the bright state and the corresponding creation operator. Note: at this stage we did not introduce strong light-matter coupling yet. This derivation is therefore valid in any coupling regime.
### Bosonization of the extractor: the tunnel-coupling Hamiltonian
We now turn to the insertion of the extraction subband in the formalism. As outlined in Refs. [10; 26], the mixing of bosonic (the plasmonic ISB excitations) and fermionic (the electrons in the extraction subband) degrees of freedom is necessary to correctly model the transport mechanisms that take place in an optically excited ISB system. The focus of our paper is on ISB systems strongly coupled to a photonic mode, but we stress that the above consideration is valid also in the weak-coupling regime. When a photon is absorbed by an ISB transition, it generates a bosonic excitation: an ISB plasmon. But the measured current, in a detector, is of course of fermionic nature.
In the case where the extraction subband is explicitly included in the system dynamics (and not only in the form of an external bath), it becomes an extremely tedious task to keep track of all these degrees of freedom. Effectively, one correct way to describe the interaction between these excitations of different nature is to use a full fermionic Hamiltonian of extremely large dimension. It is a significant mathematical challenge that demands considerable effort, and the nature of transport cannot
be straightforwardly interpreted due to this complexity.
In this work, we overcome this strong limitation with a key modification: we propose to depict the subband \(\lambda=2\) with a bosonic operator in the context of an extraction process. This approach has several advantages, and - as we will discuss later on - it might also permit to address the scenario involving an injection process. To explicitly incorporate subband \(\lambda=2\) into our formalism, we introduce the one-particle fermionic states \(|2,\mathbf{k}\rangle\) of the \(\beta\)-transition:
\[|2,\mathbf{k}\rangle=c_{2\mathbf{k}}^{\dagger}c_{0\mathbf{k}}|F\rangle \tag{8}\]
Analogous to the \(\alpha\)-transition, we will not use this fermionic state basis and instead employ a new orthonormal basis \(\{|B_{i}^{\beta}\rangle\}_{i=[1:N]}\) defined as:
\[|B_{i}^{\beta}\rangle=\sum_{|\mathbf{k}|<\mathbf{k}_{F}}w_{i\mathbf{k}}^{\beta }\,|2,\mathbf{k}\rangle \tag{9}\]
where the coefficients \(w_{i\mathbf{k}}^{\beta}\) are chosen such that:
\[w_{1\mathbf{k}}^{\beta} = \frac{1}{\sqrt{N}}\quad\forall\mathbf{k} \tag{10}\] \[\sum_{\mathbf{k}}w_{i\mathbf{k}}^{\beta} = 0\quad\forall i\neq 1 \tag{11}\]
The construction of this basis follows a similar approach as that of the \(\{|B_{i}^{\alpha}\rangle\}_{i=[1:N]}\) basis. Specifically, the first state \(|B_{1}^{\beta}\rangle\) is the bright state of the \(\beta\)-transition, while the remaining states \(\{|B_{i}^{\beta}\rangle\}_{i=[2:N]}\) are the dark states of this same transition. However, this time, the oscillator strength of a diagonal transition being very small, we have \(z_{\beta}\ll z_{\alpha}\) and thus the bright and dark states of the extractor are degenerate. Note that the single-excitation subspace describing subbands 1 and 2, of dimension \(2N\), is spanned by the concatenation of the \(\{|B_{i}^{\alpha}\rangle\}_{i=[1:N]}\) and \(\{|B_{i}^{\beta}\rangle\}_{i=[1:N]}\) bases.
The introduction of this new basis is valuable to evaluate the tunnel coupling between subbands 1 and 2 within the regime of strong light-matter coupling. The tunnel coupling operator \(\hat{T}\) can be defined as:
\[\hat{T}=\Omega_{T}\sum_{\mathbf{k}}(c_{2\mathbf{k}}c_{1\mathbf{k}}^{\dagger}+ c_{2\mathbf{k}}^{\dagger}c_{i\mathbf{k}}) \tag{12}\]
where \(\Omega_{T}\) is the tunnel coupling strength. Using equations (3), (4), (10) and (11), we compute the tunnel interaction between subbands 1 and 2:
\[\langle B_{1}^{\alpha}|\hat{T}|B_{1}^{\beta}\rangle= \Omega_{T} \tag{13}\] \[\langle B_{1}^{\alpha}|\hat{T}|B_{j}^{\beta}\rangle= 0 j\neq 1\] (14) \[\langle B_{i}^{\beta}|\hat{T}|B_{1}^{\beta}\rangle= 0 i\neq 1\] (15) \[\langle B_{i}^{\alpha}|\hat{T}|B_{j}^{\beta}\rangle= \Omega_{T}\sum_{\mathbf{k}}w_{i\mathbf{k}}^{\alpha*}w_{j\mathbf{k }}^{\beta}\quad\quad i\neq 1,j\neq 1 \tag{16}\]
The above relations, that are _de facto_ selection rules, are one of the key results of this work: through tunnel interaction, it is not possible to transition from a dark state to a bright state (Eq. (14)) or vice versa (Eq. (15)). Obviously, dark states can interact with each other through tunnel coupling (Eq. (16)), and the same applies to bright states as well (Eq. (13)).
These results have crucial implications on the nature of electronic transport in a QCD. For a detection process, where light promotes excitations into the \(|B_{1}^{\alpha}\rangle\) bright state, the previous results suggest that an optical excitation can generate an electronic current in only two ways:
1. Direct tunnelling into the extractor bright state \(|B_{1}^{\beta}\rangle\), preserving the coherent nature of the excitation, and subsequent decay - with loss of coherence - into an extractor dark state \(|B_{i\neq 1}^{\beta}\rangle\) or
2. First decay - with loss of coherence - into an ISB dark state \(|B_{i\neq 1}^{\alpha}\rangle\) in the active region, and subsequent tunneling into an extractor dark state \(|B_{i\neq 1}^{\beta}\rangle\)
Other channels involving bright-to-dark tunneling should not be considered, as they are prohibited by the selection rules (14) and (15). Once in the extractor dark states, the electronic excitation will simply decay down the remaining cascade, generating photocurrent. We stress that the construction of the new \(\beta\) basis merely extends the procedure applied to the \(\alpha\) transition (detailed in reference [25]) to the \(\beta\) transition, without additional hypotheses.
By implementing this basis transformation, the comprehension of the transport process is streamlined, leading to the natural emergence of the selection rules presented in Equation (13) to (16). In the following section, we will assess the need to actually incorporate the dark states from both the \(\alpha\) and \(\beta\)-transitions to replicate the experimental photocurrent measurements from a QCD operating in the strong light-matter coupling regime. The implications of this section for an electronic injection process into polaritonic states will be discussed in section IV.
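These selection rules can also be verified numerically. The Python sketch below builds orthonormal coefficient sets satisfying eqs (3)-(4) and (10)-(11) (the dark-state coefficients are randomly generated, an illustrative choice rather than a physical model) and evaluates the tunnel matrix elements of eqs (13)-(16):

```python
import numpy as np

rng = np.random.default_rng(0)
N, Omega_T = 200, 1.0   # number of electrons, tunnel coupling (arbitrary units)

def basis_coeffs(N):
    """Rows of coefficients w_ik: row 0 is the bright state (1/sqrt(N));
    the remaining rows (dark states) are orthonormalized against it,
    so their coefficients sum to zero."""
    W = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    W[0] = 1.0 / np.sqrt(N)
    for i in range(1, N):                  # Gram-Schmidt orthonormalization
        for j in range(i):
            W[i] -= (W[j].conj() @ W[i]) * W[j]
        W[i] /= np.linalg.norm(W[i])
    return W

Wa, Wb = basis_coeffs(N), basis_coeffs(N)
# Tunnel matrix elements <B_i^alpha| T |B_j^beta> = Omega_T * sum_k w_ik^* w_jk
T = Omega_T * Wa.conj() @ Wb.T

print(abs(T[0, 0]))          # bright-bright: equals Omega_T   (eq 13)
print(abs(T[0, 1:]).max())   # bright-dark:  vanishes          (eq 14)
print(abs(T[1:, 0]).max())   # dark-bright:  vanishes          (eq 15)
```

Up to machine precision, the bright-dark and dark-bright elements vanish, while the bright-bright element equals \(\Omega_{T}\), as in eqs (13)-(15).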
### Introducing dissipation and decoherence in the model
In the following, we develop an effective density matrix model of the photocurrent extraction. We make a drastic choice in the description of the system: we limit the extraction model to the transport induced by the bright states \(|B_{1}^{\alpha}\rangle\) and \(|B_{1}^{\beta}\rangle\). The dark states from both the \(\alpha\) and \(\beta\)-transitions are omitted. Both subbands 1 and 2 will thus be described only using bosonic operators. This is equivalent to choosing scenario (1) among the two described at the end of the previous section: direct tunnelling into the extractor bright state \(|B_{1}^{\beta}\rangle\) (preserving the coherent nature of the excitation), and subsequent decay - with loss of coherence - into an extractor dark state \(|B_{i\neq 1}^{\beta}\rangle\). This choice was already implicit in the approach that we employed in our previous work based on a _classical_ description of the electronic transport, using coupled mode theory [17].
We now go beyond this classical model using a quantum master equation. The key addition is the introduction of _decoherence_ in the system, which is distinct from dissipation.
In terms of spectral effects, decoherence impacts the broadening of the photocurrent peaks, while dissipation primarily affects their amplitude. In the experimental study we will report in Sec. III, bias will be varied, and - as a result - the amplitude of the peaks will be affected more than their broadening. It will be essential to differentiate between the effects of decoherence and dissipation, a distinction that was previously impossible to achieve with the classical model.
We define the operator \(b_{\beta}\) using our new basis from equations (9) and (10):
\[b_{\beta}^{\dagger} = \frac{1}{\sqrt{N}}\sum_{\mathbf{k}}c_{2\mathbf{k}}^{\dagger}c_{0 \mathbf{k}} \tag{17}\] \[b_{\beta}^{\dagger}|F\rangle = |B_{1}^{\beta}\rangle \tag{18}\]
Using the fermionic commutation rules and a weak excitation regime, we have:
\[[b_{\beta},b_{\beta}^{\dagger}]=\frac{\hat{N}_{0}-\hat{N}_{2}}{N}\approx\hat{ \mathcal{I}}_{d} \tag{19}\]
where \(\hat{N}_{i}\) is the population operator of subband \(i\) and \(\hat{\mathcal{I}}_{d}\) the identity operator. \(b_{\beta}\) can thus be approximated as a bosonic operator: \(b_{\beta}\) and \(b_{\beta}^{\dagger}\) describe the destruction and creation of electronic excitations inside the extraction mode, of eigen-frequency \(\omega_{\beta}=\omega_{2}-\omega_{0}\). The related Hamiltonian is:
\[\hat{\mathcal{H}}_{\beta}=\omega_{\beta}b_{\beta}^{\dagger}b_{\beta} \tag{20}\]
We restrict the tunnel interaction to the interaction between the plasmonic bright mode and this new extraction mode. This drastically simplifies the tunnel interaction Hamiltonian described in Eq. (13). The restricted Hamiltonian \(\hat{T}_{\mathrm{bright}}\) is:
\[\hat{T}_{\mathrm{bright}}=\Omega_{T}(b_{\alpha}^{\dagger}b_{\beta}+b_{\alpha}b_ {\beta}^{\dagger}) \tag{21}\]
The TM\({}_{01}\) electromagnetic mode confined in the patch antennas will be modeled as a standard optical resonator of frequency \(\omega_{c}\), using \(a_{c}\) and \(a_{c}^{\dagger}\) bosonic destruction and creation operators. Using the rotating wave approximation to describe the light-matter interaction, the time dependent Hamiltonian \(\mathcal{H}(t)\) of the whole system reads:
\[\hat{\mathcal{H}}(t) = \omega_{c}a_{c}^{\dagger}a_{c}+\tilde{\omega}_{\alpha}b_{\alpha}^{\dagger}b_{\alpha}+\omega_{\beta}b_{\beta}^{\dagger}b_{\beta} \tag{22}\] \[+\Omega\left(a_{c}^{\dagger}b_{\alpha}+a_{c}b_{\alpha}^{\dagger}\right)+\Omega_{T}\left(b_{\alpha}^{\dagger}b_{\beta}+b_{\alpha}b_{\beta}^{\dagger}\right)\] \[+\kappa_{c}s_{+}\left(a_{c}e^{i\omega t}+a_{c}^{\dagger}e^{-i\omega t}\right)\]
where \(s_{+}\) is the amplitude of the incoming light excitation, \(\omega\) its frequency, and \(\kappa_{c}\) is the coupling constant between this external field and the confined optical mode inside the cavity.
We map this system onto an equivalent open quantum system described by the reduced density matrix \(\rho\). Under standard Born-Markov approximations, the time evolution of the density matrix \(\rho\) obeys the following quantum master equation [27] (\(\hbar=1\) for clarity):
\[\frac{\mathrm{d}\rho(t)}{\mathrm{d}t} = -i\big{[}\mathcal{H}(t),\rho\big{]} \tag{23}\] \[+\gamma_{\alpha}\mathcal{L}\left[b_{\alpha},\rho\right]+\gamma_{\beta}\mathcal{L}\left[b_{\beta},\rho\right]+(\gamma_{c}+\Gamma_{c})\mathcal{L}\left[a_{c},\rho\right]\] \[+\gamma_{\alpha}^{\mathrm{intra}}\mathcal{L}\left[b_{\alpha}^{\dagger}b_{\alpha},\rho\right]+\gamma_{\beta}^{\mathrm{intra}}\mathcal{L}\left[b_{\beta}^{\dagger}b_{\beta},\rho\right]\]
where the \(\mathcal{L}\) are Lindblad super-operators modeling the dissipative and decoherent interactions of the environment with the system. For any operator \(\hat{A}\), a super-operator \(\mathcal{L}\) reads:
\[\mathcal{L}[\hat{A},\rho]=2\hat{A}\rho\hat{A}^{\dagger}-(\hat{A}^{\dagger} \hat{A}\rho+\rho\hat{A}^{\dagger}\hat{A}) \tag{24}\]
The plasmonic ISB excitations are mainly dissipated through their interaction with interface roughness, at a non-radiative rate \(\gamma_{\alpha}\). Similarly, the extractor dissipates electrons into the next period at a non-radiative rate \(\gamma_{\beta}\), and is responsible for the generation of electrical current inside the structure. \(\gamma_{\beta}\) represents an effective dissipation rate that takes into consideration the remaining electronic cascade. The cavity also dissipates photons (mainly through undesired free-carrier absorption) at a rate \(\gamma_{c}\), but also through a spontaneous emission channel, at a radiative rate \(\Gamma_{c}\). Note that the radiative coupling \(\kappa_{c}\) is related to the radiative damping through \(\kappa_{c}=\sqrt{2\Gamma_{c}}\)[28].
The main difference with our previous work [17] lies in the ability to explicitly introduce the _intra_-subband scattering through the pure decoherence terms \(\gamma_{\alpha}^{\mathrm{intra}}\mathcal{L}\left[b_{\alpha}^{\dagger}b_{\alpha},\rho\right]\) (resp. \(\gamma_{\beta}^{\mathrm{intra}}\mathcal{L}[b_{\beta}^{\dagger}b_{\beta},\rho]\)) [29]. These terms model pure decoherence damping without excitation dissipation (the intra-subband scattering thermalizes excitations inside a subband without dissipating them into another subband). By using the density matrix formalism, it thus becomes possible to differentiate between the effects of inter-subband (dissipation) and intra-subband (pure decoherence) processes on the evolution of the system (and ultimately on the shape of the calculated photoresponse spectra). More details on the necessity to distinguish intra- and intersubband scatterings can be found in Appendix A.
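For illustration, the sketch below shows how such a master equation can be set up numerically with the QuTiP library [36]. It is a minimal example, not our actual script: we assume the usual transformation to the frame rotating at the drive frequency \(\omega\) (which removes the explicit time dependence of Eq. (22)), and the parameter values are taken from the constraints and fit results discussed in Sec. III where available, otherwise they are illustrative placeholders.

```python
# Minimal QuTiP sketch of Eqs. (22)-(23): three bosonic modes (cavity a_c,
# ISB plasmon b_alpha, extractor b_beta) in a truncated Fock space,
# written in the frame rotating at the drive frequency.
import numpy as np
import qutip as qt

N = 3  # Fock truncation per mode, sufficient in the weak-excitation regime

a_c = qt.tensor(qt.destroy(N), qt.qeye(N), qt.qeye(N))
b_a = qt.tensor(qt.qeye(N), qt.destroy(N), qt.qeye(N))
b_b = qt.tensor(qt.qeye(N), qt.qeye(N), qt.destroy(N))

# Parameters in meV (Sec. III values where available, otherwise illustrative)
w_c, Gam_c = 128.3, 0.59        # Eqs. (29)-(30) for s = 1.5 um, p = 7 um
w_alpha = 119.2                 # plasma-shifted ISB transition, Eq. (34)
w_beta = 124.0                  # extractor at F = 0, Eq. (38)
Omega = 4.8                     # Eq. (35) with omega_P = 23.4 meV, f_w = 0.17
Omega_T = 4.2                   # computed tunnel coupling
gam_c, gam_a = 3.4, 0.66        # cavity and ISB non-radiative rates
gam_b = 0.5                     # illustrative effective extraction rate
gam_a_intra, gam_b_intra = 2.4, 9.3   # intra-subband rates (cf. Table 1)
w_drive, s_plus = 120.0, 0.01   # drive frequency and amplitude
kappa_c = np.sqrt(2.0 * Gam_c)  # kappa_c = sqrt(2 * Gamma_c)

# Rotating-frame Hamiltonian: Eq. (22) with omega subtracted on the diagonal
H = ((w_c - w_drive) * a_c.dag() * a_c
     + (w_alpha - w_drive) * b_a.dag() * b_a
     + (w_beta - w_drive) * b_b.dag() * b_b
     + Omega * (a_c.dag() * b_a + a_c * b_a.dag())
     + Omega_T * (b_a.dag() * b_b + b_a * b_b.dag())
     + kappa_c * s_plus * (a_c + a_c.dag()))

# QuTiP's dissipator is half of the convention of Eq. (24): a rate gamma
# in the text corresponds to a collapse operator sqrt(2 * gamma) * A.
c_ops = [np.sqrt(2 * gam_a) * b_a,
         np.sqrt(2 * gam_b) * b_b,
         np.sqrt(2 * (gam_c + Gam_c)) * a_c,
         np.sqrt(2 * gam_a_intra) * b_a.dag() * b_a,  # pure decoherence, alpha
         np.sqrt(2 * gam_b_intra) * b_b.dag() * b_b]  # pure decoherence, beta

rho_ss = qt.steadystate(H, c_ops)  # stationary density matrix rho_s
```

The truncation \(N\) can be increased to check convergence; in the weak-excitation regime the results are insensitive to it.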
### Deriving observable quantities for comparison with experiments
Equation (23) can be solved numerically in steady state. The solution is a stationary reduced density matrix \(\rho_{s}\), and any observable \(\hat{O}\) can then be computed using:
\[\langle\hat{O}\rangle=\mathrm{Tr}(\hat{O}\rho_{s}) \tag{25}\]
where \(\mathrm{Tr}\) represents the trace operation. We can then compute the various quantities of interest. The total absorption of the system is the sum of the power dissipated into the different decay channels, normalized by the incoming power \(|s_{+}|^{2}\):
\[\mathcal{A}_{\mathrm{tot}} = \mathcal{A}_{c}+\mathcal{A}_{\alpha}+\mathcal{A}_{\beta} \tag{26}\] \[= 2\gamma_{c}\frac{\langle a_{c}^{\dagger}a_{c}\rangle}{\left|s_{+}\right|^{2}}+2\gamma_{\alpha}\frac{\langle b_{\alpha}^{\dagger}b_{\alpha}\rangle}{\left|s_{+}\right|^{2}}+2\gamma_{\beta}\frac{\langle b_{\beta}^{\dagger}b_{\beta}\rangle}{\left|s_{+}\right|^{2}} \tag{27}\]
where \(\mathcal{A}_{c}\), \(\mathcal{A}_{\alpha}\) and \(\mathcal{A}_{\beta}\) represent respectively the cavity, ISB and extraction absorptions.
The net photocurrent \(\mathcal{J}_{\beta}\) is defined as the current under illumination. \(\mathcal{J}_{\beta}\) is proportional to the power dissipated from a period to the next adjacent period. This is exactly the power dissipated by the extraction mode \(\beta\):
\[\mathcal{J}_{\beta}=2\gamma_{\beta}\left\langle b_{\beta}^{\dagger}b_{\beta}\right\rangle \tag{28}\]
Note: this is a phenomenological interpretation of the photocurrent. It is in fact expected that an excitation inside the bright extractor state \(|B_{1}^{\beta}\rangle\) should first decay into the dark states \(|B_{i\neq 1}^{\beta}\rangle\) before being extracted into the electronic cascade and contributing to the photocurrent. We choose to neglect these dark extractor states, such that
the power is directly dissipated from the bright extractor state. The same applies to the ISB dissipation, where the \(|B_{i\neq 1}^{\alpha}\rangle\) dark states are neglected when considering the non-radiative dissipation \(\gamma_{\alpha}\).
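Continuing the sketch of Sec. II.2, the observables of Eqs. (25)-(28) follow from expectation values in the stationary state (again an illustration, not the production code):

```python
# Absorptions of Eqs. (26)-(27) and photocurrent of Eq. (28), evaluated
# on the steady state rho_ss of the previous sketch.
n_c = qt.expect(a_c.dag() * a_c, rho_ss)
n_a = qt.expect(b_a.dag() * b_a, rho_ss)
n_b = qt.expect(b_b.dag() * b_b, rho_ss)

P_in = abs(s_plus) ** 2            # incoming power |s_+|^2
A_c = 2 * gam_c * n_c / P_in       # cavity absorption
A_alpha = 2 * gam_a * n_a / P_in   # ISB absorption
A_beta = 2 * gam_b * n_b / P_in    # extraction absorption
A_tot = A_c + A_alpha + A_beta     # total absorption, Eq. (26)
J_beta = 2 * gam_b * n_b           # net photocurrent, Eq. (28)
```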
## III Experimental validation in photo-detection: the polariton-to-current process
### Experimental details
The samples investigated in this study are the same as those already studied in Ref. [17]. They are processed into 8 \(\times\) 8 patch antenna arrays (approximately 50 \(\times\) 50 μm\({}^{2}\)), with the patches connected through 250-nm thin metallic wires (see Fig. 6 in Appendix B). Details of the processing can be found in [30]. The samples are cooled down to \(T=78\) K in a cryostat, and they are illuminated by light from a globar source at normal incidence. The photocurrent spectra are acquired in rapid scan mode, after amplification using a low-noise transimpedance amplifier.
We extend the data presented in [17], and now present measurements with a voltage bias applied to the samples. The applied electric field ranges from \(F=-25\) kV.cm\({}^{-1}\) to \(F=8\) kV.cm\({}^{-1}\). We have fabricated several array designs (\(p\), \(s\)), with \(p\) the inter-patch period of the array and \(s\) the lateral dimension of the patches. However, to allow for a quantitative comparison, we present measurements under an applied electric field for two samples only, with the same \(p=7\) μm, and \(s=1.5\) μm and \(s=1.55\) μm, respectively, as reported in Fig. 2 (continuous lines). Additional measurements can be found in Appendix C. While the relative amplitude of the spectra when varying the bias contains meaningful information on the electronic transport, one should exercise caution when comparing the amplitudes of different pairs (\(p\), \(s\)), as the experimental protocol does not ensure a consistent illumination between each measurement of the device.
Two photocurrent peaks are clearly visible in Fig. 2, a signature of the strong light-matter coupling regime. Note: the peaks under consideration cannot be confused with the two peaks arising from coupled subbands (tunnel coupling), since the peak positions would change with the applied bias in the latter case. Here, the energy splitting (for a given pair \(p\), \(s\)) is constant regardless of the applied field. For all (\(p\), \(s\)) couples studied, the global amplitude of the photocurrent spectra evolves with the applied electric field \(F\). A maximum amplitude is observed around \(F=-10\) kV.cm\({}^{-1}\). The noise level increases strongly when the absolute amplitude of the field \(|F|\) increases. The noise level is the direct consequence of the increase of the parasitic dark current with the electric field and - as is well known [31; 32] - it limits the range of exploitable field \(F\) for device applications.
The relative amplitude of these peaks inverts with respect to the applied field \(F\), with the equal-amplitude condition of the two polaritonic photo-detection peaks found for a negative field \(F\approx-5\) kV.cm\({}^{-1}\). Below this threshold, the low energy peak dominates. Inversely, for \(F>-5\) kV.cm\({}^{-1}\), it is the high energy peak that dominates. This phenomenon can be attributed to the realignment of the subbands under the influence of the applied bias. When a highly negative voltage is applied, the subbands follow a clear staircase structure (see Fig. 7 in Appendix B for the QCD bandstructure), which facilitates the extraction process. Conversely, at positive voltages, the subband cascade becomes less organized, hindering the extraction process.
### System parameters and constraints
Before applying the theoretical developments of section II to the experimental data, let us detail the system parameters and the constraints applied to them.
The photonic degrees of freedom are the cavity parameters \(\omega_{c}\), \(\gamma_{c}\) and \(\Gamma_{c}\), that are independent of the applied electric field \(F\). They only depend on the geometrical
parameters (\(p\), \(s\)) of the cavities [33; 34; 35]:
\[\omega_{c}(s) = \frac{\pi c_{0}}{n_{\rm eff}s} \tag{29}\] \[\Gamma_{c}(p) = \frac{\alpha_{c}}{p^{2}} \tag{30}\]
where \(c_{0}\) is the light velocity, \(n_{\rm eff}\) is the effective index of the cavity, that represents the effective medium composed of the semiconductor contacts and of the undoped periodic structure embedded between the gold layers forming that cavity, and \(\alpha_{c}\) is the cavity dispersion loss factor. We choose to constrain \(n_{\rm eff}\), \(\alpha_{c}\) and \(\gamma_{c}\) to the values obtained from our prior investigation of the same samples [17], where the photocurrent of several samples with different (\(s\),\(p\)) couples have been studied for \(F=0\) kV.cm\({}^{-1}\):
\[n_{\rm eff} = 3.22 \tag{31}\] \[\alpha_{c} = 29.1\ \ {\rm meV.\mu m^{2}}\] (32) \[\gamma_{c} = 3.4\ \ {\rm meV} \tag{33}\]
The cavity parameters are thus excluded from the fitting process.
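As a small sanity check (assuming Eq. (29) is read as an energy \(\hbar\omega_{c}\), with \(\hbar c_{0}\approx 197.327\) meV\(\cdot\)μm), the constrained values (31)-(33) give, for the geometries of Fig. 2:

```python
import numpy as np

hbar_c0 = 197.327              # hbar * c_0 in meV.um
n_eff, alpha_c = 3.22, 29.1    # Eqs. (31)-(32)

def omega_c(s_um):
    """Cavity resonance (meV) from Eq. (29) for patch size s (um)."""
    return np.pi * hbar_c0 / (n_eff * s_um)

def Gamma_c(p_um):
    """Radiative damping (meV) from Eq. (30) for array period p (um)."""
    return alpha_c / p_um ** 2

print(omega_c(1.5))   # ~128 meV for s = 1.5 um
print(Gamma_c(7.0))   # ~0.59 meV for p = 7 um
```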
Several electronic degrees of freedom can also be fixed or constrained independently of our density matrix model. The parameters of the ISB transition in the active QW (\(\alpha\)) are assumed independent of the applied electric field \(F\): the transition is vertical in a single quantum well and is therefore only marginally affected by the applied electric field \(F\). The field \(F\) is in turn obtained from the measured
Figure 2: Normalized photocurrent measurements (continuous lines) and quantum master equation global fit (dashed lines), for two cavity geometries [a] \(s=1.50\) μm, \(p=7\) μm and [b] \(s=1.55\) μm, \(p=7\) μm. Offsets are added for clarity. Filled areas represent the errors of the fit parameters propagated onto the spectra. The extractor frequency \(\omega_{\beta}(F)\), dependent on the electric field \(F\), and the plasma-shifted ISB transition \(\tilde{\omega}_{\alpha}\) are both superimposed on the spectra. Additional results can be found in Appendix C.
bias. The ISB frequency \(\omega_{\alpha}\) and the plasma frequency \(\omega_{P}\) could be computed from our sequential transport software [22]. However, it is common to observe disparities between expected and measured doping levels (up to 15%). Experimental discrepancies also affect the ISB frequency (up to 5%), usually caused by the quality of the quantum well interfaces during the epitaxial process. To account for these disparities, and since both \(\omega_{\alpha}\) and \(\omega_{P}\) are crucial parameters to reproduce the strong coupling measurements, we chose to leave these parameters free during the fitting process:
\[\tilde{\omega}_{\alpha} = \sqrt{\omega_{\alpha}^{2}+\omega_{P}^{2}} \tag{34}\]
Note: the light-matter coupling constant \(\Omega\) is parametrized using \(\omega_{P}\):
\[\Omega=\frac{\omega_{P}}{2}\sqrt{f_{w}} \tag{35}\]
with \(f_{w}\) (\(\approx\) 0.17), the computed overlap factor between the cavity field and the doped active quantum wells.
Two additional \(\alpha\) parameters can be computed using our sequential transport software: the non-radiative dissipation rate \(\gamma_{\alpha}\) of the \(\alpha\) plasmon from the excited subband to the fundamental subband, and the tunnel coupling \(\Omega_{T}\). We compute \(\gamma_{\alpha}=0.66\) meV and \(\Omega_{T}=4.2\) meV, respectively. The new parameter of our transport model in the strong coupling regime, the intra-subband rate \(\gamma_{\alpha}^{\rm intra}\), will instead be fitted.
The parameters related to the extractor \(\beta\) are instead dependent on the electric field \(F\): the extractor energy shifts with respect to the upper excited state of the ISB transition when a bias is applied to the structure. The misalignment is approximated as linear:
\[\omega_{\beta}(F)=\alpha_{F}F+\omega_{\beta}^{0} \tag{36}\]
where \(\alpha_{F}\) is the linear coefficient and \(\omega_{\beta}^{0}\) is the extractor energy for \(F=0\). This dispersion can be computed using our sequential transport software and is injected into the model:
\[\alpha_{F} = 1.12\ \ {\rm meV/(kV.cm^{-1})} \tag{37}\] \[\omega_{\beta}^{0} = 124\ \ {\rm meV} \tag{38}\]
Similarly to \(\gamma_{\alpha}^{\rm intra}\), \(\gamma_{\beta}^{\rm intra}\) will be a fitting parameter common to the whole data set.
Finally, we expect the misalignment of the cascade with the electric field to modify the value of the effective extraction rate \(\gamma_{\beta}(F)\). \(\gamma_{\beta}\) is one of the most important parameters of the fitting process, as it controls the relative amplitude of the spectra. Although we suspect that it might closely match the actual extraction rate calculated from our sequential transport model, we decided to keep it as a free parameter: for each measured electric field value \(F_{i}\), we fit one extraction rate \(\gamma_{\beta}(F_{i})\). Note: \(\gamma_{\beta}(F_{i})\) is independent of the geometrical parameters \(p\) and \(s\). In summary, \(\omega_{\alpha}\), \(\omega_{P}\), \(\gamma_{\alpha}^{\rm intra}\) and \(\gamma_{\beta}^{\rm intra}\) are fitting parameters common to the whole data set, and their initial values for the fit are based on the ones derived by our software.
### Discussion on the validity of the fit
In this section, we perform a global fit on the whole experimental photocurrent dataset (Fig. 2), using the parameter constraints exposed in the previous section. We solve Eq. (23) in the stationary regime (using the QuTiP python library [36]) to evaluate the theoretical photocurrent \(\mathcal{J}_{\beta}\), as per Eq. (28). The parameters resulting from the fit are presented in Table 1.
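Schematically, the quantity compared to each measured spectrum is obtained by sweeping the drive frequency. The helper below is a sketch of that core step, reusing the operators and rates of the Sec. II.2 sketch and folding in the field dependence of Eq. (36); the global fit then wraps it in a standard least-squares loop.

```python
def J_beta_of(w_drive, F):
    """Steady-state photocurrent of Eq. (28) at drive frequency w_drive
    (meV) and applied field F (kV/cm), with omega_beta from Eq. (36).
    In the actual fit, gam_b itself is a free parameter for each F."""
    w_beta_F = 1.12 * F + 124.0   # Eqs. (37)-(38)
    H = ((w_c - w_drive) * a_c.dag() * a_c
         + (w_alpha - w_drive) * b_a.dag() * b_a
         + (w_beta_F - w_drive) * b_b.dag() * b_b
         + Omega * (a_c.dag() * b_a + a_c * b_a.dag())
         + Omega_T * (b_a.dag() * b_b + b_a * b_b.dag())
         + kappa_c * s_plus * (a_c + a_c.dag()))
    rho = qt.steadystate(H, c_ops)
    return 2 * gam_b * qt.expect(b_b.dag() * b_b, rho)

omegas = np.linspace(100.0, 140.0, 161)                     # meV
spectrum = np.array([J_beta_of(w, F=-10.0) for w in omegas])
```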
The returned values are consistent with the previous fits performed with the coupled mode theory in [17]. In particular, the extraction rate \(\gamma_{\beta}\) as a function of the applied electric field is plotted in Fig. 3 and compared with the values computed through our sequential transport model.
\begin{table}
\begin{tabular}{|l||l|} \hline Fit parameters & Fit results \\ \hline \(\omega_{\alpha}\) & 116.9 \(\pm\) 0.1 meV \\ \(\gamma_{\alpha}^{\rm intra}\) & 2.4 \(\pm\) 0.1 meV \\ \(\gamma_{\beta}^{\rm intra}\) & 9.3 \(\pm\) 0.1 meV \\ \(\omega_{P}\) & 23.4 \(\pm\) 0.1 meV \\ \hline \end{tabular}
\end{table}
Table 1: Parameters returned by the global fit using a quantum master equation model.
The right order of magnitude is obtained (\(\gamma_{\beta}<1\) meV) and the evolution trends are relatively well reproduced (\(\gamma_{\beta}\) decreasing for \(F>0\), slope break around \(F=-4\) kV.cm\({}^{-1}\)). These results on \(\gamma_{\beta}\) are also consistent with the evolution of the integrated amplitude of the spectra (Fig. 3, right-side scale): when the electric field is below \(F=-4\) kV.cm\({}^{-1}\), the electronic cascade is efficiently aligned, and the effective extraction rate \(\gamma_{\beta}\) is high. This leads to a significant photocurrent signal.
The spectrally resolved photocurrent calculated using the parameters returned by the global fit procedure is compared to the experimental data in Fig. 2, with a quantitative agreement obtained on the set of triplets (\(p\), \(s\), \(F\)). Two important trends are reproduced as a function of the bias, i.e. as a function of the \(\omega_{\alpha}-\omega_{\beta}\) alignment: (i) the overall amplitude of the spectra, and (ii) the relative amplitude inversion between the peaks of the two polaritonic branches.
This study _quantitatively_ confirms that the extractor (the electronic cascade of the QCD) and its relative alignment with respect to the ISB transition controls the overall amplitude of the spectra, and also the relative amplitude of the peaks of the polaritonic branches. Applying an electric field to the structure enables the selective extraction of excitations from a polaritonic state towards the electronic cascade, while also providing control over the efficiency of this extraction. This selective extraction capacity is enabled by the sharp transfer function and the \(2\Omega\) spacing (the Rabi splitting) between the polaritonic peaks: a finer transfer function and a stronger coupling would allow for better selectivity of \(\omega_{\pm}\) polaritons. More details on a QCD transfer function in the strong coupling regime can be found in Appendix A.
The good agreement between the experimental data and the theoretical model provides strong evidence that the dark states for both transitions \(\alpha\) and \(\beta\) do not need to be included in the model to depict an extraction process. The bright tunnel interaction \(\hat{T}_{\rm bright}\) and the phenomenological dissipation rate \(\gamma_{\beta}\) from the extractor bright state are sufficient to quantitatively reproduce the experimental measurements. As previously postulated in [17], this result confirms that the polaritonic nature of the excitation is carried on during the extraction process through the coherent tunnel coupling. The extraction is a coherent process, mainly involving the bright states from both \(\alpha\) and \(\beta\) transitions.
This model however permits a step forward in the comprehension of the polariton-to-electron process. Chronologically, the early attempts were limited to the observation of a polariton splitting in photo-detection [37, 8]. A phenomenological transfer function was then introduced in the study of QWIPs operating in strong coupling [16]. Recently, the Coupled Mode Theory (CMT) permitted a more rigorous modeling of the transfer function, and gave an initial indication of direct tunneling into the extractor bright state, with no role for the polaritonic dark states [17]. The model presented in this paper gets rid of the transfer function - a phenomenological concept - and
Figure 3: Left-side scale: extraction rate \(\gamma_{\beta}\) as a function of the applied electric field \(F\). Red cross: predicted values computed using a standard sequential transport model. Blue plus sign: values returned by the global fit using a quantum master equation model. Right-side scale: experimental photocurrent integrated amplitude, for two different (\(s\), \(p\)) couples of cavity parameters.
replaces it with a rigorous tunnel coupling Hamiltonian between the \(\alpha\) and \(\beta\) transitions, with a complete description of bright and dark states. The latter do not play a major role for the _polariton extraction_ process, but they have a crucial role for _polariton injection_. Our model integrates them, and might constitute a valid vantage point to study electrically injected polariton emitters. More information on the transfer function and the difference between the CMT and the effective density matrix approach can be found in Appendix A.
## IV Implications of the model for electrically pumped polariton emitters: the electron-to-polariton process
The validity of the density matrix approach in describing electrical _extraction_ from optically excited polaritons motivates a study of the implications of these findings for electrical _injection_ and subsequent photon emission, represented by the red arrows in Fig. 1. As discussed in Ref. [26], the main difficulty in describing an intersubband emitter operating in the strong light-matter coupling regime lies in the simultaneous description of both optical (bosonic) and electronic (fermionic) excitations. The injection process fills subband 2 with fermionic excitations in the form of electrons, while the plasmonic excitations that occupy the \(\alpha\) bright state are bosonic. Working with the full fermionic Hamiltonian is an arduous task [26], which could hinder the development of an intuitive understanding of the transport, although very recently a fermionic approach was successfully used to model QCDs operating in the strong coupling regime [21].
The previous section II.2 suggests that the bosonization procedure of the extractor, which we employed to describe the extraction process, is a novel and readily interpretable approach for examining the injection process. In particular, the selection rules for the tunnel Hamiltonian, Eqs. (13)-(16), might prove a powerful tool. Due to the impossibility of conducting an experimental study resembling the one carried out for QCDs in a detection process, the following discussion will be supported by the quantitative arguments previously presented in section II.3. Note: the \(\beta\) extractor states are now referred to as _injector_ states.
An injection process is inherently incoherent because it introduces electrical excitations into an intersubband system through an incoherent external bath of electrons.
The relevant _coherence_ here is the one of the ISB plasmon [18; 19; 20], which is a collective - and coherent - matter excitation originating from the electronic plasma inside a semiconductor quantum well (QW). In this respect, an intuitive picture suggests that for an ISB polariton system, the electrical injection process is _not_ the reverse of the electrical extraction. In the latter, coherence (induced by light) is destroyed to generate an electrical current, while in the former it appears that coherence must be created.
More formally, in the framework of a bosonized injector, we expect most of the electronic population to be located in the dark states \(|B_{i}^{\beta}\rangle\) (\(i\neq 1\)) upon electrical injection. Furthermore, to emit light, excitations must be transferred to the plasmonic bright state \(|B_{1}^{\alpha}\rangle\), which holds the entire oscillator strength of the system. However, the selection rules (14) and (15) are clear: it is impossible for a dark state from the injector to interact with the plasmonic bright state through a tunnel interaction. In other words, the primary injection pathway, which would involve direct transfer from the injector states to the bright plasmonic state, cannot be taken. The bosonized injector formalism confirms that _polaritonic emitters do not operate as reversed polaritonic detectors_.
In QCDs, the coherence is established through the photonic mode and maintained up to the extractor using both light-matter coupling \(\Omega\) and tunnel coupling \(\Omega_{T}\). Coherence can also be lost through the irreversible intrasubband scatterings \(\gamma_{\alpha}^{\rm intra}\) in the plasmonic mode, although
we have demonstrated that it is _not_ the main extraction scheme. However, the extraction process can still take place, since the usual dark-to-dark tunnel interactions are possible (Eq. (16)). On the contrary, in a LED the injection mechanism is incoherent, and coherence cannot emerge spontaneously during the transport. Additionally, we showed that incoherent (dark) states cannot interact with a coherent (bright) state _via_ the tunneling Hamiltonian (Eqs. (14) and (15)). As a result, it seems unfeasible to efficiently transfer excitations to the optically active bright state \(\alpha\), and thus to the polaritonic states, in the absence of an additional mechanism to generate coherence.
If the electrical injection were uniform among the \(N\) states \(|B_{i}^{\beta}\rangle\), light could be emitted, since the system would start with some excitation in \(|B_{1}^{\beta}\rangle\), but the expected efficiency would be at most \(1/N\), without even considering intrasubband decoherence.
There are however two points that need to be discussed further. First, light emission from another kind of polariton states under electrical injection is well documented, namely in exciton-polariton devices [38; 39; 2], with additional reports of polariton lasing under electrical injection [40; 41]. The key difference is that exciton-polariton states do not result from a collective matter excitation, but rather from an ensemble of single-particle transitions. As a consequence, non-resonant pumping schemes can apply to exciton polaritons, as demonstrated in optical experiments.
Second, several reports of electroluminescence from electrically-injected polariton LEDs exist in the literature. Some of them clearly determine that thermally assisted emission processes play a major role [42; 14], but in many other ones simple thermal models cannot explain the data [11; 12; 13; 15]. We can only conjecture possible ways forward to elucidate the electrical injection of polaritonic LEDs. On one hand, one might wonder if the application of the generalized, local Kirchhoff law [43] to ISB polariton LEDs can shine new light on the electrical injection process, and possibly explain all the existing experimental data in the literature. On the other hand, the problem of electrical excitation of coherent electronic motion - which is essentially the mechanism at play in electrically pumped polariton emitters - is well known from the field of surface plasmon polaritons (SPPs) [44; 45; 46; 47; 48; 49; 50]. The extremely low efficiency of the electron-to-plasmon and electron-to-photon processes is well known, although recent theoretical works, supported by one experimental finding, have demonstrated that the efficiency could be drastically increased by tailoring the electronic landscape to favor inelastic over elastic tunneling, as long as the electronic coherence is preserved in the process [51; 52].
###### Acknowledgements.
We thank S. De Liberato, J-M Manceau, I. Carusotto, A. Bousseksou for helpful discussions. We acknowledge financial support from the European Union Future and Emerging Technologies (FET) Grant No. 737017 (MIR-BOSE), and by the French National Research Agency: project SOLID (No. ANR-19-CE24-0003), HISPANID (ANR-17-ASTR-0008-01), and EVEREST (ANR-21-CE24-0021).
## Appendix A Quantum master equation model for a QCD operating in the strong light-matter coupling regime: parametric study of the impact of the light-matter coupling strength on the transfer function
The transfer function between the photocurrent and the total power dissipated inside the QCD (\(\mathcal{A}_{\mathrm{QCD}}=\mathcal{A}_{\alpha}+\mathcal{A}_{\beta}\)) is defined as \(\mathcal{T}\):
\[\mathcal{T}(\omega)=\frac{\mathcal{A}_{\beta}}{\mathcal{A}_{\alpha}+\mathcal{A}_{\beta}} \tag{A1}\]
\(\mathcal{T}\) is dependent on the light frequency \(\omega\).
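Under the same assumptions as the sketches of Sec. II, \(\mathcal{T}(\omega)\) can be sampled numerically as follows (the normalization by \(|s_{+}|^{2}\) cancels in the ratio):

```python
def transfer(w_drive):
    """Transfer function of Eq. (A1) at drive frequency w_drive (meV)."""
    H = ((w_c - w_drive) * a_c.dag() * a_c
         + (w_alpha - w_drive) * b_a.dag() * b_a
         + (w_beta - w_drive) * b_b.dag() * b_b
         + Omega * (a_c.dag() * b_a + a_c * b_a.dag())
         + Omega_T * (b_a.dag() * b_b + b_a * b_b.dag())
         + kappa_c * s_plus * (a_c + a_c.dag()))
    rho = qt.steadystate(H, c_ops)
    A_a = 2 * gam_a * qt.expect(b_a.dag() * b_a, rho)  # ISB absorption
    A_b = 2 * gam_b * qt.expect(b_b.dag() * b_b, rho)  # extraction
    return A_b / (A_a + A_b)
```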
### Parametric study
Fig. 4 plots the different quantities \(\mathcal{A}_{\mathrm{tot}}\) (\(\mathcal{A}_{\mathrm{tot}}=\mathcal{A}_{\mathrm{QCD}}+\mathcal{A}_{c}\)), \(\mathcal{A}_{\mathrm{QCD}}\), \(\mathcal{J}_{\beta}\) and \(\mathcal{T}\) computed from the solution of equation (23). We impose a realistic ratio between the inter- and intra-subband dynamics within the QCD, such that 90% of the total broadening is due to intrasubband scattering:
\[\gamma_{\alpha}^{\mathrm{intra}}+\gamma_{\beta}^{\mathrm{intra}}=0.9\cdot\gamma_{\alpha\beta} \tag{A2}\]
where \(\gamma_{\alpha\beta}=\gamma_{\alpha}^{\mathrm{intra}}+\gamma_{\beta}^{\mathrm{intra}}+\gamma_{\alpha}+\gamma_{\beta}\) represents the total contribution to the broadening from the \(\alpha\) and \(\beta\) transitions, including intersubband and intrasubband scatterings. This assumption is equivalent to setting \(T_{1}\approx 10\cdot T_{2}\), where \(T_{2}\) (\(T_{1}\)) is the dephasing (upper state) lifetime, respectively. For a typical mid-IR ISB transition this is verified, as we have \(T_{1}\) of the order of a ps, and \(T_{2}\) of the order of a few hundred fs. The cavity resonance \(\omega_{c}\) and the extractor resonance are also voluntarily mismatched with the ISB transition:
\[\omega_{c}=1.05\,\omega_{\alpha},\hskip 28.452756pt\omega_{\beta}=0.95\,\omega_{\alpha} \tag{A3}\]
\(\mathcal{A}_{\mathrm{tot}}\), \(\mathcal{A}_{\mathrm{QCD}}\), \(\mathcal{J}_{\beta}\) and \(\mathcal{T}\) are computed for different light-matter coupling amplitudes \(\Omega\), up to 10% of the ISB transition \(\omega_{\alpha}\).
When the light-matter coupling ratio \(\Omega/\omega_{\alpha}\) increases, the system progressively moves from a weak coupling regime to a strong coupling regime: around the spectral resolution criterion \(2\Omega>\gamma_{\alpha\beta}\), we compute the characteristic splitting of the polaritonic peaks, for each spectrum \(\mathcal{A}_{\mathrm{tot}}\) [a], \(\mathcal{A}_{\mathrm{QCD}}\) [b] and \(\mathcal{J}_{\beta}\) [c]. The model is able to reproduce the smaller splitting of the QCD absorption [b] compared to the splitting of the total absorption [a] for the same coupling situation \(\Omega/\omega_{\alpha}\), something previously observed in [17]. The important novelties brought by the model are found in the transfer function \(\mathcal{T}\). In weak coupling (small ratios \(\Omega/\omega_{\alpha}\)), the transfer function is almost scalar: it coincides with the transfer function computed in the framework of a QCD that is not inside a cavity. As the ratio \(\Omega/\omega_{\alpha}\) increases, the baseline of the transfer function gradually falls, and the amplitude of its peak increases: increasing \(\Omega\) enables the transfer function to reach a Lorentzian shape.
Therefore, in a model where the intra-subband dynamics is explicitly described, the progressive increase of the light-matter coupling allows us to move continuously from a sequential transport in QCDs (flat, quasi-scalar transfer function \(\mathcal{T}(\omega)\)) to a delocalized description of the transport (sharp, Lorentzian transfer function). Again, when the strong light-matter coupling \(\Omega\) is sufficiently intense, the coherent nature of the transport is maintained during the extraction process.
The previous discussion explains the satisfactory description of the photocurrent experimental data produced by the semi-classical CMT obtained in our previous work [17], despite the impossibility in this previous model to describe the intrasubband dynamics. By default, the CMT predicts a sharp Lorentzian transfer function \(\mathcal{T}\). While this description is not suited for a weak coupling scenario, where the sequential transport should be described with a scalar transfer function, Fig. 4-[d] illustrates that it is on the other hand quite adapted to a strong coupling scenario and a delocalized transport scheme. However, being a semi-classical model, the
CMT also lacked the ability to distinguish between the inter- and intrasubband dynamics, which prevents disentangling the broadening of the spectra from their amplitude.
### Tunneling current
Another quantity of interest is the tunneling current \(\mathcal{J}_{T}\) between the plasmonic mode \(\alpha\) and the electronic extraction mode \(\beta\). It is defined as:
\[\mathcal{J}_{T}=i\,\Omega_{T}\left(\langle b_{\alpha}^{\dagger}b_{\beta}\rangle-\langle b_{\alpha}b_{\beta}^{\dagger}\rangle\right) \tag{A4}\]
Using Eq. (23) in the low excitation regime, and developing the expressions of the coherences, \(\mathcal{J}_{T}\) can be approximated as:
\[\mathcal{J}_{T}= \frac{2\Omega_{T}^{2}\gamma_{\alpha\beta}}{\left(\tilde{\omega}_{\alpha}-\omega_{\beta}\right)^{2}+\left(\gamma_{\alpha\beta}\right)^{2}}\left(\langle b_{\alpha}^{\dagger}b_{\alpha}\rangle-\langle b_{\beta}^{\dagger}b_{\beta}\rangle\right) \tag{A5}\] \[+\Re\left[\frac{2i\Omega_{T}\Omega}{\left(\tilde{\omega}_{\alpha}-\omega_{\beta}\right)^{2}+\left(\gamma_{\alpha\beta}\right)^{2}}\left\langle a_{c}b_{\beta}^{\dagger}\right\rangle\right]\]
where \(\gamma_{\alpha\beta}=\gamma_{\alpha}+\gamma_{\alpha}^{\mathrm{intra}}+\gamma_{\beta}+\gamma_{\beta}^{\mathrm{intra}}\) is the sum of the different contributions to the damping of the coherences inside the QCD. The expression of \(\mathcal{J}_{T}\) obtained in Eq. (A5) decomposes into two contributions. The first term is the standard sequential tunnel current [53, 54] (in its first order expression), which is broadly used for the electronic transport in QCDs operating in the weak coupling regime [22]. It is a semi-classical expression of the current, in the sense that it directly involves the population difference between the modes involved in the tunneling process, \(\langle b_{\alpha}^{\dagger}b_{\alpha}\rangle-\langle b_{\beta}^{\dagger}b_{\beta}\rangle\). The second term is a new addition to the tunnel current. It involves the coherences \(\langle a_{c}b_{\beta}^{\dagger}\rangle\) between the cavity and the extractor modes, which could be qualified as long-range coherences (the two modes are only coupled through their mutual coupling to the ISB mode). It thus expresses the system's capacity to transport current between modes that are not directly coupled. We will refer to this current as _delocalized current_.
The amplitude of the delocalized current of Eq. (A5) is controlled by a Lorentzian function and involves the cross product of the couplings \(\Omega_{T}\) (tunnel coupling) and \(\Omega\) (light-matter coupling). In the case of a weakly coupled QCD, it is thus expected that the delocalized current is null. Note that it can be numerically checked that the current expression is independent of the considered interface where it is computed, thus \(\mathcal{J}_{T}=\mathcal{J}_{\beta}\).
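This equality can be verified directly on the steady state of the Sec. II.2 sketch (using \(\langle b_{\alpha}b_{\beta}^{\dagger}\rangle=\langle b_{\alpha}^{\dagger}b_{\beta}\rangle^{*}\) for independent bosonic modes):

```python
# Tunneling current of Eq. (A4) versus extracted current of Eq. (28);
# qt.expect returns a complex number for a non-Hermitian operator.
coh = qt.expect(b_a.dag() * b_b, rho_ss)
J_T = (1j * Omega_T * (coh - np.conj(coh))).real
J_b = 2 * gam_b * qt.expect(b_b.dag() * b_b, rho_ss)
print(J_T, J_b)   # the two currents agree in the steady state
```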
Figure 4: Light-matter coupling \(\Omega\) parametric study for the different quantities of interest: [a] Total absorption of the system (ISB absorption, extractor dissipation and cavity absorption) [b] Internal QCD absorption (ISB absorption and extractor dissipation) [c] Photocurrent (extractor dissipation) [d] Transfer function between the extractor dissipation (\(\mathcal{A}_{\beta}:=\mathcal{J}_{\beta}/|s_{+}|^{2}\)) and the internal QCD absorption (\(\mathcal{A}_{\alpha}+\mathcal{A}_{\beta}\)). The blue dashed line represents an equivalent weak coupling situation where the cavity is not included in the model, and the ISB \(\alpha\) transition is directly pumped (all other parameters are exactly the same as in the strong coupling situations).
### Validity domains of the different models
To explore the validity domains of the different models introduced here and in our previous work [17] (sequential model, CMT model, quantum master equation model), we define a criterion based on the spectral shape of the transfer function \(\mathcal{T}(\omega)\), the sharpness \(r\):
\[r(\Omega,\gamma_{\alpha}^{\text{intra}}+\gamma_{\beta}^{\text{intra}})=\frac{\text{Max}\{\mathcal{T}(\omega)\}-\text{Min}\{\mathcal{T}(\omega)\}}{\text{Max}\{\mathcal{T}(\omega)\}} \tag{A6}\]
\(r=1\) thus indicates that the transfer function \(\mathcal{T}(\omega)\) is a sharp Lorentzian function, \(r=0\) indicates that \(\mathcal{T}(\omega)\) is a flat scalar function. Fig. 5 summarizes the results of the parametric exploration on both the total intrasubband scattering and the light-matter coupling strength. We differentiate three domains D1, D2 and D3:
1. Domain D1: sequential transport model, flat scalar transfer function (\(\mathcal{T}(\omega)\approx p_{E}\)). This domain is correctly described by the standard thermalized subband model for QCDs. It corresponds, for instance, to QCDs operating in the weak-coupling regime.
2. Domain D3: delocalized transport model, sharp Lorentzian transfer function. This domain is correctly described by the CMT.
3. Domain D2: intermediate domain, where transport combines contributions from different sources. D1, D2 and D3 are all correctly described by the density matrix formalism of equation (23), thanks to its capability to distinguish the intra- and inter-subband dynamics.
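A minimal helper implementing the sharpness criterion of Eq. (A6) on a sampled transfer function (such as the one returned by the sketch below Eq. (A1)):

```python
import numpy as np

def sharpness(T_vals):
    """Sharpness r of Eq. (A6) from samples of T(omega)."""
    T_vals = np.asarray(T_vals)
    return (T_vals.max() - T_vals.min()) / T_vals.max()

# r ~ 0: flat, scalar transfer function (domain D1, sequential transport)
# r ~ 1: sharp Lorentzian transfer function (domain D3, delocalized transport)
```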
## Appendix B Experimental system and QCD bandstructure
In this section, we present additional information about the samples used in this work. Fig. 6 presents a scanning electron microscope (SEM) image of a patch cavity array, and Fig. 7 presents the bandstructure of the QCD embedded inside the patches. The bandstructure is computed using our sequential transport software [22].
Figure 5: Sharpness of the transfer function \(\mathcal{T}\) as a function of the light-matter coupling strength ratio \(\Omega/\omega_{\alpha}\) and the intrasubband scattering predominance \(\gamma^{\text{intra}}\) in both \(\alpha\) and \(\beta\) transitions with respect to the total scattering \(\gamma_{\alpha\beta}\). This map gives intuitive validity domains for all the different physical models considered up to now, but the limits between the domains are arbitrary: a flat transfer function (\(r<0.3\)) suggests a sequential transport model, whereas a sharp transfer function (\(r>0.7\)) suggests a delocalized transport model, described by the Coupled Mode Theory. The quantum master equation model presented in this article covers the whole domain.
Figure 6: SEM image of a patch cavity-embedded QCD detector. \(s\) is the patch lateral size and \(p\) is the array period. Patches are electrically connected using gold wires deposited on a dielectric bridge layer. The active layers, the QCDs, are embedded between gold layers (Au).
## Appendix C Additional photocurrent measurements and computational results
In this section, we present additional photocurrent measurements and computational results to supplement the results of Fig. 2.
Figure 8: Normalized photocurrent measurements (continuous lines) and quantum master equation global fit (dashed lines), for two cavity geometries [a] \(s=1.50\) μm, \(p=5\) μm and [b] \(s=1.6\) μm, \(p=7\) μm. Offsets are added for clarity. Filled areas represent the errors of the fit parameters propagated onto the spectra. The extractor frequency \(\omega_{\beta}(F)\), dependent on the electric field \(F\), and the plasma-shifted ISB transition \(\tilde{\omega}_{\alpha}\) are both superimposed on the spectra.
Figure 7: Bandstructure of the QCD used for the photocurrent measurements studied in Fig. 2. Here the electric field applied on the structure is \(F=-10\) kV.cm\({}^{-1}\). The extraction probability computed for this field and \(T=78\) K is \(p_{E}=0.43\)[55]. The potential of the quantum wells is calculated by considering the gradual variation in the composition profile. |
2306.12227 | Testing $\chi$PT with the masses of the Nambu-Goldstone bosons | The spontaneous breakdown of an approximate symmetry implies that the spectrum of the theory contains approximately massless particles. The hidden symmetry very strongly constrains their masses. A numerical evaluation of these constraints on the lattice would allow a more precise determination of the quark mass ratios m_u:m_d:m_s and thereby reduce some of the uncertainties encountered in precision flavour physics. | H. Leutwyler | 2023-06-21T12:37:23Z | http://arxiv.org/abs/2306.12227v1 | # Testing \(\chi\)PT with the masses of the Nambu-Goldstone bosons
###### Abstract
The spontaneous breakdown of an approximate symmetry implies that the spectrum of the theory contains approximately massless particles. The hidden symmetry very strongly constrains their masses. A numerical evaluation of these constraints on the lattice would allow a more precise determination of the quark mass ratios \(m_{u}:m_{d}:m_{s}\) and thereby reduce some of the uncertainties encountered in precision flavour physics.
## 1 Introduction
In 1972, Murray Gell-Mann gave a series of lectures at the Schladming Winter School. At that time, the quarks were still taken as a theoretical construct to be treated like the veal used to prepare a pheasant in the royal French cuisine: the pheasant is baked between two slices of veal; while it deserves being served on the royal table, the veal stays in the kitchen for the less royal members of the court. Murray invited me to visit Caltech, which I did during three months in the spring break of 1973. There I met Harald Fritzsch and spent an extremely interesting period, collaborating with him and with Murray Gell-Mann on the possibility that the force between the quarks might be generated by an octet of gluons [1]. The masses of the quarks are one of the puzzles which strongly attracted Harald's attention. During my visit at Caltech, we worked together on sum rules that involve the quark masses [2]. Later on, Harald wrote a paper on his own [3], concerning the proposal that the Cabibbo angle might be related to quark mass ratios [4, 5, 6]. He intensively pursued this idea from then on (for a recent account of his work in this direction, see Ref. [7]).
It is not a simple matter to subject such relations to a stringent test, because the quark masses can be measured only indirectly, via their impact on measurable quantities. The lattice approach has made it possible to evaluate the Standard Model beyond perturbation theory. Slowly, but steadily the precision reached with
these calculations is increasing. One of the problems encountered originates in the fact that the lightest hadron, the pion, is very light. The size of the box in which the system is enclosed must be large compared to the pion Compton wavelength for the presence of the box not to distort the properties of the system. The fact that the e.m. interaction is of long range represents an even more serious obstacle - finite size effects persist even if the box is large. Currently, this appears to be the limiting factor in the determination of the quark mass ratios. Indeed, the current knowledge of the light quark masses is subject to substantial uncertainties.
In the following, I try to draw attention to a theoretical question arising in this connection. The answer is within reach of presently available methods and could shed light on a poorly understood aspect of precision flavour physics: the sensitivity of the Standard Model predictions to the masses of the light quarks.
As such, the quark masses do not represent physical quantities, but the renormalization procedure can be chosen such that their ratios do. It is customary to parametrize the two independent ratios of the light quark masses with
\[S\equiv\frac{m_{s}}{m_{ud}}\;,\hskip 28.452756ptR\equiv\frac{m_{s}-m_{ud}}{m_{d}- m_{u}}\;, \tag{1}\]
where \(m_{ud}\equiv\frac{1}{2}(m_{u}+m_{d})\) stands for the mean mass of \(u\) and \(d\).
The accuracy to which \(S\) can be determined on the lattice is amazing. The results quoted in the most recent edition of the FLAG review [8] reads1:
Footnote 1: Throughout the present article, the lattice numbers quoted stem from calculations with \(N_{f}=2+1+1\).
\[S=27.23(10)\,. \tag{2}\]
The ratio \(R\) is less well determined on the lattice because it concerns isospin breaking effects and is much more sensitive to the contributions from the e.m. interaction than \(S\). Since this interaction is of long range, it is more difficult to properly account for on a lattice than QCD by itself. While the above value of \(S\) has an accuracy of about 4 per mille, the uncertainty attached to the result for \(R\) in Ref. [8] is more than 10 times larger:
\[R=35.9(1.7)\,. \tag{3}\]
## 2 QCD as part of the Standard Model
The question I wish to draw attention to concerns QCD as such. I reserve the symbols \(M_{i}\) for the physical masses and use \(\hat{M}_{i}\) for what becomes of these if the electroweak interaction is turned off. It is well-known that the manner in which the electroweak interaction is turned off is not unique, but requires a convention, which can be specified as follows. I assume that the Standard Model accurately describes nature unless the energies involved are too high. At low energies, where the weak interaction is frozen, this model reduces to QCD + QED. The neutrini can be described as free particles and the degrees of freedom of the intermediate bosons
\(W^{\pm}\), \(Z\), the Higgs field and the heavy quarks \(b\), \(t\) affect the low energy properties only indirectly. They are responsible for the fact that the neutron as well as the pions, kaons and myons decay. They do contribute to the masses of the particles and their magnetic moments, for instance, but these effects are much too small to be relevant at the accuracy reached in the determination of the quark masses. In the \(\overline{\rm MS}\) scheme, the framework can thus be characterized by two running coupling constants and the running masses of four quarks and three charged leptons.
As thoroughly discussed by Gasser, Rusetsky and Scimemi [9], the natural way to identify the QCD part of QCD + QED is to match the running coupling constant \(\alpha_{s}(\mu)\) and the running quark masses \(m_{u}(\mu)\), \(m_{d}(\mu)\), \(m_{s}(\mu)\), \(m_{c}(\mu)\) of the two theories, at a suitable value of the renormalization scale \(\mu\). Once the matching scale is chosen, the QCD part of the Standard Model is uniquely defined. Within QCD, the coupling constant and the quark masses do not run in the same manner as in the Standard Model and the masses \(\hat{M}_{i}\) differ from the physical masses \(M_{i}\) - the difference represents the electroweak self energy. Note also that QCD + QED involves contributions from virtual leptons. In the masses of the hadrons, these effects show up only at very high precision, but in principle, the matching of lattice calculations with the parameters of the Standard Model must account also for these contributions.
The prescription to be used in the specification of the lattice version of QCD is currently under critical examination within FLAG1. I assume that, within errors, the numbers quoted in the FLAG review for \(\hat{M}_{\pi^{+}}\), \(\hat{M}_{K^{+}}\), \(\hat{M}_{K^{0}}\), do represent the masses obtained within QCD from the values quoted for the quark masses. Since the low energy theorems discussed below hold irrespective of the values adopted for the coupling constant and the quark masses, the matching with QCD + QED and with experiment does not play any role when comparing these predictions with lattice data.
Footnote 1: Urs Wenger, private communication.
## 3 Expansion in powers of the quark masses
It so happens that three of the quarks are very light. If they were massless, QCD would have an exact chiral symmetry. The symmetry is partly hidden because the ground state is not symmetric with respect to chiral rotations. As pointed out by Nambu [10], this implies that the spectrum of the theory contains massless particles: if the quarks were massless, the spectrum of QCD would contain an octet of massless Nambu-Goldstone bosons.
Chiral perturbation theory (\(\chi\)PT) treats the mass matrix of the light quarks, \({\cal M}={\rm diag}\{m_{u},m_{d},m_{s}\}\), as a perturbation and shows that, for the square of the masses of the charged pion and of the kaons, the expansion in powers of the light
quark masses ("chiral expansion") starts with a linear term:
\[\begin{split}\hat{M}_{\pi^{+}}^{2}&=(m_{u}+m_{d})B_{0}+ O({\cal M}^{2})\;,\\ \hat{M}_{K^{+}}^{2}&=(m_{u}+m_{s})B_{0}+O({\cal M}^{2 })\;,\\ \hat{M}_{K^{0}}^{2}&=(m_{d}+m_{s})B_{0}+O({\cal M}^{2 })\;.\end{split} \tag{4}\]
On account of invariance under charge conjugation, these formulae also hold for \(\pi^{-}\), \(K^{-}\) and \(\bar{K}^{0}\), but for \(\pi^{0}\) and \(\eta\), the leading terms are more complicated, because the difference \(m_{d}-m_{u}\) breaks isospin and hence generates mixing between the two levels:
\[\hat{M}_{\pi^{0}}^{2} = \left\{(m_{u}+m_{d})-\frac{4}{3}(m_{s}-m_{ud})\sin^{2}\delta/ \cos 2\delta\right\}B_{0}+O({\cal M}^{2})\;, \tag{5}\] \[\hat{M}_{\eta}^{2} = \left\{\frac{2}{3}(m_{ud}+2m_{s})+\frac{4}{3}(m_{s}-m_{ud})\sin^{ 2}\delta/\cos 2\delta\right\}B_{0}+O({\cal M}^{2})\;.\]
The mixing angle \(\delta\) is determined by the quark mass ratio \(R\) according to
\[\mbox{tg}\,2\,\delta=\frac{\sqrt{3}}{2R}\;. \tag{6}\]
Since \(R\) happens to be large, \(\delta\) is small. The repulsion of the two levels makes the neutral pion somewhat lighter than the charged one, but the effect is of second order in isospin breaking and hence very small.
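Numerically, Eq. (6) with the lattice value of \(R\) quoted in (3) indeed gives a small angle (central values only):

```python
import numpy as np

R = 35.9                                        # central value from (3)
delta = 0.5 * np.arctan(np.sqrt(3) / (2 * R))   # Eq. (6)
print(delta, np.degrees(delta))                 # ~0.012 rad, i.e. ~0.7 degrees
```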
## 4 Reparametrization invariance
At leading order of the chiral expansion, the masses of the Nambu-Goldstone bosons do determine the ratios of the light quark masses, but the quark masses themselves cannot be determined within \(\chi\)PT: the reparametrization \({\cal M}^{\prime}=\kappa{\cal M}\) of the quark mass matrix leaves the effective Lagrangian invariant, provided the low energy constant \(B_{0}\) is transformed as well, with \(B_{0}^{\prime}=B_{0}/\kappa\). In the standard notation, the quark mass matrix enters the entire effective Lagrangian exclusively via the matrix \(\chi=2B_{0}{\cal M}\), so that the higher order contributions are automatically invariant under this transformation.
At NLO, not even the quark mass ratios are determined by the meson masses. Kaplan and Manohar [11] identified the algebraic origin of this property of \(\chi\)PT: it is a consequence of the fact that the effective Lagrangian only exploits the symmetry properties of the quark mass matrix. The matrix \({\cal M}^{\dagger-1}\)det\({\cal M}\) transforms in the same way under chiral transformations as \({\cal M}\) itself and the same thus holds for \({\cal M}^{\prime}={\cal M}+\lambda\,{\cal M}^{\dagger-1}\)det\({\cal M}\). The operation amounts to a reparametrization of the quark masses: \(m_{u}^{\prime}=m_{u}+\lambda\,m_{d}\,m_{s}\) and analogously for \(m_{d}\) and \(m_{s}\). Replacing \({\cal M}\) by \({\cal M}^{\prime}\) in the effective Lagrangian leaves the leading order terms alone, but generates contributions of NLO that are quadratic in \({\cal M}\). The extra terms can be absorbed by changing the corresponding low energy constants (LECs) according to \(L_{6}^{\prime}=L_{6}-\bar{\lambda}\), \(L_{7}^{\prime}=L_{7}-\bar{\lambda}\), \(L_{8}^{\prime}=L_{8}+2\bar{\lambda}\), with \(\bar{\lambda}=\lambda\,B_{0}/32F_{0}^{2}\). The simultaneous change \({\cal M}\to{\cal M}^{\prime}\), \(L_{i}\to L_{i}^{\prime}\) does leave the effective Lagrangian invariant to NLO. Hence the chiral representation to first nonleading order of all quantities of physical interest obtained
with this Lagrangian is invariant under the above transformation. With a suitable transformation rule for the LECs occurring at higher orders, reparametrization invariance of the effective Lagrangian holds to all orders.
As a side remark, I mention that in the effective theory relevant if only two quark flavours are treated as light, the leading order Lagrangian only involves the sum of the two quark masses - the difference only starts showing up at NLO. The Lagrangian is invariant under the reparametrization \(m_{u}^{\prime}+m_{d}^{\prime}=\kappa(m_{u}+m_{d})\), \(m_{d}^{\prime}-m_{u}^{\prime}=\lambda(m_{d}-m_{u})\), provided the LECs \(B\), \(\ell_{7}\) and \(h_{3}\) are transformed with \(B^{\prime}=B/\kappa\), \(\ell_{7}^{\prime}=\kappa^{2}\ell_{7}/\lambda^{2}\), \(h_{3}^{\prime}=\kappa^{2}h_{3}/\lambda^{2}\). In this framework, the quark mass ratio \(m_{u}/m_{d}\) is not reparametrization invariant, either.
Since the quark mass ratios \(S\) and \(R\) are not reparametrization invariant, they do pick up NLO corrections that cannot be pinned down with \(\chi\)PT. Early estimates of the quark mass ratios [12, 13] had to rely on LO formulae and were subject to considerable uncertainties that were often underestimated. An extreme example is the Dashen theorem [14], which states that - at leading order of the chiral expansion - the e.m. contributions to the square of the masses of the charged kaons and pions are the same, while the neutral Nambu-Goldstone bosons do not pick up such contributions at all. Langacker and Pagels [15, 16] pointed out even before \(\chi\)PT had been set up that the Dashen theorem neglects NLO contributions that contain juicy chiral logarithms. Several authors [17, 18, 19, 20, 21, 22, 23] tried to estimate the low energy constants relevant at NLO, but the dust only settled when the work done on the lattice made it possible to solve QCD numerically.
## 5 Meson mass ratio relevant for \(S\)
When evaluating QCD on a lattice, the quark masses represent free parameters - in principle, they can be taken arbitrarily small. In this limit, the ratios of the Nambu-Goldstone masses are given by ratios of quark masses. In particular, the ratio of the mean mass square in the kaon multiplet, \(\hat{M}_{K}^{2}=\frac{1}{2}(\hat{M}_{K^{+}}^{2}+\hat{M}_{K^{0}}^{2})\), to the mass square of the charged pion is determined by the quark mass ratio \(S\):
\[\frac{2\hat{M}_{K}^{2}}{\hat{M}_{\pi^{+}}^{2}}=(S+1)(1+\Delta_{S}). \tag{7}\]
The factor \((1+\Delta_{S})\) accounts for the corrections arising from higher orders of the expansion: \(\Delta_{S}\) is of \(O(\mathcal{M})\).
The uncertainties in the e.m. self energies mainly concern the mass difference between the charged and neutral kaons. For the masses occurring in the above relation, the currently available lattice determinations imply
\[\hat{M}_{\pi^{+}}=134.8(3)\,\mathrm{MeV}\,,\quad\hat{M}_{K}=494.2(4)\,\mathrm{ MeV}\,. \tag{8}\]
Solving equation (7) for \(\Delta_{S}\) and using the value (2) for \(S\), I obtain
\[\Delta_{S}=-0.048(4)(3)\,, \tag{9}\]
where the first and second error stem from the uncertainties in the meson and quark masses, respectively - since the two are correlated, it is not legitimate to add them in quadrature. The result is displayed as a black dot with error bars in Fig. 1.
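The arithmetic behind the central value in (9) is elementary; a few lines suffice to reproduce it (error propagation with the correlations is omitted in this sketch):

```python
M_pi, M_K, S = 134.8, 494.2, 27.23   # central values from (2) and (8), MeV
Delta_S = 2 * M_K**2 / (M_pi**2 * (S + 1)) - 1   # Eq. (7) solved for Delta_S
print(Delta_S)                        # ~ -0.048
```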
The chiral perturbation series for \(\Delta_{S}\) starts with
\[\Delta_{S} = (1-\delta_{S})\left\{-\mu_{\pi^{0}}+\mu_{\eta}-\frac{6}{F_{0}^{2}}(\hat{M}_{\eta}^{2}-\hat{M}_{\pi^{0}}^{2})(L_{5}^{r}-2L_{8}^{r})\right\}+O(\mathcal{M}^{2})\,,\] \[\delta_{S} = \frac{4(S+2)-2(S-1)\sec 2\delta}{3(S+1)}\,\sin^{2}\delta\,,\ \ \ \mu_{P}\equiv\frac{\hat{M}_{P}^{2}}{32\pi^{2}F_{0}^{2}}\ln\frac{\hat{M}_{P}^{2}}{\mu^{2}}\,. \tag{10}\]
The term \(\delta_{S}\) is of second order in isospin breaking and hence tiny: the quark mass ratios in equations (2) and (3) yield \(\delta_{S}=1.11(10)\cdot 10^{-4}\). The first two terms in the curly bracket stem from loop graphs which are divergent and depend logarithmically on the masses of the Nambu-Goldstone bosons. The divergence is absorbed in a renormalization of the low energy constants \(L_{5}^{r}\), \(L_{8}^{r}\) and \(\mu\) stands for the running scale used in the renormalization. If isospin breaking is turned off (\(m_{u}-m_{d}\to 0\)), the formula agrees with the representation for the masses of the Nambu-Goldstone bosons given in Ref. [24].
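The quoted size of \(\delta_{S}\) can be checked in the same manner (central values only):

```python
import numpy as np

S, R = 27.23, 35.9                                  # from (2) and (3)
delta = 0.5 * np.arctan(np.sqrt(3) / (2 * R))       # Eq. (6)
delta_S = ((4 * (S + 2) - 2 * (S - 1) / np.cos(2 * delta))
           * np.sin(delta) ** 2 / (3 * (S + 1)))    # Eq. (10)
print(delta_S)                                      # ~1.11e-4
```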
The lower band in Fig. 1 illustrates the above formula by showing the dependence of \(\Delta_{S}\) on the masses of the light quarks. Terms of \(O(\mathcal{M}^{2})\) are neglected and the
Figure 1: Corrections to the leading order relations between the masses of the Nambu-Goldstone bosons and the quark masses. The ratios \(m_{u}:m_{d}:m_{s}\) as well as \(\Lambda_{\rm QCD}\) and \(m_{c}\) are kept fixed, \(m_{s}\) is varied in the interval \(0<m_{s}<100\,{\rm MeV}\). The lattice values for \(\Delta_{S}\) and \(\Delta_{R}\) are based on the \(N_{f}=2+1+1\) results quoted in the FLAG review [8]. \(\Delta_{\eta}\) represents the analogous correction to the Gell-Mann-Okubo formula.
combination \(L_{5}^{r}-2L_{8}^{r}\) of LECs is fixed such that, at the physical value of the quark masses5, the band matches the linear sum of the errors in (9). Numerically, this leads to \(L_{5}^{r}-2L_{8}^{r}=-0.014(26)\cdot 10^{-3}\) at scale \(\mu=M_{\rho}\), well within the range obtained from the individual numerical values of the LECs quoted in Ref. [8]. The strong curvature of the band illustrates the fact that the combination of LECs relevant here is very small - the dependence on the quark masses is dominated by the chiral logarithms.
Footnote 5: FLAG quotes \(m_{s}=93.44(68)\,\mathrm{MeV}\) in the \(\overline{\mathrm{MS}}\) scheme for \(N_{f}=2+1+1\)[8].
The chiral expansion of the Nambu-Goldstone masses has been worked out explicitly to NNLO of the chiral expansion [25, 26]. The package _Chiron_ built by Hans Bijnens [27] includes everything needed to obtain the chiral representation for the masses of the Nambu-Goldstone bosons to two loops. The corresponding representation for the terms of \(O(\mathcal{M}^{2})\) in formula (10) involves further non-analytic terms as well as further LECs. The available numerical estimates of the latter are as yet too crude to shed light on the size of the corresponding contributions to \(\Delta_{S}\), but the framework provides an excellent basis for the analysis of the lattice data. In the left half of the figure, these terms are negligible, but towards the right their importance grows.
The work done on the lattice drastically reduced the uncertainty to which the quark mass ratio \(S\) can be determined from phenomenology. The value (9) shows that lattice calculations also yield a sharp determination for \(\Delta_{S}\) at the physical point. An accurate evaluation of the quark mass dependence of \(\Delta_{S}\) and of the term \(\Delta_{R}\) to be discussed below would make it possible to reduce the uncertainties in the LECs relevant for the Nambu-Goldstone masses. I expect this evaluation to confirm that, in Fig. 1, the neglected higher orders are indeed very small, throughout the range shown in that figure.
## 6 Meson mass ratio relevant for \(R\)
The ratio \(R\) compares the difference \(m_{s}-m_{ud}\), which is responsible for the breaking of the eightfold way, with the difference \(m_{d}-m_{u}\), which breaks isospin symmetry. The relations (4) imply that, at leading order in the chiral expansion, the ratio of the corresponding differences between the squares of the Nambu-Goldstone masses is the same:
\[\frac{\hat{M}_{K}^{2}-\hat{M}_{\pi^{+}}^{2}}{\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{ +}}^{2}}=R\,(1+\Delta_{R})\;. \tag{11}\]
The correction \(\Delta_{R}=O(\mathcal{M})\) again accounts for the higher order contributions.
The leading term in the chiral expansion of \(\Delta_{R}\) not only involves the same low energy constants as \(\Delta_{S}\), but also the same chiral logarithms:
\[\Delta_{R}=-(1-\delta_{R})\,\Delta_{S}+O(\mathcal{M}^{2})\;,\qquad\delta_{R} =\frac{2S}{4R^{2}(S+1)+S-1}\;. \tag{12}\]
In the isospin limit, the term \(\delta_{R}\) vanishes. The isospin breaking effect is of second order also in this case and hence tiny: the quark mass ratios in equations (2) and (3) yield \(\delta_{R}=3.74(33)\cdot 10^{-4}\). Up to numerically irrelevant contributions, the low energy theorem (12) thus simplifies to \(\Delta_{R}=-\Delta_{S}+O({\cal M}^{2})\). The upper band in Fig. 1 shows the prediction obtained for \(\Delta_{R}\) if the NNLO corrections are neglected.
In order to test the low energy theorem (12), we need an estimate for the kaon mass difference in QCD, which occurs in the definition (11) of \(\Delta_{R}\). The FLAG review [8] discusses the matter in terms of the parameter \(\varepsilon\) that measures the size of the higher order contributions to the Dashen theorem, but the results for the kaon mass difference can be worked out from the information given in the quoted references.
\begin{tabular}{|c|c|c|c||c||c|} \hline reference & BMW [28] & RM123 [29] & MILC [30] & average & \(\eta\to 3\pi\) [31] \\ \hline \(\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{+}}^{2}\) & 6.149(122) & 5.947(151) & 6.075(125) & 6.072(76) & 6.24(38) \\ \hline \end{tabular}
Table 1. Results for the kaon mass difference in QCD (in units of \(10^{-3}\,\mbox{GeV}^{2}\)).
Table 1 shows that the lattice results are consistent within errors. Their average determines the mass difference to an accuracy of 1.2%. The last entry of the table indicates that the phenomenological determination based on \(\eta\) decay is much less accurate, but also consistent with the outcome of the lattice calculations.
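For orientation, the average quoted in Table 1 agrees with a standard inverse-variance weighted mean of the three lattice entries; the sketch below assumes uncorrelated errors, which need not be exactly how the review combines them:

```python
import math

# Inverse-variance weighted mean of the Table 1 entries (units of 1e-3 GeV^2).
vals = [(6.149, 0.122), (5.947, 0.151), (6.075, 0.125)]   # BMW, RM123, MILC
w = [1.0 / s**2 for _, s in vals]
mean = sum(wi * v for wi, (v, _) in zip(w, vals)) / sum(w)
err = 1.0 / math.sqrt(sum(w))
print(f"average = {mean:.3f} +/- {err:.3f}")   # -> 6.071 +/- 0.076, cf. 6.072(76)
```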
Evaluating the term \(\Delta_{R}\) with the masses in (8), the average for \(\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{+}}^{2}\) in Table 1 and the lattice result for \(R\) in (3), I obtain
\[\Delta_{R}=0.037(13)(47)\,. \tag{13}\]
The first error stems from the uncertainty in the meson masses and is totally dominated by the one in the mass difference \(\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{+}}^{2}\). The second error represents the uncertainty due to the fact that the value of \(R\) is known only to an accuracy of about 5%.
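The central value in (13) is easy to reproduce from (11) with the inputs above; in the sketch the value \(R=35.9\) stands in for the lattice result (3), which appears earlier in the text and is therefore an assumption here:

```python
# Sketch of Eq. (13): Delta_R from Eq. (11).
M_pi2 = 0.1348**2   # GeV^2, Eq. (8)
M_K2 = 0.4942**2    # GeV^2, Eq. (8)
dMK2 = 6.072e-3     # GeV^2, average of Table 1
R = 35.9            # assumed FLAG Nf=2+1+1 value, Eq. (3)

Delta_R = (M_K2 - M_pi2) / (R * dMK2) - 1
print(f"Delta_R = {Delta_R:.3f}")   # -> 0.037, matching the central value in (13)
```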
In Fig. 1, the central value of \(\Delta_{R}\) is marked with a triangle. The small error bar represents the uncertainties in the meson masses, while the large one is the linear sum of the two sources of uncertainty. This shows that, even if the uncertainty in \(R\) is ignored, the result is consistent with the prediction. For a proper test of the low energy theorem, however, lattice calculations are required that simultaneously evaluate \(\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{+}}^{2}\) and \(R\), so that the correlation between the two can be accounted for. With the currently available information, not even the sign of \(\Delta_{R}\) can be verified.
## 7 Low energy theorem for \(Q\)
For the product of the two meson mass ratios in equations (7) and (11), the chiral expansion starts with [24]
\[\frac{\hat{M}_{K}^{2}(\hat{M}_{K}^{2}-\hat{M}_{\pi^{+}}^{2})}{\hat{M}_{\pi^{+}}^{2}(\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{+}}^{2})}=Q^{2}(1+\Delta_{Q})\,, \tag{14}\]
where \(Q^{2}\equiv\frac{1}{2}R(S+1)\) represents a ratio of quark mass squares,
\[Q^{2}\equiv\frac{m_{s}^{2}-m_{ud}^{2}}{m_{d}^{2}-m_{u}^{2}}\,, \tag{15}\]
and \(\Delta_{Q}\) accounts for the contributions of higher order.
The first determination of \(Q\) was based on an analysis of the decay \(\eta\to 3\pi\) in the framework of \(\chi\)PT [32]. This process is of particular interest because it violates the conservation of isospin and is therefore sensitive to the difference between \(m_{u}\) and \(m_{d}\). If the e.m. interaction is ignored, the transition amplitude is proportional to \(m_{d}-m_{u}\). As shown by Bell and Sutherland [33, 34], the e.m. contributions are suppressed: in contrast to the mass differences \(M_{\pi^{+}}^{2}-M_{\pi^{0}}^{2}\) and \(M_{K^{0}}^{2}-M_{K^{+}}^{2}\) which do pick up substantial contributions from the e.m. self energies already at leading order, the expansion of the e.m. contribution to the transition amplitude only starts at \(O(e^{2}{\cal M})\). Accordingly, the quark mass ratio \(Q\) can be expressed in terms of measured quantities, to next-to-leading order of the chiral expansion. More than 35 years ago, the numerical evaluation led to \(1/Q^{2}=1.9(3)\cdot 10^{-3}\)[32], which corresponds to \(Q=23.2(1.8)\).
In the meantime, the calculation was improved, accounting for higher order contributions in the expansion in powers of the momenta by means of dispersion theory [35, 36] as well as within \(\chi\)PT [37]. Also, the effects generated by the e.m. interaction were studied in detail [38, 39]. For a thorough discussion, I refer to Ref. [31], where the outcome for \(Q\) is given as \(Q=22.1(7)\) - this confirms the one loop result of \(\chi\)PT and is more accurate.
The determination of \(Q\) from \(\eta\) decay relies on the assumption that the correction \(\Delta_{Q}\) is negligibly small. The critical term in relation (14) is the mass difference \(\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{+}}^{2}\). Table 1 shows that the work done on the lattice has led to a significantly more accurate value for this quantity. Using the average listed in this table and the lattice results (8) for \(\hat{M}_{\pi^{+}}\) and \(\hat{M}_{K}\), the left hand side of (14) can be evaluated rather accurately. The higher orders of the chiral series produce the correction factor
\[(1+\Delta_{Q})=(1+\Delta_{S})(1+\Delta_{R})\,. \tag{16}\]
Equation (12) implies that the two factors on the right hand side of this relation nearly compensate one another:
\[\Delta_{Q}=\delta_{R}\Delta_{S}+O({\cal M}^{2})\;. \tag{17}\]
Assuming that \(\Delta_{Q}\) is indeed small compared to \(\Delta_{S}\) or \(\Delta_{R}\), I obtain
\[Q=22.4(2)\;. \tag{18}\]
With the lattice result (2) for \(S\), the relation \(Q^{2}=\frac{1}{2}R(S+1)\) then leads to
\[R=35.5(5)\;. \tag{19}\]
The quoted errors account for the uncertainties in all of the variables that enter the calculation as input, but do not include an estimate for the neglected higher order contributions. The uncertainty in the kaon mass difference dominates - it is of the order of 1%.
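The central values in (18) and (19) follow from (14) with \(\Delta_{Q}\) set to zero; a minimal sketch, again using the assumed value of \(S\) from (2):

```python
import math

# Q from Eq. (14) with Delta_Q = 0, then R from Q^2 = R(S+1)/2.
M_pi2, M_K2 = 0.1348**2, 0.4942**2   # GeV^2, Eq. (8)
dMK2 = 6.072e-3                      # GeV^2, Table 1 average
S = 27.23                            # assumed, Eq. (2)

Q = math.sqrt(M_K2 * (M_K2 - M_pi2) / (M_pi2 * dMK2))
R = 2 * Q**2 / (S + 1)
print(f"Q = {Q:.1f}, R = {R:.1f}")   # -> Q = 22.4, R = 35.5
```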
Concerning evaluations of \(Q\) on the lattice, there were discrepancies, not only between different lattice calculations but also between some of these and phenomenology (see the 2019 edition of the FLAG review [40]), but the dust appears to have settled: the most recent update of this review [8] quotes the value
\[Q=22.5(5)\,, \tag{20}\]
which is in perfect agreement with the value (18) obtained from the assumption that the correction \(\Delta_{Q}\) is negligibly small. Note, however, that the comparison does not provide a significant test of this assumption. Using the lattice results not only for the left hand side of (14) but also for \(Q\), I obtain
\[\Delta_{Q}=-0.011(13)(43)\,, \tag{21}\]
where the first error reflects the uncertainties in the meson masses, while the second stems from the one in the lattice result for \(Q\). The central value is indeed small, but since the uncertainty is large, it is not yet excluded that \(\Delta_{Q}\) is of the same size as \(\Delta_{S}\) - contrary to what is expected from \(\chi\)PT.
## 8 Second order isospin breaking effects
The quark mass ratio \(Q\) is not strictly reparametrization invariant. Also, in the chiral counting of powers, where the three light quark masses count as small quantities of the same order, the low energy theorem for \(Q\) does not strictly hold to NLO. The quantity \(\Delta_{Q}\), which represents the difference between the meson and quark mass ratios occurring in (14), contains the term \(\delta_{R}\Delta_{S}\), which is of \(O({\cal M})\). It so happens that \(m_{d}-m_{u}\) is small compared to \(m_{s}-m_{ud}\), so that \(\delta_{R}\) is tiny. As discussed above, the isospin breaking effects encountered in the low energy theorem for \(Q\) are too small to be of physical interest, but they do complicate matters.
The change needed to arrive at a version of the low energy theorem that strictly holds to NLO is very modest: it suffices to replace the quark mass ratio \(Q^{2}\) by
\[\tilde{Q}^{2}\equiv\frac{m_{s}^{2}-\tilde{m}_{ud}^{2}}{m_{d}^{2}-m_{u}^{2}}\,, \qquad\tilde{m}_{ud}^{2}\equiv\frac{1}{2}(m_{u}^{2}+m_{d}^{2})\,. \tag{22}\]
The differences between the squares of the light quark masses, \(m_{u}^{2}-m_{d}^{2}\), \(m_{d}^{2}-m_{s}^{2}\), \(m_{s}^{2}-m_{u}^{2}\) are reparametrization invariant modulo contributions of \(O({\cal M}^{4})\). Since \(\tilde{Q}^{2}\) can be expressed in terms of these differences, it is reparametrization invariant up to terms of \(O({\cal M}^{2})\), in contrast to \(Q\).
In order to find the corresponding change in the meson mass ratio that enters the low energy theorem, it suffices to solve the leading order mass formulae (4) for the quark masses. Inserting the result in the expression for \(\tilde{Q}^{2}\), the denominator remains the same as before, while in the numerator, the term \(\hat{M}_{K}^{4}\) is replaced by the product \(\hat{M}_{K^{0}}^{2}\hat{M}_{K^{+}}^{2}\). Hence the correction \(\Delta_{\tilde{Q}}\) defined by
\[\frac{\hat{M}_{K^{0}}^{2}\hat{M}_{K^{+}}^{2}-\hat{M}_{\pi^{+}}^{2}\hat{M}_{K}^{2}}{\hat{M}_{\pi^{+}}^{2}(\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{+}}^{2})}=\tilde{Q}^{2}(1+\Delta_{\tilde{Q}}) \tag{23}\]
vanishes at leading order of the chiral expansion. Actually, inserting the chiral expansion of the meson masses to NLO, one finds that not only the contributions from the LECs but also the chiral logarithms occurring at NLO cancel out, so that the low energy theorem takes the simple form
\[\Delta_{\tilde{Q}}=O({\cal M}^{2})\,. \tag{24}\]
The only difference between \(Q\) and \(\tilde{Q}\) is that \(m_{ud}\) is replaced by \(\tilde{m}_{ud}\). In view of \(\tilde{m}_{ud}^{2}-m_{ud}^{2}=\frac{1}{4}(m_{d}-m_{u})^{2}\), the difference is of second order in isospin breaking and hence tiny. At the quoted accuracy, the lattice result (20) also holds for \(\tilde{Q}\) and the values of \(Q\) and \(R\) obtained from the assumption that \(\Delta_{\tilde{Q}}\) is negligibly small cannot be distinguished from those given in equations (18) and (19), obtained by neglecting \(\Delta_{Q}\).
## 9 Gell-Mann-Okubo formula
The quantities \(\Delta_{S}\), \(\Delta_{R}\) and \(\Delta_{Q}\) can be compared with the higher order corrections arising in the case of the Gell-Mann-Okubo formula, which predicts the mass of the \(\eta\) in terms of those of the pions and kaons. At leading order and in the isospin limit, the prediction reads \(\hat{M}_{\eta}^{2}=\frac{1}{3}(4\hat{M}_{K}^{2}-\hat{M}_{\pi}^{2})\) [41, 42]. For \(m_{u}\neq m_{d}\), the LO mass formulae (4) and (5) imply that the relation takes the form \(\hat{M}_{\eta}^{2}=\frac{1}{3}(2\hat{M}_{K^{+}}^{2}+2\hat{M}_{K^{0}}^{2}+2\hat{M}_{\pi^{+}}^{2}-3\hat{M}_{\pi^{0}}^{2})\). The higher orders of the chiral expansion generate a correction which I denote by \(\Delta_{\eta}\):
\[\hat{M}_{\eta}^{2}=\frac{1}{3}(4\hat{M}_{K}^{2}+2\hat{M}_{\pi^{+}}^{2}-3\hat{M }_{\pi^{0}}^{2})(1+\Delta_{\eta})\;. \tag{25}\]
The numerical value of the correction is determined by the masses of the Nambu-Goldstone bosons. I discuss the estimates used for these, in turn.
Since the \(\eta\) is electrically neutral and spinless, its e.m. self energy is expected to be positive but very small, comparable to the one of the \(K^{0}\). The lattice results in equation (8) and Table 1 yield \(M_{K^{0}}^{\rm QED}=0.35(40)\) MeV. In my opinion, the estimate \(M_{\eta}^{\rm QED}=0.4(4)\) MeV is on the conservative side. The corresponding range for the mass of the \(\eta\) in QCD reads:
\[\hat{M}_{\eta}=547.5(4)\,{\rm MeV}\,. \tag{26}\]
The lattice results in equation (8) determine the values of \(\hat{M}_{K}^{2}\) and \(\hat{M}_{\pi^{+}}^{2}\) to good precision. Since the difference \(\hat{M}_{\pi^{+}}^{2}-\hat{M}_{\pi^{0}}^{2}\) is of second order in isospin breaking, it is tiny. The change in the value of \(\Delta_{\eta}\) obtained if \(M_{\pi^{0}}^{2}\) is replaced by \(M_{\pi^{+}}^{2}\) is totally negligible compared to the uncertainties from the kaons and from the \(\eta\), which are of comparable size.
These estimates determine the value of the correction to high accuracy:
\[\Delta_{\eta}=-0.062(2)\;. \tag{27}\]
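A quick numerical check of (27) from (25), approximating \(\hat{M}_{\pi^{0}}\) by \(\hat{M}_{\pi^{+}}\) (legitimate here, since the difference is of second order in isospin breaking, as noted above):

```python
# Sketch of Eq. (27): Delta_eta from Eq. (25), with M_pi0 ~ M_pi+.
M_pi, M_K, M_eta = 0.1348, 0.4942, 0.5475   # GeV, Eqs. (8) and (26)

denom = (4 * M_K**2 + 2 * M_pi**2 - 3 * M_pi**2) / 3
Delta_eta = M_eta**2 / denom - 1
print(f"Delta_eta = {Delta_eta:.3f}")   # -> -0.062, matching Eq. (27)
```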
The result is small, comparable with the one obtained for \(\Delta_{S}\), which also represents a quantity of \(O({\cal M})\). The outcome is not sensitive to the estimate used for the mass of the \(\eta\) in QCD: if \(\hat{M}_{\eta}\) is instead identified with the physical mass of the particle, the central value of \(\Delta_{\eta}\) is replaced by \(-0.061\).
The one loop representation of \(\Delta_{\eta}\),
\[\Delta_{\eta}=\frac{2}{3\hat{M}_{\eta}^{2}}\Biggl\{-3\hat{M}_{\pi^{0}}^{2}\mu_{\pi^{0}}+2\hat{M}_{\pi^{+}}^{2}\mu_{\pi^{+}}+2\hat{M}_{K^{+}}^{2}\mu_{K^{+}}+2\hat{M}_{K^{0}}^{2}\mu_{K^{0}}-3\hat{M}_{\eta}^{2}\mu_{\eta}-\frac{3(\hat{M}_{\eta}^{2}-\hat{M}_{\pi^{0}}^{2})^{2}}{F_{0}^{2}}(L_{5}^{r}-12L_{7}-6L_{8}^{r})\Biggr\}+O(\mathcal{M}^{2})\,, \tag{28}\]
is similar to the one for \(\Delta_{S}\), but the combination of LECs occurring in \(\Delta_{\eta}\) is reparametrization invariant, while the one in \(\Delta_{S}\) isn't. Indeed, the definition (7) of \(\Delta_{S}\) contains the quark mass ratio \(S\), which is not reparametrization invariant, while the definition (25) of \(\Delta_{\eta}\) exclusively involves meson masses.
The corrections \(\Delta_{S}\) and \(\Delta_{\eta}\) are both independent of the renormalization scale. The representation (10) for \(\Delta_{S}\) manifestly exhibits this property: the scale dependence of the logarithmic contributions cancels against the one of the LECs. In the above representation for \(\Delta_{\eta}\), scale independence is not manifest, but it does hold if the LO formulae for the meson masses are inserted (this is a good check of the algebra). As such, it does not matter whether the chiral logarithms are evaluated with the LO expressions for the meson masses or with the estimates given for the full masses in QCD, for instance - the difference in the result for \(\Delta_{\eta}\) is of \(O(\mathcal{M}^{2})\). In order to preserve scale independence in numerical evaluations of the loop integrals, however, these must be expressed in terms of the parameters that occur in the chiral Lagrangian at leading order. The masses of the mesons running around the loops are given by the leading order formulae (4), (5) and depend on the quark mass ratios \(S\), \(R\), which are left open. For definiteness, I fix the scale with \(B_{0}=2.52\) GeV; this choice ensures that, at the physical value of \(m_{s}\), the meson masses occurring in the loop integrals differ from the lattice results for the full masses in QCD by less than 3%.
The narrow blue band in Fig. 1 displays the dependence of \(\Delta_{\eta}\) on the light quark masses, again at fixed ratios \(m_{u}:m_{d}:m_{s}\). The relevant combination of LECs is varied such that the value of \(\Delta_{\eta}\) at the physical point matches the range (27). The plot shows that the lattice results determine the correction to the Gell-Mann-Okubo formula to remarkable precision. In size, the correction \(\Delta_{\eta}\) is comparable to \(\Delta_{S}\) or to the prediction for \(\Delta_{R}\). The curvature of the bands stems from the chiral logarithms - the contributions from the LECs are linear in the quark masses.
## 10 Low energy theorem for \(\hat{M}_{\pi^{+}}-\hat{M}_{\pi^{0}}\)
At leading order, the chiral representation of the five mass squares \(\hat{M}_{\pi^{0}}^{2}\), \(\hat{M}_{\pi^{+}}^{2}\), \(\hat{M}_{K^{+}}^{2}\), \(\hat{M}_{K^{0}}^{2}\), \(\hat{M}_{\eta}^{2}\) involves a single low energy constant, \(B_{0}\). At NLO, three additional parameters appear: \(L_{4}-2L_{6}\), \(L_{5}-2L_{8}\), \(3L_{7}+L_{8}\). In the meson mass ratios, the first one of these as well as \(B_{0}\) cancel out. Hence there are two parameter-free constraints among the meson masses, valid to NLO. The first one of these is the low energy theorem discussed above, which states that - to NLO of the chiral expansion - the meson mass ratio on the left hand side of equation (23) is given by the quark mass ratio \(\tilde{Q}^{2}\). It may be viewed as a prediction for the isospin breaking mass difference \(\hat{M}_{K^{0}}-\hat{M}_{K^{+}}\) in terms of the quark mass ratio \(\tilde{Q}\), the mean kaon mass \(\hat{M}_{K}\) and \(\hat{M}_{\pi^{+}}\). The second prediction instead concerns the mass difference \(\hat{M}_{\pi^{+}}-\hat{M}_{\pi^{0}}\), which represents an isospin breaking effect as well, but while \(\hat{M}_{K^{0}}-\hat{M}_{K^{+}}\) is proportional to \(m_{d}-m_{u}\), \(\hat{M}_{\pi^{+}}-\hat{M}_{\pi^{0}}\) is of order \((m_{d}-m_{u})^{2}\) and hence much smaller.
The chiral perturbation series for \(\hat{M}_{\pi^{+}}-\hat{M}_{\pi^{0}}\) was worked out to NLO already in Ref. [24], where a low energy theorem was established that relates the mass splittings in the two multiplets:
\[\hat{M}_{\pi^{+}}^{2}-\hat{M}_{\pi^{0}}^{2}=\frac{(\hat{M}_{K^{0}}^{2}-\hat{M }_{K^{+}}^{2})^{2}}{3(\hat{M}_{\eta}^{2}-\hat{M}_{\pi^{0}}^{2})}(1+\Delta_{ \pi})\;. \tag{29}\]
An explicit representation for \(\Delta_{\pi}\) in terms of meson masses, valid to NLO of the chiral expansion, was given in the limit \(m_{u},m_{d}\to 0\), where the algebra simplifies considerably. In the remainder of the present section, this limitation is removed.
Unfortunately, the fact that \(\pi^{0}-\eta\) mixing occurs already at leading order of the chiral expansion leads to formulae at NLO that are too clumsy to be displayed here, but the numerical evaluation is straightforward and can be carried through without neglecting higher order isospin breaking effects.
Equation (29) defines \(\Delta_{\pi}\) in terms of meson mass ratios. Inserting the leading order mass formulae (4), (5), one readily checks that \(\Delta_{\pi}\) vanishes at LO of the chiral expansion. At NLO, the chiral representation of \(\Delta_{\pi}\) involves a combination of low energy constants. Remarkably, the combination is the same as the one occurring in \(\Delta_{\eta}\), i.e. in the NLO correction to the Gell-Mann-Okubo formula. Eliminating the LECs in favour of the scale invariant quantity \(\Delta_{\eta}\) with equation (28), we thus arrive at a representation for \(\Delta_{\pi}\) that holds to NLO of the chiral expansion and exclusively contains the quark mass ratios \(S\), \(R\) and \(\Delta_{\eta}\) (in particular, the prescription used for the evaluation of the chiral logarithms given in the preceding section only involves \(S\) and \(R\)).
The available lattice results restrict the quantities \(S\), \(\hat{M}_{\pi^{+}}\), \(\hat{M}_{K}\), \(\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{+}}^{2}\), \(\hat{M}_{\eta}\) to a narrow range. I evaluate the correction \(\Delta_{\pi}\) on this basis, treat these quantities as known input and vary them in the range specified in equations (2), (8), (26) and Table 1. The low energy theorem (24) implies that - to NLO of the chiral expansion - the quark mass ratio \(\tilde{Q}\) is given by a ratio of meson masses that belong to the list of input variables. Since the ratio \(R\) can be expressed in terms of \(\tilde{Q}\) and \(S\), this quantity is also determined. The definition (25) of \(\Delta_{\eta}\), however, involves the mass of the neutral pion, which is not contained in that list (in the evaluation of \(\Delta_{\eta}\) discussed in the preceding section, higher order isospin breaking effects were neglected - this is why the problem did not arise there). For the numerical analysis, however, it makes little difference whether \(\hat{M}_{\pi^{0}}\) only occurs on the left hand side of formula (29) or also on the right hand side - with the NLO representation for \(\Delta_{\pi}\), this formula determines the value of \(\hat{M}_{\pi^{0}}\) in terms of known input in either case. Numerically, I obtain
\[\hat{M}_{\pi^{0}}=134.58(30)\;{\rm MeV}\;,\qquad\Delta_{\pi}=0.339(17)\;. \tag{30}\]
The errors account for the uncertainties in the input variables, but do not include an estimate for the neglected higher order contributions. The uncertainty in \(\hat{M}_{\pi^{0}}\) is totally dominated by the one in \(\hat{M}_{\pi^{+}}\) - the uncertainty in the mass difference is much smaller:
\[\hat{M}_{\pi^{+}}-\hat{M}_{\pi^{0}}=0.217(6)\;{\rm MeV}\;. \tag{31}\]
The central value \(\hat{M}_{\pi^{+}}-\hat{M}_{\pi^{0}}=0.17(3)\) MeV obtained in Ref. [24] is lower and the uncertainty attached to it is much larger. The difference arises because the Dashen theorem was used to estimate the mass difference \(\hat{M}_{K^{0}}^{2}-\hat{M}_{K^{+}}^{2}\) - the work done on the lattice has shown that this theorem receives large corrections from higher orders of the chiral expansion. These are now known quite accurately.
I emphasize that the above calculation ignores higher order contributions - the quoted error exclusively accounts for the noise in the input. The outcome for \(\Delta_{\pi}\) shows that, in this case, the low energy theorem receives a large correction at NLO. In view of this, the error given in (31) underestimates the present uncertainty in the mass difference: the neglected higher order contributions may well be larger than those arising from the errors attached to the lattice results used in the evaluation of the first two terms of the chiral perturbation series. The low energy theorem for \(\tilde{Q}\) is on an entirely different footing: it does not receive NLO corrections at all. A lattice calculation within QCD is needed to reliably determine \(\hat{M}_{\pi^{+}}-\hat{M}_{\pi^{0}}\) to comparable accuracy.
## 11 Discussion, summary and conclusion
1. At leading order of the chiral expansion, the \(\chi\)PT formulae for the masses of the Nambu-Goldstone bosons in QCD provide a crude estimate for the ratios of the light quark masses. At NLO, these formulae involve low energy constants that cannot be determined in this way. In particular, the LO relation between the meson mass ratio \(\hat{M}_{K}^{2}/\hat{M}_{\pi^{+}}^{2}\) and the quark mass ratio \(S=m_{s}/m_{ud}\) picks up a correction from nonleading orders of the chiral expansion, measured by the term \(\Delta_{S}\) defined in equation (7). The available lattice results show that this correction is remarkably small.
2. The quark mass ratio \(R=(m_{s}-m_{ud})/(m_{d}-m_{u})\) is known much less well because it concerns the isospin breaking mass difference \(m_{d}-m_{u}\), which is small and not easy to disentangle from the isospin breaking effects generated by the e.m. interaction. The low energy theorem for \(Q^{2}\equiv\frac{1}{2}R(S+1)\), however, correlates \(R\) with \(S\): the term \(\Delta_{Q}\) defined in (14) is strongly suppressed. At NLO of the chiral expansion, it represents a second order isospin breaking effect that is negligibly small compared to \(\Delta_{S}\).
3. The quark mass ratio \(Q\) is not reparametrization invariant, but a modest variation suffices to repair this shortcoming. The quantity \(\tilde{Q}\) defined in (22) differs from \(Q\) only through numerically irrelevant contributions of second order in isospin breaking. More importantly, expressed in terms of \(\tilde{Q}\), the low energy theorem discussed in the preceding paragraph takes a very simple form: the chiral expansion of the meson mass ratio on the left hand side of (23) agrees with \(\tilde{Q}^{2}\), not only at LO, but also at NLO of the chiral expansion. This implies that, if the quark masses are taken sufficiently small, the term \(\Delta_{\tilde{Q}}=O({\cal M}^{2})\) is negligible compared to \(\Delta_{S}=O({\cal M})\). The assumption that the physical quark masses are sufficiently small in this sense leads to a rather sharp prediction for the value of \(Q\) and - together with the lattice result for \(S\) - also for \(R\):
\[Q=22.4(2)\,,\qquad R=35.5(5)\,. \tag{32}\]
A more precise determination of \(\hat{M}_{\pi^{+}}\), \(\hat{M}_{K^{+}}\) and \(\hat{M}_{K^{0}}\) is needed to determine the size of \(\Delta_{\tilde{Q}}\). Note that the issue concerns the relation between the meson masses and those of the quarks in QCD. The self energies generated by QED are not important here. What is needed to submit the low energy theorem \(\Delta_{\tilde{Q}}=O({\cal M}^{2})\) to a crucial test is a more accurate determination of the relation between the meson masses and the quark masses.
4. In the plane of the quark mass ratios \(x=m_{u}/m_{d}\), \(y=m_{s}/m_{d}\), a given value of \(\tilde{Q}\) corresponds to an ellipse centered at \(x=y=0\):
\[\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1\,,\quad a^{2}=\frac{2\,\tilde{Q}^{2} +1}{2\,\tilde{Q}^{2}-1},\quad b^{2}=\tilde{Q}^{2}+\frac{1}{2}\,. \tag{33}\]
Figure 2: Quark mass ratios. The numerical values of \(S\), \(R\) and \(Q\) are taken from the entries for \(N_{f}=2+1+1\) flavours in the FLAG review [8]. The triangle shows the values obtained from Weinberg’s leading order formulae [12] for the mass ratios of the light quarks, which account for the e.m. self energies with Dashen’s theorem.
While \(a\) is very close to 1, \(b\) is large. The red band in Fig. 2 shows the region allowed by the lattice results for \(Q\), which agree very well with those obtained from \(\eta\) decay and are even somewhat more accurate. Visibly, the constraint imposed by the lattice results for \(S\) is much stronger. It is indicated by a narrow wedge that intersects the \(x\)-axis at \(x=-1\), close to the point \(x=-a\), where the ellipse crosses this axis. The black dot marks the intersection of the lattice results for \(S\) with the narrow band for \(Q\) obtained by evaluating the chiral expansion of the corresponding ratio of meson masses to NLO and neglecting corrections of higher order. Lines of constant \(R\) instead pass through the point \(x=y=1\), where the quark masses are the same and the eightfold way is an exact symmetry. \(R\) is more sensitive to the effects generated by the e.m. interaction than \(S\). As these are of long range and hence more difficult to account for on the lattice, the corresponding wedge is considerably broader.
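Numerically, with the value of \(Q\) in (18) (identifying \(\tilde{Q}\) with \(Q\), which is legitimate at this accuracy), the ellipse parameters of (33) come out as follows:

```python
import math

# Ellipse parameters of Eq. (33) for Q = 22.4, Eq. (18).
Q = 22.4
a = math.sqrt((2 * Q**2 + 1) / (2 * Q**2 - 1))
b = math.sqrt(Q**2 + 0.5)
print(f"a = {a:.4f}, b = {b:.1f}")   # -> a = 1.0010, b = 22.4
```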
5. It is not known why the strong interaction is CP-invariant to an extremely high degree of accuracy. If the \(u\)-quark were massless, this puzzle would be solved. The fact that the one loop formulae of \(\chi\)PT only constrain the quark mass ratios to an ellipse, while the position on the ellipse depends on LECs that cannot be determined within \(\chi\)PT, led to the suggestion that the NLO corrections could be so large that they shift Weinberg's LO values [12], which in the figure are marked with a triangle, to the point on the ellipse that belongs to \(m_{u}/m_{d}=0\). The ratios \(S\) and \(Q\) would then be related by \(S^{2}=4Q^{2}+1\). With the lattice result for \(Q\), this implies \(S=45.0(1.0)\), more than 17 standard deviations away from the lattice result for \(S\). This corroborates the conclusion reached long ago [43]: \(m_{u}=0\) is an interesting way not to understand this world - it is not the only one.
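The numbers quoted in this paragraph follow from elementary error propagation; a sketch:

```python
import math

# m_u = 0 scenario: S^2 = 4*Q^2 + 1, evaluated with the lattice value (20).
Q, dQ = 22.5, 0.5
S = math.sqrt(4 * Q**2 + 1)
dS = 4 * Q * dQ / S          # linear error propagation
print(f"S = {S:.1f}({dS:.1f})")   # -> 45.0(1.0)
```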
6. The triangle is rather close to the intersection of the three bands which represents the present knowledge of the quark mass ratios. In fact, if the Dashen theorem that underlies Weinberg's formulae is replaced by the present knowledge of the e.m. self energies, the triangle moves even closer to the physical point: the numerical value (9) for \(\Delta_{S}\) shows that the lattice result for \(S\) deviates from the LO formula \(S+1=2\hat{M}_{K}^{2}/\hat{M}_{\pi^{+}}^{2}\) only by about 5%.
7. The Gell-Mann-Okubo formula also receives a correction at NLO. The explicit expression in (28) shows that the relevant combination of LECs is reparametrization invariant: in contrast to the definition (7) of \(\Delta_{S}\), the analogous formula (25) for \(\Delta_{\eta}\) exclusively involves the meson masses. Numerically, \(\Delta_{S}\) and \(\Delta_{\eta}\) are of comparable size.
8. The states \(\pi^{0}\) and \(\eta\) undergo mixing. The repulsion of the two levels makes the \(\pi^{0}\) lighter than the \(\pi^{+}\). The shift is of second order in \(m_{d}-m_{u}\) and hence small compared to the splitting generated by the e.m. interaction. The low energy theorem (29) relates the difference between the masses of the charged and neutral pions in QCD to the mass splitting in the kaon multiplet. The NLO correction \(\Delta_{\pi}\) involves the same reparametrization invariant combination of low energy constants as the Gell-Mann-Okubo formula, but since it is large, the uncertainties arising from the neglect of the higher order contributions are considerable.
## Acknowledgments
I thank Balasubramanian Ananthanarayan, Hans Bijnens, Gilberto Colangelo, Jürg Gasser, Akaki Rusetsky and Urs Wenger for useful comments on the manuscript.
|
2306.09161 | A kinetic study of black hole activation by local plasma injection into
the inner magnetosphere | (Abridged) An issue of considerable interest in the theory of jet formation
by the Blandford-Znajek mechanism, is how plasma is being continuously supplied
to the magnetosphere to maintain it in a force-free state. Injection of
electron-positron pairs via annihilation of MeV photons, emitted from a hot
accretion flow, has been shown to be a viable possibility, but requires a high
enough accretion rate. At lower accretion rates, and in the absence of any
other form of plasma supply, the magnetosphere becomes charge starved, forming
intermittent spark gaps that can induce intense pair cascades via interactions
with soft disk radiation, enabling outflow formation. It is often speculated
that enough plasma can penetrate the inner magnetosphere from the accretion
flow through some rearrangement of magnetic field lines (e.g., interchange
instability). However, the question arises whether such episodes of plasma
intrusion can prevent the formation of spark gaps. To address this question we
conducted a suite of numerical experiments, by means of radiative, 2D
axisymmetric general relativistic particle-in-cell simulations, in which plasma
is injected into specified regions at a prescribed rate. We find that when pair
production is switched off, nearly complete screening is achieved when the
plasma is injected within the outer light cylinder at a high enough rate.
Injection beyond the outer light cylinder results in either the formation of
large vacuum gaps, or coherent, large-amplitude oscillations of the
magnetosphere, depending on the injection rate. Within the allowed dynamic
range of our simulations, we see no evidence for the system to approach a
steady state as the injection rate is increased. Switching on pair production
results in nearly complete screening of the entire magnetosphere in all cases,
with some fraction of the maximum Blandford-Znajek power emitted as TeV
gamma-rays. | Idan Niv, Omer Bromberg, Amir Levinson, Benoit Cerutti, Benjamin Crinquand | 2023-06-15T14:37:53Z | http://arxiv.org/abs/2306.09161v1 | # A kinetic study of black hole activation by local plasma injection into the inner magnetosphere
###### Abstract
An issue of considerable interest in the theory of jet formation by the Blandford-Znajek mechanism is how plasma is being continuously supplied to the magnetosphere to maintain it in a force-free state. Injection of electron-positron pairs via annihilation of MeV photons, emitted from a hot accretion flow, has been shown to be a viable possibility, but requires a high enough accretion rate. At lower accretion rates, and in the absence of any other form of plasma supply, the magnetosphere becomes charge starved, forming intermittent spark gaps that can induce intense pair cascades via interactions with soft disk radiation, enabling outflow formation. It is often speculated that enough plasma can penetrate the inner magnetosphere from the accretion flow through some rearrangement of magnetic field lines (e.g., interchange instability). However, the question arises whether such episodes of plasma intrusion can prevent the formation of spark gaps. To address this question we conducted a suite of numerical experiments, by means of radiative, 2D axisymmetric general relativistic particle-in-cell simulations, in which plasma is injected into specified regions at a prescribed rate. We find that when pair production is switched off, nearly complete screening is achieved when the plasma is injected within the outer light cylinder at a high enough rate. Injection beyond the outer light cylinder results in either the formation of large vacuum gaps, or coherent, large-amplitude oscillations of the magnetosphere, depending on the injection rate. Within the allowed dynamic range of our simulations, we see no evidence for the system to approach a steady state as the injection rate is increased. Switching on pair production results in nearly complete screening of the entire magnetosphere in all cases, with some fraction (a few percent) of the maximum Blandford-Znajek power emitted as TeV gamma-rays.
keywords:
## 1 Introduction
A key issue in the theory of black hole (BH) outflows (Blandford and Znajek, 1977) is the nature of the plasma source in the inner magnetosphere. The activation of outflows by magnetic extraction requires continuous plasma production in the magnetospheric region enclosed between the inner and outer light surfaces, defined as the loci where the speed of an observer rotating with the magnetic flux tube equals the speed of light (Blandford and Znajek, 1977; Globus and Levinson, 2013)1. In order to establish a force-free jet, the plasma injection rate must be sufficiently high to maintain the density everywhere in the magnetosphere above a critical value, known as the Goldreich-Julian (GJ) density, (Goldreich and Julian, 1969). If the plasma source cannot accommodate this requirement, charge starved regions (spark gaps) will be created, potentially leading to self-sustained pair discharges. In this scenario, charged leptons accelerated along magnetic field lines by the gap electric field scatter soft photons emitted by the surrounding matter to TeV energies. These gamma rays, in turn, interact with the soft photons to create more pairs, initiating pair cascades that tend to screen the gap, regulating the discharge process. Analytic models (Levinson, 2000; Neronov and Aharonian, 2007; Levinson and Rieger, 2011; Hirotani and Pu, 2016) as well as general relativistic particle-in-cell (GRPIC) simulations (Levinson and Cerutti, 2018; Chen and Yuan, 2020; Crinquand et al., 2020, 2021; Kisaka et al., 2022) indicate that the energy dissipated in the gap is robustly emitted in the TeV band, and it has been speculated (Levinson, 2000; Neronov and Aharonian, 2007; Levinson and Rieger, 2011; Hirotani and Pu, 2016; Hirotani et al., 2016; Levinson and Cerutti, 2018; Katsoulakos and Rieger, 2018; Chen and Yuan, 2020; Kisaka et al., 2020, 2022) that this mechanism may explain the extreme TeV flares seen in M87 and, conceivably, other AGNs.
Footnote 1: Formally, the light surfaces are the solutions to the equation \(g_{\mu\nu}u^{\mu}u^{\nu}=0\), with \(u^{r}=u^{\theta}=0\) and \(u^{\phi}=\Omega u^{t}\) in Boyer-Lindquist coordinates, where \(g_{\mu\nu}\) is the Kerr metric and \(\Omega\) is the angular velocity of magnetic field lines. It can be shown (Takahashi et al., 1990; Globus and Levinson, 2013) that these are the surfaces on which the velocity of an ideal MHD flow equals the Alfvén velocity in the limit of zero inertia.
A plausible plasma production mechanism that has been discussed extensively in the literature is annihilation of MeV photons emitted by the hot accretion flow (or a putative corona). However, the pair injection rate predicted by this process is extremely sensitive to the rate at which plasma in the close vicinity of the BH is being accreted (Levinson & Rieger, 2011; Moscibrodzka et al., 2011; Hirotani & Pu, 2016), and a too low accretion rate is unable to produce enough plasma to continuously screen the magnetosphere everywhere. Whether this mechanism can provide complete screening of the BH magnetosphere in M87 is currently under debate (Levinson & Segev, 2017). Here we consider alternative injection processes that might operate in the absence of sufficient pair production opacity.
One might speculate (as occasionally argued) that since the density of accreted plasma is much larger than the GJ density, screening of the magnetosphere by direct feeding of charges from the inner parts of the accretion flow might be viable. Since the diffusion of charged particles across magnetic field lines is highly unlikely to supply sufficient plasma to the polar flow, given that the cross-field diffusion time is vastly longer than the accretion time, one must resort to a yet unspecified injection channel, e.g., occasional rearrangement of magnetic surfaces at the jet boundary that might lead to sporadic loading of the inner magnetosphere. To our knowledge, no such process has been identified in GRMHD simulations, however, one must keep in mind their limited resolution and dynamic range. But even if such episodic injections indeed occur in nature, it is unlikely that plasma can be dumped continuously in the entire region encompassed between the inner and outer light surfaces. The question is then how the magnetosphere of an active BH will respond to injections in localized regions, for instance in the vicinity of the outer light surface. It could be that if the injected plasma is relativistically hot it quickly spreads over to cover the entire magnetosphere. However, it is unclear whether the electric charge distribution imposed by the injection process will conspire to completely screen the magnetosphere. Alternatively, the inner magnetosphere will become highly intermittent in response to sporadic plasma injection. At any rate, if complete screening does not ensue, particles will be accelerated to high energies by the parallel electric fields generated in gaps (\(E_{\parallel}={\bf E}\cdot{\bf B}/B\)), producing pairs and high-energy radiation via interactions with soft photons emitted by the accretion flow, and via curvature radiation.
Motivated by the above consideration, we conducted a set of numerical experiments, by means of particle-in-cell (PIC) simulations, to explore how the magnetosphere responds to localized plasma injections. Our experiments are restricted to steady injection in spherical shells (annuli in our 2D axisymmetric simulations). We also conducted several experiments where injection is restricted to a ring sector (in 2D) about the equatorial plane. This configuration represents an accretion torus in more realistic situations.
Quite generally, we find that when plasma is injected in the entire causal region of the magnetosphere, complete screening ensues, even in the absence of external radiation, leading to the generation of a force-free outflow that appears to be in good agreement with the predictions of the Blandford-Znajek (BZ) mechanism. However, in cases where the injection zone does not encompass the entire region between the inner and outer light surfaces and the interaction with disk radiation is switched off, a parallel electric field \(E_{\parallel}\) is generated even when the injected plasma is relativistically hot and the injection rate is relatively high (i.e., the mean pair density largely exceeds the GJ density in the injection zone). The dynamics of the magnetosphere depends on the injection rate; when it is low enough (but still sufficiently high to maintain the density in the injection zone well above the GJ density) a quasi steady state is established, whereby the amount of energy extracted from the black hole is small. At higher injection rates the magnetosphere exhibits a cyclic dynamics, with (quasi) periodic modulations of the density and the parallel electric field over a duration of tens of \(t_{\rm g}\), resulting in the ejection of energy bursts with a maximum power that can reach \(\sim 80\) percent of the optimal BZ power, \(L_{\rm BZ}\). When the interaction with disk radiation is switched on in these experiments, the system relaxes to a quasi steady force-free state, with the extracted power reaching \(L_{\rm BZ}\), and the TeV luminosity of emitted radiation reaching a few percent of \(L_{\rm BZ}\).
## 2 Simulation Setup
We conducted 2D axisymmetric simulations with the PIC code Zeltron (Cerutti et al., 2013), modified to include GR effects (Parfrey et al., 2019; Crinquand et al., 2020). The system consists of a Kerr BH with a Kerr parameter \(a=0.99\) threaded initially by a monopole magnetic field. The choice of a monopole field was made to avoid the formation of current sheets at the equatorial plane, which complicate the analysis and the interpretation of the results. We use geometrized units, where length scales and time are normalized by the BH gravitational radius, \(r_{\rm g}\) and \(t_{\rm g}=r_{\rm g}/c\), respectively. Henceforth, densities are measured in units of a fiducial density, \(n_{0}=\Omega B_{\rm H}/2\pi ec\), where \(B_{\rm H}\) is the magnetic field strength on the horizon, \(\Omega=\Omega_{\rm H}/2\) is the angular velocity of the monopole field, and \(\Omega_{\rm H}=ac/2r_{\rm H}\simeq 1/2t_{\rm g}\) is the BH angular velocity. For this choice, the associated plasma frequency is \(\omega_{p}=\sqrt{\Omega_{\rm H}\omega_{\rm B}}\), with \(\omega_{\rm B}=eB_{\rm H}/m_{e}c\), the fiducial magnetization is \(\sigma_{0}=\omega_{\rm B}/\Omega_{\rm H}\approx 2eB_{\rm H}r_{\rm g}/m_{e}c^{2}\), and the ratio of gravitational radius to skin depth is \(r_{\rm g}\omega_{p}/c=\sqrt{\sigma_{0}}/2\). In M87 we typically have \(\sigma_{0}\sim 10^{13}\). Such a value is unrealistic for GRPIC simulations that attempt to resolve the skin depth. In the simulations presented below we choose a rescaled value of \(\sigma_{0}=5\times 10^{5}\), which allows skin depth resolution in all cases studied (see Crinquand et al., 2020 for further details). It is worth noting that for the monopole field adopted here the magnetization at radius \(r\) scales as \(\sigma(r)\propto\kappa(r)^{-1}r^{-2}\), where \(\kappa(r)=n(r)/n_{0}\) is the dimensionless pair density at radius \(r\).
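The scale-separation argument can be made concrete; the following sketch evaluates the ratio of gravitational radius to skin depth for the rescaled magnetization used here and for the M87-like estimate mentioned above:

```python
import math

# r_g * omega_p / c = sqrt(sigma_0) / 2 for the fiducial density n_0.
for sigma_0 in (5e5, 1e13):
    print(f"sigma_0 = {sigma_0:.0e}: r_g/(c/omega_p) = {math.sqrt(sigma_0)/2:.3g}")
# -> ~354 for the rescaled value (resolvable on the grid), ~1.6e6 for M87 (not)
```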
We used a grid of spherical Kerr-Schild coordinates that extends from \(0.9r_{\rm H}\) to a radius of \(15r_{\rm g}\), where we set an absorbing layer between \(13.5-15~{}r_{\rm g}\). Once the simulation starts we impose a steady injection of electron-positron pairs in a spherical shell between radii \(r_{\rm in}\) and \(r_{\rm out}\), where the pairs are distributed randomly inside the shell and have a thermal velocity distribution with a temperature \(T\). Table 1 lists the models used in this work; the corresponding injection zone configurations are sketched in Fig. 1. Each simulation was run until it reached a steady state, or in cases where the system exhibited cyclic dynamics (as in the models with high injection rate discussed below), until it completed several cycles.

Figure 1: The different types of models used in this work. The injection zones are marked with gray dots. The red solid lines mark the inner and outer light surfaces and the dashed line marks the outer surface of the ergosphere. The light surfaces are evaluated for a case of \(\Omega=\Omega_{\rm H}/2\).

The simulations were conducted in two limits. In the first we turned off Compton scattering (CS), decoupling the particles from the background radiation field. In this case particle flux is conserved outside the injection zone, while particles can exchange energy with the EM field and emit curvature radiation. In the second limit we turn on CS, allowing for pair creation to take place in the box, which in turn allows for a more efficient screening of the parallel electric field, reducing the energy gain from the EM field. We measured the Poynting flow and the energization of particles in the magnetosphere in each model and compared them to estimate its efficiency in activating the BH.
### 2.1 Electromagnetic fields
In the \(3+1\) formalism of Komissarov (2004), the electromagnetic tensor, \(F^{\mu\nu}\), is decomposed into electric field \(\mathbf{D}\) and magnetic field \(\mathbf{B}\), defined (in components) by
\[\mathrm{D}^{i}=\frac{1}{2}e^{ijk}\,{}^{*}\!F_{kj}, \tag{1}\]
and
\[\mathrm{B}^{i}=\frac{1}{2}e^{ijk}F_{jk}, \tag{2}\]
where \({}^{*}\!F^{\mu\nu}\) is the dual electromagnetic tensor, \(\gamma\) is the determinant of the three-dimensional metric tensor \(\gamma_{ij}\) describing the space-like hypersurfaces in the \(3+1\) foliation, and \(e\) is its corresponding Levi-Civita tensor. The two general relativistic invariants can be expressed in terms of these fields as \({}^{*}\!F_{\mu\nu}F^{\mu\nu}=4\mathbf{D}\cdot\mathbf{B}\) and \(F_{\mu\nu}F^{\mu\nu}=2(\mathbf{B}^{2}-\mathbf{D}^{2})\). In ideal MHD (or FFE) these invariants satisfy \(\mathbf{D}\cdot\mathbf{B}=0\) and \(\mathbf{B}^{2}-\mathbf{D}^{2}>0\). In starved magnetospheric regions \(\mathbf{D}\cdot\mathbf{B}\neq 0\). Therefore, the quantity \(\mathbf{D}\cdot\mathbf{B}/B^{2}\), which measures the strength of the electric field along magnetic field lines relative to the local magnetic field can be used to identify unscreened regions.
### 2.2 Plasma injection scheme
As explained above, in each numerical experiment pairs are injected in a spherical shell of inner radius \(r_{\mathrm{in}}\) and outer radius \(r_{\mathrm{out}}\). The rate at which pairs are injected inside the shell is taken to be
\[\dot{n}_{\mathrm{inj}}=\dot{n}_{0}\frac{r_{g}^{2}}{r^{2}}=\chi n_{0}c\frac{r_{g}}{r^{2}}, \tag{3}\]
where we adopt the normalization \(n_{0}/t_{g}\), viz., \(\dot{n}_{0}=\chi n_{0}/t_{g}\), and \(\chi\) is a dimensionless factor. For the models listed in table 1, the temperature of the injected plasma is mildly relativistic, \(k_{\rm B}T=m_{e}c^{2}\), except for models Z5, Z6 and the EN and EW models, for which it is ten times larger. At such temperatures, the injected pairs should be able to propagate from the injection zone to other regions of the magnetosphere at nearly the speed of light.
A rough estimate of the mean density in a shell far enough from the BH (where the metric is nearly flat) can be obtained upon assuming that the system is in a steady state and the density inside the shell is uniform. Equating the total rate of injection, \(\int_{r_{\rm in}}^{r_{\rm out}}\dot{n}_{\rm inj}d^{3}r=4\pi\chi n_{0}c\,r_{g}(r_{\rm out}-r_{\rm in})\), with the rate at which plasma is lost from the shell boundaries, \(4\pi nc(r_{\rm out}^{2}\beta_{\rm out}-r_{\rm in}^{2}\beta_{\rm in})\), where \(\beta_{\rm out}>0\left(\beta_{\rm in}<0\right)\) is the radial bulk 3-velocity of the plasma escaping from the outer (inner) boundary, one obtains:
\[n=\frac{\chi r_{g}(r_{\mathrm{out}}-r_{\mathrm{in}})}{(r_{\mathrm{out}}^{2} \beta_{\mathrm{out}}-r_{\mathrm{in}}^{2}\beta_{\mathrm{in}})}n_{0}. \tag{4}\]
For the Z models in table 1, \(r_{\rm out}-r_{\rm in}=r_{g}\), yielding \(nr_{\rm out}^{2}/n_{0}r_{g}^{2}\approx\chi/(\beta_{\rm out}-\beta_{\rm in})\). From the simulation we find \(\beta_{\rm out}-\beta_{\rm in}\approx 0.3\), from which we obtain \(nr_{\rm out}^{2}/n_{0}r_{g}^{2}\approx 3\chi\), which is smaller by about a factor of 2 than the value measured in the simulation. For extended injection, with \(r_{\rm in}\ll r_{\rm out}\) and \(\beta_{\rm out}\approx 1\), we estimate the local density to be \(n(r)\approx\chi n_{0}(r_{g}/r)\) by setting \(r_{\rm in}=0\) and \(r_{\rm out}=r\) in Eq. (4), or \(nr^{2}/n_{0}r_{g}^{2}\approx\chi r/r_{g}\). Thus, we generally anticipate the ratio between the density and the local GJ density to be of the order of a few times \(\chi\), consistent with the results of the simulations.
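Eq. (4) is straightforward to evaluate; in the sketch below, the split of the measured \(\beta_{\rm out}-\beta_{\rm in}\approx 0.3\) into \(\pm 0.15\) is an illustrative assumption, since only the difference is quoted:

```python
def shell_density(chi, r_in, r_out, beta_in, beta_out, rg=1.0):
    """Mean steady-state pair density in the injection shell, Eq. (4),
    in units of the fiducial density n0 (lengths in units of rg)."""
    return chi * rg * (r_out - r_in) / (r_out**2 * beta_out - r_in**2 * beta_in)

# Z-type ring at 10-11 rg; the individual beta values are assumed.
n = shell_density(chi=1.0, r_in=10.0, r_out=11.0, beta_in=-0.15, beta_out=0.15)
print(f"n r_out^2 / (n0 rg^2) = {n * 11.0**2:.2f}")   # -> 3.65, of order 3*chi
```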
#### 2.2.1 Photon generation and pair production
In addition to the prescribed injection scheme described above, we also included in some of the runs photon generation by inverse Compton scattering of disk radiation, and pair creation via interactions of the IC gamma rays thereby produced with the same soft photons. Following Crinquand et al. 2020 we assume that the radiation field is time independent, uniform, isotropic, and monoenergetic, with energy \(\epsilon_{0}\) and density \(n_{\mathrm{soft}}\). We do not include any feedback of the simulation on this radiation field. The upscattered photons and created leptons are assumed to propagate along the same direction as their high-energy parents, reflecting strong relativistic beaming. The intensity of the background radiation field is quantified in table 1 by the fiducial optical depth
\[\tau_{0}=\sigma_{\mathrm{T}}r_{g}n_{\mathrm{soft}}, \tag{5}\]
where \(\sigma_{\mathrm{T}}\) is the Thomson cross section. To guarantee optimum scale separation we adopt \(\epsilon_{0}=5\times 10^{-3}\) (see Crinquand et al. 2020 for further details).
## 3 Results
In order to examine the effect of external plasma injection on the dynamics of the magnetosphere, we ran a series of models in which we varied the size and location of the injection zone, the injection rate and the optical depth for photon-photon pair creation. The different models are listed in table 1. In what follows, cases in which the plasma injection zone encompasses the region below the outer light surface (left panel in Fig. 1) are termed "internal injection", otherwise they are termed "external injection".
### 3.1 Internal injection
In the first suite of experiments we fixed \(r_{\rm in}\) at the BH horizon and varied \(r_{\rm out}\) (models \(\Theta\)S1, \(\Theta\)I, \(\Theta\)L). The interaction with the external radiation was switched off by setting \(\tau_{0}=0\). Each model was run for a long enough time to allow the system to reach a quasi steady-state (typically after about \(30t_{\rm g}\)). Figure 2 shows a comparison of the three models well after the system in each case has reached the quasi steady-state phase. The top panels show the number density of electrons, \(n_{-}\), in units of \(n_{0}(r_{g}/r)^{2}\). The distribution of positrons is a mirror image with respect to the \(x\) axis and is not presented. The bottom panels show the quantity \(\mathbf{D}\cdot\mathbf{B}/B^{2}\), which indicates the level of charge starvation in magnetospheric zones. As seen, effective screening of the entire magnetosphere is established in models \(\Theta\)I and \(\Theta\)L, in which the injection zone extends beyond the outer light cylinder (marked with a solid red vertical line). In model \(\Theta\)S1, wherein the plasma is injected within a radius of \(r\leq 2r_{g}\), a strong parallel electric field is generated in a large portion of the magnetosphere above and below the equatorial plane.
Figure 3 exhibits the radial distribution of \(\mathbf{D}\cdot\mathbf{B}/B^{2}\), averaged over the angular direction (top row), and (bottom panels) the radial distribution of the energy flow, \(\int T_{t}^{r}dA\), where \(T_{t}^{r}\) is the total energy flux and \(dA\) a surface element of a sphere at radius \(r\), in units of the BZ power, here defined as
\[L_{\rm BZ}=\frac{1}{6c}\Omega_{\rm H}^{2}\Phi^{2}, \tag{6}\]
where \(\Phi=\int\mathrm{B}^{r}\sqrt{\gamma}dA_{\rm H}\) is the magnetic flux on the horizon. The Poynting flow is shown in green, particle energy flow in red and the total power (sum of the two) in blue. The decrease in Poynting flow seen in model \(\Theta\)S1 is consistent with the existence of a significant parallel electric field, which exerts work on the pair plasma at the expense of the EM energy. The small drop in total power seen in models \(\Theta\)I and \(\Theta\)L is due to radiative losses.
Increasing the plasma injection rate near the horizon further, in models \(\Theta\)S2 and \(\Theta\)S3, improves the screening of \(E_{\parallel}\), as seen in Figure 4. The figure shows the radial distribution of the solid angle-averaged northern hemisphere parallel electric field, \((\mathbf{D}\cdot\mathbf{B}/B^{2})_{\Omega}\) (top), and total power, \(\int T_{t}^{r}dA\) (bottom), when \(\chi\) varies from \(\chi=1\) to \(\chi=10\). We identify a scaling \(E_{\parallel}\propto\chi^{-1/2}\) (see figure caption), implying that in order to reduce \(E_{\parallel}\) below \(0.01\,B\) an injection rate of \(\chi>100\) is required.
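The quoted threshold is a direct consequence of the measured scaling; the normalization \(E_{\parallel}/B\sim 0.1\) at \(\chi=1\) used below is the value implied by the scaling together with the threshold, not an independently quoted number:

```python
# Extrapolating E_par/B ∝ chi^(-1/2): with E_par/B ~ 0.1 at chi = 1 (implied),
# demanding E_par/B < 0.01 gives chi > (0.1/0.01)^2 = 100.
chi_min = (0.1 / 0.01) ** 2
print(f"chi > {chi_min:.0f}")   # -> chi > 100
```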
### 3.2 External injection
In the second suite of experiments we injected plasma in a ring between \(r_{\rm in}=10r_{g}\) and \(r_{\rm out}=11r_{g}\) (shown schematically in the middle panel in Fig. 1), varying the pair injection rate \(\chi\) and the fiducial optical depth for pair creation, \(\tau_{0}\), between the different runs. Snapshots from simulations with \(\chi=1\) and \(\tau_{0}=0,5,10,20\), taken at times after the system (in each run) has reached a steady state, are exhibited in Figure 5.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Model & \(\tau_{0}\) & \(r_{\rm in}-r_{\rm out}\) & \(\Delta\theta\) & \(\chi\) & \(k_{\rm B}T/m_{e}c^{2}\) & \(\sigma_{0}\) & \(\gamma/t_{g}\) & Screening \\ \hline
\(\Theta\)S1 & 0 & \(r_{\rm H}-2\) & \(\pi\) & 1 & 1 & \(5\times 10^{5}\) & 99 & no \\
\(\Theta\)S2 & 0 & \(r_{\rm H}-2\) & \(\pi\) & 5 & 1 & \(5\times 10^{5}\) & 140 & no \\
\(\Theta\)S3 & 0 & \(r_{\rm H}-2\) & \(\pi\) & 10 & 1 & \(5\times 10^{5}\) & 99 & no \\
\(\Theta\)I & 0 & \(r_{\rm H}-5\) & \(\pi\) & 1 & 1 & \(5\times 10^{5}\) & 99 & yes \\
\(\Theta\)L & 0 & \(r_{\rm H}-13.5\) & \(\pi\) & 1 & 1 & \(5\times 10^{5}\) & 99 & yes \\
Z1 & 0 & 10\(-\)11 & \(\pi\) & 1 & 1 & \(5\times 10^{5}\) & 99 & no \\
Z2 & 5 & 10\(-\)11 & \(\pi\) & 1 & 1 & \(5\times 10^{5}\) & 99 & no \\
Z3 & 10 & 10\(-\)11 & \(\pi\) & 1 & 1 & \(5\times 10^{5}\) & 99 & partial \\
Z4 & 20 & 10\(-\)11 & \(\pi\) & 1 & 1 & \(5\times 10^{5}\) & 99 & yes \\
Z5 & 0 & 10\(-\)11 & \(\pi\) & 10 & 10 & \(5\times 10^{5}\) & 163 & damped oscillations \\
Z6 & 0 & 10\(-\)11 & \(\pi\) & 30 & 10 & \(5\times 10^{5}\) & 123 & periodic \\
EN1 & 0 & 10\(-\)11 & \(\pi/3\) & 30 & 10 & \(5\times 10^{5}\) & 99 & no \\
EN2 & 0 & 10\(-\)11 & \(\pi/3\) & 300 & 10 & \(5\times 10^{3}\) & 37 & yes \\
EN3 & 20 & 10\(-\)11 & \(\pi/3\) & 30 & 10 & \(5\times 10^{5}\) & 99 & yes \\
EW1 & 0 & 10\(-\)11 & \(2\pi/3\) & 15 & 10 & \(5\times 10^{5}\) & 99 & no \\
EW2 & 0 & 10\(-\)11 & \(2\pi/3\) & 150 & 10 & \(5\times 10^{3}\) & 40 & yes \\
EW3 & 20 & 10\(-\)11 & \(2\pi/3\) & 15 & 10 & \(5\times 10^{5}\) & 34 & yes \\ \hline \end{tabular}
\end{table}
Table 1: A list of the models discussed in the text. The corresponding configurations of the injection zone are presented in Fig. 1. The models differ by their opacity for pair creation \(\tau_{0}\) (Eq. 5), injection zone geometry, injection rate \(\chi\) (Eq. 3), fiducial magnetization \(\sigma_{0}\) and temperature of the injected plasma. In models EN and EW the injection zone is a ring sector of angular width \(\Delta\theta=\pi/3\) and \(2\pi/3\), respectively (see Sec. 3.3 for further details). Each model is linked to a movie that shows the time evolution of the electron number density \((n_{-}r^{2}/n_{0}r_{g}^{2})\), parallel electric field \((\mathbf{D}\cdot\mathbf{B}/B^{2})\) and power from the BH \(\left(\int T_{t}^{r}dA/L_{\rm BZ}\right)\). The movies are accessible in the on-line version by clicking on the model name (in blue text).
Figure 2: Electron number density (top) and normalized parallel electric field, \(\mathbf{D}\cdot\mathbf{B}/B^{2}\) (bottom), for cases of plasma injection between \(r_{\rm in}\) (just outside the horizon) and (left to right) \(r_{\rm out}=2,\,5,\,13.5\,r_{g}\). In all cases shown \(\chi=1\). The injection zones are marked with black dots, magnetic field lines with gray solid lines and the inner and outer light surfaces with solid red lines. Nearly complete screening is obtained in the two right cases, where the injection zone extends beyond the outer light cylinder.
electron density and the bottom panel shows \(\mathbf{D}\cdot\mathbf{B}/B^{2}\), as in Fig. 2. As seen, when pair creation is switched off (\(\tau_{0}=0\), model Z1) the injected plasma is unable to screen the entire magnetosphere, even though the plasma density in the injection ring and its vicinity considerably exceeds the GJ density. A large vacuum gap persists in the inner region, within about \(5r_{g}\). Inside the gap, electrons are accelerated inwards by the field-aligned electric field \(E_{\parallel}\) in the southern hemisphere, and likewise positrons in the northern hemisphere. The supply of plasma into the ergosphere by the accelerated pairs induces an electric current that generates an outward Poynting flow (Fig. 6). However, the outflowing Poynting energy is compensated by the inflowing energy carried by the inward-moving pairs. The net positive energy flux is small, about \(0.02L_{\mathrm{BZ}}\).
Switching on the interaction with the ambient soft photons gives rise to prodigious generation of gamma rays and newly created pairs for large enough \(\tau_{0}\), as expected. We find that complete screening of the entire magnetosphere occurs at \(\tau_{0}\gtrsim 20\) (model Z4). The total energy flux is then carried completely by the Poynting flow and approaches its maximum value. We also observe that a small fraction (a few percent) of the energy flux emerging from the ergosphere is converted to intermittent (high-energy) radiation (curvature radiation through radiation back-reaction and IC photons below the pair creation threshold). Note that unlike IC photons above the pair production threshold, curvature photons and IC photons below the threshold are not treated as PIC particles in the simulations, and are not included in the plot of the radiation energy flux in the figures. The overall behaviour of the system is similar to that presented in Crinquand et al. (2020), except for the density distribution, which in our case is partly imposed by the external plasma injection process.
One might suspect that the formation of a macroscopic vacuum gap in the case of \(\tau_{0}=0\) is a consequence of insufficient plasma supply, and that increasing the injection rate sufficiently might ultimately result in complete screening. To examine how the magnetosphere responds to an increased plasma injection rate, we performed simulations with \(\tau_{0}=0\), \(\chi=30\) (model Z6) and \(\chi=50,100\) (these models are not listed in table 1). Interestingly, we find a cyclic dynamics for \(\chi>10\). The inner gap exhibits oscillations with a period of about \(70t_{g}\), during which the gap size repeatedly shrinks to a minimum (at which it extends from the horizon to some radius within the ergosphere) and then expands to a maximum size in excess of \(5r_{g}\) (a link to the movie showing this behaviour is given in table 1, model Z6). The density in the region outside the injection ring exhibits strong time modulations that correlate with the gap activity. For \(\chi=100\) the density at maximum largely exceeds \(n_{\mathrm{GJ}}\) in most of the simulation box, approaching a few hundred \(n_{\mathrm{GJ}}\) in the injection zone. Within our limited dynamic range, we find no evidence for a tendency of the system to reach a steady state as \(\chi\) is increased.
To examine the dependence on the width of the injection ring we ran a simulation with hot plasma injection into a ring extending from \(r_{\mathrm{in}}=9r_{g}\) to the outer edge of the simulation box, \(r_{\mathrm{out}}=13r_{g}\) (not listed in table 1). We find cyclic dynamics very similar to that described above. A similar behaviour is also exhibited in the cases with a torus configuration (see Sec. 3.3 below). We conclude that this quasi-cyclic evolution occurs in cases where plasma is injected outside the outer light cylinder.
The following heuristic argument offers an explanation for this behaviour: when the magnetosphere is nearly completely screened and a BZ outflow is established, a stagnation surface forms across which the velocity of the injected plasma changes sign (Globus & Levinson, 2014). This double flow structure is a consequence of the causal structure of the magnetosphere. In particular, plasma within the inner light surface must be flowing inwards and plasma above the outer light surface must be flowing outwards. This implies that plasma must be continuously injected between the inner and outer light surfaces to keep the outflow in a force-free state at all times.
Figure 3: The radial distribution of the solid angle-averaged northern hemisphere parallel electric field, \((\mathbf{D}\cdot\mathbf{B}/B^{2})_{\Omega}\) (top), and normalized power, \(\int T^{r}_{\ t}\,dA/L_{\rm BZ}\) (bottom).
Now, in the simulations described above plasma is injected only above the outer light surface. Since this plasma cannot reach the region below the stagnation surface, that region becomes devoid of plasma over time and a macroscopic gap forms. If the injection rate is not high enough, as in the cases with \(\chi\lesssim 10\), a steady state is established, in which part of the injected plasma is flowing outwards, and part is being pulled into the BH by the parallel electric field generated in the starved magnetospheric region around the BH. When the injection rate is high enough, as in the runs with \(\chi>10\), enough plasma is pulled inwards during phases of magnetospheric starvation to nearly screen the entire magnetosphere. A BZ outflow is then formed for the time it takes the plasma below the stagnation surface to be evacuated, leading again to the formation of a large vacuum gap in the inner region, and the cycle repeats.
### Torus configurations
In our final suite of experiments we inject hot plasma (\(k_{\rm B}T/m_{e}c^{2}=10\)) into a ring sector with an opening angle \(\Delta\theta\) about the equatorial plane (right panel in Fig. 1), located between radii \(r_{\rm in}=10r_{g}\) and \(r_{\rm out}=11r_{g}\) (that is, the ring extends from \(\theta_{\rm min}=\frac{\pi-\Delta\theta}{2}\) to \(\theta_{\rm max}=\frac{\pi+\Delta\theta}{2}\)). In these runs the entire injection region is located outside the outer light cylinder, and numerical effects that might be associated with injection near the axis are avoided. As in the other cases, the density outside the injection zone is taken to be zero initially. We examined cases with \(\Delta\theta=60^{\circ}\) (models EN1\(-\)EN3) and \(\Delta\theta=120^{\circ}\) (models EW1\(-\)EW3). We find a similar behaviour to the previous cases; at \(\chi\) of a few, the system reaches a quasi steady-state at \(t\approx 70t_{g}\). At higher injection rates (particularly for the \(\Delta\theta=120^{\circ}\) case) the system exhibits cyclic oscillations similar to those seen in the full rings with \(\chi>10\). In all cases the plasma is confined to the magnetic field lines, as expected for \(\sigma\gg 1\); the polar regions at \(\theta<\theta_{\rm min}\) and \(\theta>\theta_{\rm max}\) remain evacuated of charges for the entire simulation. The net energy flux emerging from the ergosphere is small (practically zero for \(\Delta\theta=60^{\circ}\) and \(\sim 0.1L_{\rm BZ}\) for \(\Delta\theta=120^{\circ}\)). When the interaction with external radiation is switched on (\(\tau_{0}\neq 0\)), photons produced through IC scattering inside the ring section slowly leak out, producing new pairs, whereupon the entire magnetosphere is eventually filled with plasma and screened, and the extracted power approaches \(L_{\rm BZ}\).
We also ran two cases for each configuration with fiducial magnetization \(\sigma_{0}=5\times 10^{3}\), one with low injection rate, \(\chi=1\), and one with high injection rate (models EN2 and EW2). The actual magnetization in the injection zone is around unity for the low injection rate cases and below unity in the high injection cases. We find a strong distortion of magnetic field lines and production of waves, as naively expected. Plasma from the injection zone diffuses into part of the polar region; in the case with high injection rate (see model EW2 in Fig. 7 for example) it penetrates down to an angle of about \(15^{\circ}\) in the northern hemisphere (\(165^{\circ}\) in the southern one). Close to the poles (\(\theta<15^{\circ}\)) the density remains very low (nearly zero). We find an emerging Poynting flux from the horizon, mainly within the injection section, but it decays over a few \(r_{g}\), transferring energy to particles. It seems that this energy is given back to the torus. This choking of the BH outflow is anticipated on overloaded field lines (Globus & Levinson, 2014). In the polar region, where the plasma density is low and the magnetization is high (\(\gg 1\)), the power of the emerging Poynting flow is very small.
Figure 5: Electron number density (top) and normalized parallel electric field, \(\mathbf{D}\cdot\mathbf{B}/B^{2}\) (bottom), for cases of plasma injection in a ring with \(r_{\rm in}=10r_{g}\) and \(r_{\rm out}=11r_{g}\). Here we set the initial optical depth for pair creation to be (from left to right) \(\tau_{0}=0\), \(10\), \(20\). The injection zones are marked with black dots, magnetic field lines with gray solid lines and the outer light cylinder with a dashed red line. Nearly complete screening is obtained in the two right panels with \(\tau_{0}\geq 10\).
Figure 6: The radial distribution of the solid angle-averaged northern hemisphere parallel electric field, \((\mathbf{D}\cdot\mathbf{B}/B^{2})_{\Omega}\) (top), and power, \(\int T^{r}_{\ t}\,dA/L_{\rm BZ}\) (bottom), for cases of plasma injection in a ring with \(r_{\rm in}=10r_{g}\), \(r_{\rm out}=11r_{g}\) and varied optical depth, where from left to right, \(\tau_{0}=0\), \(10\), \(20\). The green, red and blue lines in the bottom panels mark the EM Poynting power, the plasma kinetic power and the sum of the two, respectively. In the models with high opacity better screening is obtained, resulting in an outgoing Poynting flow close to the BZ value. The drop in the total power at large radii is due to radiative losses (including IC photons produced below the threshold that are discarded from the simulation).
## 4 Conclusion
We studied the response of a BH magnetosphere to plasma injection by means of radiative 2D GRPIC simulations, that incorporate photon generation and pair production through interactions with a given radiation field (representing disk emission) in a self-consistent manner. We conducted several sets of numerical experiments in which relativistically hot plasma is injected locally at a prescribed rate in a given section of the magnetosphere, varying the geometry of the injection zone, the injection rate and the intensity of ambient radiation field between the different experiments. In all of the experiments a monopole magnetic field configuration was adopted in the initial state.
We find that when the interaction of pairs with the external radiation field is switched off (formally, setting the intensity to zero), injection of hot plasma can completely screen the magnetosphere, provided the injection zone is located within the outer light cylinder and the injection rate is high enough. In that case we observe the formation of a Poynting flow that emanates from the BH horizon and propagates to infinity with nearly the maximum BZ power. On the other hand, when the plasma is injected beyond the outer light cylinder complete screening never occurs; at modest injection rates the system reaches a steady state, with a macroscopic vacuum gap extending roughly from the vicinity of the horizon up to the outer light cylinder. At higher injection rates the magnetosphere exhibits cyclic dynamics, during which it oscillates between nearly complete screening and extended starvation.
In all cases, when the interaction with the external radiation field is switched on and the opacity is large enough (\(\tau_{0}\gtrsim 20\)), complete screening always ensues, with nearly maximal energy extraction. In the cases where the plasma is injected externally beyond the outer light cylinder, we find that a fraction of a few percent of the extracted energy (which approaches the maximum BZ power) is converted to VHE radiation through IC emission and radiation backreaction (curvature emission), as found earlier in Crinquand et al. (2020).
Our main conclusion is that, in reality, sporadic injection of plasma from the accretion flow into the polar region by some (yet unspecified) process, is unlikely to screen the magnetosphere completely at all times, and prevent intermittent sparking. Formation of spark gaps during charge starvation episodes should lead to variable TeV emission with a luminosity that can approach a few percents of the jet power, as proposed earlier (e.g., Levinson, 2000; Neronov & Aharonian, 2007; Levinson & Rieger, 2011; Hirotani & Pu, 2016).
## 5 Data availability
The data underlying this article will be shared on reasonable request to the corresponding author.
## 6 Acknowledgments
AL acknowledges support by the Israel Science Foundation grant 1995/21. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 863412). This research was facilitated by the Multimessenger Plasma Physics Center (MPPC), NSF grant PHY-2206607.
|
2308.01091 | Control of vortex orientation of ultrashort optical pulses using spatial
chirp | Introducing a spatial chirp into a pulse with a longitudinal vortex, such as
a standard pulsed Laguerre-Gauss beam, results in a vortex pulse with an
arbitrary orientation of the line phase singularity between longitudinal and
transverse, depending on the amount of chirp. Analytical expressions are given
for such pulses with arbitrary topological charge valid at any propagation
distance. | Miguel A. Porras, Spencer W. Jolly | 2023-08-02T11:53:01Z | http://arxiv.org/abs/2308.01091v2 | # Control of vortex orientation of ultrashort optical pulses using spatial chirp
###### Abstract
Introducing a spatial chirp into a pulse with a longitudinal vortex, such as a standard pulsed Laguerre-Gauss beam, results in a vortex pulse with an arbitrary orientation of the line phase singularity between longitudinal and transverse, depending on the amount of chirp. Analytical expressions are given for such pulses with arbitrary topological charge valid at any propagation distance.
The structuring of ultrashort laser pulse-beams in space or time has long allowed for control of propagation properties, both in linear and nonlinear media, as well as the interaction of the shaped pulses with materials or other physical systems. One of the most well-known examples of shaped or structured light is the optical vortex, whose prototype is the Laguerre-Gaussian (LG) beam solution to the paraxial wave equation, carrying orbital angular momentum (OAM) proportional to the topological charge (or helicity) of the vortex.
Spatiotemporal couplings refer to electric fields that cannot be described as a separable product of complex spatial and temporal functions or, equivalently, spatial and temporal frequency functions [1], and they are increasingly being used to produce new types of laser pulses with interesting properties [2]. Spatiotemporal optical vortices (STOVs) are a class of spatiotemporally coupled fields featuring a phase line singularity oriented along an axis transverse to the propagation direction [3; 4; 5], e.g., the \(y\)-direction. The realization of more general STOVs, the phase line singularity of which is oriented at an arbitrary angle between the \(y\) and \(z\) directions, has been proposed to be possible using photonic crystal structures [6]. Subsequently, they have been realized experimentally using astigmatic mode converters [7].
In this Letter we provide closed-form, analytical expressions for STOVs with arbitrarily oriented phase line singularities that are solutions of the paraxial wave equation under quasi-monochromatic conditions. These expressions are valid at any propagation distance and for any STOV topological charge. In addition, these expressions represent standard, longitudinal pulsed vortices to which a spatial chirp has been introduced, which provides an alternate and simpler method to experimentally realize STOVs with arbitrary orientations. The change of the orientation using spatiotemporal couplings has been shown in Ref. [8]. The analysis of our solution demonstrates that the orientation of the singularity can be finely tuned with the amount of spatial chirp and the pulse and beam parameters.
Let us start with a LG pulse in frequency domain, of zero radial order, propagating along the \(z\) direction, conveniently written as
\[\hat{E}_{\omega}=\hat{a}_{\omega}e^{-i(l+1)\psi}\frac{s_{0}}{s}\left[\frac{ \sqrt{2}(x\pm iy)}{s}\right]^{l}e^{\frac{i\omega(x^{2}+y^{2})}{2cq}}e^{i\frac {\omega}{c}z}, \tag{1}\]
where \(l\) is the absolute value of the topological charge, the \(\pm\) sign stands for positive and negative topological charge, \(\psi=\tan^{-1}(z/z_{R})\) is Gouy's phase, \(q=z-iz_{R}\) is the complex beam parameter, \(s_{0}=\sqrt{2z_{R}c/\omega}\) is the waist Gaussian width, \(s=s_{0}\sqrt{1+(z/z_{R})^{2}}\), and \(z_{R}\) is the Rayleigh distance. For a narrowband spectrum \(\hat{a}_{\omega}\) about a carrier frequency \(\omega_{0}\), the time-domain field is conveniently written as
\[E=\frac{1}{2\pi}\int_{0}^{\infty}\hat{E}_{\omega}e^{-i\omega t}d\omega\simeq \frac{1}{2\pi}\int_{-\infty}^{\infty}d\Omega\hat{E}_{\Omega}e^{-i\Omega t^{ \prime}}d\Omega\,e^{-i\omega_{0}t^{\prime}}, \tag{2}\]
where \(\Omega=\omega-\omega_{0}\), \(t^{\prime}=t-z/c\) is the local time,
\[\hat{E}_{\Omega}=\hat{a}_{\Omega}e^{-i(l+1)\psi}\frac{s_{0}}{s}\left[\frac{ \sqrt{2}(x\pm iy)}{s}\right]^{l}e^{\frac{i\omega_{0}(x^{2}+y^{2})}{2cq}}\,, \tag{3}\]
and \(\hat{a}_{\Omega}=\hat{a}_{\omega_{0}+\Omega}\). We also have used that, for a pulsed vortex with many oscillations, or sufficiently narrow \(\hat{a}_{\omega}\), the dependence of the beam parameters on frequency can be ignored, thus taking those at the carrier frequency and hence replacing \(\omega\) with \(\omega_{0}\) in the last exponential factor in (3). This approximation is valid in the quasi-monochromatic regime of propagation, and it yields increasingly accurate results as the number of optical cycles increases well above that of single-cycle pulses. As is well-known, (3) is a solution of the paraxial wave equation \(\partial\hat{E}_{\Omega}/\partial z=i(c/2\omega_{0})\Delta_{\perp}\hat{E}_{\Omega}\), or \(\partial E/\partial z=i(c/2\omega_{0})\Delta_{\perp}E\) in time domain for quasi-monochromatic pulses in non-dispersive media.
Spatial chirp is a particular spatiotemporal coupling where the different temporal frequencies are separated along one transverse coordinate [9]. Practically, this can be achieved by using dispersive optics such as prisms or gratings in favorable orientations, shown in the top panel of Fig. 1. Here we introduce the spatial chirp to the separable vortex pulse in (3), which adds to it important amplitude and phase structure, as in the bottom panel
of Fig. 1. Addition of the spatial chirp via transforming \(x\to x-b\Omega\), where \(b\) is a constant, leads to
\[\hat{E}_{\Omega}=\hat{a}_{\Omega}e^{-i(l+1)\psi}\frac{s_{0}}{s}\left[\frac{ \sqrt{2}((x-b\Omega)\pm iy)}{s}\right]^{l}e^{\frac{i\omega_{0}\left[(x-b\Omega)^{2} +y^{2}\right]}{2cq}}, \tag{4}\]
which is non-separable in temporal frequency and space and still satisfies the paraxial wave equation. We have implicitly ignored any other space-time couplings such as angular dispersion to focus on pure spatial chirp. To obtain the non-separable field in space and time field we introduce (4) into (2), which readily leads to
\[E=e^{-i(l+1)\psi}\frac{s_{0}}{s}e^{\frac{i\omega_{0}(x^{2}+y^{2}) }{2cq}}\left(\frac{\sqrt{2}}{s}\right)^{l}\] \[\times\frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{a}_{\Omega}(x-b \Omega\pm iy)^{l}e^{\frac{i\omega_{0}b^{2}\Omega^{2}}{2cq}}e^{-i\Omega t^{ \prime\prime}}d\Omega\,e^{-i\omega_{0}t^{\prime}}, \tag{5}\]
where \(t^{\prime\prime}=t^{\prime}+\omega_{0}bx/cq\). Taking the Gaussian spectrum \(\hat{a}_{\Omega}=E_{0}\sqrt{\pi}(2/\Omega_{0})e^{-(\Omega/\Omega_{0})^{2}}\) of Gaussian width \(\Omega_{0}\) (inverse Fourier transform \(a(t^{\prime})=E_{0}e^{-t^{\prime 2}/\tau_{0}^{2}}\), \(\tau_{0}=2/\Omega_{0}\)), using integral 3.462.4,
\[\int_{-\infty}^{\infty}\xi^{l}e^{-(\xi-\beta)^{2}}d\xi=\int_{-\infty}^{\infty} (\eta+\beta)^{l}e^{-\eta^{2}}d\eta=\frac{\sqrt{\pi}}{(2i)^{l}}H_{l}(i\beta) \tag{6}\]
in Ref. [10] [\(H_{l}(\cdot)\) is the Hermite polynomial of order \(l\)] after completing the square in the exponential in (5) with some changes of variables, we obtain the result
\[E=\frac{E_{0}}{\alpha\Omega_{0}}e^{-i(l+1)\psi}\frac{s_{0}}{s}e ^{\frac{i\omega_{0}(x^{2}+y^{2})}{2cq}}e^{-\frac{(t^{\prime}+\omega_{0}bx/cq)^ {2}}{4\alpha^{2}}}\left(\frac{\sqrt{2}}{s}\frac{ib}{\alpha}\right)^{l}\frac{1 }{2^{l}}H_{l}\left\{\left(\frac{\alpha}{ib}\right)\left[\frac{x}{\Omega_{0}^{2}\alpha^{2}}\pm iy+i\frac{b}{2\alpha^{2}}t^{\prime}\right]\right\}e^{-i\omega_{0}t^{\prime}}, \tag{7}\]
where
\[\alpha=\sqrt{\frac{1}{\Omega_{0}^{2}}-\frac{i\omega_{0}b^{2}}{2cq}}. \tag{8}\]
Equation (7) is the main result of this Letter. It represents an optical field with a phase line singularity (for \(l=1\)) or \(l\) phase line singularities oriented at an arbitrary angle in the \(z\)-\(y\) plane, as detailed below. Although the dependence on \(z\) in all parameters is omitted for conciseness, (7) is valid at any propagation distance.
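The expression (7)–(8) is straightforward to evaluate numerically. The following minimal sketch is written against our reconstruction of Eq. (7) for the '+' charge, with illustrative parameter values and units in which \(c=1\); the Hermite polynomial of complex argument is evaluated with numpy's hermval.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def stov_field(x, y, t, z, l=1, b=1.0, s0=1.0, w0=20.0, Om0=0.2, E0=1.0):
    """Field of Eq. (7) for the '+' topological charge (units with c = 1)."""
    zR = w0 * s0**2 / 2                 # from s0 = sqrt(2 zR c / w0)
    q = z - 1j * zR                     # complex beam parameter
    psi = np.arctan(z / zR)             # Gouy phase
    s = s0 * np.sqrt(1 + (z / zR)**2)
    alpha = np.sqrt(1 / Om0**2 - 1j * w0 * b**2 / (2 * q))   # Eq. (8)
    tp = t - z                          # local time t'
    coef = np.zeros(l + 1); coef[l] = 1.0                    # selects H_l
    arg = (alpha / (1j * b)) * (x / (Om0**2 * alpha**2) + 1j * y
                                + 1j * b * tp / (2 * alpha**2))
    return (E0 / (alpha * Om0) * np.exp(-1j * (l + 1) * psi) * (s0 / s)
            * np.exp(1j * w0 * (x**2 + y**2) / (2 * q))
            * np.exp(-(tp + w0 * b * x / q)**2 / (4 * alpha**2))
            * (np.sqrt(2) / s * 1j * b / alpha)**l / 2**l
            * hermval(arg, coef) * np.exp(-1j * w0 * tp))

# The amplitude vanishes on the phase line singularity: at z = 0 and x = 0
# the zero sits at y = -(b / 2 alpha^2) t', as discussed below Eq. (10).
print(abs(stov_field(0.0, 0.0, 0.0, 0.0)))   # ~0 at the vortex center
```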
We first note that \(\alpha\) is never zero so that (7) does not present singularities. We also note that the factor \(1/2^{l}\) cancels the pre-factor \(2^{l}\) in the highest-power term of the Hermite polynomial. Further, in the limit \(b\to 0\), the factor \((ib/\alpha)^{l}\) cancels out all terms of the Hermite
Figure 2: Amplitude and phase of STOV at \(z=0\) with \(l=1\) and the spatial chirp factor \(b=s_{0}/\Omega_{0}=s_{0}\tau_{0}/2\). Slices are shown with \(x=0\) (a), \(y=0\) (b), and at \(t=\{-\tau_{0},0,\tau_{0}\}\) (c–e), with the amplitude on top and phase on bottom. Each amplitude plot is normalized to its maximum, and the phase is always shown in the range \([-\pi,\pi]\).
Figure 1: Sketches of how spatial chirp can be done experimentally (top) with either prisms or gratings. The concept of spatial chirp and vortex beams (bottom) with \(b=s_{0}/\Omega_{0}\), whereby the frequencies at \(\pm\Omega_{0}\) are displaced by \(\pm s_{0}\). This affects both the amplitude and the phase, shown here at \(z=0\).
polynomial except the highest-power term. In addition, \((ib/\alpha)^{l}\) multiplied by the opposite factor \((\alpha/ib)^{l}\) in the highest-power term yields unity, and (7) reduces to the pulsed Laguerre-Gauss beam
\[E=E_{0}e^{-\frac{t^{\prime 2}}{\tau_{0}^{2}}}e^{-i(l+1)\psi}\frac{s_{0}}{s}\left[ \frac{\sqrt{2}(x\pm iy)}{s}\right]^{l}e^{\frac{i\omega_{0}(x^{2}+y^{2})}{2cq }}e^{-i\omega_{0}t^{\prime}}. \tag{9}\]
With spatial chirp (\(b\neq 0\)), we first examine the field at \(z=0\), where (7) simplifies to
\[E(z=0)=\frac{E_{0}}{\alpha\Omega_{0}}e^{-\frac{(x^{2}+y^{2})}{s_ {0}^{2}}}e^{-\frac{(t^{\prime}+2ibx/s_{0}^{2})^{2}}{4\alpha^{2}}}\left(\frac{ \sqrt{2}}{s_{0}}\frac{ib}{\alpha}\right)^{l}\frac{1}{2^{l}}\] \[\times H_{l}\left\{\left(\frac{\alpha}{ib}\right)\left[x\left(1- \frac{b^{2}}{\alpha^{2}s_{0}^{2}}\right)\pm iy+i\frac{b}{2\alpha^{2}}t^{\prime }\right]\right\}e^{-i\omega_{0}t^{\prime}}, \tag{10}\]
where \(\alpha=[(1/\Omega_{0})^{2}+(b/s_{0})^{2}]^{1/2}\) is real. The factor \(e^{-(t^{\prime}+2ibx/s_{0}^{2})^{2}/4\alpha^{2}}\) contains the spatial chirp \(e^{-ibxt^{\prime}/s_{0}^{2}\alpha^{2}}\), the Gaussian temporal envelope of enlarged duration \(\tau_{0,\text{eff}}=\tau_{0}[1+b^{2}\Omega_{0}^{2}/s_{0}^{2}]^{1/2}\), and an anti-Gaussian factor that widens the Gaussian width to the effective width \(s_{0,\text{eff}}=s_{0}[1+b^{2}\Omega_{0}^{2}/s_{0}^{2}]^{1/2}\). Various slices of the amplitude and phase in the simplest case of \(l=1\) and with the relevant value \(b=s_{0}/\Omega_{0}\) of spatial chirp (see below) are shown in Fig. 2, where spatial and spatiotemporal phase singularities can be appreciated. Interestingly, the singularity in the \(x=0\) section, observed as a \(\pi\)-step line \(y=\mp(b/2\alpha^{2})t^{\prime}\) in the phase, is seen to move along the \(y\) direction with time when the spatial chirp is along \(x\). On the contrary, in the \(y=0\) section, the amplitude forms a donut with the spatiotemporal phase singularity at the center --a feature in common with purely transversal STOVs. Transversal slices at different times in the bottom of Fig. 2 offer an alternate view of the same structure whereby the spatial phase singularity shifts along the \(y\) direction, distorting the amplitude pattern over time. At \(t^{\prime}=0\) the amplitude and phase resemble a standard optical vortex slightly elongated along \(x\) since the width \(s_{0,\text{eff}}\) along \(x\) is larger than the width \(s_{0}\) along \(y\).
When changing the value of the spatial chirp \(b\), the slope \(\mp(b/2\alpha^{2})\) of the phase line singularity changes accordingly, taking a maximum absolute value for \(b=\pm s_{0}/\Omega_{0}\), as seen in the \(x=0\) sections in Fig. 3(top). Increase of the absolute value of \(b\) is accompanied by longer effective durations \(\tau_{0,\text{eff}}\). Remarkably, a perfect donut shape in the \(y=0\) sections only exists for the values \(b=\pm s_{0}/\Omega_{0}\) of maximum tilt, as seen in Fig. 3(bottom). This is because with \(b=\pm s_{0}/\Omega_{0}\) the scaling in the Gaussian temporal and spatial factors are the same as the scaling in the temporal and spatial terms in the argument of the Hermite polynomial, i.e., the STOV at \(y=0\) is of the form \(e^{-t^{\prime 2}/r_{0,\text{eff}}^{2}}e^{-x^{2}/x_{0,\text{eff}}^{2}}H_{l}(t^{ \prime}/\tau_{0,\text{eff}}\pm ix/s_{0,\text{eff}})\), with \(\tau_{0,\text{eff}}=\sqrt{2}\tau_{0}\) and \(s_{0,\text{eff}}=\sqrt{2}s_{0}\).
For general \(l\), the phase singularities are located at the zeros of the Hermite polynomial, say \(h_{l,n}\), with \(n=1,2,\ldots l\), which are all real. Thus, all phase singularities are in the plane \(x=0\) and specified by the straight lines \(y=\mp(b/2\alpha^{2})t^{\prime}\pm(b/\alpha)h_{l,n}\), being then parallel to each other with the same slope \(\mp b/2\alpha^{2}\), as seen in Fig. 4(top) for \(l=1,2,\) and \(3\). In the orthogonal \(y=0\) section the \(l\) phase singularities manifest as \(l\) null intensity points at times \(t^{\prime}=2\alpha h_{l,n}\), as in Fig. 4(bottom), which confers the intensity pattern a more complex structure as \(l\) increases.
With regard to the actual orientation of the vortex line in three-dimensional space, the slope in the \(\mp b/2\alpha^{2}\) in the \(t^{\prime}\)-\(y\) plane amounts to a slope \(\pm b/2c\alpha^{2}\) in the \(z\)-\(y\) plane, or a tilt angle of the phase line singularity
\[\theta=\pm\tan^{-1}\left[\frac{b}{2c[(1/\Omega_{0})^{2}+(b/s_{0})^{2}]}\right] \tag{11}\]
Figure 4: Amplitude of STOVs at \(z=0\) with \(b=s_{0}/\Omega_{0}\) and \(l=1,2\), and \(3\) (left to right). There are an increasing number of line singularities in the \(x=0\) plane (top), resulting in null points of amplitude in the \(y=0\) plane (bottom).
Figure 3: Amplitude of STOVs with other parameters, still at \(z=0\) with \(l=1\) and \(b=\{0.25,1,2\}\times s_{0}/\Omega_{0}\) (left to right). The slope of the singularity along \(y\) changes (top), and so does the amplitude distribution in the \(x\)-\(t\) plane (bottom).
with respect to the \(z\)-axis. Figures 5 (a) and (b) illustrate the behavior of the vortex orientation depending on the spatial chirp for typical real pulse and beam parameters under paraxial and quasi-monochromatic conditions, and Figs. 5 (c) and (d) show two examples of the actual tilts in space along with the three-dimensional structure of the intensity. The tilt angle is maximum for spatial chirp \(b=\pm s_{0}/\Omega_{0}\), as in Fig. 1, resulting in \(\theta=\pm\tan^{-1}[s_{0}\Omega_{0}/4c]\). As seen in Fig. 5(a), the maximum tilt angle can be brought as close to 90 degrees as desired with paraxial waist widths using pulses with durations of the order of tens of femtoseconds, although 90 degrees, i.e., a purely transversal STOV, is never reached. Evaluation of the derivative of (11) with respect to \(b\) at \(b=0\) yields \(\Omega_{0}^{2}/2c\), which is independent of \(s_{0}\), as can be appreciated in Fig. 5(a). Control of this derivative, and thus of the sensitivity of the tilt angle to the spatial chirp, can be exercised by the bandwidth or pulse duration, as illustrated in Fig. 5(b). The longer the duration, the easier it is to steer the vortex in a precise direction.
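For concreteness, a short numerical check of Eq. (11) with the Fig. 5 parameters (800 nm carrier, \(s_{0}=50\,\mu\)m, \(\Omega_{0}=0.02\) rad/fs); the code below is a sketch that reproduces the quoted maximum tilt of roughly 40 degrees at \(b=s_{0}/\Omega_{0}\).

```python
import numpy as np

c = 0.3                      # speed of light in um/fs
s0, Om0 = 50.0, 0.02         # waist (um) and Gaussian bandwidth (rad/fs)

def tilt_deg(b):
    # Eq. (11): tilt angle of the phase line singularity at the waist
    return np.degrees(np.arctan(b / (2 * c * (1 / Om0**2 + (b / s0)**2))))

b_max = s0 / Om0             # spatial chirp that maximizes the tilt
print(tilt_deg(b_max))                             # ~40 degrees
print(np.degrees(np.arctan(s0 * Om0 / (4 * c))))   # closed form, same value

# The slope at b = 0 is Om0^2 / (2 c), independent of s0:
db = 1e-3
print((tilt_deg(db) - tilt_deg(0.0)) / db, np.degrees(Om0**2 / (2 * c)))
```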
As the STOV propagates from \(z=0\), it diffracts as an LG beam but distorts and rotates, maintaining the same duration due to the absence of dispersion. We could add new contour plots, but we prefer to focus on the new phenomena. The complexity of the behavior of the singularities in purely transverse STOVs in free space and in dispersive media has been discussed only very recently [11; 12]. One important conclusion is the absence of a relation between topological charges and OAM except for canonical vortices (with elliptical symmetry in the case of purely transverse STOVs). The OAM is conserved on propagation, but the signs of the charges of the transverse vortices may not be [13]. A similar situation is observed here, now affecting the direction of the phase line singularity, the orientation of which is seen to precess upon propagation.
Limiting our consideration to \(l=1\), and setting the argument of the Hermite polynomial to zero in (7), one obtains the phase line singularity at arbitrary propagation distance \(z\) as the intersection of the two planes
\[y=\mp\frac{1}{2}\frac{b}{(1/\Omega_{0})^{2}+(b/s)^{2}}\,t^{\prime},\quad x=\mp \frac{\omega_{0}b^{2}\Omega_{0}^{2}}{2cR}\,y \tag{12}\]
where \(s\) is the Gaussian width at any distance defined above, and \(1/R=z/(z^{2}+z_{R}^{2})\) is the inverse of the radius of curvature. From these planes, the orientation in space can be derived as above. The angle \(\theta\) from the propagation direction and the new angle \(\varphi\) of deviation from the \(x=0\) plane are defined in Fig. 6(a), and are plotted one as a function of the other in Fig. 6(b) as the STOV in Fig. 5(c) propagates from \(z=-\infty\) to \(z=+\infty\). The phase line singularity lies in the \(x=0\) plane (\(\varphi=0\)) only at the waist and in the far fields \(z=\pm\infty\), where \(\theta\) takes its minimum and maximum values, respectively. The deviation angle from the \(x=0\) plane is maximum at \(\pm z_{R}\).
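The propagation of the two angles follows directly from the planes in Eq. (12); the sketch below traces \(\theta(z)\) and \(\varphi(z)\) for the Fig. 5(c) parameters (our parametrization, with illustrative units).

```python
import numpy as np

c = 0.3                                    # um/fs
s0, Om0 = 50.0, 0.02                       # um, rad/fs
w0 = 2 * np.pi * c / 0.8                   # carrier frequency for 800 nm
b = s0 / Om0                               # chirp of maximum tilt
zR = w0 * s0**2 / (2 * c)                  # Rayleigh distance (um)

z = np.linspace(-5 * zR, 5 * zR, 1001)
s = s0 * np.sqrt(1 + (z / zR)**2)          # Gaussian width s(z)
inv_R = z / (z**2 + zR**2)                 # 1/R, finite at z = 0

# First plane of Eq. (12): slope in t'-y converted to a z-y tilt angle.
theta = np.degrees(np.arctan(b / (2 * c * (1 / Om0**2 + (b / s)**2))))
# Second plane of Eq. (12): deviation from the x = 0 plane.
phi = np.degrees(np.arctan(w0 * b**2 * Om0**2 * inv_R / (2 * c)))

# theta is smallest at the waist and largest in the far field, while
# phi vanishes at z = 0 and z -> +-inf and peaks near z = +-zR.
```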
In conclusion, we have provided analytical expressions for fields of STOVs whose phase line singularity can be tuned from zero to almost 90 degrees from the propagation direction. By construction, these fields can be realized experimentally by introducing a spatial chirp into a standard longitudinal vortex. The arbitrary orientation suggests that one can also tune the direction of the OAM. However, we leave the amount and direction of the OAM for further research, until the current debate on the OAM of transverse STOVs is closed [13; 14; 15].
Figure 5: (a) Angle of the phase line singularity in the \(z\)-\(y\) plane at \(z=0\) as a function of \(b\) for several waist widths. The slope at \(b=0\) is independent of \(s_{0}\). (b) The same but for several pulse durations. (c,d) Snapshots (\(t=0\)) of iso-intensity surfaces (18% of the peak intensity) of STOVs at 800 nm, waist width \(s_{0}=50\)\(\mu\)m, duration \(\tau_{0}=100\) fs (\(\Omega_{0}=0.02\) rad/fs), spatial chirp \(b=s_{0}/\Omega_{0}\) for maximum tilt \(\theta\simeq 40\) deg, and topological charges \(l=1\) and \(2\). The scales in \(x,y\) and \(z\) are the same to visualize the actual tilt.
Figure 6: (a) Transversal tilt angle \(\theta\) and deviation angle \(\varphi\) from \(x=0\). (b) Their change with \(z\) for the STOV in Fig. 5(c).
## Funding
Horizon 2020 Framework Programme (801505); Ministerio de Ciencia, Innovacion y Universidades (PID2021-122711NB-C21).
## Acknowledgments
S.W.J. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 801505. This work has been partially supported by the Spanish Ministry of Science and Innovation, Gobierno de Espana, under Contract No. PID2021-122711NB-C21.
## Disclosures
The authors declare no conflicts of interest.
## Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2304.08406 | A first application of machine and deep learning for background
rejection in the ALPS II TES detector | Axions and axion-like particles are hypothetical particles predicted in
extensions of the standard model and are promising cold dark matter candidates.
The Any Light Particle Search (ALPS II) experiment is a
light-shining-through-the-wall experiment that aims to produce these particles
from a strong light source and magnetic field and subsequently detect them
through a reconversion into photons. With an expected rate $\sim$ 1 photon per
day, a sensitive detection scheme needs to be employed and characterized. One
foreseen detector is based on a transition edge sensor (TES). Here, we
investigate machine and deep learning algorithms for the rejection of
background events recorded with the TES. We also present a first application of
convolutional neural networks to classify time series data measured with the
TES. | Manuel Meyer, Katharina Isleif, Friederike Januschek, Axel Lindner, Gulden Othman, Jose Alejandro Rubiera Gimeno, Christina Schwemmbauer, Matthias Schott, Rikhav Shah | 2023-04-17T16:18:35Z | http://arxiv.org/abs/2304.08406v1 | A first application of machine and deep learning for background rejection in the ALPS II TES detector
###### Abstract
Axions and axion-like particles are hypothetical particles predicted in extensions of the standard model and are promising cold dark matter candidates. The Any Light Particle Search (ALPS II) experiment is a light-shining-through-the-wall experiment that aims to produce these particles from a strong light source and magnetic field and subsequently detect them through a reconversion into photons. With an expected rate \(\sim 1\) photon per day, a sensitive detection scheme needs to be employed and characterized. One foreseen detector is based on a transition edge sensor (TES). Here, we investigate machine and deep learning algorithms for the rejection of background events recorded with the TES. We also present a first application of convolutional neural networks to classify time series data measured with the TES.
## 1 Introduction
Axions and axion-like particles (ALPs) are hypothetical particles predicted in extensions of the Standard Model of particle physics [1]. Both axions and ALPs are candidates to explain the observed density of cold dark matter in the Universe [2, 3, 4]. Additionally, axions could solve the so-called strong CP problem of the strong interactions [5, 6, 7]. One predicted interaction of axions and ALPs is the conversion into photons in the presence of external magnetic fields. Such an interaction would make it possible to detect axions and ALPs present in the dark matter halo in the Milky Way or produced in astrophysical sources such as the Sun or in supernova explosions [1].
In contrast to searches relying on astrophysical sources of ALPs, the Any Light Particle Search II (ALPS II) experiment aims to produce and subsequently detect ALPs with the so-called light-shining-through-a-wall (LSW) technique [8, 9, 10]. In ALPS II, a powerful laser beam is immersed in a strong magnetic field and directed onto an opaque barrier. A fraction of photons in the laser beam convert to ALPs, which traverse the barrier unimpeded. Behind this wall, in an additional magnetic field, ALPs reconvert into photons with the same properties as the original ones, which can be subsequently detected. Once commissioned, ALPS II will reach unprecedented sensitivity for an LSW-type experiment by employing a high-power infrared laser at a wavelength of 1064 nm, optical cavities for additional power build-up before and behind the wall, and sensitive photon detectors measuring rates down to \(\sim 10^{-6}\) Hz [11, 12]. Within a 20 day measurement we aim to probe photon-ALP couplings down to \(g_{a\gamma}\gtrsim 2\times 10^{-11}\,\mathrm{GeV}^{-1}\)
for masses \(m_{a}\lesssim 10^{-4}\,\)eV. This would make it possible to probe ALP dark matter scenarios [13] and axion models that predict a large coupling to photons [14, 15]. For this photon-ALP coupling, we expect a reconverted photon rate of \(n_{s}\gtrsim 10^{-5}\,\)Hz (corresponding to \(\sim 1\) photon per day) given the ALPS II design specifications. To significantly detect such a low rate, the background rate has to be \(\lesssim 10^{-5}\,\)Hz [11]. One foreseen detection technique is based on a transition edge sensor (TES) [16]. Such sensors are essentially microcalorimeters: they consist of a superconducting chip integrated in a circuit where they are biased at a temperature between the normal and superconducting phase [17]. A reconverted photon will be guided via an optical fiber to the TES where it is absorbed. This increases the chip's temperature, thereby causing a large change of its resistance of the order of several Ohms. Through an inductive coil, the current change induced by the change in resistance leads to a change in the magnetic field, which is read out with a superconducting quantum interference device (SQUID). Such detectors can be optimized for near-infrared light and show high quantum efficiencies close to unity, a high energy resolution, and low dead time [18, 19].
The majority of background events registered with the TES is expected from thermal radiation of the warm (at room temperature) end of the optical fiber [20]. We call this background source _extrinsic_. Additional sources of background include radioactive decays inside the detector volume and energy deposition of charged cosmic rays interacting with the TES or the surrounding material (e.g., Refs. [21, 22]). We refer to these types of events, which are present with and without an optical fiber, as _intrinsic_ background events. To achieve the necessary low background rates, background events must be efficiently rejected by both the experimental design (see, e.g., Ref. [23]) and the data analysis.
Here, we present a first investigation of the performance of machine learning (ML) and deep learning (DL) classification algorithms to discriminate fake signals from intrinsic background events at the data analysis level. Due to their excellent performance in, e.g., classification tasks, both ML and DL algorithms enjoy increasing popularity in fundamental physics research as a whole [24] and for searches of axion signatures in particular [25, 26, 27]. As we will see in Section 2, where we introduce the training data for our classifiers, the TES data are essentially time series in which individual photons are seen as pulses. The integral over such a pulse is proportional to the deposited energy and thus the photon energy [17].
Therefore, the signal-and-background discrimination boils down to a time series classification (TSC). DL algorithms in particular perform well for such tasks [28]. In previous analyses of ALPS II TES calibration data, signal and background events were distinguished through a standard pulse shape analysis (PSA) [29, 11, 19]. In PSA, recorded pulses are fit either with a parametric function or a template pulse with a free amplitude parameter. The distinction between signal and background is then achieved through cuts in the parameter space of the extracted pulse parameters, i.e., extracted _features_ (e.g., pulse amplitude and pulse integral). In principle, ML and DL algorithms should be perfectly suited to either optimize such cuts or to find high-dimensional data representations where the feature space of signal and background events can be separated in an optimal way (in the sense of minimizing some cost function). This will be explored in Section 3.1. Instead of feature extraction, we will use the time lines themselves for classification in Section 3.2. We closely follow Ref. [28] and present first results of convolutional neural networks (CNNs) for this task. Compared to conventional (fully-connected) deep neural networks, CNNs are based on shared weights from convolutional kernels, which reduces the number of parameters and leads to an improved learning of translation-equivariant features. The results of both strategies are presented in Section 4. In Section 5, we provide conclusions and an outlook on how to improve the present proof-of-concept study and how to extend it in the future.
## 2 Data for Classifier Training
For training the classifiers, we use the same data sets as described in Refs. [11, 19] which were collected in an experimental setup for characterizing the TES. In particular, intrinsic background events were collected in a continuous data run lasting \(T=518\,\)hours, in which the TES was not connected to an optical fiber. These background events are labeled \(y=0\). In a second data run, real photon signals were generated by connecting a continuous wave laser at a wavelength of about \(1064\,\)nm to an optical fiber which
was then attached to the TES (class labels \(y=1\)). This data run lasted for less than a minute given the high photon rate of the input laser. Each event \(i\) consists of a voltage time line (sometimes called trace) with \(M\) sample points, measured with the TES and SQUID setup, \(x_{i}\equiv(x_{i1},\ldots,x_{iM})^{T}\). Events were triggered and saved to disk when the amplitude reached a trigger threshold \(<-20\,\)mV. This threshold is chosen as a compromise between the reduction of background events while losing close to zero events due to \(1064\,\)nm photons. Each trigger window is \(200\,\mu\)s long (including \(30\,\mu\)s before the trigger time) with a sampling rate of \(f_{\rm sample}=50\,\)MHz yielding \(M=10^{4}\) samples per trace. We show example traces triggered by a laser photon in the upper panels of Fig. 1 and traces from intrinsic background events in the lower panels of Fig. 1. For the chosen examples, it is easy to distinguish light from background events by eye when comparing the overall pulse shapes.
The time lines are fit with an exponential rise and decay function \(V(t)\)1
Footnote 1: We prefer this phenomenological function over the pulse shape from small signal theory [17] as it is continuous for all values of \(t\). It is commonly used to described the time variability of certain galaxies, see e.g., Ref. [30].
\[V(t)=C-2A\left[\exp\left(\frac{t_{0}-t}{\tau_{\rm rise}}\right)+\exp\left( \frac{t-t_{0}}{\tau_{\rm decay}}\right)\right]^{-1}, \tag{1}\]
using a \(\chi^{2}\) minimization. The parameters of the function are the pulse normalization \(A\), the trigger time \(t_{0}\), the rise and decay times \(\tau_{\rm rise,decay}\), respectively, and a constant offset \(C\). The rise and decay times are connected to the electrical and thermal constants of the TES circuit [17]. For \(t=t_{0}\), one finds that \(V(t_{0})=C-A\). It should be noted that the pulse minimum is not reached at \(t_{0}\) but at a later time \(t_{\rm peak}\), where \(V(t_{\rm peak})=C-2A\tau_{\rm rise}(\tau_{\rm rise}+\tau_{\rm decay})^{-1}(\tau_{ \rm decay}/\tau_{\rm rise})^{\tau_{\rm decay}/(\tau_{\rm rise}+\tau_{\rm decay})}\). For the \(\chi^{2}\) minimization, a constant uncertainty of \(1.5\,\)mV is assumed for each measured voltage value. This choice is simply motivated by achieving fast convergence of the fit. However, when the uncertainty is estimated from the square root of the diagonal terms of the covariance matrix of pure noise traces, similar values are found. Examples for the fit are also shown in Fig. 1 as red lines together with the best-fit values. After an initial minimal data cleaning of the light data,2 we are left with in total \(N=40,646\) events of which \(N_{\rm bkg}=39,580\) are background events recorded when the laser was off and disconnected from the TES. For the classification based on these extracted features (Section 3.1), we use the best-fit values of the model in Eq. (1) together with the \(\chi^{2}\) value of the fit and the integral over time of the fitted model, which we denote with PI (for pulse integral). Our feature vector thus becomes \(X_{i}=(A,\tau_{\rm rise},\tau_{\rm decay},C,\chi^{2},\rm PI)_{i}^{T}\) with class labels \(y_{i}\) for samples \(i=1,\ldots,N\). In contrast, the time series classification scheme discussed in Section 3.2 will take the raw traces as input such that \(X_{i}=x_{i}\) with class labels \(y_{i}\).
Footnote 2: The light data could be contaminated by background data; for this reason we exclude pulses with a decay time \(\tau_{\rm decay}>10\,\mu\)s and a \(\chi^{2}/\mathrm{d.o.f.}>6\), where d.o.f. denotes the degrees of freedom of the fit. These values are motivated from the average pulse observed in the light data.
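A minimal sketch of this feature extraction step with scipy, fitting Eq. (1) to a trace; the trace here is synthetic, with pulse parameters chosen only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, C, A, t0, tau_rise, tau_decay):
    # Eq. (1): exponential rise and decay on top of a constant offset C
    return C - 2 * A / (np.exp((t0 - t) / tau_rise) + np.exp((t - t0) / tau_decay))

# Synthetic 200 us trace sampled at 50 MHz (10^4 points), in mV and us.
t = np.arange(10_000) / 50.0
rng = np.random.default_rng(1)
trace = pulse(t, 0.0, 60.0, 30.0, 0.3, 1.5) + rng.normal(0.0, 1.5, t.size)

# Crude initial guess from the trace itself, then a chi^2 fit with the
# constant 1.5 mV uncertainty assumed in the text.
p0 = (0.0, -trace.min(), t[np.argmin(trace)], 0.3, 1.5)
popt, _ = curve_fit(pulse, t, trace, p0=p0, sigma=np.full(t.size, 1.5))
chi2 = np.sum(((trace - pulse(t, *popt)) / 1.5) ** 2)
PI = np.trapz(pulse(t, *popt) - popt[0], t)   # pulse integral (negative)
features = np.array([popt[1], popt[3], popt[4], popt[0], chi2, PI])
```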
## 3 Training of Classifiers
With our data at hand, we now turn to the training of the classifiers. We start with the classifiers based on the extracted time-line features in Section 3.1 before turning to the training of a CNN on the raw time series data in Section 3.2. Throughout, we split the data into training and test data sets using a split ratio of \(80\,\%\) and \(20\,\%\). The classifiers will be optimized on the training set and their performance is then evaluated on the test set. As our data set is highly imbalanced with a ratio of \(\sim\,40\,:1\) of background versus light data, we employ a stratified split of training and test data. That means that the ratio of signal and background data is roughly the same (\(40\,:\,1\)) for both data sets. This ensures that we will not end up with a test data set that does not contain any light samples.
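In code, such a stratified split can be done with scikit-learn's train_test_split; the arrays below are dummy stand-ins with the stated sample sizes.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the feature matrix and labels (~40:1 imbalance).
rng = np.random.default_rng(0)
X = rng.normal(size=(40_646, 6))
y = np.r_[np.zeros(39_580, dtype=int), np.ones(1_066, dtype=int)]

# stratify=y preserves the class ratio in both training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
print(y_train.mean(), y_test.mean())   # both close to 1066 / 40646 ~ 0.026
```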
### Training of Classifiers on Extracted Features
We test the performance of two ML and DL algorithms for signal and background discrimination: a random forest (RF) and a multilayer perceptron (MLP), i.e., a fully connected deep neural network. To avoid overfitting of the MLP, L2 regularization is applied, which adds the sum over all weights squared
(the L2 or Euclidean norm) to the cost function (see, e.g., Ref. [31] for a review of the different methods used in this section).
Before the actual training, we perform two preprocessing steps on the data. First, we take the logarithm of the extracted features. As all PI values are negative, we first multiply them by \(-1\). Some offset values \(C\) are also below zero, and we use \(\log_{10}(C/(1\,\mathrm{mV})+1)\) for the transformation. Second, this log-transformed data is then further transformed using a principal component analysis (PCA) [31]. The principal components are fit only to the training data and then applied to training and test data sets. For illustration, the first three (out of six) principal components are shown in Fig. 2. The separation between signal and background events is already visible. We found that the log and PCA transformations resulted in better classification results and faster convergence when training the classifiers.
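The two preprocessing steps might look as follows, continuing with the X_train and X_test arrays from the split sketch above (the column order follows the feature vector defined in Section 2, and the handling of \(C\) assumes values in mV):

```python
import numpy as np
from sklearn.decomposition import PCA

def log_transform(X):
    # Columns: (A, tau_rise, tau_decay, C, chi2, PI)
    Xl = X.astype(float).copy()
    Xl[:, 5] = np.log10(-Xl[:, 5])        # PI values are negative
    Xl[:, 3] = np.log10(Xl[:, 3] + 1.0)   # offset C in mV, shifted by 1 mV
    for j in (0, 1, 2, 4):                # remaining (positive) features
        Xl[:, j] = np.log10(Xl[:, j])
    return Xl

pca = PCA()                               # keep all six components
Z_train = pca.fit_transform(log_transform(X_train))   # fit on training data only
Z_test = pca.transform(log_transform(X_test))         # then apply to test data
```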
Each classifier comes with its own set of hyper parameters such as the number and depth of the trees for the RF or the number of nodes and hidden layers for the MLP. In this first application of ML presented here, we optimize a subset of hyper parameters on coarse parameter grids to observe general trends. For this task we use the scikit-learn python package (version 0.24.2) [32] implementation of stratified \(K\)-fold cross validation [31] applied to the training data with \(K=5\). For the RF classifier, we change the number of trees in the forests (100, 300, and 500 trees), the number of features to consider when looking for the best split between 1 and 6 with a step size of 1, and the minimum number of samples required to split a node between 2 and 82 with a step size of 10. The Gini impurity measure is used for optimizing the data splits in the trees, which are grown to their maximum depth. For the MLP we consider 2, 4, and 6 hidden layers with 100 nodes per layer and values for the L2 regularization strength \(\alpha\) on a logarithmic scale between \(\log_{10}(\alpha)=-4,-3.5,\ldots,-1.5\). A rectified linear unit (ReLU) function is chosen as the activation function, and the learning rate of the MLP is held constant. The weights of the network are found with the Adam stochastic gradient-based optimizer [33]. All other hyper parameters for the RF and MLP are set to their default values in the scikit-learn implementation.3
Footnote 3: For the random forest, the minimum number of samples required to split an internal node is kept at 2 and the minimum number of samples required to be at a leaf node is kept at 1. For the MLP, the tolerance is set to \(10^{-4}\) and the learning rate is held constant at \(10^{-3}\). At most, 200 epochs of learning are used.
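A sketch of the RF part of this cross validation with scikit-learn, continuing with the Z_train and y_train arrays from above (shown with the default accuracy scorer for brevity; in the text the selection instead maximizes the significance \(S\) of Eq. (2) at \(\xi=0.5\)):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Coarse grids quoted in the text for the random forest.
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_features": [1, 2, 3, 4, 5, 6],
    "min_samples_split": list(range(2, 83, 10)),
}
search = GridSearchCV(
    RandomForestClassifier(criterion="gini", random_state=0),
    param_grid, cv=StratifiedKFold(n_splits=5))
search.fit(Z_train, y_train)     # re-fits the best model on all of Z_train
best_rf = search.best_estimator_
```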
The best set of hyper parameters are those that maximize the significance \(S\) of a detection of signal counts above a certain number of background events. For Poisson distributed data, the detection significance \(S\) over the square root of observation time \(T\) is given by [34, 35],
\[S/\sqrt{T}=2\left(\sqrt{\epsilon_{d}\epsilon_{a}n_{s}+n_{b}}-\sqrt{n_{b}} \right). \tag{2}\]
Figure 1: Example traces recorded with the TES. _Upper panels:_ Time lines triggered by \(1064\,\mathrm{nm}\) laser photons. _Lower panels:_ examples of intrinsic background events recorded while the optical fiber was disconnected from the TES.
In the expression above \(\epsilon_{d}\) is the detector efficiency, \(\epsilon_{a}\) is the analysis efficiency to correctly classify signal evens, \(n_{b}\) is the background rate from mis-identified background events, and \(n_{s}\) is the signal rate that depends on the photon-ALP coupling. From the classifier predictions, \(\epsilon_{a}\) and \(n_{b}\) are found as follows. For a given threshold \(\xi\), \(0\,\leqslant\,\xi\,\leqslant\,1\), events will be classified as light-like if their predicted class label \(\hat{y}_{i}\,\geqslant\,\xi\) (both RFs and MLPs provide predictions \(\hat{y}_{i}\) as real numbers between 0 and 1). We calculate the true and false positive rates, \(\text{TP}(\xi)=N_{\text{test}}^{-1}\sum_{i}\left[(\hat{y}_{i}\,\geqslant\,\xi) \&\&(y_{i}==1)\right]\) and \(\text{FP}(\xi)=N_{\text{test}}^{-1}\sum_{i}\left[(\hat{y}_{i}\,\geqslant\,\xi) \&\&(y_{i}==0)\right]\), respectively, where \(N_{\text{test}}\) is the number of samples in the test data. These rates are rescaled to the entire data set by multiplying with the raw trigger rate, \(r_{\text{trig}}\,=\,N_{\text{bkg}}/T\,\approx\,0.02\,\text{Hz}\), such that \(n_{b}\,=\,r_{\text{trig}}\text{FP}\). The analysis efficiency is simply equal to the true positive rate, \(\epsilon_{a}\,=\,\text{TP}\). For the detector efficiency, we take \(\epsilon_{d}\,=\,0.5\) to account for potential losses in the TES sensitivity or the ALPS II cavities and \(n_{s}\,=\,2.8\times 10^{-5}\,\text{Hz}\). For choosing the best set of hyper parameters, we set \(\xi\,=\,0.5\) and compute \(S\). Once the parameters are determined from \(K\)-fold cross validation, the classifier is re-fit on the entire training set and its score on the initial test set is evaluated.
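Putting Eq. (2) together with the classifier output, the selection of \(\xi\) can be scripted as below (a sketch; the efficiency is computed per class, which is how signal efficiencies of order 0.9 in Table 1 must be read):

```python
import numpy as np

T = 518 * 3600.0                    # observation time in seconds
r_trig = 39_580 / T                 # raw trigger rate, ~0.02 Hz
eps_d, n_s = 0.5, 2.8e-5            # detector efficiency and signal rate (Hz)

def detection_significance(y_true, y_score, xi):
    eps_a = np.mean(y_score[y_true == 1] >= xi)   # true positive rate
    fp = np.mean(y_score[y_true == 0] >= xi)      # false positive rate
    n_b = r_trig * fp                             # surviving background rate
    S = 2.0 * (np.sqrt(eps_d * eps_a * n_s + n_b) - np.sqrt(n_b)) * np.sqrt(T)
    return S, eps_a, n_b

# Example: scan xi on held-out data and keep the threshold maximizing S.
# y_score = best_rf.predict_proba(Z_test)[:, 1]
# best = max((detection_significance(y_test, y_score, xi), xi)
#            for xi in np.linspace(0.0, 1.0, 101))
```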
The whole procedure is repeated for five initial 80-20 splits of the data.4 From these five splits, we calculate the median and standard deviation of \(S\), \(n_{b}\), and \(\epsilon_{a}\) which we present in Section 4.
Footnote 4: Put differently, we perform two loops. In the outer loop, we perform splits \(i\,=\,1,\ldots,5\) of the whole data set into test and training sets with non-overlapping test sets. In the inner loop, a \(K\)-fold cross validation is performed on the training set to find the best hyper parameters, which involves another 80-20 split.
### A First Training of CNN on the TES Time Series Data
We also test the performance of CNNs trained on the time series data itself. This eliminates the need for feature extraction, i.e., in our case, fitting the observed pulses with a parametric function. As the only preprocessing step, we perform a \(z\) transformation, which is common in time series classification
Figure 2: The first three principal components of the training data. The signal (red) and background (blue) data is already quite well separated in feature space.
tasks [36]. We perform the \(z\) transformation on each sample individually,
\[\hat{x}_{i}=\frac{x_{i}-\langle x_{i}\rangle}{\sqrt{\langle\left(x_{i}-\langle x_{i} \rangle\right)^{2}\rangle}}, \tag{3}\]
where the mean is given by \(\langle x_{i}\rangle\,=\,M^{-1}\sum_{j=1}^{M}x_{ij}\). The denominator in the expression above is the standard deviation of each time series \(x_{i}\). Furthermore, to reduce memory requirements, we focus on the measurements around the trigger time between \(j\,=\,(1000,\ldots,3000)\) and downsample each time series by a factor of 4, such that \(M\,=\,(3000\,-\,1000)/4\,=\,500\). Since we extract a fixed number of measurement points before and after the trigger time, it is not necessary to align the time series along the time axis as done, e.g., in Ref. [37].
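The preprocessing amounts to a few lines of numpy (a sketch):

```python
import numpy as np

def preprocess(traces):
    """Crop to samples 1000..3000 around the trigger, downsample by 4
    (M = 500), and apply the per-trace z transformation of Eq. (3)."""
    x = traces[:, 1000:3000:4].astype(float)
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    return (x - mean) / std
```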
Our network architecture closely follows the fully convolutional network described in Ref. [28]. Specifically, we perform two convolutions with kernel size 11 with zero padding, stride equal to one, and with \(N_{f}=16\) filters each. Each convolution is followed by batch normalization [38] and a ReLU activation function. After the two convolutions, a global average pooling (GAP) is performed, which means that the time dimension is averaged over, yielding 16 output neurons, one for each filter. The GAP output neurons are then fully connected to two output neurons (one for each class) with a softmax activation, and the network is trained with the categorical cross-entropy loss. A sketch of our simple network architecture is shown in Fig. 3. The training of the network is performed with the keras and tensorflow packages (version 2.4.0) [39]. Again, the Adam optimizer is used with an initial learning rate of 0.01. The batch size is set to 50 and the network is trained for up to 250 epochs. If the validation loss does not improve for 20 epochs the learning rate is reduced by a factor of 1/2 until a minimum learning rate of \(10^{-4}\) is reached.5 If the validation loss still does not improve after 20 additional epochs, training is stopped. The model resulting in the minimal validation loss is saved. The advantage of the GAP layer is that it is possible to calculate the class activation map (CAM), which provides an easy way to visualize which portions of the time series are important for classification [40]. In our case, the CAM itself is a univariate time series with the same dimension as the input time series. Let \(A_{f}(t)\) be the output time series after the second convolution layer (after batch normalization and activation) for each filter \(f=1,\ldots,N_{f}\) and let \(w_{fc}\) be the weight connecting the GAP layer node to the
Figure 3: A sketch of our CNN architecture. Two convolutions with kernel size 11 and 16 filters are performed before a GAP layer reduces the output to 16 neurons which are connected to the two output neurons (one for each class). The axis labeled “1” denotes the direction of a forward pass within the network.
output class node \(c=(0,1)\). Then the \(\text{CAM}(t)\) is given as the sum over filters weighted by these output weights,
\[\text{CAM}_{c}(t)=\sum_{f=1}^{N_{f}}w_{fc}A_{f}(t), \tag{4}\]
and normalized such that \(0\;\leqslant\;\text{CAM}_{c}(t)\;\leqslant\;1\). In contrast to the feature-based learning presented in Section 3.1, no tuning of the hyper parameters is performed, which is left for future work. However, the training-test split is again performed five times.
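For reference, a keras sketch of this architecture and of the CAM extraction; the softmax output is our reading of the described setup, and the callback settings below are one plausible, rough translation of the stated training procedure.

```python
import numpy as np
from tensorflow import keras

inp = keras.Input(shape=(500, 1))
h = inp
for _ in range(2):                       # two convolution blocks
    h = keras.layers.Conv1D(16, kernel_size=11, strides=1, padding="same")(h)
    h = keras.layers.BatchNormalization()(h)
    h = keras.layers.Activation("relu")(h)
gap = keras.layers.GlobalAveragePooling1D()(h)
out = keras.layers.Dense(2, activation="softmax")(gap)
model = keras.Model(inp, out)
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01),
              loss="categorical_crossentropy")

callbacks = [
    keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                      patience=20, min_lr=1e-4),
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=40,
                                  restore_best_weights=True),
]
# model.fit(X_tr[..., None], keras.utils.to_categorical(y_tr), epochs=250,
#           batch_size=50, validation_split=0.2, callbacks=callbacks)

# Class activation map of Eq. (4): feature maps A_f(t) after the second
# convolution block, weighted by the output-layer weights w_fc.
feature_model = keras.Model(inp, h)
def cam(trace, c):
    A = feature_model(trace[None, :, None]).numpy()[0]   # shape (500, 16)
    w = model.layers[-1].get_weights()[0][:, c]           # shape (16,)
    m = A @ w
    return (m - m.min()) / (m.max() - m.min())            # normalize to [0, 1]
```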
## 4 Results
The median performance of all tested classifiers on the test sets in terms of significance \(S\) (see Eq. (2)), background rate \(n_{b}\), and analysis efficiency \(\epsilon_{a}\) as a function of threshold \(\xi\) is shown in Fig. 4. The shaded regions denote the standard deviation from the five different optimization runs with different test data sets. As expected, as \(\xi\) increases, the false positive rate and thus \(n_{b}\) decrease, as we only classify events as light-like that have predicted class labels closer to one. At the same time, the number of true positives and hence \(\epsilon_{a}\) decreases as well. Our metric \(S\) gives more weight to the false positives and as a result \(S\) can be \(\sim 5\,\sigma\) even for comparatively low values of \(\epsilon_{a}\). This can be observed in Fig. 4 as well: \(S\) increases with increasing \(\xi\) up until the decreasing background cannot compensate the loss of true positives any longer. Example values for the performance are provided in Tab. 1 for values \(\xi\) close to maximum performance.
Our feature-based classification scheme can be compared to the performance of the cut-based analysis, which meets the ALPS II design requirements [11]. In that analysis, the histograms of the best-fit parameters of signal events were fit with Gaussian distributions. Using these distributions, cuts in units of Gaussian standard deviations were defined and background events were classified as such if their best-fit parameters fell outside these cut values. It should be noted that our classifiers here provide real numbers for the class prediction, so it is in principle possible to tune \(\xi\) on the training set to maximize \(S\). The cut-based analysis presented in Ref. [11] did not perform a split of the data into a training and test set but reported results on the entire data set. Even so, our RF and MLP outperform the cut-based analysis reaching a detection significance of \(\gtrsim\) 6 \(\sigma\), albeit with large uncertainties due to the limited statistics of our data set. Comparing the RF and the MLP, it can be seen that the RF performs best in rejecting backgrounds whereas the MLP retains a high analysis efficiency even for high values of \(\xi\).
In comparison to the feature-based classifiers, our CNN performs worse. Only for high values of \(\xi\gtrsim 0.97\) are we able to reach a median significance close to 5 \(\sigma\), at the cost of a poor analysis efficiency with a true positive rate below 50 %. The CNN performs worst of all classifiers in rejecting backgrounds and only achieves a higher true positive rate than the RF for \(\xi\gtrsim 0.8\). It should be noted, however, that for the CNN no systematic tuning of the hyperparameters was performed and no prior knowledge of the pulse shape is required.
Figure 5 shows the CAMs defined in Eq. 4 for 15 example light pulses that were correctly classified by the network. Higher CAM values indicate that the corresponding points are more important for classification. It is clearly visible that the rising part of the pulse is most important in this sense, whereas the decaying part of the pulse is less important. This is somewhat surprising as the background pulses in
\begin{table}
\begin{tabular}{l c c c c} \hline Classifier & Threshold \(\xi\) & Signal efficiency & Background Rate (\(10^{-6}\,\text{Hz}\)) & Detection significance (\(\sigma\)) \\ \hline Cut-based analysis [11] & – & 0.898 & 6.9 & 4.88 \\ RF & 0.862 & 0.66 \(\pm\) 0.15 & 2.16 \(\pm\) 2.02 & 6.04 \(\pm\) 1.50 \\ MLP & 0.944 & 0.90 \(\pm\) 0.07 & 5.93 \(\pm\) 5.23 & 6.51 \(\pm\) 2.47 \\ CNN & 0.974 & 0.42 \(\pm\) 0.18 & \(<8.54\) & 4.94 \(\pm\) 2.56 \\ \hline \end{tabular}
\end{table}
Table 1: Classifier performance for example values of \(\xi\). Values are chosen that lead to \(S\;>\;6\,\sigma\) for the RF and MLP with maximum \(\epsilon_{a}\), whereas for the CNN the \(\xi\) value is chosen that maximizes \(S\). For the values of \(S\), an observation time of 518 hours and a signal rate of \(2.8\times 10^{-5}\,\text{Hz}\) are assumed.
Fig. 1 show much longer decay times than the signal pulses. This could be related to our choice of the kernel size: a kernel size of 11 corresponds to a time window of \(11/(f_{\rm sample}/4)\approx 0.9\,\mu\)s, and thus it is difficult for the network to capture these long trends in time. This might indicate an option to improve the CNN performance in the future.
## 5 Discussion and Outlook
With the low expected rate of photons reconverted from ALPs of the order of 1 photon per day, it is of utmost importance to achieve an efficient background suppression. For this purpose, we have trained ML and DL classifiers on time lines measured with the ALPS II TES detector. Data from a calibration setup of the TES have been used for this purpose, which comprise around 1,000 real light pulses generated with a 1064 nm laser and roughly 40,000 background events collected while the TES was disconnected from the optical fiber (so-called _intrinsic_ backgrounds). All our classifiers provide a signal-and-background discrimination that results in a potential detection significance that is higher than or comparable
Figure 4: Performance of different classifiers (RF, MLP, and CNN) as a function of classification threshold \(\xi\). Events with a predicted class label \(\hat{y}_{i}\) will be classified as signal events if \(\hat{y}_{i}\geqslant\xi\). The performance is shown in terms of detection significance \(S\) (top), the background rate (center), and the analysis efficiency \(\epsilon_{a}\) (bottom). The solid lines indicate the median of the performance on five different training-test splits of the data, the shaded region represent the standard deviation. The results from the cut-based analysis are shown as a dashed line.
to a cut-based analysis presented in Ref. [11]. In particular, the classifiers based on extracted features (best-fit parameters of a parametric function describing the pulse shape) can achieve a detection significance in excess of \(6\,\sigma\), compared to roughly \(5\,\sigma\) for the cut-based analysis.
These results are very encouraging. The present work merely serves as a proof of concept, and several improvements are foreseen in the future. First, the given data set is highly imbalanced with a ratio \(\sim 40\,:\,1\) of background versus light data, which represents a challenge for the classifiers. More training data with an updated experimental setup will mitigate this problem. A larger set of available data will also reduce errors on the performance metrics, as values of \(K\,>\,5\) for \(K\)-fold cross validation can be chosen while retaining large enough data sets for each iteration. In our tests, a CNN trained on the raw time lines performed worst. The likely reasons are that a) we did not optimize the hyperparameters (e.g., number of convolutions, size of convolution kernels) and b) the CNN might suffer most from an imbalanced data set and high frequency electronic noise, and might depend on the length of the input time lines. The CAMs indicate that the rising edge of the pulse is most important for discriminating signal and background events. The rise time could be shortened further with a higher gain bandwidth product (GBWP) of the SQUIDs. However, a higher GBWP will also amplify the high frequency noise. The reasons for this noise are currently under investigation.
We plan to extend the present analysis on more data, in particular including background data while the optical fiber is connected to the TES, in order to evaluate the performance of our classifiers to reject events induced by black body radiation. Furthermore, we will perform an optimization of the hyper parameters of the CNN and will investigate the performance of autoencoders for signal and background discrimination as done in Ref. [37]. We also plan to investigate unsupervised ML techniques in order to
Figure 5: Class activation maps for 15 example time lines of light events which are classified as such by our CNN. The rising part of the pulse is most important for the classification of these samples. The time lines are shifted along the \(y\) axis for better visibility.
identify different background sources. For example, Fig. 2 suggests at least two background populations. Lastly, it will also be interesting to see how well deep neural networks perform in reconstructing different incident photon energies and whether this can improve the energy resolution of TES detectors.
**Acknowledgements**
M. M. acknowledges the support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306 and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program Grant agreement No. 948689 (AxionDM).
|
2307.11407 | Line-of-sight structure of troughs identified in Subaru Hyper
Suprime-Cam Year 3 weak lensing mass maps | We perform the weak lensing mass mapping analysis to identify troughs, which
are defined as local minima in the mass map. Since weak lensing probes
projected matter along the line-of-sight, these troughs can be produced by
single voids or multiple voids projected along the line-of-sight. To scrutinise
the origins of the weak lensing troughs, we systematically investigate the
line-of-sight structure of troughs selected from the latest Subaru Hyper
Suprime-Cam (HSC) Year 3 weak lensing data covering $433.48 \, \mathrm{deg}^2$.
From a curved sky mass map constructed with the HSC data, we identify 15
troughs with the signal-to-noise ratio higher than $5.7$ and address their
line-of-sight density structure utilizing redshift distributions of two galaxy
samples, photometric luminous red galaxies observed by HSC and spectroscopic
galaxies detected by Baryon Oscillation Spectroscopic Survey. While most of
weak lensing signals due to the troughs are explained by multiple voids aligned
along the line-of-sight, we find that two of the 15 troughs potentially
originate from single voids at redshift $\sim 0.3$. The single void
interpretation appears to be consistent with our three-dimensional mass mapping
analysis. We argue that single voids can indeed reproduce observed weak lensing
signals at the troughs if these voids are not spherical but are highly
elongated along the line-of-sight direction. | Takumi Shimasue, Ken Osato, Masamune Oguri, Rhythm Shimakawa, Atsushi J. Nishizawa | 2023-07-21T08:05:41Z | http://arxiv.org/abs/2307.11407v2 | Line-of-sight structure of troughs identified in Subaru Hyper Suprime-Cam Year 3 weak lensing mass maps
###### Abstract
We perform the weak lensing mass mapping analysis to identify _troughs_, which are defined as local minima in the mass map. Since weak lensing probes projected matter along the line-of-sight, these troughs can be produced by single voids or multiple voids projected along the line-of-sight. To scrutinise the origins of the weak lensing troughs, we systematically investigate the line-of-sight structure of troughs selected from the latest Subaru Hyper Suprime-Cam (HSC) Year 3 weak lensing data covering 433.48 deg\({}^{2}\). From a curved sky mass map constructed with the HSC data, we identify 15 troughs with signal-to-noise ratios higher than 5.7 and address their line-of-sight density structure utilizing the redshift distributions of two galaxy samples, photometric luminous red galaxies observed by HSC and spectroscopic galaxies detected by the Baryon Oscillation Spectroscopic Survey. While most of the weak lensing signals due to the troughs are explained by multiple voids aligned along the line-of-sight, we find that two of the 15 troughs potentially originate from single voids at redshift \(\sim 0.3\). The single void interpretation appears to be consistent with our three-dimensional mass mapping analysis. We argue that single voids can indeed reproduce the observed weak lensing signals at the troughs if these voids are not spherical but are highly elongated along the line-of-sight direction.
keywords: gravitational lensing: weak - large-scale structure of Universe - cosmology: observations
## 1 Introduction
The shapes and sizes of distant galaxies are distorted due to the gravitational potential sourced by the intervening matter distribution. This phenomenon is called weak gravitational lensing. In particular, the coherent shape distortions induced by the large-scale structure of the Universe are referred to as _cosmic shear_(for reviews, see Bartelmann & Schneider, 2001; Kilbinger, 2015; Mandelbaum, 2018). Since cosmic shear measurements and analyses do not rely on the uncertain relationship between galaxies and matter as adopted in galaxy clustering analysis, the matter distribution can be probed in an unbiased manner. Thus, cosmic shear is recognized as a powerful and unique approach to studying the nature of dark matter and dark energy in modern cosmology.
The standard approach to extracting cosmological information from weak lensing measurements is the correlation analysis of galaxy shapes. The widely used statistics are two-point statistics, i.e., the correlation function and power spectrum, which are sensitive to the dark matter density and the fluctuation amplitude of matter inhomogeneities and contain complete information for the Gaussian random field (Schneider et al., 2002). While the matter distribution in the very early Universe is close to Gaussian, the late-time matter density field becomes highly non-Gaussian due to the non-linear gravitational growth of the matter distribution. In order to capture the non-Gaussian information, statistics beyond two-point correlations are required. Among them, peak statistics of weak lensing mass maps (Jain & Van Waerbeke, 2000; Hamana et al., 2004; Yang et al., 2011; Liu & Haiman, 2016; Fluri et al., 2018), i.e. number density of peaks as a function of peak height, is thought to be one of the most promising statistics since weak lensing peaks are easily identified, and high peaks have clear correspondence with the density peaks of the matter distribution (see e.g., Miyazaki et al., 2018; Oguri et al., 2021). Indeed, the peak statistics have been employed to constrain cosmological parameters in weak lensing measurements (Liu et al., 2015; Kacprzak et al., 2016; Shan et al., 2018; Harnois-Deraps et al., 2021).
Because of their potential association with _voids_ (for a review, see van de Weygaert & Platen, 2011), the low-density regions in the Universe, _troughs_, which are defined as local minima in weak lensing mass maps, are also expected to convey cosmological information (Jain and Van Waerbeke, 2000; Miyazaki et al., 2002). The first identifications of voids were reported in Gregory and Thompson (1978) and Joeveer et al. (1978), followed by detections with larger spectroscopic surveys (Kirshner et al., 1981; de Lapparent et al., 1986). As a recent study, Douglass et al. (2023) present the void catalogue constructed from the Sloan Digital Sky Survey (SDSS) Main Sample in Data Release 7, in which more than 1,000 voids with radii larger than 10 \(h^{-1}\) Mpc are identified. Such a large sample of voids enables a robust measurement of the void statistics, which offers a new avenue to study modified gravity theories (e.g., Cai et al., 2015; Nadathur, 2016). The structure and abundance of voids are less sensitive to baryonic physics due to the scarcity of gas and the fact that their formation and evolution are purely driven by gravity. The clustering analysis of voids is employed to constrain the geometry of the Universe through the Alcock-Paczynski test (Lavaux and Wandelt, 2012; Hamaus et al., 2016; Mao et al., 2017; Nadathur et al., 2019; Hamaus et al., 2020) and to measure the baryon acoustic oscillation scale (Kitaura et al., 2016; Liang et al., 2016; Nadathur et al., 2019; Zhao et al., 2020, 2022). Accordingly, the same feature is expected to hold for weak lensing trough statistics, and the trough statistics contain information complementary to peak statistics (Gruen et al., 2016; Barreira et al., 2017; Coulton et al., 2020; Davies et al., 2021; Osato et al., 2021).
The voids have attracted a lot of attention due to their potential association with the _cold spot_ of the cosmic microwave background (CMB), which is a large low-temperature region located at \((l,b)\simeq(209^{\circ},-57^{\circ})\) in the Galactic coordinate. The cold spot was first reported by the Wilkinson Microwave Anisotropy Probe (Bennett et al., 2013) and further confirmed by _Planck_ (Planck Collaboration et al., 2014). One of the plausible explanations for the origin of the cold spot is a _supervoid_, which is a large-scale (\(\gtrsim 100\,h^{-1}\,\)Mpc) underdense region and possibly consists of multiple voids (Inoue and Silk, 2006). The decaying gravitational potential of the supervoid can generate the decrement of the CMB temperature through the integrated Sachs-Wolfe effect. Indeed, at the position of the cold spot, an extended void region (\(\approx 200\,h^{-1}\,\)Mpc), called the Eridanus supervoid, is identified at redshift \(z\simeq 0.2\) (Szapudi et al., 2015; Kovacs et al., 2022). However, the shallow underdensity of the Eridanus supervoid, with a density contrast of \(\delta\simeq-0.2\), can account for only 10-20 per cent of the decrement signal assuming the \(\Lambda\) cold dark matter cosmological model. A further extensive study of the supervoid region is required to fully confirm the hypothesis that the origin of the cold spot is the supervoid.
The straightforward method to find underdense regions in the Universe is based on the galaxy number density field, which is a biased tracer of matter, measured from spectroscopic surveys. However, voids are by nature regions with few or no galaxies, and thus, the identification of voids from galaxy catalogues entails large statistical uncertainty. Furthermore, different void-finding algorithms (El-Ad and Piran, 1997; Hoyle and Vogeley, 2002; Neyrinck, 2008; Sutter et al., 2015; Nadathur et al., 2019) lead to different void populations, which introduces large systematic noise. The comparison of the observed void catalogue with simulations also depends on how galaxies are populated in the simulations, and therefore is subject to the baryonic physics uncertainty.
The void search in weak lensing mass maps has the potential to identify underdense regions that cannot be detected by galaxy surveys and to settle the debate about the origin of the cold spot. However, simple analytic estimates indicate that single voids cannot produce significant weak lensing signals unless their size is extremely large, \(\gtrsim 100\,h^{-1}\,\)Mpc (Amendola et al., 1999). Therefore, it is likely that the detection of a single large underdense region is difficult with weak lensing. This is why previous work on weak lensing by voids has largely focused on the stacked weak lensing analysis of voids identified from galaxy distributions (Higuchi et al., 2013; Krause et al., 2013; Melchior et al., 2014; Clampitt and Jain, 2015; Sanchez et al., 2017; Fang et al., 2019; Vielzeuf et al., 2021). Troughs in weak lensing mass maps tend to be associated with multiple underdense regions along the line-of-sight (see, e.g., Chang et al., 2018), although possible weak lensing troughs associated with single large voids have also been identified (see, e.g., Jeffrey et al., 2021; Shimakawa et al., 2021). In either case, studies of the line-of-sight structure of weak lensing troughs have been limited to a few individual cases, and there have not been any systematic studies of the line-of-sight structures of weak lensing troughs.
In this paper, we employ the latest Subaru Hyper Suprime-Cam (HSC) Year 3 (Y3) weak lensing shape catalogue (Li et al., 2022) to systematically study line-of-sight structures of the most significant troughs identified in weak lensing mass maps. The weak lensing shape catalogue spans \(433.48\,\)deg\({}^{2}\) with a mean source galaxy number density of \(22.9\,\)arcmin\({}^{-2}\). The high number density from the deep imaging data enables us to probe the density field out to higher redshifts, and offers an ideal tool to map the cosmic web structure with high statistical significance. We study the line-of-sight structures of the most significant troughs employing two galaxy catalogues: one is the photometric luminous red galaxy (LRG) catalogue selected with the CAMIRA algorithm (Oguri, 2014; Oguri et al., 2018, 2018) and the other is the LOWZ and CMASS spectroscopic galaxy samples from SDSS Data Release 12 (Reid et al., 2016). Furthermore, we perform the three-dimensional weak lensing mass mapping (Simon et al., 2009; Oguri et al., 2018) to probe the large-scale density field around the identified troughs.
This paper is organized as follows. In Section 2, we briefly overview the basics of weak lensing and the mass mapping analysis. In Section 3, we present the weak lensing shape catalogue from HSC Y3 data and galaxy catalogues to identify the weak lensing troughs and investigate the line-of-sight structures at trough positions. In Section 4, we present the results on the identification of weak lensing troughs and line-of-sight galaxy number densities at the trough positions. We discuss our results in Section 5 and give conclusions in Section 6. Throughout this paper, we adopt a flat \(\Lambda\) cold dark matter cosmology with the matter density \(\Omega_{\rm m}=0.3\), the baryon density \(\Omega_{\rm b}=0.05\), the Hubble constant \(H_{0}=100h\,{\rm km\,s^{-1}\,Mpc^{-1}}=70\,{\rm km\,s^{-1}\,Mpc^{-1}}\), the tilt of the scalar perturbation \(n_{\rm s}=0.96\), and the amplitude of present-day matter fluctuations at the scale of \(8\,h^{-1}\,{\rm Mpc}\), \(\sigma_{8}=0.81\).
## 2 Weak lensing mass mapping with subaru HSC Y3 data
In this Section, we overview the basics of weak lensing mass mapping analysis and the HSC Y3 shear catalogue. Throughout the paper, weak lensing mass maps are constructed in a curved sky without adopting the flat sky approximation, in contrast to the previous weak lensing mass map analyses using the HSC survey data (Oguri et al., 2018, 2021; Miyazaki et al., 2018) for which the flat sky approximation has been adopted.
### Two-dimensional mass mapping
In order to formulate weak lensing, we begin by defining the lensing potential for a single source located at the comoving distance of \(\chi_{\rm s}\):
\[\psi\left(\chi_{\rm s},\mathbf{\theta}\right)=\frac{2}{c^{2}}\int_{0}^{\chi_{\rm s }}{\rm d}\chi\,\frac{f_{K}\left(\chi_{\rm s}-\chi\right)}{f_{K}\left(\chi_{ \rm s}\right)f_{K}\left(\chi\right)}\Phi(\chi,\mathbf{\theta}), \tag{1}\]
where \(\mathbf{\theta}=(\theta,\phi)\) specifies the position on the sky, \(c\) is the speed of light, \(\Phi\) is the gravitational potential, and \(f_{K}(\chi)\) is the comoving angular diameter distance with the curvature \(K\):
\[f_{K}(\chi)=\begin{cases}\sin(\sqrt{K}\chi)/\sqrt{K}&(K>0),\\ \chi&(K=0),\\ \sinh(\sqrt{-K}\chi)/\sqrt{-K}&(K<0),\end{cases} \tag{2}\]
although we note that the flat Universe (\(K=0\)) is assumed throughout the paper. The gravitational potential can be derived from the Poisson equation:
\[\nabla_{\chi}^{2}\Phi(\chi,\mathbf{\theta})=\frac{3\Omega_{\rm m}H_{0}^{2}}{2a}\delta(\chi,\mathbf{\theta}), \tag{3}\]
where \(\nabla_{\chi}\) is the differential operator in the comoving coordinate, \(a\) is the scale factor, and \(\delta(\chi,\mathbf{\theta})\) is the density contrast. The convergence \(\kappa\) and shear \(\gamma\) fields, which correspond to the isotropic and anisotropic deformations of images, respectively, are introduced in the curved sky (Heavens, 2003; Castro et al., 2005; Chang et al., 2018; Jeffrey et al., 2021):
\[\kappa=\frac{1}{4}\left(\eth\bar{\eth}+\bar{\eth}\eth\right)\psi,\ \gamma=\frac{1}{2}\eth\eth\psi, \tag{4}\]
where \(\eth\) and \(\bar{\eth}\) are the spin raising and lowering operators acting on spin-\(s\) spherical harmonics \({}_{s}Y_{\ell m}\). The convergence field \(\kappa(\mathbf{\theta},\chi_{\rm s})\) for a single source at the comoving distance \(\chi_{\rm s}\) can be computed as
\[\kappa(\mathbf{\theta},\chi_{\rm s})=\frac{3H_{0}^{2}\Omega_{\rm m}}{2c^{2}}\int_{0}^{\chi_{\rm s}}\frac{\mathrm{d}\chi}{a(\chi)}\frac{f_{K}(\chi_{\rm s}-\chi)f_{K}(\chi)}{f_{K}(\chi_{\rm s})}\delta(f_{K}(\chi)\mathbf{\theta},\chi). \tag{5}\]
Hence, the convergence field is the density contrast projected along the line-of-sight with a distance kernel, and is referred to as the _mass map_. In real measurements, we use multiple source galaxies with different redshifts, and therefore the observed convergence field \(\kappa(\mathbf{\theta})\) should be weighted with the redshift distribution:
\[\kappa(\mathbf{\theta})=\int_{0}^{\chi_{\rm H}}\!\!\mathrm{d}\chi_{\rm s}\,p(\chi_ {\rm s})\kappa(\mathbf{\theta},\chi_{\rm s}), \tag{6}\]
where \(\chi_{\rm H}\) is the comoving distance to the horizon and \(p(\chi_{\rm s})\) is the redshift distribution of source galaxies normalised as \(\int_{0}^{\chi_{\rm H}}\!\!\mathrm{d}\chi_{\rm s}\,p(\chi_{\rm s})=1\). The expression can be recast as
\[\kappa(\mathbf{\theta})=\frac{3H_{0}^{2}\Omega_{\rm m}}{2c^{2}}\int_{0}^{\chi_{ \rm H}}\!\!\frac{\mathrm{d}\chi}{a(\chi)}q(\chi)f_{K}(\chi)\delta(f_{K}(\chi) \mathbf{\theta},\chi), \tag{7}\]
where the lens efficiency \(q(\chi)\) is defined as
\[q(\chi)=\int_{\chi}^{\chi_{\rm H}}\!\!\mathrm{d}\chi_{\rm s}\,p(\chi_{\rm s}) \frac{f_{K}(\chi_{\rm s}-\chi)}{f_{K}(\chi_{\rm s})}. \tag{8}\]
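As a small numerical illustration (a sketch under the flat-universe assumption \(f_{K}(\chi)=\chi\), with a placeholder tabulated source distribution), the lens efficiency of Eq. (8) can be evaluated as:

```python
import numpy as np

def lens_efficiency(chi, chi_s_grid, p_chi_s):
    # Eq. (8) for a flat universe:
    # q(chi) = int_chi^{chi_H} dchi_s p(chi_s) (chi_s - chi)/chi_s
    integrand = np.where(chi_s_grid > chi,
                         p_chi_s * (chi_s_grid - chi) / chi_s_grid, 0.0)
    return np.trapz(integrand, chi_s_grid)

# usage: chi_s in Mpc/h, p(chi_s) normalised so that trapz(p, chi_s) = 1
chi_s = np.linspace(1.0, 4500.0, 500)
p = np.exp(-0.5 * ((chi_s - 2200.0) / 600.0) ** 2)  # assumed toy distribution
p /= np.trapz(p, chi_s)
q = np.array([lens_efficiency(c, chi_s, p) for c in np.linspace(0, 4000, 9)])
```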
The shear and convergence fields are not independent but are related in harmonic space. The harmonic expansions of the lensing potential, convergence, and shear fields are given as
\[\psi =\sum_{\ell m}\tilde{\psi}_{\ell m}\,{}_{0}Y_{\ell m}(\theta,\phi), \tag{9}\] \[\kappa =\sum_{\ell m}\tilde{\kappa}_{\ell m}\,{}_{0}Y_{\ell m}(\theta,\phi),\] (10) \[\gamma =\sum_{\ell m}\tilde{\gamma}_{\ell m}\,{}_{2}Y_{\ell m}(\theta,\phi), \tag{11}\]
where \(\tilde{\psi}\), \(\tilde{\kappa}\), and \(\tilde{\gamma}\) are the lensing potential, convergence, and shear in harmonic space, respectively. By plugging the harmonic expansions into Eq. (4), the convergence and shear fields and the lensing potential are related linearly:
\[\tilde{\kappa}_{\ell m} =-\frac{1}{2}\ell(\ell+1)\tilde{\psi}_{\ell m}, \tag{12}\] \[\tilde{\gamma}_{\ell m} =\frac{1}{2}\sqrt{(\ell-1)\ell(\ell+1)(\ell+2)}\tilde{\psi}_{\ell m}\] (13) \[=-\sqrt{\frac{(\ell-1)(\ell+2)}{\ell(\ell+1)}}\tilde{\kappa}_{ \ell m}. \tag{14}\]
Thus, the convergence field, or equivalently the mass map, can be obtained once the shear field is estimated from the shape catalogue. The real and imaginary parts of the convergence and shear fields in harmonic space are referred to as the E-mode and B-mode, respectively:
\[\tilde{\kappa}_{\ell m} =\tilde{\kappa}_{E,\ell m}+i\tilde{\kappa}_{B,\ell m}, \tag{15}\] \[\tilde{\gamma}_{\ell m} =\tilde{\gamma}_{E,\ell m}+i\tilde{\gamma}_{B,\ell m}. \tag{16}\]
Since the weak lensing effect is induced by the scalar potential \(\psi\), the E-mode convergence \(\kappa_{E}\) dominates over the B-mode convergence \(\kappa_{B}\). However, systematic effects such as an incomplete point spread function correction result in a non-zero B-mode convergence. Thus, the B-mode convergence should be consistent with zero within the statistical uncertainty and can be used as a null test.
In the practical analysis of weak lensing, we estimate the shear field \(\tilde{\gamma}_{\alpha}(\mathbf{\theta})\) as
\[\tilde{\gamma}_{\alpha}(\mathbf{\theta})=\frac{\sum_{i}w_{i}(\gamma_{\alpha}(\mathbf{ \theta}_{i})-c_{\alpha,i})W(\mathbf{\theta};\mathbf{\theta}_{i})}{\sum_{i}w_{i}(1+m_{i} )W(\mathbf{\theta};\mathbf{\theta}_{i})}, \tag{17}\]
where the sum runs over all source galaxies, \(\gamma_{\alpha}(\mathbf{\theta}_{i})\) is the local shear estimate at the position of the \(i\)-th galaxy, \(w_{i}\) is the lensing weight (Mandelbaum, 2018), \(c_{\alpha,i}\) is the additive bias, \(m_{i}\) is the multiplicative bias (see Section 2.3), and \(W\) is the Gaussian smoothing kernel. The local shear field is estimated as
\[\gamma_{\alpha}(\mathbf{\theta}_{i})=\frac{e_{\alpha}(\mathbf{\theta}_{i})}{2\Re}, \tag{18}\]
where \(e_{\alpha}(\mathbf{\theta}_{i})\) is the galaxy shape ellipticity and \(\Re\) is the shear responsivity given as
\[\Re=1-\frac{\sum_{i}w_{i}e_{\rm rms,\,i}^{2}}{\sum_{i}w_{i}}, \tag{19}\]
where \(e_{\rm rms,\,i}\) is the intrinsic shape dispersion. The smoothed shear field is estimated from the shape catalogue (Eq. 17) and pixellated based on the HEALPix (Gorski et al., 2005) pixellization. Spherical harmonic coefficients of the estimated shear are then calculated with the map2alm_spin routine of healpy (Zonca et al., 2019). Finally, the convergence field can be computed from the smoothed shear field in harmonic space through the relation (Eq. 14). We mask pixels with few galaxies because the shear estimate in such pixels suffers from a large statistical uncertainty. In practice, we construct a source galaxy number density map weighted with the lensing weights, and pixels with less than 20 per cent of the mean of this number density map are masked.
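The procedure could be sketched with healpy along the following lines. This is a schematic illustration rather than the actual pipeline: the catalogue columns are placeholders, and the overall sign convention of `map2alm_spin` should be checked against the definitions adopted here.

```python
# Schematic mass mapping with healpy: bin the bias-corrected, weighted shear
# (Eqs. 17-19) on a HEALPix grid, smooth, and convert the spin-2 harmonics to
# convergence via Eq. (14). Catalogue arrays (ra, dec, e1, e2, w, m, c1, c2,
# e_rms) are assumed inputs.
import numpy as np
import healpy as hp

def shear_maps(ra, dec, e1, e2, w, m, c1, c2, e_rms, nside, fwhm_arcmin=40.0):
    R = 1.0 - np.sum(w * e_rms**2) / np.sum(w)         # responsivity, Eq. (19)
    g1, g2 = e1 / (2.0 * R) - c1, e2 / (2.0 * R) - c2  # Eq. (18), additive bias
    pix = hp.ang2pix(nside, ra, dec, lonlat=True)
    npix = hp.nside2npix(nside)
    num1 = np.bincount(pix, weights=w * g1, minlength=npix)
    num2 = np.bincount(pix, weights=w * g2, minlength=npix)
    den = np.bincount(pix, weights=w * (1.0 + m), minlength=npix)
    fwhm = np.radians(fwhm_arcmin / 60.0)
    # smoothing numerator and denominator separately implements Eq. (17)
    num1, num2 = hp.smoothing(num1, fwhm=fwhm), hp.smoothing(num2, fwhm=fwhm)
    den = hp.smoothing(den, fwhm=fwhm)
    good = den > 0
    gamma1, gamma2 = np.zeros(npix), np.zeros(npix)
    gamma1[good], gamma2[good] = num1[good] / den[good], num2[good] / den[good]
    return gamma1, gamma2

def kappa_from_shear(gamma1, gamma2, nside, lmax):
    alm_e, alm_b = hp.map2alm_spin([gamma1, gamma2], spin=2, lmax=lmax)
    ell = np.arange(lmax + 1, dtype=float)
    fl = np.zeros(lmax + 1)
    fl[2:] = -np.sqrt(ell[2:] * (ell[2:] + 1) / ((ell[2:] - 1) * (ell[2:] + 2)))
    kappa_e = hp.alm2map(hp.almxfl(alm_e, fl), nside)  # E mode: the mass map
    kappa_b = hp.alm2map(hp.almxfl(alm_b, fl), nside)  # B mode: null test
    return kappa_e, kappa_b
```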
The size of the Gaussian smoothing kernel \(W\) should be determined with careful consideration. In principle, this smoothing kernel size sets the minimum size of troughs or voids we can find in mass maps. On one hand, a large kernel size is desired if we are interested in finding large voids or supervoids, but on the other hand, the effects of residual systematics arising from imperfect shape measurements on weak lensing mass maps become more important when the smoothing size is larger (Oguri et al., 2018; Li et al., 2022). A large smoothing size also reduces the effective search
area significantly, because many troughs touch the edge of the survey footprint and therefore will be discarded. In addition, an advantage of the HSC survey is its high number density of source galaxies, which enables us to accurately construct mass maps down to smaller scales. In this paper, we choose a Gaussian smoothing kernel with a full-width at half-maximum (FWHM) scale of \(40\,\mathrm{arcmin}\) (\(\simeq 11\,h^{-1}\) Mpc at \(z=0.2\)) as a compromise between the size of troughs and voids and possible systematic errors in weak lensing mass maps. We however note that we repeated our analysis with several different smoothing sizes to confirm that our results are insensitive to the choice of the smoothing size. In our analysis, we adopt the HEALPix pixelization with \(N_{\mathrm{side}}=512\), which is sufficient to resolve the smoothing kernel.
In order to estimate the statistical significance of weak lensing mass maps, we construct random noise maps, i.e., mass maps computed from randomly rotated galaxy shapes. In these random noise maps, the cosmological signal is erased and only the pure noise signal remains. We create 100 random mass maps by changing the random seeds and measure the standard deviation among the 100 realisations, which corresponds to the statistical noise of the mass maps. We employ the signal-to-noise ratio (S/N) map, which is the mass map divided by the noise map. We define troughs as pixels in the S/N map that are lower than all neighbour pixels. We note that the convergence at troughs is negative in most cases, and correspondingly S/N values of troughs are negative.
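A sketch of the random-rotation noise maps and the trough finder described above, assuming the mass-mapping helpers from the previous sketch: rotating each spin-2 ellipticity by a random phase erases the cosmological signal while preserving the noise properties.

```python
import numpy as np
import healpy as hp

def rotate_shapes(e1, e2, seed):
    # spin-2 rotation: e -> e * exp(2i phi) with random phi
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * rng.uniform(0.0, 2.0 * np.pi, size=e1.size))
    e_rot = (e1 + 1j * e2) * phase
    return e_rot.real, e_rot.imag

def find_troughs(snr_map, nside, mask):
    # a trough is an unmasked pixel lower than all of its HEALPix neighbours
    troughs = []
    for ipix in np.flatnonzero(mask):
        neigh = hp.get_all_neighbours(nside, ipix)
        neigh = neigh[neigh >= 0]  # -1 flags non-existent neighbours
        if np.all(snr_map[ipix] < snr_map[neigh]):
            troughs.append(ipix)
    return np.array(troughs)
```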
Figure 1 shows the probability distribution functions (PDFs) of the S/N maps of the E-mode and B-mode convergence, where the E-mode convergence map corresponds to the mass map. For the B-mode convergence map, the PDF is well approximated by a normal distribution over the full range. While the width of the best-fitting normal distribution is slightly larger than the \(\sigma=1\) expected when the map is dominated by the shape noise, such broadening of the B-mode PDF is also seen in the analysis of mock weak lensing shape catalogues due to the boundary effect that partly mixes E- and B-mode signals (Li et al., 2022). We find that the E-mode PDF near the peak is much broader than the B-mode PDF, indicating that the weak lensing signal is clearly detected. Furthermore, the E-mode PDF is highly skewed at \(\mathrm{S/N}\simeq 10\), which corresponds to weak lensing peaks created by massive structures such as galaxy clusters.
### Three-dimensional mass mapping
The weak lensing mass map is the density distribution convoluted with the lensing kernel along the line-of-sight direction, and hence, the line-of-sight information of the density structure is not accessible due to the projection. On the other hand, the three-dimensional structure of the density distribution can be reconstructed with _lensing tomography_(Hu, 1999; Hu & Keeton, 2002) technique, where the source galaxy sample is divided into subsamples according to the photometric redshifts and mass maps are created for individual subsamples. Since the resultant mass maps have different lensing kernels, the line-of-sight structures can in principle be reconstructed.
For the reconstruction of three-dimensional mass maps, we follow the methodology presented in Simon et al. (2009) and Oguri et al. (2018). Specifically, we consider convergence fields for source galaxies with redshifts in the ranges of \([z_{i,\mathrm{min}},z_{i,\mathrm{max}}]\), where the subscript \(i=1,\ldots,N_{\kappa}\) denotes the source redshift bin and \(N_{\kappa}\) is the number of the source redshift bins. In this paper, we define the source redshift bin as the linear spacing with respect to the comoving distance:
\[\chi(z_{i,\mathrm{min}}) =(300+300\times i)\;h^{-1}\;\mathrm{Mpc}, \tag{20}\] \[\chi(z_{i,\mathrm{max}}) =\chi(z_{i,\mathrm{min}})+300\;h^{-1}\;\mathrm{Mpc}, \tag{21}\]
with \(N_{\kappa}=14\). The convergence \(\kappa_{i}\) at the \(i\)-th source redshift bin is expressed as the weighted sum of density contrasts in lens redshift bins. We consider density contrasts with redshifts in the range of \([z_{j,\mathrm{min}},z_{j,\mathrm{max}}]\), where the subscript \(j=1,\ldots,N_{\delta}\) denotes the lens redshift bin. In this paper, we define the lens redshift bin as the linear spacing with respect to the comoving distance:
\[\chi(z_{j,\mathrm{min}}) =(100+200\times j)\;h^{-1}\;\mathrm{Mpc}, \tag{22}\] \[\chi(z_{j,\mathrm{max}}) =\chi(z_{j,\mathrm{min}})+200\;h^{-1}\;\mathrm{Mpc}, \tag{23}\]
with \(N_{\delta}=10\). Using this expression of the three-dimensional density field, the convergence \(\kappa_{i}\) at the \(i\)-th source redshift bin is written as
\[\kappa_{i}\approx\sum_{z_{j}<z_{i}}\left[\int_{z_{j,\mathrm{min}}}^{z_{j,\mathrm{max}}}\frac{c\,\bar{\rho}(z)}{H(z)(1+z)\Sigma_{\mathrm{crit},i}(z)}\,\mathrm{d}z\right]\delta_{j}\equiv\sum_{z_{j}<z_{i}}Q_{ij}\delta_{j}, \tag{24}\]
where \(H(z)\) is the Hubble parameter. The summation runs over bins
Figure 1: Probability distribution functions of E-mode (_upper_) and B-mode (_lower_) S/N maps. The orange solid line shows the normal distribution with the sample mean and standard deviation.
that satisfy \(z_{j}<z_{i}\) because only the foreground structure contributes to the convergence. To put it another way, we set \(Q_{ij}=0\) if \(z_{j}>z_{i}\). The critical surface mass density \(\Sigma_{{\rm crit},i}\) for the \(i\)-th source redshift bin is approximated as
\[\Sigma_{{\rm crit},i}^{-1}(z)\approx\frac{4\pi G}{c^{2}}f_{K}\left(\chi(z)\right)\frac{f_{K}\left(\chi(\bar{z}_{i})-\chi(z)\right)}{f_{K}\left(\chi(\bar{z}_{i})\right)}, \tag{25}\]
where \(\bar{z}_{i}=(z_{i,\rm min}+z_{i,\rm max})/2\) is the mean redshift of the \(i\)-th source redshift bin.
In principle, the three-dimensional density distribution \(\delta_{j}\) can be obtained by inverting the linear equation Eq. (24). However, this operation is numerically unstable in general, and thus, we employ Wiener filtering to suppress the noise in harmonic space. To this end, we compute the signal \(S_{lm}\) and noise \(N_{lm}\) power spectra for the \(l\)-th and \(m\)-th redshift bins:
\[S_{lm}=\delta_{lm}^{K}\frac{1}{(\Delta\chi_{l})^{2}}\int_{z_{l,\rm min}}^{z_{l,\rm max}}\frac{c\,{\rm d}z}{H(z)}\frac{1}{\chi^{2}(z)}P_{\rm m}(k=\ell/\chi,z), \tag{26}\] \[N_{lm}=\delta_{lm}^{K}\frac{\sigma_{e}^{2}}{\bar{n}_{l}}, \tag{27}\]
where \(\delta_{lm}^{K}\) is the Kronecker delta, \(\Delta\chi_{l}\approx(c/H(\bar{z}_{l}))(z_{l,\rm max}-z_{l,\rm min})\) is the width in comoving distance of the \(l\)-th redshift bin, \(P_{\rm m}(k,z)\) is the non-linear matter power spectrum computed with the _halofit_ prescription (Smith et al., 2003) with the updated parameters (Takahashi et al., 2012), \(\sigma_{e}\approx 0.35\) is the root mean square of the ellipticity computed directly from the weak lensing shape catalogue, and \(\bar{n}_{l}\) is the mean number density of source galaxies in the \(l\)-th redshift bin, which is again computed directly from the weak lensing shape catalogue. The reconstructed three-dimensional density field in harmonic space is given by the minimum-variance estimator:
\[\tilde{\delta}_{\ell m}=\tilde{W}(\ell)D(\ell)\left[\alpha S^{-1}+Q^{\rm T}N^ {-1}Q\right]^{-1}Q^{\rm T}N^{-1}\tilde{\gamma}_{\ell m}, \tag{28}\]
where \(\tilde{W}(\ell)\) is the Gaussian filter in harmonic space, \(D(\ell)\equiv-\sqrt{\ell(\ell+1)/((\ell-1)(\ell+2))}\) (Eq. 14), and \(\tilde{\gamma}_{\ell m}\equiv(\tilde{\gamma}_{1,\ell m},\ldots,\tilde{\gamma}_{N_{\kappa},\ell m})\) is the shear field in harmonic space binned in source redshift. We have introduced the parameter \(\alpha\), which controls the regularization with the signal power spectrum, and we adopt \(\alpha=0.03\) following Oguri et al. (2018). Finally, the three-dimensional density field \(\delta_{j}\) can be obtained by applying the inverse spherical harmonic transformation to \(\tilde{\delta}_{\ell m}\equiv(\tilde{\delta}_{1,\ell m},\ldots,\tilde{\delta}_{N_{\delta},\ell m})\).
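Schematically, Eq. (28) amounts to applying an \((N_{\delta}\times N_{\kappa})\) filter matrix to the binned shear coefficients at each multipole. A minimal numpy sketch, assuming \(Q\), \(S(\ell)\), and \(N^{-1}\) from Eqs. (24), (26), and (27) have been precomputed, might read:

```python
import numpy as np
import healpy as hp

def reconstruct_delta(gamma_alm, Q, S_of_ell, Ninv, lmax, alpha=0.03,
                      fwhm_arcmin=40.0):
    # gamma_alm: complex array (N_kappa, n_alm) of E-mode shear alms per
    # source bin; S_of_ell(ell) returns the (N_delta, N_delta) signal
    # covariance; Ninv is the inverse (N_kappa, N_kappa) noise matrix.
    n_kappa, n_delta = Q.shape
    delta_alm = np.zeros((n_delta, gamma_alm.shape[1]), dtype=complex)
    bl = hp.gauss_beam(np.radians(fwhm_arcmin / 60.0), lmax=lmax)  # W(ell)
    QtN = Q.T @ Ninv
    for ell in range(2, lmax + 1):
        D = -np.sqrt(ell * (ell + 1) / ((ell - 1) * (ell + 2)))
        filt = bl[ell] * D * np.linalg.solve(
            alpha * np.linalg.inv(S_of_ell(ell)) + QtN @ Q, QtN)
        for m in range(ell + 1):
            idx = hp.Alm.getidx(lmax, ell, m)
            delta_alm[:, idx] = filt @ gamma_alm[:, idx]
    # each row of delta_alm can be mapped back with hp.alm2map
    return delta_alm
```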
### Subaru Hyper Suprime-Cam weak lensing shape catalogue
The Hyper Suprime-Cam (HSC) installed on the Subaru Telescope is a wide-field imaging camera (Miyazaki et al., 2018) with a field-of-view of \(1.77\,{\rm deg}^{2}\). In this work, we use the three-year (Y3) weak lensing shape catalogue based on the \(i\)-band coadded images from the wide layer of the HSC Strategic Survey Program (HSC-SSP; Aihara et al., 2018, 2019, 2022) taken from March 2014 to April 2019. The full details of the production pipeline of the Y3 weak lensing shape catalogue are described in Li et al. (2022). Here, we overview the basics of the shape catalogue. The galaxy ellipticities \(e\) are estimated with the GalSim code (Rowe et al., 2014) based on the re-Gaussianization method (Hirata and Seljak, 2003):
\[e=\frac{1-(b/a)^{2}}{1+(b/a)^{2}}(\cos 2\phi,\sin 2\phi), \tag{29}\]
where \(b/a\) is the minor-to-major axis ratio of the galaxy isophotes and \(\phi\) is the polar angle measured from the major axis. The re-Gaussianization method is subject to shear estimation biases, which are corrected by introducing the multiplicative bias \(m\) and additive biases \(c_{1,2}\). These bias terms are calibrated with image simulations as functions of galaxy properties, e.g., signal-to-noise ratio and redshift.
We only consider the area where the data in all five HSC broadbands \((g,r,i,z,y)\) reach the full depth and the error of the point spread function is small. The total area of the catalogue is \(433.48\,{\rm deg}^{2}\), which is divided into six patches: XMM, VVDS, GAMA09H, GAMA15H, WIDE12H, and HECTOMAP. Figure 2 shows the S/N map derived from the HSC Y3 shape catalogue.
The photometric redshifts of source galaxies in the Y3 shape catalogue are estimated with various approaches (for the HSC Y1 weak lensing analysis, see Tanaka et al., 2018). We adopt the dNNz method (Nishizawa et al., in prep.), which utilizes neural networks with inputs of cmodel magnitudes, sizes, and point spread function matched aperture magnitudes in all five HSC bands. For each galaxy, the probability distribution function of the redshift in the range of \(0<z<7\) with 100 bins is computed. The performance is evaluated with a test sample, which consists of 10 per cent of the whole sample and is not used in the training process; the bias is \(<10^{-4}\), the scatter is 3 per cent, and the outlier fraction is \(<10\) per cent.
## 3 Photometric and spectroscopic galaxy catalogues
In order to study the line-of-sight density structures at the positions of troughs in detail, we utilize three-dimensional galaxy distributions, which are biased tracers of the large-scale structure, adopting two different galaxy catalogues. First, we employ the photometric LRG catalogue constructed from the HSC photometric data. Since only photometric redshifts are available for the LRG catalogue, the uncertainty of the line-of-sight distance of galaxies is relatively large. On the other hand, an advantage of the HSC photometric galaxy catalogue is that it uniformly covers the entire weak lensing mass map region with a high galaxy number density out to a redshift of \(z\approx 1.2\), enabling us to address the line-of-sight density structures with less statistical noise. Next, the spectroscopic galaxy samples from SDSS are used. This spectroscopic catalogue is complementary to the photometric LRG catalogue in the sense that the redshift is robustly determined with spectroscopy and hence the line-of-sight structure is not smeared by the redshift uncertainty. On the other hand, the spectroscopic catalogue contains fewer galaxies over a narrower range of redshifts (\(0.1<z<0.8\)). Leveraging these two catalogues, we investigate the three-dimensional structures of high-significance troughs in weak lensing mass maps.
### Photometric galaxy catalogue
First, we employ the photometric LRG catalogue in the HSC survey regions identified using the stellar population synthesis model fitting of galaxy colours employed in the Cluster-finding Algorithm based on Multi-band Identification of Red-sequence gAlaxies (CAMIRA; Oguri, 2014; Oguri et al., 2018, 2018). With the help of the deep multi-band imaging by HSC, the photometric LRG catalogue is constructed out to a redshift of \(z\sim 1\) with a redshift accuracy of \(\Delta z/(1+z)\sim 0.03\) (Oguri et al., 2018; Ishikawa et al., 2021). We use the photometric LRG catalogue based on the latest internal HSC-SSP data, which contains 5,479,879 LRGs in the redshift range of \(0.05<z<1.25\). The redshift distribution of the photometric LRGs is shown in Figure 3. Although the redshift accuracy of the photometric catalogue is not as high as that of the spectroscopic catalogue, its high number density is useful for a robust investigation of the density structure along the line-of-sight at the positions of weak lensing troughs.
### Spectroscopic galaxy catalogue
Next, we employ a spectroscopic galaxy catalogue that provides highly accurate galaxy redshifts. To this end, we use the galaxy catalogue of the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) DR12 data (Reid et al., 2016). In particular, the CMASS and LOWZ galaxy samples are employed to address the line-of-sight density structures. These two catalogues are constructed based on different colour and magnitude cuts (Eisenstein et al., 2001; Cannon et al., 2006). These selection criteria are designed so that the redshift ranges of CMASS and LOWZ are \(0.4<z<0.8\) and \(z\leq 0.4\), respectively. The effective areas of CMASS and LOWZ are 9376 deg\({}^{2}\) and 8337 deg\({}^{2}\), respectively. We note that the HSC and BOSS survey regions almost fully overlap. The total number of galaxies is 777,202 for CMASS and 361,762 for LOWZ. Figure 4 shows the redshift distributions of the CMASS and LOWZ galaxy samples. Although the number densities of the CMASS and LOWZ samples are substantially lower than that of the HSC photometric LRG sample, the precise redshift determination of the CMASS and LOWZ samples complements the analysis with the HSC photometric LRG sample.
## 4 Results
In this Section, we present troughs identified in mass maps constructed from the HSC Y3 shape catalogue. We then measure the radial profiles of the convergence for representative troughs to see how the underdense regions extend in the sky. Next, we measure the redshift distributions of photometric LRGs and CMASS/LOWZ galaxies at the positions of selected troughs to investigate whether the trough originates from single or multiple underdense regions along the line-of-sight. Finally, we present the three-dimensional mass maps around the troughs and discuss their consistency with the galaxy distributions.
Figure 3: The redshift distribution of the number density of the HSC photometric LRGs.
Figure 2: The S/N maps of E-mode convergence fields for six HSC survey patches: XMM, VVDS, GAMA09H, GAMA15H, WIDE12H, HECTOMAP.
### Properties of identified troughs
First, we present the S/N and positions of troughs identified in our analysis in descending order of significance in Table 1. The possible origin of each trough, i.e. single void or alignment of multiple voids along the line-of-sight, can be discriminated by the redshift distribution of the photometric LRGs and the CMASS/LOWZ galaxies at the position of the trough, which will be shown in Section 4.3. Some troughs are located close to the edge of the survey region; these troughs may be spurious because the mass map is noisier due to the mixing of E- and B-mode near the edge. We classify troughs whose distance to the nearest edge is smaller than the smoothing length as "Edge origin".
The redshift distribution describes the number density at the trough position divided by the mean number density as a function of redshift, which we refer to as the line-of-sight number density contrast. Specifically, it is defined as \(n_{\rm trough}/n_{\rm mean}\), where \(n_{\rm trough}(z_{i})\) is the number of galaxies within the \(z_{i}\) bin and within the top-hat aperture radius of 10 arcmin (50 arcmin) for HSC photometric LRGs (CMASS/LOWZ galaxies1) from the centre of the trough, divided by the aperture area. The mean number density \(n_{\rm mean}(z_{i})\) for HSC photometric LRGs is calculated as the number of galaxies in each redshift bin divided by the total survey area. For CMASS/LOWZ galaxies, random catalogues are provided in SDSS DR12. We employ the random catalogues to compute the mean number density in order to incorporate the survey mask effect and the completeness of the CMASS/LOWZ galaxies. The redshift bins are equally spaced in the range of \(0.1<z<1.2\) with 10 bins. The error is estimated as Poisson noise; a sketch of this measurement is given below.
Footnote 1: The aperture radius of 50 arcmin for CMASS/LOWZ galaxies is much larger than the fibre collision scale \(\sim\) 1 arcmin.
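A minimal sketch of this measurement, with placeholder catalogue arrays and the per-bin mean density \(n_{\rm mean}\) precomputed, might look as follows.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def density_contrast(ra, dec, z, trough_ra, trough_dec, n_mean, z_edges,
                     aperture_arcmin=10.0):
    # Line-of-sight number density contrast n_trough(z_i)/n_mean(z_i)
    # within a top-hat aperture around a trough, with Poisson errors.
    gal = SkyCoord(ra=ra * u.deg, dec=dec * u.deg)
    cen = SkyCoord(ra=trough_ra * u.deg, dec=trough_dec * u.deg)
    inside = gal.separation(cen) < aperture_arcmin * u.arcmin
    counts, _ = np.histogram(z[inside], bins=z_edges)
    area = np.pi * (aperture_arcmin / 60.0) ** 2  # aperture area in deg^2
    n_trough = counts / area
    return n_trough / n_mean, np.sqrt(counts) / area / n_mean
```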
We classify troughs into two different classes according to their origin: "single-void _candidate_" or "multiple-voids". We define the single-void origin as a trough where both the LRG and CMASS/LOWZ catalogues exhibit a single underdense region deeper than \(4\sigma\) with respect to the mean in the line-of-sight redshift distribution, and the multiple-voids origin as a trough having multiple underdense regions. Here, we regard the single-void origin troughs as _candidates_ because the line-of-sight density structure probed by the galaxy samples employed in this analysis has large uncertainties. In order to identify the void origins, we mainly use the photometric HSC catalogue because the CMASS/LOWZ catalogue does not fully cover the HSC Y3 weak lensing survey footprint. We use CMASS/LOWZ galaxies for cross-checking in the region where they are available. Note that the LOWZ and CMASS survey regions are not fully overlapped, and as a result, the number densities of spectroscopic galaxies can be extremely small for some of the troughs. Excluding troughs near the edge, there are four troughs with \(|\rm S/N|>7\), all of which turn out to be of multiple-voids origin. By lowering the \(|\rm S/N|\) threshold, we find 15 troughs with \(|\rm S/N|>5.7\), two of which are single-void origin candidates.
In what follows, we present the line-of-sight properties of the two most prominent single-void origin candidates, T7 and T23. For comparison, we also show the line-of-sight properties of T2 and T9 as representative examples of multiple-voids origin troughs. Figures 5 and 6 show the S/N maps around these troughs. It is seen that these troughs are located far (\(>\) 1 deg) from the survey edge, and thus, they are not likely to be artefacts.
### Radial profile of the convergence around troughs
We measure radial profiles of the convergence centred at the troughs, as sketched below. The radial profile is derived by computing the mean of the convergence in annuli, which are equally spaced in the range of \(0<\theta\,[{\rm arcmin}]<110\) with 19 bins. The measured radial profiles are shown in Figures 7 and 8. The decrement of the convergence extends to \(\approx 1\) deg, which is larger than the smoothing scale of 40 arcmin in FWHM, implying that the troughs originate from spatially extended underdense regions.
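This measurement can be sketched on the HEALPix maps as follows; the inputs are placeholders.

```python
import numpy as np
import healpy as hp

def radial_profile(kappa_map, mask, nside, ra_deg, dec_deg,
                   n_bins=19, theta_max_arcmin=110.0):
    # mean convergence in annuli equally spaced in 0 < theta < 110 arcmin
    centre = hp.ang2vec(ra_deg, dec_deg, lonlat=True)
    vecs = np.array(hp.pix2vec(nside, np.arange(hp.nside2npix(nside))))
    sep = np.degrees(np.arccos(np.clip(centre @ vecs, -1.0, 1.0))) * 60.0
    edges = np.linspace(0.0, theta_max_arcmin, n_bins + 1)
    prof = np.array([kappa_map[mask & (sep >= lo) & (sep < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
    return 0.5 * (edges[:-1] + edges[1:]), prof
```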
### Line-of-sight structure probed by galaxy distributions
In order to study the line-of-sight density structure of each trough, we utilize the galaxy distribution, which is a biased tracer of the underlying dark matter distribution. Specifically, we employ HSC photometric LRGs and CMASS/LOWZ catalogues, which are described in Section 3, and visually inspect line-of-sight distributions of HSC photometric LRGs and CMASS/LOWZ galaxies around the troughs.
Figures 9 and 10 show the line-of-sight number density contrasts of HSC photometric LRGs and CMASS/LOWZ galaxies. A strong depression appears for both LRGs and CMASS/LOWZ galaxies at \(z\simeq 0.2\) for T7 and \(z\simeq 0.3\) for T23, which makes these troughs strong candidates for single-void origin troughs. On the other hand, both T2 and T9 have multiple decrements at different redshifts in their number density contrasts. Specifically, there are decrements at \(z\simeq 0.2\) and \(z\simeq 0.5\) for T2 and at \(z\simeq 0.3\) and \(z\simeq 0.6\) for T9. These features indicate that the trough signals of T2 and T9 are induced by multiple voids aligned along the line-of-sight.
### Three-dimensional mass map around troughs
An independent and complementary approach to probing the line-of-sight density structure is the three-dimensional mass mapping formulated in Section 2.2. This method does not require external data sets such as HSC photometric LRG or CMASS/LOWZ catalogues. Also, it is not affected by the uncertainty of the connection between galaxy and dark matter distributions. However, reconstructed three-dimensional mass maps tend to be noisy, and our reconstruction method described in Section 2.2 suffers from smearing along the
Figure 4: The redshift distributions of the number densities of LOWZ (_green dashed_), CMASS (_orange dotted_), and the sum of LOWZ and CMASS samples (_blue solid_) in SDSS DR12.
line-of-sight direction as well as the redshift bias in reconstructed mass maps (Simon et al., 2009; Oguri et al., 2018). Nevertheless, we study three-dimensional mass maps around single-void origin candidates for further cross-checking.
Figure 11 shows the reconstructed three-dimensional mass density fields around the regions of the single-void origin candidates, T7 and T23. According to the analysis of line-of-sight number density contrasts, the voids that produce trough signals at T7 and T23 are located at \(z\simeq 0.2\) and \(z\simeq 0.3\), respectively. For the trough T23, there is a clear correspondence in the three-dimensional mass map; the mass density field at \(z=0.25\)-0.33 exhibits low-density structures around the trough. On the other hand, the source void of T7 is ambiguous in the three-dimensional mass maps. The density field at \(z=0.17\)-0.25 around the trough is not underdense; rather, low-density regions are found at \(z=0.10\)-0.17, \(z=0.25\)-0.33, and \(z=0.33\)-0.41.
There is an angular offset between the trough position and the low-density region found in the three-dimensional mass maps. The low-density regions appear at \(z=0.10\)-0.25 on the northern side of the centre of T7, but also on the southern side at \(z=0.25\)-0.41. A similar feature appears for T23; the low-density regions are found at \(z=0.03\)-0.10 on the northern side and at \(z=0.25\)-0.41 on the southern side. To investigate the global density structure around the troughs, we measure in Figure 12 the redshift distribution of galaxies at positions shifted from the centres of T7 and T23 to the north and south by \(0.5\deg\) in declination. Although the three-dimensional mass mapping suffers from large statistical uncertainty, the low-density region at the south of T23 leads to the decrement of photometric
\begin{table}
\begin{tabular}{l c c c c c} \hline Label & \(\rm|S/N|\) & RA & Dec & Source & Redshifts of source voids \\ \hline \hline T1 & 8.77 & \(\rm 12^{h}29^{m}53^{s}\) & \(-00^{\circ}35^{\prime}49^{\prime\prime}\) & Multiple voids & 0.6, 1.0 \\ T2 & 7.83 & \(\rm 10^{h}04^{m}41^{s}\) & \(+01^{\circ}25^{\prime}03^{\prime\prime}\) & Multiple voids & 0.2, 0.5 \\ T3 & 7.21 & \(\rm 10^{h}57^{m}04^{s}\) & \(-00^{\circ}08^{\prime}57^{\prime\prime}\) & Multiple voids & 0.6, 0.7 \\ T4 & 7.15 & \(\rm 11^{h}36^{m}06^{s}\) & \(-00^{\circ}31^{\prime}20^{\prime\prime}\) & Multiple voids & 0.7, 1.0 \\ T5 & 7.01 & \(\rm 22^{h}53^{m}33^{s}\) & \(+02^{\circ}44^{\prime}12^{\prime\prime}\) & Edge & — \\ T6 & 6.92 & \(\rm 14^{h}54^{m}07^{s}\) & \(+44^{\circ}06^{\prime}08^{\prime\prime}\) & Multiple voids & 0.1, 0.3 \\ T7 & 6.79 & \(\rm 22^{h}03^{m}22^{s}\) & \(+02^{\circ}45^{\prime}39^{\prime\prime}\) & Single void (candidate) & 0.2 \\ T8 & 6.67 & \(\rm 02^{h}08^{m}19^{s}\) & \(-02^{\circ}14^{\prime}19^{\prime\prime}\) & Edge & — \\ T9 & 6.67 & \(\rm 02^{h}18^{m}52^{s}\) & \(-04^{\circ}55^{\prime}48^{\prime\prime}\) & Multiple voids & 0.3, 0.6 \\ T10 & 6.42 & \(\rm 12^{h}00^{m}00^{s}\) & \(+04^{\circ}15^{\prime}23^{\prime\prime}\) & Edge & — \\ T11 & 6.37 & \(\rm 11^{h}42^{m}46^{s}\) & \(+02^{\circ}14^{\prime}19^{\prime\prime}\) & Multiple voids & 0.2, 0.5 \\ T12 & 6.36 & \(\rm 10^{h}38^{m}05^{s}\) & \(+00^{\circ}00^{\prime}00^{\prime\prime}\) & Multiple voids & 0.6, 1.0 \\ T13 & 6.24 & \(\rm 09^{h}06^{m}41^{s}\) & \(-01^{\circ}02^{\prime}40^{\prime\prime}\) & Edge & — \\ T14 & 6.24 & \(\rm 14^{h}16^{m}45^{s}\) & \(-01^{\circ}29^{\prime}32^{\prime\prime}\) & Edge & — \\ T15 & 6.05 & \(\rm 22^{h}54^{m}37^{s}\) & \(+02^{\circ}22^{\prime}32^{\prime\prime}\) & Multiple voids & 0.4, 0.6 \\ T16 & 6.05 & \(\rm 12^{h}00^{m}21^{s}\) & \(+04^{\circ}28^{\prime}51^{\prime\prime}\) & Edge & — \\ T17 & 6.01 & \(\rm 22^{h}03^{m}38^{s}\) & \(+01^{\circ}38^{\prime}29^{\prime\prime}\) & Edge & — \\ T18 & 5.96 & \(\rm 09^{h}10^{m}54^{s}\) & \(+01^{\circ}02^{\prime}40^{\prime\prime}\) & Multiple voids & 0.1, 0.6, 0.7, 0.8 \\ T19 & 5.92 & \(\rm 12^{h}17^{m}14^{s}\) & \(+02^{\circ}32^{\prime}15^{\prime\prime}\) & Multiple voids & 0.2, 0.3 \\ T20 & 5.89 & \(\rm 11^{h}16^{m}48^{s}\) & \(+00^{\circ}53^{\prime}43^{\prime\prime}\) & Multiple voids & 0.2, 0.6 \\ T21 & 5.84 & \(\rm 02^{h}18^{m}52^{s}\) & \(-04^{\circ}19^{\prime}52^{\prime\prime}\) & Multiple voids & 0.3, 0.6 \\ T22 & 5.73 & \(\rm 11^{h}03^{m}03^{s}\) & \(+01^{\circ}16^{\prime}06^{\prime\prime}\) & Edge & — \\ T23 & 5.73 & \(\rm 22^{h}19^{m}27^{s}\) & \(+04^{\circ}06^{\prime}24^{\prime\prime}\) & Single void (candidate) & 0.3 \\ \hline \end{tabular}
\end{table}
Table 1: Troughs with \(\rm|S/N|>5.7\) in the mass maps constructed from the HSC Y3 weak lensing shape catalogue. The label is numbered in descending order with respect to \(\rm|S/N|\). Note that \(\rm S/N\) is negative in general because the convergence at a trough is negative. The possible source of each trough is judged by visual inspections of the redshift distribution of photometric LRGs and CMASS/LOWZ galaxies at the position of each trough.
Figure 5: The S/N maps around the troughs T7 (_circle symbol_) and T23 (_cross symbol_), which are single-void origin candidates.
LRGs and spectroscopic galaxies at redshifts \(z\sim 0.2\)-0.3. From the three-dimensional mass mapping, the size of the low-density region is more than \(10\,h^{-1}\,\mathrm{Mpc}\), which should be an interesting target for future spectroscopic surveys.
## 5 Discussions
Troughs are expected to originate from multiple voids aligned along the line-of-sight because single voids with modest sizes cannot produce significant weak lensing signals (Amendola et al., 1999). Our analysis confirms this expectation by finding that most of the troughs in our sample are of multiple-voids origin.
However, our sample of troughs also includes two troughs (T7 and T23) that are classified as single-void origin candidates. We estimate the weak lensing signals produced by single voids and compare them with observed signals to discuss the possible origin. For a spherical top-hat void with the radius \(r_{\mathrm{v}}\) and a constant density contrast \(\delta_{\mathrm{v}}\), the convergence at the void centre \(\kappa_{\mathrm{v}}\) is estimated simply as
\[\kappa_{\mathrm{v}}=\frac{2\delta_{\mathrm{v}}\bar{\rho}(z_{\mathrm{v}})r_{ \mathrm{v}}}{\Sigma_{\mathrm{crit}}}, \tag{30}\]
where \(\Sigma_{\mathrm{crit}}\) is the critical surface mass density given by Eq. (25), \(\bar{\rho}(z)=(1+z)^{3}\Omega_{\mathrm{m}}\rho_{\mathrm{crit}}\) is the mean mass density at redshift \(z\), \(\rho_{\mathrm{crit}}\) is the critical density in the present Universe, and \(z_{\mathrm{v}}\) is the redshift of the void. While source galaxies in the HSC Y3 weak lensing shape catalogue have a redshift distribution, we assume a fixed source redshift of \(z_{\mathrm{s}}=1\) for simplicity.
Under the assumption of a spherical void, the void radius \(r_{\mathrm{v}}\) can be estimated from the perpendicular size of the trough on the sky. The radial profiles of the convergence around the troughs shown in Figure 7 indicate that the transverse angular size of T7 and T23 is approximately 1 deg. Together with the redshift of the void candidates, \(z_{\mathrm{v}}\sim 0.3\), the approximate size of the voids can be estimated as \(\sim 11\,h^{-1}\,\mathrm{Mpc}\). Since simulations predict that the average central density contrast of voids of this size is close to \(-1\), i.e., such voids are nearly empty (Hamaus et al., 2014), for brevity, we simply assume \(\delta_{\mathrm{v}}=-1\).
Assuming the parameter values mentioned above, Eq. (30) gives a convergence signal at the void centre of \(\kappa_{\mathrm{v}}\sim-1\times 10^{-3}\). In contrast, Figure 7 indicates that the observed convergence values at the centres of the single-void origin candidates are \(\kappa_{\mathrm{v}}\sim-8\times 10^{-3}\). Put another way, the S/N expected from a single void at troughs T7 and T23 is small, \(\mathrm{|S/N|}<1\), whereas the observed significance is \(\mathrm{|S/N|}\sim 6\). This simple analytic estimate confirms the previous argument (Amendola et al., 1999) that single voids cannot produce significant signals in weak lensing mass maps.

Figure 6: Similar to Figure 5, but for T2 (_upper_) and T9 (_lower_).

Figure 7: Radial profiles of the convergence centred at T7 (_upper_) and T23 (_lower_).
One of the possible solutions to mitigate the tension is to consider a non-spherical void elongated along the line-of-sight direction. While spherical symmetry is assumed in the calculation above, simulations predict that the three-dimensional shapes of voids are quite non-spherical (e.g., Park & Lee, 2007; Platen et al., 2008). For instance, the central convergence value of a prolate void with axis ratios \(f:1:1\) (\(f>1\)), when observed along the major axis, is modified as
\[\kappa_{\mathrm{v}}=\frac{2f\delta_{\mathrm{v}}\bar{\rho}(z_{\mathrm{v}})r_{\mathrm{v}}}{\Sigma_{\mathrm{crit}}}, \tag{31}\]
where \(r_{\mathrm{v}}\) is the void size along the minor axis, which corresponds to the size of the trough observed on the sky. For a highly elongated void with \(f\sim 3-4\) or an even larger value, the weak lensing signal is enhanced in proportion to \(f\). Together with the so-called Eddington bias (Eddington, 1913), which can also enhance the observed S/N, we conclude that the weak lensing signals of troughs T7 and T23 can be explained by single voids elongated along the line-of-sight. This hypothesis can be tested with dense spectroscopic follow-up observations in the candidate trough regions with, e.g., the Prime Focus Spectrograph (Takada et al., 2014).
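To make the scalings in Eqs. (30)-(31) concrete, the following is a minimal sketch of the estimate, assuming a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\), \(\Omega_{\mathrm{m}}=0.3\) and a single source plane at \(z_{\mathrm{s}}=1\); the helper names and parameter choices here are ours, for illustration only.

```python
# Minimal sketch of the single-void convergence estimate, Eqs. (30)-(31),
# under an assumed flat LCDM cosmology; all parameter values are illustrative.
import numpy as np
from astropy import constants as const
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def sigma_crit(z_lens, z_src):
    # Critical surface mass density for a single source plane, Eq. (25)
    d_s = cosmo.angular_diameter_distance(z_src)
    d_l = cosmo.angular_diameter_distance(z_lens)
    d_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_src)
    return (const.c**2 / (4 * np.pi * const.G) * d_s / (d_l * d_ls)).to(u.Msun / u.Mpc**2)

def kappa_void(r_v, z_v, z_src=1.0, delta_v=-1.0, f=1.0):
    # Central convergence of a top-hat void, Eq. (30); f > 1 gives the
    # prolate case of Eq. (31), observed along the major axis
    rho_bar = ((1 + z_v)**3 * cosmo.Om0 * cosmo.critical_density0).to(u.Msun / u.Mpc**3)
    return (2 * f * delta_v * rho_bar * r_v / sigma_crit(z_v, z_src)).decompose()

# transverse size of the trough: ~1 deg at z_v ~ 0.3
r_v = cosmo.angular_diameter_distance(0.3) * np.deg2rad(1.0)
print(r_v * cosmo.h)                       # ~11 (i.e. ~11 Mpc/h)
print(kappa_void(r_v, z_v=0.3))            # spherical void: ~ -1e-3
print(kappa_void(r_v, z_v=0.3, f=4.0))     # elongated, f = 4: ~4x larger signal
```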
The alignment of elongated voids along the line-of-sight can be easily explained by a selection effect. Since the weak lensing signal is significantly enhanced when observed along the major axis, the major axes of single voids found in mass maps should be preferentially aligned with the line-of-sight direction. This is indeed the case for peaks selected in mass maps: Hamana et al. (2012) confirm the presence of an orientation bias for weak-lensing-selected clusters through a detailed analysis of ray-tracing simulations. They also find that this bias is stronger for higher S/N peaks. Since our trough sample represents the most significant troughs with the highest \(|\rm S/N|\), the orientation bias is expected to be quite strong.
## 6 Conclusions
Voids are large-scale underdense regions in the Universe and one of the building blocks of the cosmic web structure. Since there are few or no galaxies in voids, void statistics are expected to be less affected by baryonic physics and hence offer a powerful approach to constrain the geometry of the Universe through the Alcock-Paczynski test (Lavaux & Wandelt, 2012; Hamaus et al., 2016; Mao et al., 2017; Nadathur et al., 2019; Hamaus et al., 2020) and to measure the BAO scale (Kitaura et al., 2016; Liang et al., 2016; Nadathur et al., 2019; Zhao et al., 2020, 2022). However, the scarcity of galaxies in voids makes it challenging to identify voids using spectroscopic galaxy surveys. Weak gravitational lensing has the potential to overcome this fundamental problem because it probes the density field in an unbiased manner.
On the other hand, the search for voids with weak lensing has been considered problematic because single voids typically cannot produce significant weak lensing signals. In this paper, we have performed a mass mapping analysis with the latest Subaru HSC Y3 shape catalogue (Li et al., 2022), with a survey area of \(433.48\,{\rm deg}^{2}\) and a mean source galaxy number density of \(22.9\,{\rm arcmin}^{-2}\), to identify troughs, which are defined as local minima in weak lensing mass maps and are created by aligned underdense regions, i.e. voids. Excluding troughs near the edge of the survey footprint, we have identified 4 high-significance troughs with \(|\rm S/N|>7\) and 11 medium-significance troughs with \(|\rm S/N|>5.7\). In order to study the line-of-sight structure of these troughs, we have utilized the redshift distributions of two galaxy samples, HSC photometric LRGs (Oguri, 2014; Oguri et al., 2018) and spectroscopic CMASS/LOWZ galaxies from SDSS DR12 (Reid et al., 2016). We have investigated the line-of-sight density structures at the trough positions by cross-checking both galaxy samples. While multiple underdense regions are seen in the redshift distributions of the galaxies for most of the troughs, we find two troughs for which only a single underdense region is identified in the redshift distribution of the galaxies. The weak lensing signals of these two troughs are potentially produced by single voids. We have also carried out three-dimensional mass mapping (Simon et al., 2009; Oguri et al., 2018) around these two troughs to investigate the global density structures. From this analysis, we have identified large-scale (\(\gtrsim 10\,h^{-1}\,{\rm Mpc}\)) underdense regions around one of the troughs.
The convergence profiles of these two troughs indicate that their perpendicular physical sizes are \(\sim 11\,h^{-1}\,{\rm Mpc}\). Assuming a zero-density spherical void, this translates into a predicted convergence at the void centre of \(\sim-1\times 10^{-3}\), which is much smaller in magnitude than the observed convergence at the centres of the troughs, \(\sim-8\times 10^{-3}\). We have argued that these two troughs can still originate from single voids if the voids are highly elongated along the line-of-sight direction. Simulations predict that voids are not spherical but rather prolate, and their weak lensing signals can be significantly boosted when their major axes are aligned with the line-of-sight direction. In addition, the Eddington bias (Eddington, 1913) can also help mitigate the difference between the predicted and observed convergence values. Our hypothesis can be tested with dense spectroscopic follow-up observations in the trough regions to identify voids with great accuracy, which we leave for future work.

Figure 8: Similar to Figure 7, but for T2 (_upper_) and T9 (_lower_).
## Acknowledgements
This work was supported in part by MEXT/JSPS KAKENHI Grant Numbers JP22K14036 (K.O.), JP20H05856 (M.O.), JP23H00108 (A.N.), and JP22K21349 (M.O. and A.N.).
Some of the results in this paper have been derived using the healpy and HEALPix packages.
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
This paper is based on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center (ADC) at NAOJ. Data analysis was in part carried out with the cooperation of Center for Computational Astrophysics (CICA) at NAOJ. We are honored and grateful for the opportunity of observing the Universe from Maunakea, which has the cultural, historical and natural significance in Hawaii.
This paper makes use of software developed for Vera C. Rubin Observatory. We thank the Rubin Observatory for making their code available as free software at [http://pipelines.lsst.io/](http://pipelines.lsst.io/).
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg, and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.

Figure 9: The line-of-sight number density contrasts of HSC photometric LRGs (_upper panels_) and CMASS/LOWZ galaxies (_lower panels_) as a function of redshift for trough T7 (_left panels_) and T23 (_right panels_). The error bar corresponds to the Poisson noise from the number of galaxies in each bin. The green horizontal line shows the cosmic mean.
## Data Availability
The catalogue and data underlying this article are available on the public data release site of Hyper Suprime-Cam Subaru Strategic Program ([https://hsc.mtk.nao.ac.jp/ssp/data-release](https://hsc.mtk.nao.ac.jp/ssp/data-release)). The HSC Y3 shape catalogue will be available at the same site later.
|
2308.11282 | Response functions of many-body condensed matter systems | We discuss rigorous results about the response functions of gapped and
gapless interacting fermionic lattice models, relevant for the study of
transport in condensed matter physics. | Marcello Porta, Vieri Mastropietro, Alessandro Giuliani | 2023-08-22T08:50:42Z | http://arxiv.org/abs/2308.11282v1 | # Response functions of many-body condensed matter systems
###### Abstract
We discuss rigorous results about the response functions of gapped and gapless interacting fermionic lattice models, relevant for the study of transport in condensed matter physics.
###### Contents
* 1 Introduction
* 2 Interacting lattice models
* 3 Dynamics and linear response
* 4 Quantum Hall effect
* 5 Transport in semimetals
* 5.1 Graphene
* 5.2 Weyl semimetals
* 6 Transport in \(1d\) and quasi-\(1d\) metallic systems
* 6.1 Interacting \(1d\) systems
* 6.2 Edge modes of interacting quantum Hall systems
## 1 Introduction
A large part of modern condensed matter physics is devoted to the study of the transport properties of many-body quantum systems. In many physical applications, the collective behavior of complex systems can only be understood according to the laws of quantum mechanics. Despite the complexity of the microscopic structure of the material, in some cases the response properties of the system appear to be universal, that is they depend only on fundamental constants of nature. The paradigmatic example is the integer quantum Hall effect, where the quantization of the transverse conductivity of a class of ultrathin materials has a topological interpretation. Other examples include the minimal optical conductivity in graphene and the magneto-electric effect in Weyl semimetals, whose universal nature can be understood via the mechanism of anomaly non-renormalization, a key ingredient in the quantization of gauge theories.
Because of the extremely high experimental precision in the measurement of such transport phenomena, it is important to develop a theoretical understanding that does not rely on uncontrolled approximations or ad hoc regularizations. In this view, mathematical physics plays an important role in the fundamental understanding of the behavior of complex systems from first principles. As a matter of fact, the study of transport in condensed matter physics provided a strong motivation for many works in mathematical physics in recent years, and it stimulated cross-fertilization between various areas of mathematics and physics. As a concrete example, our precise understanding of the quantum Hall effect, and nowadays of the field of topological phases of matter, has been made possible by insights coming from various branches of mathematics, ranging from the theory of topological invariants to functional analysis and probability.
In this article we will give an overview of rigorous results about the transport properties of many-body condensed matter systems. The framework will be the one of quantum statistical mechanics of interacting lattice models of fermionic type. We will give an overview of some of the recent progress, which has been obtained via the combination of several tools in the analysis of many-body quantum systems, such as Lieb-Robinson bounds, cluster expansion, renormalization group, adiabatic theory and Ward identities.
The article is organized as follows. In Section 2 we will define the class of lattice models we will consider, in Fock space. In Section 3 we will introduce the time-evolution of equilibrium Gibbs states under external perturbations, we will define linear response, and we will briefly review rigorous results about the validity of linear response. Then, in the rest of the article we will discuss rigorous results about the computation of response functions for many-body systems. In Section 4 we will discuss gapped many-body systems and the quantization of the Hall conductivity; in Section 5 we will focus on interacting gapless systems of semimetallic type; while in Section 6 we will discuss metallic transport, for interacting one-dimensional systems and for the edge currents of two-dimensional many-body quantum Hall systems.
## 2 Interacting lattice models
We shall describe interacting fermionic particles on a \(d\)-dimensional lattice, as models for electrons on crystalline solids in the tight-binding regime. An interesting question, not discussed in this article, is to justify the tight-binding approximation from the continuum Schrodinger equation, in the presence of periodic potentials describing the location of the ions forming the crystal. We refer the reader to _e.g._[39, 44, 73, 102, 104] and references therein, for the tight-binding reduction in a semiclassical regime, or to the more recent works [51, 52], for the discussion of graphene-like structures.
We shall describe such lattice models in the language of statistical mechanics, in the grand-canonical ensemble. Let \(\Gamma_{L}\) be a \(d\)-dimensional lattice of side \(L\) with periodic boundary condition, \(\Gamma_{L}=\mathbb{Z}^{d}/L\mathbb{Z}^{d}\). We will suppose that each site is decorated by \(M\) colors, that might take into account internal degrees of freedom such as the spin or the sublattice label. We shall collect together space labels and color labels in the decorated lattice \(\Lambda_{L}=\Gamma_{L}\times\{1,\ldots,M\}\). We shall use the notation \(\mathbf{x}=(x,\sigma)\) for points on \(\Lambda_{L}\), with \(x\in\Gamma_{L}\) the space label and \(\sigma\in\{1,\ldots,M\}\) the color label.
The single-particle Hilbert space for one quantum particle on \(\Lambda_{L}\) is given by \(\mathfrak{h}=\ell^{2}(\Lambda_{L})\), which we can view as \(\mathbb{C}^{ML^{d}}\). The dynamics is generated by a self-adjoint Hamiltonian \(H\) on \(\mathfrak{h}\), via the Schrodinger equation,
\[i\partial_{t}\psi_{t}=H\psi_{t}\,,\qquad\psi_{0}=\psi\in\mathbb{C}^{ML^{d}}\,. \tag{2.1}\]
where we set \(\hbar=1\).
Eq. (2.1) describes the dynamics of one particle on \(\Lambda_{L}\). Instead, we will be interested in describing many interacting fermionic particles on \(\Lambda_{L}\). We will work in a grand-canonical setting, in which the number of particles is not fixed. To this end, we introduce the fermionic Fock space as \(\mathcal{F}=\bigoplus_{n}\mathfrak{h}^{\wedge n}\). Notice that, being \(L\) and \(M\) finite, the fermionic Fock space \(\mathcal{F}\) is finite dimensional. Vectors in \(\mathcal{F}\) are (finite) sequences of complex-valued antisymmetric functions, \(\psi=(\psi^{(0)},\psi^{(1)},\ldots,\psi^{(n)},\ldots)\), labelled by the particle number \(n\). Let us introduce the fermionic creation and annihilation operators \(a_{\mathbf{x}}^{*}\) and \(a_{\mathbf{x}}\), for \(\mathbf{x}\in\Lambda_{L}\), satisfying the canonical anticommutation relations [36, 37]:
\[\{a_{\mathbf{x}}^{*},a_{\mathbf{y}}\}=\delta_{\mathbf{x},\mathbf{y}}\,,\qquad\{a_{\mathbf{x}}^{*},a_{\mathbf{y}}^{*}\}=\{a_{\mathbf{x}},a_{\mathbf{y}}\}=0\,. \tag{2.2}\]
Given the vacuum vector \(\Omega=(1,0,\ldots,0)\), the fermionic Fock space can be generated by iterative application of the creation operators on the vacuum. Also, operators on \(\mathcal{F}\) can be represented as polynomials in the creation and annihilation operators. A simple example is the number operator \(\mathcal{N}=\sum_{\mathbf{x}\in\Lambda_{L}}a_{\mathbf{x}}^{*}a_{\mathbf{x}}\), which counts the number of particles in a given sector of the Fock space: \((\mathcal{N}\psi)^{(n)}=n\psi^{(n)}\).
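As an illustration of how the CAR algebra can be represented concretely on a finite-dimensional Fock space, the following is a minimal numerical sketch based on the standard Jordan-Wigner construction; the number of sites and the checks performed are illustrative choices of ours, not taken from the references cited above.

```python
# Minimal sketch: represent the CAR (2.2) on a 2^n-dimensional Fock space
# via the Jordan-Wigner construction, and check the relations numerically.
import numpy as np

def annihilation_ops(n):
    """Return the matrices a_1, ..., a_n acting on the 2^n-dimensional Fock space."""
    s_minus = np.array([[0, 1], [0, 0]], dtype=complex)  # removes a particle on one site
    z = np.diag([1.0, -1.0]).astype(complex)             # Jordan-Wigner string factor
    eye = np.eye(2, dtype=complex)
    ops = []
    for j in range(n):
        factors = [z] * j + [s_minus] + [eye] * (n - j - 1)
        a = factors[0]
        for f in factors[1:]:
            a = np.kron(a, f)
        ops.append(a)
    return ops

a = annihilation_ops(3)
anti = lambda x, y: x @ y + y @ x
assert np.allclose(anti(a[0].conj().T, a[0]), np.eye(2**3))  # {a*_x, a_x} = 1
assert np.allclose(anti(a[0].conj().T, a[1]), 0)             # {a*_x, a_y} = 0, x != y
assert np.allclose(anti(a[0], a[1]), 0)                      # {a_x, a_y} = 0
```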
The dynamics in Fock space is generated by a self-adjoint Hamiltonian \(\mathcal{H}\) on \(\mathcal{F}\). Given an operator \(\mathcal{O}\), its Heisenberg evolution generated by \(\mathcal{H}\) will be denoted by \(\tau_{t}(\mathcal{O})\):
\[\tau_{t}(\mathcal{O})=e^{i\mathcal{H}t}\mathcal{O}e^{-i\mathcal{H}t}\qquad t \in\mathbb{R}\,. \tag{2.3}\]
A typical form of Hamiltonian, that includes many physically relevant cases, is
\[\mathcal{H}=\sum_{X\subseteq\Lambda_{L}}\Phi_{X}\,, \tag{2.4}\]
with \(\Phi_{X}=\Phi_{X}^{*}\) given by a polynomial in the fermionic creation and annihilation operators labelled by \(\mathbf{x}\in X\), vanishing for \(\mathrm{diam}(X)>R\) for some \(R>0\). As \(L\) varies, Eq. (2.4) actually defines a sequence of Hamiltonians labelled by \(L\); if \(R\) is independent of \(L\), we shall say that \(\mathcal{H}\) is finite-ranged. An important example of such Hamiltonians is provided by many-body perturbations of non-interacting models,
\[\begin{split}\mathcal{H}&=\mathcal{H}_{0}+\lambda\mathcal{V}\\ \mathcal{H}_{0}&=\sum_{\mathbf{x},\mathbf{y}\in\Lambda_{L}}a_{\mathbf{x}}^{*}H_{0}(\mathbf{x};\mathbf{y})\,a_{\mathbf{y}}\\ \mathcal{V}&=\sum_{\mathbf{x},\mathbf{y}\in\Lambda_{L}}v(\mathbf{x};\mathbf{y})\,a_{\mathbf{x}}^{*}a_{\mathbf{y}}^{*}a_{\mathbf{y}}a_{\mathbf{x}}\,,\end{split} \tag{2.5}\]
with \(H_{0},v\) finite-ranged. The quadratic operator \(\mathcal{H}_{0}\) is the second quantization of the single-particle Hamiltonian \(H_{0}\), while the quartic operator \(\mathcal{V}\) is the second quantization of the many-body interaction, specified by the two-body potential \(v\) and coupling strength \(\lambda\).
The Gibbs state of the system, describing its equilibrium state at inverse temperature \(\beta\) and chemical potential \(\mu\), is denoted by \(\langle\cdot\rangle_{\beta,\mu,L}\), and it is defined as:
\[\langle\mathcal{O}\rangle_{\beta,\mu,L}=\frac{\mathrm{Tr}_{\mathcal{F}}\,\mathcal{O}\,e^{-\beta(\mathcal{H}-\mu\mathcal{N})}}{\mathrm{Tr}_{\mathcal{F}}\,e^{-\beta(\mathcal{H}-\mu\mathcal{N})}}\,. \tag{2.6}\]
Eq. (2.6) defines the statistical average of the observable associated with the operator \(\mathcal{O}\). Being \(\mathcal{F}\) finite dimensional, the expression (2.6) is obviously well-defined. However, at this level of generality it is typically extremely hard to extract information from the Gibbs state. For instance, given two observables \(\mathcal{O}_{X}\), \(\mathcal{O}_{Y}\), a typical question is about decay of their connected correlation function at equilibrium:
\[\langle\mathcal{O}_{X};\mathcal{O}_{Y}\rangle_{\beta,\mu,L}=\langle\mathcal{O}_{X}\mathcal{O}_{Y}\rangle_{\beta,\mu,L}-\langle\mathcal{O}_{X}\rangle_{\beta,\mu,L}\langle\mathcal{O}_{Y}\rangle_{\beta,\mu,L}\,, \tag{2.7}\]
as \(1\ll\mathrm{dist}(X,Y)\ll L\). The answer to this question strongly depends on the spectral properties of \(\mathcal{H}\). Consider the case of weakly interacting fermionic models, Eq. (2.5). Suppose that \(\mu\) belongs to a spectral gap of \(\mathcal{H}_{0}\):
\[\mathrm{dist}(\sigma(\mathcal{H}_{0}),\mu)\geq\delta\,, \tag{2.8}\]
for some \(\delta>0\) uniformly in \(L\). For \(|\lambda|\) small enough, uniformly in \(\beta\) and \(L\), exponential spatial decay estimates for the correlation functions can be proved thanks to the convergence of the fermionic cluster expansion, based on the Brydges-Battle-Federbush formula for fermionic truncated expectations; see _e.g._[17, 38, 57, 86, 88, 105, 107]. This result actually extends to prove exponential decay of Euclidean, or imaginary time, correlations. Let \(\gamma_{t}(\mathcal{O})\) be the imaginary-time evolution of \(\mathcal{O}\):
\[\gamma_{t}(\mathcal{O})=e^{t(\mathcal{H}-\mu\mathcal{N})}\mathcal{O}e^{-t(\mathcal{H}-\mu\mathcal{N})}\,. \tag{2.9}\]
In terms of the imaginary-time evolution operator, the Gibbs state satisfies the Kubo-Martin-Schwinger (KMS) identity:
\[\langle\mathcal{A}\mathcal{B}\rangle_{\beta,\mu,L}=\langle\gamma_{\beta}( \mathcal{B})\mathcal{A}\rangle_{\beta,\mu,L}\,, \tag{2.10}\]
which in the present finite-dimensional setting trivially follows from the cyclicity of the trace. If \(\mu\) belongs to a spectral gap of \(\mathcal{H}_{0}\) as in Eq. (2.8), the convergence of the cluster expansion for \(|\lambda|\) small enough allows one to prove that, for all \(t\in[0,\beta)\):
\[|\langle\gamma_{t}(\mathcal{O}_{X});\mathcal{O}_{Y}\rangle_{\beta,\mu,L}| \leq Ce^{-c\mathrm{dist}(X,Y)-c\,t_{\beta}}\,, \tag{2.11}\]
with \(t_{\beta}=\min(t,\beta-t)\). The constant \(C\) depends on the observables, while the constant \(c\) depends only on \(\delta\) and on \(|\lambda|\). This type of estimate can actually be used to establish the existence of a spectral gap for the many-body Hamiltonian \(\mathcal{H}\) for \(|\lambda|\) small [43]. More generally, in the absence of a spectral gap, the bound (2.11) still holds at \(\beta<\infty\), however with \(\beta\)-dependent constants, and with a \(\beta\)-dependent smallness condition on \(\lambda\).
The requirement of \(|\lambda|\) small is due to the fact that the proof of (2.11) is based on a convergent expansion for the Gibbs state of \(\mathcal{H}\) around the Gibbs state of \(\mathcal{H}_{0}\). If one assumes the existence of a spectral gap of \(\mathcal{H}\), the analogue of (2.11) for \(t=0\) at zero temperature can be proved with different, non-perturbative methods [70, 99]. Concerning the decay of real-time correlation functions, at the moment no rigorous methods are available to prove such bounds for non-integrable many-body systems.
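For small systems, the objects introduced above can be checked directly on a computer. The following toy sketch, which reuses the `annihilation_ops` helper from the Jordan-Wigner example in Section 2, verifies the KMS identity (2.10); the Hamiltonian and the values of \(\beta\), \(\mu\) are illustrative choices of ours.

```python
# Toy numerical check of the KMS identity (2.10) on a 3-site Fock space.
import numpy as np
from scipy.linalg import expm

a = annihilation_ops(3)                          # Jordan-Wigner sketch above
num = sum(x.conj().T @ x for x in a)             # number operator N
hop = sum(a[j].conj().T @ a[j + 1] + a[j + 1].conj().T @ a[j] for j in range(2))
inter = (a[0].conj().T @ a[0]) @ (a[1].conj().T @ a[1])   # density-density term
H = hop + 0.5 * inter
beta, mu = 2.0, 0.3
K = H - mu * num
rho = expm(-beta * K); rho /= np.trace(rho)      # Gibbs state (2.6)

def gamma(t, O):                                 # imaginary-time evolution (2.9)
    return expm(t * K) @ O @ expm(-t * K)

A = a[0].conj().T @ a[1]
B = a[2].conj().T @ a[2]
lhs = np.trace(rho @ A @ B)
rhs = np.trace(rho @ gamma(beta, B) @ A)
assert np.isclose(lhs, rhs)                      # KMS: <AB> = <gamma_beta(B) A>
```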
In the absence of a spectral gap, one does not expect exponential decay of correlations uniformly in \(\beta\). Nevertheless, in some cases of physical relevance discussed later on, such as \(1d\) metallic systems, and \(2d\) and \(3d\) semimetals, algebraic decay of ground state correlations can be proved for weak interactions. This requires the combination of fermionic cluster expansion with rigorous renormalization group methods, in order to resolve the infrared singularity that plagues naive perturbation theory around the Gibbs state of \(\mathcal{H}_{0}\), and to cure it by a renormalization of physical quantities, such as the Fermi velocity or the critical exponents.
Rigorous renormalization group methods for gapless condensed matter systems have been pioneered in [20, 49], see [21, 88, 107] for reviews. These methods have been used to study the equilibrium state of interacting Fermi systems with extended Fermi surface at positive temperatures [45, 46, 28], exponentially small in the interaction strength, and up to zero temperature for systems with asymmetric Fermi surface [50]. In one-dimensional systems, these methods have been used to construct the interacting ground state, starting from the works [22, 34], relying on the exact integrability of the Luttinger model [94]. This property turns out to imply the vanishing of the beta function for the effective quartic coupling, which is marginal in the renormalization group sense. The integrability of the Luttinger model holds at the Hamiltonian level; this key property is broken by the presence of cut-offs, typically needed in order to set up any renormalization group analysis, and this makes the implementation of the integrability of the Luttinger model within the renormalization group analysis particularly delicate. A different strategy has been introduced in [24, 29, 89], where the role of integrability is replaced by the combination of Schwinger-Dyson equations and Ward identities for a regularized version of the Luttinger field theory, which makes it possible to compute all correlation functions of the field theory and to estimate the effect of cut-offs.
More recently, rigorous renormalization group methods have been applied to study semimetallic systems in two and three dimensions, such as graphene and Weyl semimetals, and to study transport properties at zero temperature. These extensions will be at the center of the present paper, and will be discussed in more detail below.
## 3 Dynamics and linear response
We are interested in the dynamical properties of the interacting lattice models defined in the previous section. To begin, notice that the Gibbs state \(\rho_{\beta,\mu,L}\propto e^{-\beta(\mathcal{H}-\mu\mathcal{N})}\) is obviously invariant under the time-evolution generated by \(\mathcal{H}\). In order to have a nontrivial time-evolution of the state, we have to introduce a perturbation in the Hamiltonian, and evolve the initial equilibrium state according to the new dynamics. A standard procedure is to follow an adiabatic protocol: namely, we introduce a slow time dependence in the Hamiltonian,
\[\mathcal{H}(\eta\,t)=\mathcal{H}+\varepsilon g(\eta\,t)\mathcal{P}\,, \tag{3.1}\]
where: \(\mathcal{P}\) is a sum of finite-range potentials, similarly to (2.5); \(\varepsilon\) is a small parameter; \(g(t)\) is a switch function, namely a smooth function such that \(g(-\infty)=0\) and \(g(0)=1\); and \(\eta>0\) allows to tune the rate of variation of the switch function. We will be interested in the situation in which \(\eta\) is small, uniformly in all the other parameters in the model. A typical choice of switch function in applications is \(g(t)=e^{t}\), which we shall consider from now on.
The evolution of the system is defined by the Schrodinger-von Neumann equation for the state:
\[\begin{split} i\partial_{t}\rho(t)&=\left[ \mathcal{H}(\eta\,t),\rho(t)\right]\,,\qquad t\leq 0\,,\\ \rho(-\infty)&=\rho_{\beta,\mu,L}\,.\end{split} \tag{3.2}\]
The question addressed here is: what is the variation of the expectation values of physical observables due to the slow drive, for \(t\leq 0\)? To this end, it is useful to represent the variation of the expectation value of the observable \(\mathcal{O}\) due to the perturbation \(\mathcal{P}\) as:
\[\begin{split}&\frac{1}{\varepsilon}\Big{(}\operatorname{Tr} \mathcal{O}\rho(0)-\operatorname{Tr}\mathcal{O}\rho(-\infty)\Big{)}\\ &=-i\int_{-\infty}^{0}dt\,e^{\eta\,t}\operatorname{Tr}\big{[} \mathcal{O},e^{i\mathcal{H}t}\mathcal{P}e^{-i\mathcal{H}t}\big{]}\,\rho_{ \beta,\mu,L}\\ &\quad+\mathcal{R}^{\beta,L}_{\mathcal{O},\mathcal{P}}(\varepsilon,\eta)\\ &\equiv\chi^{\beta,L}_{\mathcal{O},\mathcal{P}}(\eta)+\mathcal{R} ^{\beta,L}_{\mathcal{O},\mathcal{P}}(\varepsilon,\eta)\,,\end{split} \tag{3.3}\]
where the first equality follows from the Duhamel expansion in \(\varepsilon\) for the non-autonomous dynamics (3.2). The quantity \(\chi^{\beta,L}_{\mathcal{O},\mathcal{P}}(\eta)\) is the linear response of the observable \(\mathcal{O}\) due to the perturbation \(\mathcal{P}\), while \(\mathcal{R}^{\beta,L}_{\mathcal{O},\mathcal{P}}(\varepsilon,\eta)\) is of higher order in \(\varepsilon\). In many physical applications, one is interested in computing the linear response coefficient \(\chi^{\beta,L}_{\mathcal{O},\mathcal{P}}(\eta)\) as \(\beta,L\to\infty\), and possibly \(\eta\to 0^{+}\). A complementary question is to prove the smallness in \(\varepsilon\) of the error term \(\mathcal{R}^{\beta,L}_{\mathcal{O},\mathcal{P}}(\varepsilon,\eta)\), uniformly in \(\eta\), \(\beta\), \(L\), which makes it possible to rigorously justify the validity of linear response.
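The decomposition (3.3) can also be illustrated numerically on a small system: one propagates the Schrodinger-von Neumann equation (3.2) with the switch function \(g(t)=e^{t}\) and compares the response of an observable with the Kubo term. The sketch below reuses the `annihilation_ops` helper from Section 2; the Hamiltonian, perturbation, observable and drive parameters are illustrative choices of ours.

```python
# Toy illustration of the decomposition (3.3): driven evolution vs Kubo term.
import numpy as np
from scipy.linalg import expm

a = annihilation_ops(3)
num = sum(x.conj().T @ x for x in a)
H = sum(a[j].conj().T @ a[j + 1] + a[j + 1].conj().T @ a[j] for j in range(2))
pert = a[0].conj().T @ a[0]                      # perturbation P
obs = a[2].conj().T @ a[2]                       # observable O
eps, eta, beta, mu = 1e-3, 0.5, 2.0, 0.3
rho0 = expm(-beta * (H - mu * num)); rho0 /= np.trace(rho0)

T, dt = 30.0, 0.01                               # switch-on window and step size
rho, t = rho0.copy(), -T
for _ in range(int(T / dt)):                     # integrate (3.2) with g(t) = e^t
    U = expm(-1j * dt * (H + eps * np.exp(eta * t) * pert))
    rho = U @ rho @ U.conj().T
    t += dt
delta = np.trace(obs @ (rho - rho0)).real / eps  # left-hand side of (3.3)

chi = 0.0                                        # Kubo term of (3.3)
for s in np.arange(-T, 0, dt):
    tau_P = expm(1j * H * s) @ pert @ expm(-1j * H * s)
    chi += -1j * dt * np.exp(eta * s) * np.trace((obs @ tau_P - tau_P @ obs) @ rho0)
print(delta, chi.real)   # the two agree up to O(eps) and O(dt) discretization errors
```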
For gapped non-interacting systems, the validity of linear response at zero temperature, _i.e._ for \(\beta\to\infty\), is related to the adiabatic theorem of quantum mechanics [35, 82]; see [109] for a mathematical review of adiabatic theory. The standard adiabatic theorem however does not allow one to study many-body quantum systems uniformly in their size. Recently, the adiabatic theorem has been extended to the many-body setting in [13], in a way that also allows one to prove validity of linear response, see [14, 74] for reviews and for further references. The adaptation from quantum spin systems to fermionic models has been discussed in [97]. In the case of the quantum Hall effect, the linear response approximation can actually be proved to be exact, see [84] for non-interacting fermions and [15] for the extension to many-body systems. The many-body adiabatic theorem of [13] applies to the setting in which the time-dependent Hamiltonian \(\mathcal{H}(\eta\,t)\) has a non-degenerate ground state separated from the rest of the spectrum by a positive gap, for all times and uniformly in \(L\). The extension to time-dependent perturbations that might close the spectral gap has been obtained in [110]. Finally, the extension to infinite systems, with a uniform gap or with a bulk gap, has been discussed in [75, 76].
It is an important open problem to extend the current many-body adiabatic methods, and the validity of linear response, to positive temperature systems. The case of small systems coupled to infinite reservoirs at nonzero temperature, under suitable ergodicity assumptions on the reservoirs, has been discussed in [1, 2, 77]. Concerning the adiabatic theorem and the validity of linear response for extended many-body systems, recent progress for small temperatures, vanishing as \(\eta\to 0^{+}\) but uniformly in \(L\), has been obtained in [66], via the combination of a rigorous analytic continuation (Wick rotation) to imaginary times of the full Duhamel expansion for the non-autonomous dynamics (3.2), and cluster expansion methods to prove the convergence of the resulting series. It is a challenging open problem to understand the interplay between the many-body adiabatic dynamics and the approach to equilibrium at fixed positive temperature; see [78, 79] for further recent insights on the topic.
In the rest of this article, we will focus on rigorous results about the linear response contribution \(\chi^{\beta,L}_{\mathcal{O},\mathcal{P}}(\eta)\), for various physical systems. One of the main difficulties in studying this object is the control of the time integral, in the adiabatic limit \(\eta\to 0^{+}\). In fact, it is extremely difficult to gain direct insight into the decay of real-time correlation functions for non-integrable many-body systems. One way to approach this problem is by complex deformation, writing the transport coefficient in terms of Euclidean, or imaginary-time, correlation functions; at least for the case of weakly interacting fermionic models, Euclidean correlation functions can be estimated via the fermionic cluster expansion. Let \(\eta_{\beta}\in\frac{2\pi}{\beta}\mathbb{Z}\). Then, the following identity holds true, as a consequence of the KMS property (2.10) and of the Cauchy theorem for analytic functions:
\[\chi^{\beta,L}_{\mathcal{O},\mathcal{P}}(\eta_{\beta})=-\int_{0}^{\beta}ds\, e^{-i\eta_{\beta}s}\langle\gamma_{s}(\mathcal{P})\,;\mathcal{O}\rangle_{\beta, \mu,L}\,, \tag{3.4}\]
with \(\gamma_{s}(\mathcal{P})\) the Euclidean time evolution of \(\mathcal{P}\), recall (2.9). Given \(\eta>0\), let \(\eta_{\beta}\geq\eta\) be its best approximation in \(\frac{2\pi}{\beta}\mathbb{Z}\) (obviously, \(\eta_{\beta}-\eta\leq 2\pi/\beta\)). Then, using only the boundedness of the fermionic operators and Lieb-Robinson bounds, see [100] for a review, it is possible to prove that:
\[\left|\chi^{\beta,L}_{\mathcal{O},\mathcal{P}}(\eta_{\beta})-\chi^{\beta,L}_{ \mathcal{O},\mathcal{P}}(\eta)\right|\leq\frac{C}{\eta^{d+2}\beta}\,, \tag{3.5}\]
where the constant \(C\) depends only on the support of the observable \(\mathcal{O}\) (and it is independent of \(L\) if \(\mathcal{O}\) is local). See _e.g._[7, 92] for a proof of (3.4), (3.5), and [66] for an extension to higher commutators. Eqs. (3.4), (3.5) make it possible to apply methods of Euclidean quantum field theory to the study of real-time quantum dynamics, and in particular to the evaluation of transport coefficients as \(\beta\to\infty\) and \(\eta\to 0^{+}\).
## 4 Quantum Hall effect
The quantum Hall effect is the paradigmatic example of topological transport phenomenon in condensed matter physics. It has been discovered experimentally in [114], and the first theoretical explanation dates back to the seminal works [85, 113]. The quantum Hall effect is a measurement of the charge current propagating in a \(2d\) system, exposed to a transverse magnetic field and to an in-plane electric field. The measurement is performed at temperatures of the order of the Kelvin, and for a class of materials of insulating type.
To describe the experimental observation, let us denote by \(j_{i}\), \(i=1,2\), the current density propagating in the two orthogonal directions on the plane. Let us denote by \(E=(E_{1},E_{2})\) the electric field. Then, neglecting higher order terms, the current in the steady state and the electric field are related by the linear response formula:
\[j=\sigma E\,, \tag{4.1}\]
where \(\sigma=(\sigma_{ij})_{1\leq i,j\leq 2}\) is the conductivity matrix. The entries \(\sigma_{11}\), \(\sigma_{22}\) are called the longitudinal conductivities, while the entries \(\sigma_{12}\), \(\sigma_{21}\) are called the transverse conductivities. From a phenomenological viewpoint, the insulating behavior of the system is associated with \(\sigma_{11}=\sigma_{22}=0\): this means that transport is purely non-dissipative. Furthermore, if the system is invariant under rotations, one also has \(\sigma_{12}=-\sigma_{21}\). In units such that \(e^{2}=\hbar=1\), the integer quantum Hall effect (IQHE) is the observation that the transverse, or Hall, conductivity satisfies:
\[\sigma_{12}\in\frac{1}{2\pi}\cdot\mathbb{Z}\,, \tag{4.2}\]
with a nine-digit experimental precision. The Hall conductivity turns out to be piecewise constant in the density of charge carriers, and to display sudden jumps from one plateau to another, corresponding to transitions between different integers in (4.2). This behavior is in glaring contrast with the expected classical behavior of \(\sigma_{12}\), which in an idealized system would be linear in the density of charge carriers. See _e.g._[115] for a review of the experimental discovery of this phenomenon. Nowadays, quantum Hall systems are recognized as paradigmatic examples of topological phases of matter, see _e.g._[96] for a recent review on the topic.
The IQHE is a special case of the fractional quantum Hall effect (FQHE), which predicts the quantization of \(\sigma_{12}\) in rational multiples of \(1/2\pi\). In contrast to the IQHE, the theoretical understanding of the FQHE starting from microscopic models is still an outstanding problem in mathematical physics. See _e.g._[33, 55] for reviews of the effective field theoretic approach to this phenomenon. In what follows we will restrict attention to the IQHE for lattice many-body quantum systems.
Let us consider many-body Hamiltonians of the form (2.5) in dimension \(d=2\). We define the lattice current associated with \(\mathbf{x},\mathbf{y}\) in \(\Lambda_{L}\) as:
\[j_{\mathbf{x},\mathbf{y}}=i\,a_{\mathbf{x}}^{*}H(\mathbf{x};\mathbf{y})\,a_{\mathbf{y}}+\text{h.c.} \tag{4.3}\]
These operators are related to the time variation of the density operator \(n_{\mathbf{x}}=a_{\mathbf{x}}^{*}a_{\mathbf{x}}\) via the continuity equation:
\[\partial_{t}\tau_{t}(n_{\mathbf{x}})=\sum_{\mathbf{y}\in\Lambda_{L}}\tau_{t}( j_{\mathbf{x},\mathbf{y}}). \tag{4.4}\]
The right-hand side of (4.4) is a lattice divergence. To see this, let \(Q_{X}\) be the charge operator associated with \(X\subset\Lambda_{L}\), \(Q_{X}=\sum_{\mathbf{x}\in X}n_{\mathbf{x}}\). Then, Eq. (4.4) implies that the time-variation of \(Q_{X}\) is due to a flow across the boundary of \(X\):
\[\partial_{t}\tau_{t}(Q_{X})=\sum_{\begin{subarray}{c}\mathbf{x}\in X\\ \mathbf{y}\notin X\end{subarray}}\tau_{t}(j_{\mathbf{x},\mathbf{y}})\,. \tag{4.5}\]
Next, we introduce the total current operator as:
\[\mathcal{J}_{k}=\frac{1}{2}\sum_{\mathbf{x},\mathbf{y}\in\Lambda_{L}}(x_{k}-y _{k})j_{\mathbf{x},\mathbf{y}}\,\qquad k=1,2\, \tag{4.6}\]
which can be viewed as \(i[\mathcal{H},\mathcal{K}_{k}]\), with \(\mathcal{K}_{k}\) the second quantization of the \(k\)-th component of the position operator. Notice that in general the position operator is not well defined in the presence of periodic boundary conditions; nevertheless, since the Hamiltonian is short-ranged, we can interpret \(i[\mathcal{H},\mathcal{K}_{k}]\) as the right-hand side of (4.6). The total current operator can be viewed as the value at \(p=(0,0)\) of:
\[\hat{j}_{p,k}=\frac{1}{2}\sum_{\mathbf{x},\mathbf{y}\in\Lambda_{L}}\frac{e^{ip\cdot y}-e^{ip\cdot x}}{ip\cdot(y-x)}(y-x)_{k}\,j_{\mathbf{x},\mathbf{y}}\,, \tag{4.7}\]
which satisfies the momentum-space continuity equation:
\[\partial_{t}\tau_{t}(\hat{n}_{p})=\sum_{k=1,2}ip_{k}\tau_{t}(\hat{j}_{p,k}). \tag{4.8}\]
We are interested in describing the variation of the average current after having introduced a weak electric field. The coupling with the electromagnetic field is introduced via the standard procedure used in lattice gauge theories, the Peierls substitution:
\[H(\mathbf{x};\mathbf{y})\to H(\mathbf{x};\mathbf{y})e^{i\int_{x\to y}d\ell\cdot A(t,\ell)}\,, \tag{4.9}\]
where \(A\) is a time-dependent vector potential, defined in \(\mathbb{R}^{2}\), and the integral in Eq. (4.9) is taken over the straight line connecting \(x\) to \(y\). Let us suppose that \(A(t,z)\) is constant in space, \(A(t,z)\equiv A(t)\), with \(A(t)=e^{\eta t}A\), so that the vector potential generates the space-homogeneous electric field \(E(t)=-\eta e^{\eta t}A\).
Let \(\mathcal{H}(\eta t)\) be the time-dependent Hamiltonian, coupled to the gauge field via (4.9), and let \(\rho(t)\) be the solution of the Schrodinger-von Neumann equation (3.2). Let \(\mathcal{J}_{k}(A(t))\) be the current operator in the presence of the gauge field. We are interested in computing the variation
\[\frac{1}{L^{2}}\operatorname{Tr}\mathcal{J}_{i}(A(0))\rho(0)-\frac{1}{L^{2}} \operatorname{Tr}\mathcal{J}_{i}\rho_{\beta,\mu,L}\,, \tag{4.10}\]
at first order in the electric field. By performing one step of Duhamel expansion in the gauge field, we see that the expression for the conductivity matrix is, in the linear response approximation:
\[\sigma_{ij}^{\beta,L}(\eta)=\frac{i}{\eta L^{2}}\Big{[}\Delta_{ij}^{\beta,L}+\int_{-\infty}^{0}ds\,e^{\eta s}\Big{\langle}\big{[}\mathcal{J}_{i},\tau_{s}(\mathcal{J}_{j})\big{]}\Big{\rangle}_{\beta,\mu,L}\Big{]}\,, \tag{4.11}\]
where \(\Delta_{ij}^{\beta,L}=\langle[\mathcal{J}_{i},\mathcal{K}_{j}]\rangle_{\beta,\mu,L}\) is the diamagnetic contribution, arising from the explicit dependence of the current operator on the gauge field. The mathematical problem that we address is about the computation of:
\[\sigma_{ij}=\lim_{\eta\to 0^{+}}\lim_{\beta,L\to\infty}\sigma_{ij}^{\beta,L}( \eta)\,, \tag{4.12}\]
for many-body systems with Hamiltonian (2.5). We will further specify that the system is in an insulating phase. To this end, it is sufficient to assume that the chemical potential \(\mu\) lies in a spectral gap of the Hamiltonian, uniformly in \(L\).
Let us first briefly discuss the case of non-interacting systems. We refer the reader to [5, 6] for mathematical reviews on the topic. For \(T=0\), the state of the system is completely specified by the Fermi projector \(P_{\mu}=\chi(H\leq\mu)\), with \(\chi(\cdot)\) the characteristic function and \(\mu\) in a spectral gap of \(H\). Under this assumption, the matrix elements of the Fermi projector are known to decay exponentially fast:
\[|P_{\mu}(\mathbf{x};\mathbf{y})|\leq Ce^{-c|x-y|}\,. \tag{4.13}\]
The infinite volume conductivity matrix can be written as a suitable trace per unit volume via the Kubo-Streda formula, see _e.g._[5]:
\[\sigma_{ij}=\lim_{L\to\infty}\frac{i}{L^{2}}\operatorname{Tr}\mathbb{1}_{L}P_ {\mu}[[P_{\mu},X_{i}],[P_{\mu},X_{j}]]\,, \tag{4.14}\]
where the trace is over \(\ell^{2}(\mathbb{Z}^{2};\mathbb{C}^{M})\) and \(\mathbb{1}_{L}=\chi(x\in[-L/2,L/2]^{2})\). In particular, the longitudinal conductivity \(\sigma_{ii}\) is zero. Instead, \(\sigma_{12}\) might be non-vanishing, and it turns out to have a beautiful mathematical interpretation. First, suppose that \(H\) commutes with translations on \(\mathbb{Z}^{2}\). Then, the Hamiltonian \(H\) can be fibered in momentum space, as follows:
\[H=\int_{\mathbb{T}^{2}}^{\oplus}dk\,\hat{H}(k)\,, \tag{4.15}\]
where \(\hat{H}(k)\in\mathbb{C}^{M\times M}\) is the Bloch Hamiltonian. Similarly, the Fermi projector \(P_{\mu}\) can be written as the direct integral of the fibered projectors \(\hat{P}_{\mu}(k)=\chi(\hat{H}(k)\leq\mu)\). In terms of these objects, the Hall conductivity can be rewritten as an integral over the Brillouin zone:
\[\sigma_{12}=i\int_{\mathbb{T}^{2}}\frac{dk}{(2\pi)^{2}}\operatorname{Tr}\hat{ P}_{\mu}(k)\Big{[}\partial_{1}\hat{P}_{\mu}(k),\,\partial_{2}\hat{P}_{\mu}(k) \Big{]}\,, \tag{4.16}\]
where the trace is now over \(\mathbb{C}^{M}\). The key observation [9, 113] is that the argument of the integral is a curvature, and the resulting expression is equal to \(\frac{1}{2\pi}\cdot\mathcal{C}_{1}\), with \(\mathcal{C}_{1}\) the first Chern number of the Bloch bundle, which has \(\mathbb{T}^{2}\) as base space and \(\operatorname{Ran}\hat{P}_{\mu}(k)\) as fibers. Thus, the Hall conductivity can only take values in \(\frac{1}{2\pi}\cdot\mathbb{Z}\), and a nonzero value is associated with a nontrivial topology of the Bloch bundle. A further perspective on the triviality/nontriviality of the Bloch bundle is through the localization properties of the Wannier functions, constructed via the Bloch transform of the eigenstates of the Bloch Hamiltonian (also called Bloch functions). Informally, a vanishing Hall conductivity is equivalent to the exponential decay of the Wannier functions associated with the energy bands below the Fermi level, while a nonzero Hall conductivity is equivalent to a slow algebraic decay, implying the divergence of the expectation of the squared position operator. We refer the reader to [98] for a precise statement and for a proof of this result.
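A standard way to evaluate the integral in (4.16) numerically is the lattice discretization of the Berry curvature due to Fukui, Hatsugai and Suzuki, in which the integral is replaced by a sum of plaquette fluxes built from overlaps of the fibered eigenvectors. The sketch below implements this recipe; the two-band test Hamiltonian (a Qi-Wu-Zhang-type model) is our illustrative choice and is not a model discussed in the text.

```python
# Sketch of the first Chern number of the Bloch bundle, via the
# Fukui-Hatsugai-Suzuki lattice discretization of Eq. (4.16).
import numpy as np

def chern_number(bloch_h, n_occ, grid=60):
    ks = np.linspace(0, 2 * np.pi, grid, endpoint=False)
    dim = bloch_h(0.0, 0.0).shape[0]
    u = np.empty((grid, grid, dim, n_occ), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(bloch_h(kx, ky))
            u[i, j] = vecs[:, :n_occ]          # fibers Ran P_mu(k): occupied bands
    link = lambda p, q: np.linalg.det(p.conj().T @ q)
    c = 0.0
    for i in range(grid):
        for j in range(grid):                  # Berry flux through each plaquette
            i1, j1 = (i + 1) % grid, (j + 1) % grid
            c += np.angle(link(u[i, j], u[i1, j]) * link(u[i1, j], u[i1, j1])
                          * link(u[i1, j1], u[i, j1]) * link(u[i, j1], u[i, j]))
    return c / (2 * np.pi)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
qwz = lambda kx, ky: np.sin(kx) * sx + np.sin(ky) * sy + (1 + np.cos(kx) + np.cos(ky)) * sz
print(round(chern_number(qwz, n_occ=1)))       # an integer (here +-1, convention-dependent)
```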
This elegant explanation of the quantization of the Hall conductivity apparently breaks down in the absence of translation invariance. This is of course always the case in all physical applications, due to the presence of unavoidable impurities in the sample. Remarkably, the quantization of \(\sigma_{12}\) survives the presence of disorder. Assuming the existence of a spectral gap for \(H\), this has been proved in [18] via the methods of non-commutative geometry, and in [10] using functional analytic tools. Later, [5] extended the result of [10] to the case in which the spectral gap is replaced by a mobility gap, which is the relevant setting in the presence of Anderson localization (that is, at strong disorder), an essential ingredient to establish the existence of plateaux in the plot of \(\sigma_{12}\) as a function of the density of charge carriers.
None of the mentioned rigorous results applies to interacting many-body systems, _e.g._ \(\lambda\neq 0\) in Eq. (2.5). A general field-theoretic approach to the quantum Hall effect, based on the classification of effective actions of non-relativistic Fermi gases at large length scales, has been introduced in [53, 54], see [33, 55] for reviews. This approach describes quantum Hall systems on domains with nontrivial boundaries, and it makes it possible to understand the quantization of the Hall conductivity in terms of a cancellation mechanism between the anomaly of the Chern-Simons field theory describing the scaling limit of the bulk degrees of freedom, and the anomaly of the chiral Luttinger liquid describing the scaling limit of the edge degrees of freedom.
A first attempt in the direction of formulating the many-body conductivity matrix as a topological invariant in the spirit of [113] has been proposed in [11]. There, following the original intuition of Laughlin [85], it is argued that the many-body Hall conductivity can be computed by probing the response of the system to a 'phase twist' of the boundary conditions, physically equivalent to the introduction of a suitable magnetic flux in the system. Under an extra averaging assumption over a second magnetic flux and the assumption that the ground state of the system is gapped and non-degenerate, one can show that the many-body Hall conductivity is quantized [11].
The problem of removing the extra averaging assumption in [11] remained open for a long time [12]. The assumption was finally relaxed in [71], which gave the first proof of quantization of the Hall conductivity for a many-body quantum system, up to exponentially small corrections in the size of the system, under a spectral gap assumption for the many-body ground state.
Coming back to the formulation of the conductivity matrix as the linear response to a constant electric field, Eq. (4.11), a different route to quantization is to show that Eq. (4.12) is actually constant in the strength of the many-body interaction, provided the spectral gap of \(\mathcal{H}\) does not close along the path that connects the Hamiltonian to its noninteracting counterpart (\(\lambda=0\)). The persistence of the spectral gap is a consequence of the convergence of the fermionic cluster expansion, which can be proved for \(|\lambda|\) small enough, and which allows one to prove exponential decay of correlations (2.11). After proving that the conductivity matrix stays constant along the path that deforms the model into a non-interacting one, quantization of \(\sigma_{12}\) follows from the known results for non-interacting systems. This approach to quantization has been introduced in [60], for translation-invariant systems, and it has been extended in [61, 62] to study the topological phase transition of the Haldane-Hubbard model, including the construction of an interacting critical line across which \(\sigma_{12}\) has a jump discontinuity. The starting point of the approach is the rigorous Wick rotation, which allows one to represent the conductivity matrix in terms of Euclidean correlation functions, recall Eqs. (3.4), (3.5). Then, the constancy of the conductivity matrix along the path of Hamiltonians that deforms \(\mathcal{H}\) into \(\mathcal{H}_{0}\) is proved using lattice Ward identities, following from the conservation of the lattice current (4.8), in a way inspired by the non-renormalization of the so-called topological mass in QED3 [41].
Another approach to quantization of \(\sigma_{12}\) has been proposed in [16], where the quantization of the Hall conductivity for gapped many-body systems is proved via a many-body index theorem for the ground state projector. Assuming a suitable degeneracy condition for the many-body ground state, the result of [16] also shows fractional quantization of the response coefficient. Proving the required degeneracy assumption on the many-body projectors starting from an interacting microscopic model is a difficult open problem in mathematical physics.
All the above results have been obtained for many-body systems under a spectral gap assumption. A limitation of this setting is that it does not allow one to prove the emergence of plateaux in the plot of the Hall conductivity as a function of the density of charge carriers. As in the non-interacting case, in order to understand the emergence of plateaux, one should include strong disorder in the microscopic model. It is an important problem to understand the interplay of interactions and disorder effects in many-body quantum Hall systems.
## 5 Transport in semimetals
So far, we have focused on two-dimensional gapped many-body systems, for which transport is non-dissipative, and where the transverse conductivity exhibits interesting quantization properties. In the absence of a spectral gap, it turns out that the transport properties of the system strongly depend on the nature of the low-energy excitations at the Fermi level. In this section we shall consider translation-invariant models, with Bloch Hamiltonian \(\hat{H}(k)\in\mathbb{C}^{M\times M}\), \(k\in\mathbb{T}^{d}\). Given a value of the chemical potential \(\mu\), the Fermi surface is defined as:
\[\left\{k\in\mathbb{T}^{d}\mid\mu\in\sigma(\hat{H}(k))\right\}\,. \tag{5.1}\]
An insulator corresponds to the situation in which the Fermi surface is empty. Instead, a metallic system corresponds to the situation in which the Fermi surface has co-dimension \(1\). The standard example is the lattice Laplacian \(-\Delta_{\mathbb{Z}^{d}}\) with \(\mu\) chosen within the spectrum of the Laplacian. For this class of systems, the zero-temperature conductivity matrix is typically infinite.
Insulators and metals do not exhaust all possible physical situations. Semimetals are a class of physical systems for which the Fermi surface is nonempty but nevertheless the conductivity matrix is finite. Here we shall review the cases of graphene and of Weyl semimetals.
### Graphene
Graphene is a paradigmatic example of a semimetal in two dimensions, consisting of a one-atom-thick layer of graphite [56]. Its Fermi surface in the absence of doping is given by two points, \(\{k_{F}^{+},k_{F}^{-}\}\). Around the two Fermi points, the dispersion relation of graphene takes a peculiar shape, mimicking the dispersion relation of \(2+1\) dimensional massless Dirac fermions, which is responsible for its remarkable transport properties [83].
The simplest tight-binding model for graphene is the Laplacian on the honeycomb lattice [116]. The honeycomb lattice can be viewed as the superposition of two triangular sublattices, generated by the basis vectors \(a_{1}=(1/2)(3,\sqrt{3})\), \(a_{2}=(1/2)(3,-\sqrt{3})\):
\[\begin{split}\Lambda_{L}^{A}&=\left\{x_{1}a_{1}+x _{2}a_{2}\,\Big{|}\,0\leq x_{i}\leq L-1\right\}\,,\\ \Lambda_{L}^{B}&=\Lambda_{L}^{A}+(1,0)\,.\end{split} \tag{5.2}\]
with edges connecting sites of \(\Lambda_{L}^{A}\) with sites of \(\Lambda_{L}^{B}\) at distance one. Equivalently, the vertex set \(\Lambda_{L}^{A}\cup\Lambda_{L}^{B}\) of the honeycomb lattice can be thought of as the union over \(x\in\Lambda_{L}^{A}\) of the pairs of sites \((x,x+(1,0))\); that is, as a decorated triangular lattice with two internal degrees of freedom. The advantage of this representation is that, in contrast to the honeycomb lattice, the triangular lattice is a Bravais lattice. Thus, given \(x=(x_{1},x_{2})\in\Lambda_{L}^{A}\), the single-particle wave function \(\psi(x)\) is a two-component function, collecting values in the two shifted sublattices labelled by \(x\), \(\psi(x)=(\psi_{A}(x),\psi_{B}(x+(1,0)))\).
The Bloch Hamiltonian \(\hat{H}(k)\) of the Laplacian on the honeycomb lattice is:
\[\hat{H}(k)=-t\begin{pmatrix}0&\Omega(k)\\ \Omega(k)^{*}&0\end{pmatrix}\,, \tag{5.3}\]
where \(t>0\) is the hopping parameter and the complex-valued function \(\Omega(k)\) is given by:
\[\Omega(k)=1+e^{-ik_{1}}+e^{-ik_{2}}\qquad k\in\mathbb{T}^{2}\,. \tag{5.4}\]
The energy bands of the model are \(\pm\,t|\Omega(k)|\), and they touch at the two inequivalent Fermi points on \(\mathbb{T}^{2}\):
\[k_{F}^{+}=\left(\frac{2\pi}{3},\frac{4\pi}{3}\right),\qquad k_{F}^{-}=\left( \frac{4\pi}{3},\frac{2\pi}{3}\right)\,. \tag{5.5}\]
Taking into account the two spin degrees of freedom, the half-filling condition corresponds to the choice \(\mu=0\), for which the Fermi surface is given by \(k_{F}^{+}\) and by \(k_{F}^{-}\). This choice of chemical potential is of course non-generic, but it is the physically relevant one to describe neutral graphene. In proximity of the Fermi surface, the Bloch Hamiltonian can be approximated by the expression
\[\hat{H}(k^{\prime}+k_{F}^{\omega})=-\frac{3t}{2}\begin{pmatrix}0&ik_{1}^{\prime}-\omega k_{2}^{\prime}\\ -ik_{1}^{\prime}-\omega k_{2}^{\prime}&0\end{pmatrix}+O(|k^{\prime}|^{2})\,,\qquad\omega=\pm\,, \tag{5.6}\]
which mimics, at low energy, the Hamiltonian of \(2+1\) dimensional massless Dirac fermions. We shall denote by \(\langle\cdot\rangle_{\beta,L}\equiv\langle\cdot\rangle_{\beta,0,L}\) the corresponding Gibbs state.
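Before discussing correlations, it is easy to verify numerically that the two bands of (5.3)-(5.4) indeed touch exactly at the Fermi points (5.5); a minimal sketch, with the illustrative choice \(t=1\), is:

```python
# Minimal check that the honeycomb bands +-t|Omega(k)| touch at k_F^{+-}.
import numpy as np

t = 1.0
omega = lambda k: 1 + np.exp(-1j * k[0]) + np.exp(-1j * k[1])  # Eq. (5.4)

def bands(k):
    # diagonalize the Bloch Hamiltonian (5.3) at quasi-momentum k
    h = -t * np.array([[0, omega(k)], [np.conj(omega(k)), 0]])
    return np.linalg.eigvalsh(h)

kF_plus = (2 * np.pi / 3, 4 * np.pi / 3)
kF_minus = (4 * np.pi / 3, 2 * np.pi / 3)
print(bands(kF_plus), bands(kF_minus))   # both ~ [0, 0]: band touching at (5.5)
print(bands((0.0, 0.0)))                 # away from k_F^{+-} the bands are separated
```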
In the absence of interactions, the Gibbs state of the system, and more generally the Euclidean correlation functions of the system, are completely specified by the two-point function,
\[\langle\mathbf{T}\,\gamma_{t}(a_{\mathbf{x}}^{+})\gamma_{s}(a_{\mathbf{y}}^{- })\rangle_{\beta,L}^{0}\qquad t,s\in[0,\beta)\,, \tag{5.7}\]
with \(\mathbf{T}\) the fermionic imaginary-time ordering, and where \(\mathbf{x},\mathbf{y}\in\Lambda_{L}^{A}\times\{A,B\}\) label points on the honeycomb lattice, understood as points on a decorated triangular lattice. In the absence of interactions the Gibbs state is quasi-free, and hence higher-order, time-ordered Euclidean correlations can be computed from (5.7) by application of the fermionic Wick rule. Let us focus on the ground state properties of the system, in the infinite volume limit. We denote by \(\langle\cdot\rangle_{\infty}^{0}=\lim_{\beta\to\infty}\lim_{L\to\infty}\langle\cdot\rangle_{\beta,L}^{0}\) the (thermal) ground state average for the non-interacting system. Due to the absence of a spectral gap at the Fermi level, it turns out that the ground state correlation functions have a non-integrable space-time decay:
\[\left|\langle\mathbf{T}\,\gamma_{t}(a_{\mathbf{x}}^{+})\gamma_{s}(a_{\mathbf{y}}^{-})\rangle_{\infty}^{0}\right|\sim\frac{1}{|x-y|^{2}+|t-s|^{2}}\,. \tag{5.8}\]
This asymptotic behavior can be easily checked from an explicit computation, starting from the Bloch Hamiltonian (5.3).
Let us now focus on many-body models for graphene. The same qualitative algebraic decay of correlations can be proved for weakly interacting graphene [58], with Hamiltonian \(\mathcal{H}=\mathcal{H}_{0}+\lambda\mathcal{V}\), where \(\mathcal{H}_{0}\) is the second quantization of the lattice Laplacian, \(\mathcal{V}\) is a short-ranged many-body Hamiltonian, and \(|\lambda|\) is small. The analysis of [58] is based on rigorous renormalization group methods, and on the convergence of the fermionic cluster expansion. The universality of the critical exponents of graphene in the presence of weak short-range interactions ultimately relies on the fact that short-range interactions are irrelevant in the renormalization group sense for this class of systems. Nevertheless, many-body interactions do affect the large-scale behavior of the system, in the sense of a non-trivial wave function renormalization and a non-trivial renormalization of the Fermi velocity.
The slow decay of correlations has remarkable consequences for the transport properties of the system. Let \(\langle\cdot\rangle_{\beta,L}\) be the Gibbs state of weakly interacting graphene. We shall focus on the conductivity matrix of the system, following [59]. Let \(\sigma_{ij}^{\beta,L}(\eta_{\beta})\) be the conductivity matrix of the system (4.11) after Wick rotation:
\[\sigma_{ij}^{\beta,L}(\eta_{\beta})=\frac{1}{\eta_{\beta}L^{2}A}\left[i\Delta_{ij}^{\beta,L}+\int_{-\beta/2}^{\beta/2}ds\,e^{-i\eta_{\beta}s}\langle\mathbf{T}\,\mathcal{J}_{i}\,;\gamma_{s}(\mathcal{J}_{j})\rangle_{\beta,L}\right]\,, \tag{5.9}\]
where \(A=3\sqrt{3}/2\) is the area of the lattice fundamental cell. Let \(\sigma_{ij}(\eta)=\lim_{\beta\to\infty}\lim_{L\to\infty}\sigma_{ij}^{\beta,L} (\eta_{\beta})\) be the ground-state conductivity matrix. Lattice Ward identities, following from the conservation of the lattice current (4.8), can be used to express \(\sigma_{ij}(\eta)\) in terms of the Euclidean current-current correlation function [59]:
\[\sigma_{ij}(\eta)=\lim_{\beta,L\to\infty}\frac{1}{\eta_{\beta}L^{2}A}\int_{-\frac{\beta}{2}}^{\frac{\beta}{2}}ds\,\Big{(}e^{-i\eta_{\beta}s}-1\Big{)}\langle\mathbf{T}\,\mathcal{J}_{i}\,;\gamma_{s}(\mathcal{J}_{j})\rangle_{\beta,L}\,. \tag{5.10}\]
The expression (5.10) is the starting point of a renormalization group analysis of the conductivity matrix of graphene. A key role in the analysis is played by the function:
\[K_{ij}(\eta)=\lim_{\beta,L\to\infty}\frac{1}{L^{2}A}\int_{-\frac{\beta}{2}}^{ \frac{\beta}{2}}ds\,e^{-i\eta_{\beta}s}\langle\mathbf{T}\mathcal{J}_{i}\,; \gamma_{s}(\mathcal{J}_{j})\rangle_{\beta,L} \tag{5.11}\]
in terms of which we can rewrite (5.10) as:
\[\sigma_{ij}=\lim_{\eta\to 0^{+}}\frac{1}{\eta}\Big{(}K_{ij}(\eta)-K_{ij}(0)\Big{)}\,. \tag{5.12}\]
The off-diagonal part of the conductivity matrix turns out to be zero, by time-reversal symmetry [59]. Instead, the diagonal part of the conductivity matrix, called the longitudinal conductivity, is expected to be non-zero: this is related to the presence of a non-zero value of the minimal conductivity in graphene [101]. Remarkably, as measured in [101], the minimal longitudinal conductivity of graphene appears to be universal:
\[\sigma_{ii}=\frac{e^{2}}{h}\frac{\pi}{4}\,, \tag{5.13}\]
or equivalently \(\sigma_{ii}=1/8\) in units such that \(e^{2}=\hbar=c=1\). Thus, up to very good precision, the longitudinal conductivity appears to depend only on fundamental constants.
Concerning theoretical predictions, for non-interacting systems the value (5.13) can be obtained via an explicit computation [112], starting from the model (5.3). Remarkably, the expression found in [112] agrees with the analogous quantity computed for \(2+1\) dimensional massless Dirac fermions [111].
The main challenge is thus to understand the universality of \(\sigma_{ii}\) for many-body models describing interacting graphene. This has been achieved in [59]; let us sketch the strategy of the proof. Coming back to Eq. (5.12), this expression shows that if the function \(K_{ij}(\eta)\) were differentiable at \(\eta=0\), then \(\sigma_{ij}\) would simply be the derivative of \(K_{ij}(\eta)\) at \(\eta=0\). At the same time, it is not difficult to see that \(K_{ii}(\eta)\) is even in \(\eta\)[59]. Thus, this symmetry combined with differentiability would imply a vanishing longitudinal conductivity, in contradiction with the experimental observation [101].
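Schematically (our illustration, not a formula taken from [59]): if \(K_{ii}\) were even and differentiable at the origin, then

\[K_{ii}(\eta)=K_{ii}(-\eta)\;\Longrightarrow\;K_{ii}^{\prime}(0)=0\;\Longrightarrow\;\sigma_{ii}=\lim_{\eta\to 0^{+}}\frac{K_{ii}(\eta)-K_{ii}(0)}{\eta}=0\,,\]

whereas an even, continuous but non-differentiable behavior \(K_{ii}(\eta)=K_{ii}(0)+\alpha|\eta|+o(\eta)\) is perfectly compatible with a non-trivial conductivity, \(\sigma_{ii}=\alpha\).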
The key point is that, due to the slow decay of correlations, already visible in the absence of interactions (5.8), the function \(K_{ij}(\eta)\) is continuous but not differentiable at zero. A careful analysis of the renormalized expansion for the current-current correlations of weakly interacting graphene allows one to isolate the singular contribution to \(K_{ii}(\eta)\) from its regular part:
\[K_{ii}(\eta)=K_{ii}^{\rm Dirac}(\eta)+K_{ii}^{\rm R}(\eta)\,, \tag{5.14}\]
where \(K_{ii}^{\rm Dirac}\) is the quantum field theory analogue of (5.11), associated with the integration over a non-interacting \(2+1\) dimensional massless Dirac field, with renormalized Fermi velocity and wave function, and with a fixed ultraviolet cutoff. Instead, the remainder term \(K_{ii}^{\rm R}(\eta)\) is much less explicit, and it collects irrelevant contributions generated in the renormalization group analysis, such as those taking into account the non-linearity of the energy band away from the Fermi points. Crucially, the term \(K_{ii}^{\rm R}(\eta)\) turns out to be differentiable at \(\eta=0\). By inspection, \(K_{ii}^{\rm Dirac}(\eta)\) is even in \(\eta\), which ultimately implies that \(K_{ii}^{\rm R}(\eta)\) is also even, and hence, by its improved regularity, it does not contribute to \(\sigma_{ii}\). We thus find:
\[\sigma_{ii}=\lim_{\eta\to 0^{+}}\frac{1}{\eta}\left(K_{ii}^{\rm Dirac}(\eta)-K_{ii}^{\rm Dirac}(0)\right)\,. \tag{5.15}\]
The right-hand side of (5.15) is now amenable to an explicit computation. The remaining interaction dependence, due to the finite renormalizations of the Dirac propagator, turns out to cancel out exactly, which allows one to prove the validity of (5.13) for weakly interacting graphene [59].
### Weyl semimetals
Weyl semimetals are three-dimensional electron systems, whose low energy properties are well described by \(3+1\) dimensional massless Weyl fermions, in the same way in which massless Dirac fermions emerge in graphene.
The simplest setting one can use to model Weyl semimetals is that of non-interacting, three-dimensional lattice fermions on a bipartite lattice, with Bloch Hamiltonian \(\hat{H}(k)\) on \(\mathbb{T}^{3}\), such that
\[\hat{H}(k^{\prime}+k_{F}^{\alpha})=v_{1}\sigma_{1}k_{1}^{\prime}+v_{2}\sigma_{ 2}k_{2}^{\prime}+\omega v_{3}\sigma_{3}k_{3}^{\prime}+O(|k^{\prime}|^{2})\, \tag{5.16}\]
where \(k_{F}^{\omega}=(0,0,\omega k_{F,3})\) with \(\omega=\pm\). The points \(k_{F}^{\pm}\) are called Weyl points. In Eq. (5.16), \(\sigma_{i}\), \(i=1,2,3\), are the standard Pauli matrices, and the parameters \(v_{i}\) play the role of emergent velocities for the low-energy excitations. A concrete example of a microscopic lattice model with Bloch Hamiltonian satisfying Eq. (5.16) has been introduced in [42]. Eq. (5.16) simulates a relativistic Hamiltonian at energies in the proximity of \(\mu=0\). The two Weyl points introduce two effective chiralities for the low energy excitations around \(\mu=0\), while the two sublattices play the role of spin degrees of freedom for the emergent relativistic field.
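As a concrete illustration of (5.16) (a hypothetical minimal model, not necessarily the Hamiltonian of [42]), the two-band Bloch Hamiltonian \(\hat{H}(k)=\sin k_{1}\,\sigma_{1}+\sin k_{2}\,\sigma_{2}+(\cos k_{F,3}-\cos k_{3}+2-\cos k_{1}-\cos k_{2})\,\sigma_{3}\) is gapless exactly at the two Weyl points \(k_{F}^{\pm}=(0,0,\pm k_{F,3})\), with \(v_{1}=v_{2}=1\) and \(v_{3}=\sin k_{F,3}\). A minimal numerical check:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0 + 0j, -1.0])
kF = np.pi / 2                        # position of the Weyl nodes (illustrative choice)

def H(k):
    """Two-band lattice Hamiltonian with Weyl nodes at (0, 0, +/- kF)."""
    m3 = np.cos(kF) - np.cos(k[2]) + 2 - np.cos(k[0]) - np.cos(k[1])
    return np.sin(k[0]) * s1 + np.sin(k[1]) * s2 + m3 * s3

def gap(k):
    E = np.linalg.eigvalsh(H(k))
    return E[1] - E[0]

for w in (+1, -1):                    # the two chiralities
    node = np.array([0.0, 0.0, w * kF])
    q = 1e-3
    print("gap at node:", round(gap(node), 12),                      # -> 0
          "| slope along k3:", gap(node + np.array([0, 0, q])) / (2 * q))  # -> sin(kF)
```

Away from the two nodes the spectrum of this model is gapped, so that the low-energy physics is entirely carried by the two emergent Weyl fermions.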
One of the most peculiar phenomena of QED in \(3+1\) dimensions is the presence of the chiral anomaly. Consider the Lagrangian of QED for a massless field \(\psi\) coupled with a gauge field \(A_{\mu}\),
\[\mathcal{L}(\psi,A)=\int dx\,\overline{\psi}_{x}\gamma^{\mu}(i\partial_{\mu}-A _{\mu,x})\psi_{x}. \tag{5.17}\]
At the classical level, the invariance of the Lagrangian under \(U(1)\) gauge transformations and under chiral gauge transformations implies the following conservation laws, by Noether's theorem:
\[\partial_{\mu}j_{\mu}=0\,,\qquad\partial_{\mu}j_{\mu}^{5}=0\,, \tag{5.18}\]
with \(j_{\mu}=\overline{\psi}\gamma_{\mu}\psi\) the current and \(j_{\mu}^{5}=\overline{\psi}\gamma_{\mu}\gamma_{5}\psi\) the chiral current. It is well-known that these conservation laws might be broken in the quantization of the gauge theory, because of the unavoidable presence of cutoffs; these are needed to make sense of the theory, and might break the gauge symmetries of the model. The typical question is whether the conservation laws are restored after removing the cutoffs. As discovered in [3, 19], this is not the case for the conservation of the chiral current. One has, in momentum space, and in units such that \(e^{2}=\hbar=c=1\):
\[\begin{split}&(p_{1,\mu}+p_{2,\mu})\langle j_{\mu,p_{1}+p_{2}} ^{5}\,;j_{\nu,-p_{1}}\,;j_{\sigma,-p_{2}}\rangle^{QED}\\ &\qquad=-\frac{1}{2\pi^{2}}\varepsilon_{\alpha\beta\nu\sigma}p_{1,\alpha}p_{2,\beta}\end{split} \tag{5.19}\]
where \(\varepsilon\) is the totally antisymmetric tensor, and \(\langle\cdot\rangle^{QED}\) denotes the average taken over the QED path integral, expressed in terms of a (formal) sum over connected Feynman diagrams, after renormalization. The fact that the right-hand side of (5.19) is nonzero implies that the classical conservation law of the chiral current in (5.18) does not survive quantization: it is an example of an anomaly in quantum field theory. The right-hand side of (5.19) takes an extremely simple form, and it is manifestly not affected by radiative corrections. This cancellation was first discussed, order by order in perturbation theory, in [4], showing that the only contribution to the right-hand side of (5.19) arises from the so-called 'triangle graph'. The physical consequence of the chiral anomaly (5.19) is the decay of the neutral pion into photons.
In the condensed matter setting, the counterpart of the chiral anomaly (5.19) was first discussed in [103]. There, it was predicted that, for three-dimensional lattice models displaying the relativistic structure (5.16), a condensed matter counterpart of the chiral anomaly in QED should take place, after coupling the system to an external time-dependent gauge field. Calling \(\dot{N}_{+}\) and \(\dot{N}_{-}\) the time variations of the charge around the Fermi points with \(\omega=+\), resp. \(\omega=-\), the prediction of [103] is that:
\[\dot{N}_{+}-\dot{N}_{-}=\frac{1}{2\pi^{2}}E\cdot B\,, \tag{5.20}\]
where \(E\) and \(B\) are the electric and magnetic fields associated with the external gauge field. Eq. (5.20) has to be understood as a steady charge flow in momentum space from one Weyl node to the other, which mimics the creation and annihilation of particles with opposite chiralities in QED. In the condensed matter setting, this phenomenon is made possible by the fact that the two Weyl nodes are connected via the energy bands. This transport phenomenon has been experimentally observed in recent years; we refer the reader to [8] for a review of Weyl semimetals and their transport properties.
We shall now describe the mathematical analysis of the phenomenon, following [63]. We shall consider weakly interacting Weyl semimetals, with Hamiltonian \(\mathcal{H}=\mathcal{H}_{0}+\lambda\mathcal{V}\), where \(\mathcal{H}_{0}\) is the second quantization of a single-particle Hamiltonian with the properties (5.16), and \(\mathcal{V}\) is a short-ranged many-body interaction. We denote by \(\langle\cdot\rangle_{\beta,L}\) the Gibbs state of such Hamiltonian, in the presence of a suitably chosen staggered chemical potential that ensures the existence of two Weyl nodes for the interacting system as well.
The first non-trivial issue is to identify an operator that captures the chiral charge transfer. This is nontrivial, because the charge of the emergent Weyl fermions cannot be defined exactly using local operators. Nevertheless, one can identify operators whose correlation functions behave in the desired way, at large scales. For instance, we consider:
\[n_{\mathbf{x}}^{5}=\frac{iZ^{5}}{2}\big{(}a_{\mathbf{x}}^{\star}\,a_{\mathbf{ x}+e_{3}}-a_{\mathbf{x}+e_{3}}^{\star}a_{\mathbf{x}}\big{)}\,, \tag{5.21}\]
with \(e_{3}=(0,0,1)\). The parameter \(Z^{5}\) is a suitable normalization, that has to be fixed consistently with the renormalization of the usual charge. Observe that, setting \(N^{5}=\sum_{\mathbf{x}}n_{\mathbf{x}}^{5}\),
\[N^{5}=Z^{5}\!\int\frac{dk}{(2\pi)^{3}}\sin k_{3}\,\hat{n}_{k}\,, \tag{5.22}\]
with \(\hat{n}_{k}\) the density operator in momentum space. In the integral (5.22), thanks to the presence of \(\sin k_{3}\), the contribution due to the momenta close to \(k_{F}^{\pm}\) mimics the difference of charges with opposite chiralities. The parameter \(Z^{5}\) is fixed by imposing the equality of the chiral vertex function with the charge vertex function, at low energy:
\[\langle\hat{n}_{p}^{5};\hat{a}_{k+p};\hat{a}_{k}^{\star}\rangle_{\infty}= \omega\langle\hat{n}_{p}\,;\hat{a}_{k+p}\,;\hat{a}_{k}^{\star}\rangle_{\infty} (1+o(1))\,, \tag{5.23}\]
where \(o(1)\) is a quantity that vanishes for \(k\to k_{F}^{\omega}\) and \(p\to 0\).
We couple the Hamiltonian \(\mathcal{H}\) to an external gauge potential \(e^{\eta t}\,A_{\mu,x}\), via the Peierls' substitution (4.9), with \(\eta>0\) and \(t\leq 0\), and we denote
by \(\mathcal{H}(\eta t)\) the corresponding time-dependent Hamiltonian. We are interested in the variation, at quadratic order in the gauge field, of the expectation value of \(N^{5}\) under the time evolution. Notice that, since the observable \(N^{5}\) is nonlocal, it also couples to the gauge field via the introduction of a Wilson line, similarly to (4.9).
Let \(\rho(t)\) be the solution of the Schrödinger equation (3.2). We consider the quadratic response,
\[\begin{split}&\left[\partial_{t}\operatorname{Tr}N^{5}(A(t)) \rho(t)\right]^{(2)}\\ &=\int\frac{dp}{(2\pi)^{3}}\hat{A}_{\mu,p}\hat{A}_{\nu,-p}\eta \Gamma^{5;\beta,L}_{\mu,\nu}((\eta,p),(\eta,-p))\,\end{split} \tag{5.24}\]
and the goal is to compute the response function \(\Gamma^{5}_{\mu,\nu}\). The result of [63] is that, for \(|\lambda|\) small enough, setting \(\Gamma^{5}=\lim_{\beta\to\infty}\lim_{L\to\infty}\Gamma^{5;\beta,L}\):
\[\begin{split}& 2\eta\Gamma^{5}_{\mu,\nu}((\eta,p),(\eta,-p))\\ &=-\frac{1}{2\pi^{2}}p_{1,\alpha}p_{2,\beta}\varepsilon_{\alpha \beta\mu\nu}+o(|p_{i}|^{2})\end{split} \tag{5.25}\]
where \(p_{1}=(\eta,p)\) and \(p_{2}=(\eta,-p)\) are Euclidean momenta with four components. Eq. (5.25), combined with (5.24), agrees with the prediction (5.20), with the same prefactor. As for the universality of graphene's conductivity, the method of proof relies on rigorous renormalization group methods, allowing us to isolate the scaling limit contribution to \(\Gamma^{5}_{\mu,\nu}\), which can be computed explicitly in terms of the renormalized triangle graph, from a remainder term, which is not explicit but enjoys better regularity properties. Universality follows from the combination of lattice Ward identities, due to the conservation of the lattice current, with emergent Ward identities, valid for the correlation functions of QED. Eq. (5.25) can be viewed as a rigorous version of the non-renormalization mechanism of Adler and Bardeen [4] for QED, for a short-ranged lattice model. Remarkably, the non-renormalization of the lattice counterpart of the chiral anomaly survives the breaking of rotation symmetry due to the lattice. The proof relies on lattice conservation laws, and holds at a nonperturbative level.
## 6 Transport in \(1d\) and quasi-\(1d\) metallic systems
In the previous section we discussed condensed matter systems whose Fermi surface is formed by isolated points, around which the energy dispersion relation displays a typical 'relativistic' shape. The Gibbs state in the presence of weak many-body interactions can be constructed rigorously, via renormalization group methods. For these models, many-body interactions of order four and higher turn out to be irrelevant in the renormalization group sense, and the large scale behavior of the system is effectively described in terms of a non-interacting relativistic field, whose covariance is renormalized by the integration of the degrees of freedom on smaller scales.
Here we discuss the transport properties of one-dimensional lattice models, where the low energy excitations are again described by emergent relativistic fields, but where the quartic many-body interaction turns out to be marginal in the renormalization group sense. This means that the large scale behavior of the system is described by an interacting quantum field theory. The analysis of the emergent quantum field theory is facilitated by its special integrability features, which are typical of the one-dimensional world. In the following, we will focus on the response functions of one-dimensional quantum systems, and of the edge currents of two-dimensional quantum Hall systems.
### Interacting \(1\,d\) systems
Let us consider interacting spinless fermions on \(\Lambda_{L}=[-L/2,L/2]\cap\mathbb{Z}\) with periodic boundary conditions. The many-body Hamiltonian is
\(\mathcal{H}_{0}+\lambda\mathcal{V}\) with:
\[\begin{split}\mathcal{H}_{0}&=-t\sum_{x\in\Lambda_{L} }(a_{x}^{*}a_{x+1}+a_{x+1}^{*}a_{x})\\ \mathcal{V}&=\sum_{x,y\in\Lambda_{L}}a_{x}^{*}a_{x} \nu(x-y)a_{y}^{*}a_{y}\,,\end{split} \tag{6.1}\]
with \(\nu(x-y)\) finite-ranged and \(t>0\). The Gibbs state of the model is denoted by \(\langle\cdot\rangle_{\beta,\mu,L}\), where \(\mu\) is chosen within the spectrum of the infinite volume lattice Laplacian, \(\mu\in(-2t,2t)\). For the special choice of nearest-neighbour interactions, the Hamiltonian \(\mathcal{H}\) is also related to the Hamiltonian of the XXZ quantum spin chain, via the Jordan-Wigner transformation, and it is therefore integrable via Bethe ansatz. For generic choices of the interaction potential, however, the model is not integrable. Unless otherwise stated, in the following we will not assume any special integrability property.
We shall focus on the transport properties of the density and the current operator, defined as:
\[n_{x}=a_{x}^{*}a_{x}\,,\qquad j_{x}=-it(a_{x}^{*}a_{x+1}-a_{x+1}^{*}a_{x})\,, \tag{6.2}\]
related by the lattice continuity equation
\[\partial_{t}\tau_{t}(n_{x})+\mathrm{d}_{x}\tau_{t}(j_{x})=0\,, \tag{6.3}\]
with \(\mathrm{d}_{x}\) the discrete derivative, \(\mathrm{d}_{x}f(x)=f(x)-f(x-1)\).
We will be interested in the variation of the density and of the current operators, after introducing a slowly varying external potential \(e^{\eta t}\mu(x)\), or a slowly varying electromagnetic field generated by an external vector potential \(e^{\eta t}A_{x}\), compatible with the periodicity of \(\Lambda_{L}\). That is, the quantities of interest will be:
\[\operatorname{Tr}n_{x}\rho(t)\,,\qquad\operatorname{Tr}j_{x}\rho(t)\,, \tag{6.4}\]
where \(\rho(t)\) is the solution of the Schrödinger-von Neumann equation, with time-dependent Hamiltonian:
\[\begin{split}\mathcal{H}(\eta t)&=\mathcal{H}+ \varepsilon e^{\eta t}\sum_{x\in\Lambda_{L}}\mu(x)n_{x}\quad\text{(for the density)}\\ \mathcal{H}(\eta t)&=\mathcal{H}_{0}(A_{t})+\lambda \mathcal{V}\quad\text{(for the current)}\,,\end{split} \tag{6.5}\]
where \(\mathcal{H}_{0}(A_{t})\) is the free Hamiltonian coupled with an external vector potential \(e^{\eta t}A_{x}\), via the Peierls substitution (4.9). We define the susceptibility \(\kappa_{\beta,L}(\eta,p)\) and the Drude weight \(D_{\beta,L}(\eta,p)\) as the linear response of the average density and of the average current:
\[\begin{split}\kappa_{\beta,L}&(\eta,p)\\ &=\frac{i}{L}\int_{-\infty}^{0}dt\,e^{\eta t}\langle[\hat{\rho}_{ -p},\tau_{t}(\hat{\rho}_{p})]\rangle_{\beta,\mu,L}\\ D_{\beta,L}&(\eta,p)\\ &=\frac{-i}{L}\Bigl{(}\int_{-\infty}^{0}dt\,e^{\eta t}\langle[ \hat{j}_{-p},\tau_{t}(\hat{j}_{p})]\rangle_{\beta,\mu,L}+\Delta_{\beta,L} \Bigr{)}\,,\end{split} \tag{6.6}\]
with \(\Delta^{\beta,L}=\langle[\mathcal{J},\mathcal{K}]\rangle_{\beta,\mu,L}\) (compare with Eq. (4.11)). In terms of these quantities, one can write the linear response of the current and of the density in (6.4) as:
\[\begin{split}\operatorname{Tr}\,n_{0}\rho(0)&=\frac{1 }{L}\sum_{p}\hat{\mu}(p)\kappa_{\beta,L}(\eta,p)+O(\mu^{2})\\ \operatorname{Tr}\,j_{0}\rho(0)&=\frac{1}{L}\sum_{p} \hat{A}_{p}D_{\beta,L}(\eta,p)+O(A^{2})\,.\end{split} \tag{6.7}\]
The Drude weight is related to the conductivity of the model, defined as \(\sigma_{\beta,L}(\eta,p)=(1/\eta)D_{\beta,L}(\eta,p)\). Thus, a finite Drude weight as \(\eta\to 0^{+}\) implies a divergent conductivity, _i.e._ metallic behavior. Several alternative definitions of the Drude weight exist; see [92] for a review. We shall refer to the expression in (6.6) as the canonical Drude weight.
As mentioned above, for nearest-neighbour interactions the Hamiltonian (6.1) can be mapped to the XXZ model, which is exactly solvable via Bethe ansatz [118]. As a consequence, in this special case the Drude weight and the susceptibility can be explicitly computed at zero temperature. Much less is known at positive temperature. There, the Drude weight of the XXZ chain can be proved to be bounded below by a nonzero quantity [119] via the Mazur bound [95]. More generally, it has been proposed that
the Drude weight of many-body one-dimensional systems is nonzero or zero depending on whether the model is integrable or not; we refer the reader to [32] for a review of this topic.
The question addressed here, which is amenable to a rigorous analysis, is about the computation of the Drude weight and the susceptibility at zero temperature, for non-integrable many-body quantum systems. Heuristic insight on the value of such transport coefficients can be gained from the comparison of the original lattice model (6.1) with its scaling limit, the Luttinger model [87]. The Hamiltonian of this one-dimensional model is:
\[\begin{split}\mathcal{H}_{\mathrm{L}}=&\sum_{ \omega=\pm}\int dx\,\overline{\psi}_{\omega,x}i\,c_{\omega}\partial_{x}\psi_ {\omega,x}\\ &+\sum_{\omega}\int dxdy\,\overline{\psi}_{+,x}\psi_{+,x}w(x-y) \,\overline{\psi}_{-,y}\psi_{-,y}\end{split} \tag{6.8}\]
where \(\overline{\psi}\), \(\psi\) are conjugate fermionic fields, with chirality \(\omega=\pm\), and velocities \(c_{\omega}=\omega c\), and where \(w(\cdot)\) is a smooth and short-ranged potential. This model arises from the linearization of the dispersion relation \(\varepsilon(k)=-2t\cos(k)\) around the two Fermi points \(k_{F}^{\pm}\), which are solutions of the equation \(\mu=-2t\cos(k_{F}^{\omega})\). It is well known that the Luttinger model can be solved by bosonization [94]: it can be mapped into a non-interacting bosonic quantum field theory, and the main effect of the interaction is to remove the discontinuity in the occupation number of the fermionic modes in the ground state. More generally, the many-body interaction gives rise to the appearance of interaction-dependent anomalous exponents in the scaling of the correlation functions. Notice that with respect to the original Luttinger model, here we are considering a many-body interaction that is smooth and slightly nonlocal. This does not affect the infrared properties of the model, and introduces an ultraviolet regularization at small scales, which is particularly convenient in the renormalization group analysis.
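Explicitly, the linearization behind (6.8) is a one-line computation from the stated dispersion relation (using \(\mu=-2t\cos k_{F}\) and \(k_{F}^{\omega}=\omega k_{F}\)):

\[\varepsilon(k_{F}^{\omega}+k^{\prime})-\mu=-2t\cos(k_{F}^{\omega}+k^{\prime})+2t\cos k_{F}^{\omega}=2t\sin(k_{F}^{\omega})\,k^{\prime}+O(k^{\prime 2})=\omega\,2t\sin(k_{F})\,k^{\prime}+O(k^{\prime 2})\,,\]

so that, at zeroth order in \(\lambda\), the velocities in (6.8) are \(c_{\omega}=\omega c\) with \(c=2t\sin k_{F}\).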
The Luttinger model plays a key role in the renormalization group analysis of the original lattice model (6.1). It captures the large-scale behavior of the correlation functions of the system, and its exact solvability allows one to compute the critical exponents of the lattice model even in the absence of integrability; see [88] for a review of the results and of the renormalization group analysis. In particular, the Euclidean two-point function of the model (6.1) is given by, at zero temperature and in the infinite volume limit [23]:
\[\begin{split}\langle\mathbf{T}\,a_{0}\gamma_{t}(a_{x}^{*})\rangle_{\infty}&=g_{0}(t,x)\,\frac{1+O(\lambda)}{(t^{2}+\nu^{2}x^{2})^{\frac{\eta}{2}}}+R(x,t)\\ g_{0}(t,x)&=\sum_{\omega=\pm}\frac{e^{i\omega k_{F}x}}{-i\,t+\omega\,\nu x}\,,\end{split} \tag{6.9}\]
where: \(\nu\equiv\nu(\lambda)\) is the interacting Fermi velocity, given at lowest order in \(\lambda\) by the slope of the energy-dispersion relation at the Fermi level \(\mu\); \(R(x,t)\) is a faster-decaying error term as \(t\to\infty\), \(x\to\infty\); and \(\eta=a_{0}\lambda^{2}+O(\lambda^{3})\) with \(a_{0}>0\) is the anomalous exponent of the two-point function, analytic in \(\lambda\) for \(|\lambda|\) sufficiently small (the exact solution of the Luttinger model [94] provides the optimal existence and analyticity interval).
Renormalization group methods have been used to compute transport coefficients of \(1d\) chains, in particular the susceptibility and the Drude weight [30, 31]. The starting point of the papers [30, 31] is the formulation of these transport coefficients in terms of Euclidean correlation functions, via the use of a Wick rotation, which can be rigorously justified [92]. Let:
\[\begin{split} D(\eta,p)&=\lim_{\beta\to\infty}\lim_ {L\to\infty}D_{\beta,L}(\eta,p)\\ \kappa(\eta,p)&=\lim_{\beta\to\infty}\lim_{L\to\infty} \kappa_{\beta,L}(\eta,p)\.\end{split} \tag{6.10}\]
Then, as proved in [30, 31, 92, 90]:
\[\begin{split} D(\eta,p)&=\frac{\nu K}{\pi}\frac{ \eta^{2}}{\eta^{2}+\nu^{2}p^{2}}+R_{D}(\eta,p)\\ \kappa(\eta,p)&=\frac{K}{\pi\nu}\frac{\nu^{2}p^{2}}{ \eta^{2}+\nu^{2}p^{2}}+R_{\kappa}(\eta,p)\,\end{split} \tag{6.11}\]
where \(K\) is the Luttinger parameter, related to the anomalous exponent of the two-point function by the formula \(2\eta=K+K^{-1}-2\), while the error terms \(R_{D},R_{\kappa}\) are continuous in a neighbourhood of \((0,0)\) and vanish in the limit \((\eta,p)\to(0,0)\). In particular, setting
\[D=\lim_{\eta\to 0^{+}}\lim_{p\to 0}D(\eta,p)\,,\quad\kappa=\lim_{p\to 0}\lim_{ \eta\to 0^{+}}\kappa(\eta,p)\,, \tag{6.12}\]
we obtain the following remarkable identity between Drude weight, susceptibility and Fermi velocity:
\[\frac{D}{\kappa}=\nu^{2}\,. \tag{6.13}\]
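The identity follows directly from (6.11), together with the opposite orders of limits in (6.12): the error terms vanish at \((0,0)\), while the explicit prefactors select either \(\eta^{2}/(\eta^{2}+\nu^{2}p^{2})\to 1\) or \(\nu^{2}p^{2}/(\eta^{2}+\nu^{2}p^{2})\to 1\),

\[D=\lim_{\eta\to 0^{+}}\lim_{p\to 0}D(\eta,p)=\frac{\nu K}{\pi}\,,\qquad\kappa=\lim_{p\to 0}\lim_{\eta\to 0^{+}}\kappa(\eta,p)=\frac{K}{\pi\nu}\,,\]

whose ratio gives (6.13), even though \(D\), \(\kappa\) and \(\nu\) are separately non-universal.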
The relation (6.13) was first predicted to hold by Haldane in [67] on the basis of non-rigorous bosonization arguments, and it has been rigorously established in [26, 27, 30, 31]. The identity (6.13) relates three non-universal quantities for non-integrable \(1d\) systems in an exact way, and it is one of the defining properties of the Luttinger liquid universality class [67, 68]. Recently, a similar 'Haldane relation' has been discovered in the completely different setting of an interacting dimer model in classical statistical mechanics [64], which turns out to belong to the Luttinger liquid universality class. In the context of classical statistical mechanics models, other remarkable identities for non-universal scaling exponents of non-solvable models belonging to the Luttinger liquid universality class have been proved in [25].
Finally, we conclude by mentioning that spinful fermions can also be studied, at the price of a considerably more involved renormalization group analysis [26, 27]. The main extra technical difficulty, solved in [26, 27], is that the presence of the spin introduces further quartic marginal terms in the effective quantum field theory description of the model. These terms cannot be easily described in terms of emergent bosonic modes, and their renormalization group flow must be controlled via direct inspection of the beta function.
### Edge modes of interacting quantum Hall systems
One-dimensional physics also arises at the boundary of two-dimensional topological insulators. The connection between the bulk topological order of insulating materials and the emergence of stable, metallic quasi-\(1d\) modes is the content of the bulk-edge duality [69], a central concept in the theory of topological materials.
To begin, let us consider non-interacting quantum Hall systems, in the half-plane \(\mathbb{Z}\times\mathbb{N}\). Let \(H\) be the single-particle Hamiltonian, a self-adjoint operator on \(\ell^{2}(\mathbb{Z}\times\mathbb{N};\mathbb{C}^{M})\). We impose Dirichlet boundary conditions at \(x_{2}=0\). We shall also view the operator \(H\) as the restriction to the half-plane of another Hamiltonian \(H_{\text{B}}\), defined on \(\mathbb{Z}^{2}\).
Let us suppose that \(H_{\text{B}}\) is gapped, and that the Fermi level \(\mu\) lies in a spectral gap. The bulk-edge duality establishes an identity between the value of the Hall conductivity of \(H_{\text{B}}\), also called the bulk Hall conductivity, and the emergence of edge modes for \(H\). For simplicity, let us further assume that both \(H\) and \(H_{\text{B}}\) are invariant under translations along the direction of the edge. Then, the Hamiltonians admit a partial Bloch decomposition,
\[H=\int_{\mathbb{T}^{1}}^{\oplus}\frac{dk}{2\pi}\,\hat{H}(k)\,,\qquad H_{\text{ B}}=\int_{\mathbb{T}^{1}}^{\oplus}\frac{dk}{2\pi}\,\hat{H}_{\text{B}}(k)\,. \tag{6.14}\]
Edge modes might appear for energies in the 'bulk gap', corresponding to the spectral gap of \(H_{\text{B}}\), and are associated with solutions of the Schrödinger equation for \(H\), localized in the proximity of \(x_{2}=0\):
\[\hat{H}(k)\varphi_{\omega}(k)=\varepsilon_{\omega}(k)\varphi_{\omega}(k)\,, \tag{6.15}\]
where \(\varphi_{\omega}(k)\equiv\varphi_{\omega}(k,x_{2})\) is exponentially decreasing in the bulk,
\[|\varphi_{\omega}(k,x_{2})|\leq Ce^{-cx_{2}}\,. \tag{6.16}\]
The decay rate \(c\) can be estimated in terms of the distance of \(\varepsilon_{\omega}(k)\) to the rest of the spectrum of \(\hat{H}(k)\).
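The appearance of exponentially localized in-gap modes is easy to exhibit numerically. The following sketch uses the Qi-Wu-Zhang model, a standard two-band Chern insulator chosen here purely for illustration (it is not any specific Hamiltonian from the works cited below), on a strip with Bloch momentum \(k\) along the edge and Dirichlet boundary conditions in \(x_{2}\):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0 + 0j, -1.0])

def strip_hamiltonian(k, m=-1.0, N=40):
    """Qi-Wu-Zhang model: Bloch in x1, open boundaries at x2 = 0 and x2 = N - 1."""
    onsite = np.sin(k) * s1 + (m + np.cos(k)) * s3   # k-dependent on-site block
    hop = 0.5 * (s3 - 1j * s2)                       # hopping block x2 -> x2 + 1
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for y in range(N):
        H[2 * y:2 * y + 2, 2 * y:2 * y + 2] = onsite
        if y + 1 < N:
            H[2 * (y + 1):2 * (y + 1) + 2, 2 * y:2 * y + 2] = hop
            H[2 * y:2 * y + 2, 2 * (y + 1):2 * (y + 1) + 2] = hop.conj().T
    return H

E, V = np.linalg.eigh(strip_hamiltonian(k=0.3))
i0 = np.argmin(np.abs(E))                                      # state closest to mu = 0
profile = (np.abs(V[:, i0]) ** 2).reshape(-1, 2).sum(axis=1)   # weight per x2-slice
print("energy:", round(float(E[i0]), 3))                       # inside the bulk gap
print("weight on the 5 outermost rows:",
      round(float(profile[:5].sum() + profile[-5:].sum()), 3))
```

For \(m=-1\) the bulk gap is of order one, while the state closest to the Fermi level lies inside the gap and carries almost all of its weight on a few rows next to one of the two edges, in agreement with (6.16).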
The edge modes become relevant for the zero-temperature transport properties of the system if the Fermi level intersects the edge modes dispersion relations \(\varepsilon_{\omega}(k)\) at some value of the quasi-momentum \(k\). We define the edge modes Fermi momenta as the solutions of:
\[\mu=\varepsilon_{\omega}(k_{F}^{\omega})\,. \tag{6.17}\]
Around the Fermi level, the energy-momentum dispersion relation of the edge modes can be linearized, \(\varepsilon_{\omega}(k^{\prime}+k_{F}^{\omega})-\mu=v_{\omega}k^{\prime}+o(k^ {\prime})\), which suggests that the low energy excitations are effectively described by massless relativistic particles with velocities \(v_{\omega}\), exponentially localized in proximity of the \(x_{2}=0\) edge, recall (6.16).
In this setting, the bulk-edge duality relates the value of the bulk Hall conductivity to the sum of the chiralities of the edge modes, defined as \(\chi_{\omega}=v_{\omega}/|v_{\omega}|\), which encode the directions of propagation of the edge currents. This duality is expressed by the following remarkable identity:
\[\sigma_{12}=\frac{1}{2\pi}\sum_{\omega}\chi_{\omega}\,. \tag{6.18}\]
As we shall see below, the right-hand side of (6.18) can also be understood as an edge response function, which describes the variation of the edge current after introducing a perturbation localized at the boundary.
The first proof of (6.18) has been given in [72]. It has then been extended to disordered systems whose Fermi energy lies in a spectral gap [47, 108], or in a mobility gap [48]. More recently, the bulk-edge duality has been extended to other classes of topological insulators, such as time-reversal invariant systems [65]; see [106] for extensions to other topological phases. Field theoretic methods for the classification of topological phases have been introduced in [54], which allow one to understand the bulk-edge duality in terms of anomaly cancellations between the Chern-Simons effective field theory for the bulk degrees of freedom, and the Luttinger liquid field theory arising on the boundary.
Let us now focus on rigorous results for many-body quantum systems. We shall consider weakly interacting fermions on the cylinder, namely on \(\Gamma_{L}=[0,L]^{2}\cap\mathbb{Z}^{2}\) with periodic boundary conditions in the horizontal direction and Dirichlet boundary conditions at \(x_{2}=0,L\). We shall allow for internal degrees of freedom, and we shall denote by \(\Lambda_{L}\) the decorated lattice; recall the discussion in Section 2. The points in \(\Lambda_{L}\) are denoted by \(\mathbf{x}=(x,\sigma)\), with \(x\in\Gamma_{L}\) the space coordinate and \(\sigma\in\{1,\ldots,M\}\) the color label. Let \(H\) be a single-particle Hamiltonian, displaying edge modes, in the sense of exponentially localized solutions of (6.15), at \(x_{2}=0\) and \(x_{2}=L\). As usual, we shall denote by \(\mathcal{H}_{0}\) the second quantization of the Hamiltonian, and we shall consider many-body Hamiltonians of the form \(\mathcal{H}=\mathcal{H}_{0}+\lambda\mathcal{V}\), with \(\mathcal{V}\) a short-ranged many-body interaction. We also suppose that the model is invariant under translations in the horizontal direction, which will allow us to perform a partial Bloch reduction.
We are interested in the edge transport properties of this model. Let \(j_{i,\mathbf{x}}\) be the current density, such that \(\mathcal{J}_{i}=\sum_{\mathbf{x}\in\Lambda_{L}}j_{i,\mathbf{x}}\), and satisfying the lattice continuity equation:
\[\partial_{t}\tau_{t}(n_{\mathbf{x}})+\sum_{i=1}^{2}\mathrm{d}_{i}\tau_{t}(j_{ i,\mathbf{x}})=0\,, \tag{6.19}\]
with \(\mathrm{d}_{i}f(x)=f(x)-f(x-e_{i})\) the discrete derivative in the \(i\)-th direction. We define the edge current \(\mathcal{J}_{x_{1}}^{\ell}\) as:
\[\mathcal{J}_{x_{1}}^{\ell}:=\sum_{\sigma}\sum_{x_{2}\leq\ell}j_{1,(x,\sigma)}\,. \tag{6.20}\]
This operator is associated with the charge current flowing in a strip of width \(\ell\ll L\), adjacent to the \(x_{2}=0\) boundary. In order to probe the response of this current, we introduce the time-dependent Hamiltonian:
\[\mathcal{H}(\eta t)=\mathcal{H}+e^{\eta t}\sum_{\mathbf{y}\in\Lambda_{L}}\mu(y) \,a_{\mathbf{y}}^{*}\,a_{\mathbf{y}} \tag{6.21}\]
where \(\mu(y)\equiv\mu(y_{1})\) for \(y_{2}\leq\ell^{\prime}\) and \(\mu(y)=0\) otherwise. We will be interested in the variation of the average edge current, at first order in the external perturbation, in the range of parameters \(1\ll\ell\ll\ell^{\prime}\ll L\).
We define the edge response function as:
\[G^{\ell,\ell^{\prime}}_{\beta,L}(\eta,p) :=-\frac{i}{L}\int_{-\infty}^{0}dt\,e^{\eta t}\Big{\langle}\Big{[}\tau_{t}\Big{(}\hat{n}_{-p}^{\ell^{\prime}}\Big{)},\hat{\mathcal{J}}_{p_{1}}^{\ell}\Big{]}\Big{\rangle}_{\beta,L}\] \[G^{\ell,\ell^{\prime}}(\eta,p) :=\lim_{\beta\to\infty}\lim_{L\to\infty}G^{\ell,\ell^{\prime}}_{\beta,L}(\eta,p)\,, \tag{6.22}\]
where \(\hat{n}_{p}^{\ell^{\prime}}=\sum_{\sigma}\sum_{x_{2}\leq\ell^{\prime}}\hat{n} _{p_{1},x_{2},\sigma}\) (the Fourier transform is over the variable \(x_{1}\)). In terms of this function, the linear response of the edge current can be written as:
\[\mathrm{Tr}\,\,\mathcal{J}_{0}^{\ell}\,\rho(0)=\frac{1}{L}\sum_{p}\hat{\mu}(p )\,G^{\ell,\ell^{\prime}}_{\beta,L}(\eta,p)+O(\mu^{2})\,. \tag{6.23}\]
We will be interested in the edge response function in the limits \(\eta\to 0^{+}\), \(p\to 0\) and then \(\ell,\ell^{\prime}\to\infty\) with \(1\ll\ell\ll\ell^{\prime}\ll L\), which is the relevant setting for studying slowly varying edge perturbations. Notice that the order of these limits matters: if one first takes \(p\to 0\) and then \(\eta\to 0^{+}\) one finds a trivial result: \(\lim_{\ell^{\prime}\to\infty}\lim_{p\to 0}G^{\ell,\ell^{\prime}}(\eta,p)=0\).
From a quantum field theory viewpoint, the large scale properties of the edge modes are expected to be described by a generalization of the Luttinger model, called the multi-channel Luttinger model, describing an arbitrary number of chiral relativistic fermions, interacting via a density-density type interaction. The Hamiltonian of the model is:
\[\mathcal{H}_{mL}=\sum_{\omega=1}^{M}\int dx\,\overline{\psi}_{\omega,x}i\,c_{\omega}\partial_{x}\psi_{\omega,x}\] \[+\sum_{\omega,\omega^{\prime}}\lambda_{\omega,\omega^{\prime}}\int dx\,dy\,\overline{\psi}_{\omega,x}\psi_{\omega,x}w(x-y)\overline{\psi}_{\omega^{\prime},y}\psi_{\omega^{\prime},y}\,, \tag{6.24}\]
where the sum runs over the labels of \(M\) chiral fermions, representing the edge modes at the Fermi level, with velocities \(c_{\omega}\), and the couplings \(\lambda_{\omega,\omega^{\prime}}\) describe the edge mode scattering. As for the Luttinger model (6.8), this quantum field theory can be studied via bosonization. It is the starting point of the chiral Luttinger liquid theory of edge modes [117], a non-rigorous field theory approach to edge transport in the quantum Hall effect.
Recently, the validity of this effective quantum field theory for the large-scale properties of edge currents has been rigorously proved; see [7] for the case of one edge mode, [91] for two counterpropagating edge modes with opposite spins, and [93] for the generic case of an arbitrary number of edge states. A key ingredient of the analysis is the vanishing of the beta function for the multi-channel Luttinger model [93], which allows one to control the flow of the quartic marginal couplings, and to prove that the correlation functions decay with interaction-dependent anomalous exponents.
The renormalization group analysis, combined with Ward identities for the lattice model and for the effective quantum field theory, actually allows one to compute the limiting edge response function. We obtain [93], for \(|\lambda|\) small enough, and for \(1\ll\ell\ll\ell^{\prime}\):
\[\lim_{p\to 0}\lim_{\eta\to 0^{+}}G^{\ell,\ell^{\prime}}(\eta,p)=\sum_{\omega} \frac{\chi_{\omega}}{2\pi}+O(e^{-c\ell})\,, \tag{6.25}\]
with \(\{\chi_{\omega}\}\) the chiralities of the non-interacting edge modes. In particular, taking the limits \(\ell^{\prime}\to\infty\) followed by \(\ell\to\infty\), we see that the edge response function is quantized, and it is equal to the sum of the chiralities of the edge modes. Thus, combined with the universality of the Hall conductivity [60], Eq. (6.25) allows one to lift the bulk-edge duality to the realm of weakly interacting quantum Hall systems. Furthermore, the method also allows one to compute the edge Drude weight and the edge susceptibility. In the case of one and two edge modes, this has been done in
[7, 91]. In particular, these results allow one to check the validity of the Haldane relation (6.13) for the edge response functions in these cases.
Other ways of probing edge currents are possible, involving different material geometries. For instance, the two-terminal conductance is obtained by connecting a Hall bar to two leads at different chemical potential, and by measuring the current propagating in the bar at first order in the difference of the chemical potential of the source and of the drain. For clean and non-interacting samples, it is actually expected that the two-terminal conductance is given by the sum of the absolute values of the chiralities. This quantity is not expected to be protected against many-body interactions between counterpropagating edge modes [80, 81]. A possible mechanism to restore quantization in a way compatible with the bulk-edge duality relies on the introduction of disorder [80, 81], a relevant perturbation in the renormalization group sense. See [40] and references therein for recent experimental studies of the two-terminal conductance in the context of the fractional quantum Hall effect. The rigorous understanding of this phenomenon is an interesting open problem in mathematical physics.
**Acknowledgements.** A. G. and M. P. gratefully acknowledge financial support of the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC CoG UniCoSM, grant agreement n. 724939 for A.G.; and ERC StG MaMBoQ, grant agreement n.80290 for M. P.). A.G. and V.M. gratefully acknowledge financial support of MIUR, PRIN 2017 project MaQuMA, PRIN201719VMAST01. Our work has been carried out under the auspices of the GNFM of INDAM.
|
2306.00623 | Physical Attacks on the Railway System | Recent attacks encouraged public interest in physical security for railways. Knowing about and learning from previous attacks is necessary to secure against them. This paper presents a structured data set of physical attacks against railways. We analyze the data regarding the used means, the railway system's target component, the attacker type, and the geographical distribution of attacks. The results indicate a growing heterogeneity of observed attacks in the recent decade compared to the previous decades and centuries, making protecting railways more complex. | Lukas Iffländer, Thomas Buder, Teresa Loreth, Marina Alonso Villota, Walter Schmitz, Karl Adolf Neubecker, Stefan Pickl | 2023-06-01T12:46:05Z | http://arxiv.org/abs/2306.00623v1 |

# Physical Attacks on the Railway System
###### Abstract
Recent attacks encouraged public interest in physical security for railways. Knowing about and learning from previous attacks is necessary to secure against them. This paper presents a structured data set of physical attacks against railways. We analyze the data regarding the used means, the railway system's target component, the attacker type, and the geographical distribution of attacks. The results indicate a growing heterogeneity of observed attacks in the recent decade compared to the previous decades and centuries, making protecting railways more complex.
## I Introduction
In October 2022, unknown attackers carried out an act of sabotage against the wireless communication infrastructure of Germany's primary railway infrastructure operator DB Netz AG [1]. By selectively cutting cables at two distinct positions across Germany, the attackers rendered large parts of GSM-R (Global System for Mobile Communication - Rail), used for communication between train dispatchers and the trains themselves, unusable. Subsequently, traffic in northern Germany had to stop entirely until the system was repaired. This attack brought the vulnerability of the railway system into sharper focus.
The Russian invasion of Ukraine in the spring of 2022 also brings the vulnerability of railways further into the public eye: railway facilities are not only attractive military targets but also frequent targets for partisans.
Since the beginning of 2021, the project "Revealing Existing Attack Vulnerabilities in the Rail System" (REAVRS) has been running at the German Centre for Rail Traffic Research (DZSF) at the Federal Railway Authority. The contractors are the University of the Bundeswehr Munich and the Ingenieurgesellschaft für Verkehrs- und Eisenbahnwesen mbH (IVE mbH). The project deals with attack potentials on the railway system on both the physical and the cyber level. One of the project's first goals was a historical analysis of physical attacks. This resulted in a systematically structured data set described and evaluated in this article1. We considered the different types of attackers, the type of means of attack, the targets of the attacks, and the damage incurred. The results show, among other things, that the number of attacks has increased significantly over time and that there has been a diversification of attacks in the last decade. The security situation assessment is becoming more complex due to this increased diversity of attack characteristics.
Footnote 1: Data set will become publicly available upon acceptance; reviewers find a temporary link at the end of the paper.
The remainder of this paper is structured as follows: After this introduction, Section II briefly introduces relevant physical security aspects of the railway system. Next, Section III describes the structure of our data set. Building upon that, we analyze the data and present example attacks in Section IV. We discuss these findings and their limitations in Section V. Furthermore, we take a look at related work in Section VI. Finally, in Section VII, we present an outlook on the further research work of the DZSF in this field and indicate the first trends regarding physical attack characteristics from the beginning of the current decade.
## II Background
The railway system is a critical infrastructure considered a powerful economic motor worldwide. It enables connectivity facilitating the transportation of goods and people while maintaining a lower environmental impact than other transportation systems. However, society's dependence on the railway system makes it an attractive target for hostile actors. The accessible designs of its infrastructure, both in the stations and in the railroads, make it vulnerable to physical attacks. In addition, the high numbers of people who use the railway daily, particularly the presence of central stations that gather thousands of people simultaneously, increase potential damages should an attack succeed.
Despite the increasing reliance on public vigilance, CCTVs, and vigilant professionals at its stations, the railway system will never be completely secure against attacks. The composition of the railway system and its overall design make it full of so-called 'soft targets', places highly vulnerable but with a low level of protection [2]. Thus, some criteria make specific nodes of the railway system more attractive than others to potential attacks: a high concentration of passengers in a given place, its centrality regarding the whole railway network
(e.g., railway stations in capital cities and railway hubs), the elements on its vicinity (e.g., shopping and cultural centers, companies), and the presence of transport infrastructure of connecting operators in the vicinity of the station, such as terminals, other transportation stations, and stops [2].
The destruction of railroads and railway infrastructure may give an advantage to an attacker in a warfare scenario, as it hinders the provision of resources and personnel, evacuations, and fast mobility [3]. Terrorist attacks on the railway system primarily aim at disrupting services and/or causing casualties [4], and acts of vandalism, such as graffiti, cause a negative brand and reputation impact as well as considerable economic losses [5].
## III Definition of Data set Structure
In our data set, we created a structured representation of the attacks. Each attack has the following properties, which themselves often comprise several sub-properties:
* **Date:** When attacks lasted multiple days or were around midnight, we used the first of the relevant dates. Attack dates resemble local times.
* **Location:** The location specifies the country and the city in which the event occurred. If available, we also specified the state. Furthermore, we added the latitude and longitude of the event location. In cases where the sources gave imprecise locations (e.g., only the line segment between two stations), we used a suitable simplification (e.g., for the previous example, we used the midpoint between both stations). Lastly, when available, we also collected the name of the line on which the attack occurred. When no line attribution was possible, we specified, e.g., the station or the attacked railway subsystem.
* **Attacker:** The attacker first comprises the type. We discern three types: 1) state actors, 2) non-state actors, and 3) unknown actors. When dealing with non-state actors, we consider whether states back their activities. For known actors, we further specify whether we deal with individual or group actors. Lastly, we note the name or alias of the actors.
* **Attack Target**: The attack target describes the intended target for the attack. We discern seven attack target types:
* _Tracks:_ Tracks comprise the rails themselves, sleepers, fastenings, and sub- and superstructures.
* _Bridges:_ We consider bridges a target if attacks target the bridge directly (e.g., explosives at support pillars). Attacks that target the tracks on a bridge (e.g., by loosening fasteners) fall under the category of _Tracks_.
* _Stations:_ In this category, we consider attacks at and in station buildings.
* _Rolling Stock:_ This category comprises attacks that target rolling stock (e.g., bombs in a carriage).
* _Passengers:_ For this category, we look at attacks that directly attack passengers (e.g., knife attacks on passengers).
* _Communication:_ These attacks comprise any attacks against communication infrastructure (e.g., taking out railway wireless communication) and signaling and interlocking components (e.g., setting fire to cables used for interlocking).
* _Other:_ This last category comprises all targets not covered by the other categories. A real-world example would be an attack on a railway authority building. Multiple events lack the specification of the concrete attack target in the available sources. Some events have multiple attack targets (e.g., bombs exploding in trains exactly when they pass through stations)
* **Means of Attack:** Similar to the attack targets, we tracked the means of attack. We discern the following means of attack:
* _Explosives_
* _Obstacle_
* _Knife or Baton_
* _Fire:_ This category excludes secondary fires, e.g., explosions.
* _Rail Manipulation:_ This category excludes secondary rail manipulations, e.g., broken rails from an explosion.
* _Missile_
* _Suicide Bomber_
* _Hacker:_ This category applies to physical attacks supported by hacker attacks and does not comprise pure hacker attacks. Again, multiple events do not indicate the used means of attack, and others have more than one means of attack.
* **Impact:** This category describes the actual impact. Note that attacks can target one part of the railway system but have effects on other parts or miss the intended effect entirely. For example, a sabotaged bridge can collapse under a train and cause casualties and damage to the rolling stock. Conversely, loosening fasteners targets the tracks but could be detected and not cause any impact. We discern multiple impact types:
* _Infrastructure:_ We summarize all significant damage to tracks, bridges, and stations under infrastructure.
* _Rolling Stock:_ Whenever rolling stock is significantly damaged, it falls into this category.
* _Injured:_ Here, we collect the number of injured people. If possible, we discern between light and severe injuries. If not, we consider all reported injuries as light. We use the lowest number when the sources provide ranges of injured casualties.
* _Dead:_ This category holds the number of deceased people. We use the lowest number when the sources provide ranges of deceased.
* **Description:** This last item contains the textual description of the event.
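To make this structure concrete, the following is a minimal sketch of how a single record could be represented programmatically (the field names are our own illustrative choice, not a published schema), populated with the values of the Madrid attack discussed in Section IV:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Attack:
    """One event of the data set, following the structure defined in Section III."""
    when: date                            # local date; first day for multi-day events
    country: str
    city: str
    latitude: float                       # simplified if the sources are imprecise
    longitude: float
    attacker_type: str                    # "state" | "non-state" | "unknown"
    state_backed: Optional[bool] = None   # only meaningful for non-state actors
    group_actor: Optional[bool] = None    # individual vs. group, if the actor is known
    attacker_name: Optional[str] = None
    targets: List[str] = field(default_factory=list)  # e.g. ["rolling stock", "station"]
    means: List[str] = field(default_factory=list)    # e.g. ["explosives"]
    injured_light: int = 0                # lowest reported numbers
    injured_severe: int = 0
    dead: int = 0
    description: str = ""

madrid = Attack(
    when=date(2004, 3, 11), country="Spain", city="Madrid",
    latitude=40.41, longitude=-3.69,      # approximate coordinates for illustration
    attacker_type="non-state", state_backed=False, group_actor=True,
    targets=["rolling stock", "station"], means=["explosives"],
    injured_light=1969, injured_severe=82, dead=192,
    description="Ten bombs exploded in four suburban trains and at Atocha station.",
)
```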
## IV Results and Data aggregation
In total, we collected 127 events. We deduced trends and insights into the characteristics of the collected physical attacks using descriptive analysis. Even though we collected data past 2020, we neglected attacks from 2020 onward in our analysis. We made this decision because many attacks are not yet solved, and, e.g., the identity of an attacker is not yet known but might be discovered shortly. Furthermore, many recent events have not yet been collected: especially in countries that significantly limit the freedom of the press, the reporting on such events can be heavily delayed. We will, however, take a short look at the events after 2020 in the outlook part of the conclusion in Section VII.
### _Overview of Results_
We first examined the development of the number of attacks carried out over time. As Figure 1 indicates, the number of attacks has risen significantly since the 1990s. This trend continued in the following decades, reaching a maximum of 35 documented attacks in the 2010s.
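The per-decade aggregation underlying Figures 1 and 2 can be reproduced along the following lines (a sketch; the file name and column names are hypothetical):

```python
import pandas as pd

df = pd.read_csv("attacks.csv", parse_dates=["date"])  # hypothetical export of the data set
df = df[df["date"].dt.year < 2020]                     # analysis window used in this section
df["decade"] = (df["date"].dt.year // 10) * 10
df.loc[df["decade"] < 1900, "decade"] = 1890           # attacks up until 1900 form one bar
per_decade = df.groupby("decade").agg(
    attacks=("date", "size"),
    injured=("injured", "sum"),
    dead=("dead", "sum"),
)
print(per_decade)
```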
A similar result holds for the analysis of the number of casualties; see Figure 2. The analysis of the casualties per event depicted in Figures 3(a) and 3(b) shows that the average impact of the attacks has also increased since the last decades of the 20\({}^{\text{th}}\) century. This is particularly evident because the upward outliers regarding the number of injured persons mainly appear only since the 1980s (Figure 3(a)). The high results from the 2000s are driven by multiple high-casualty events, e.g., the Madrid bombings in 2004 (1,969 lightly and 82 severely injured and 192 dead) or the 2006 Mumbai bombings (714 injured and 207 dead). The difference between the figures' median and average visualizes this effect.
Next, Figure 3 shows the development of the share each means of attack has through the decades. In the beginning, most attacks used either obstacles or the manipulation of rails. Especially for the early decades, we found multiple imprecise sources that did not give the used means of attack. From the post-war (after World War II) period on, most attacks employed explosives. This trend held until the second decade of the 21\({}^{\text{st}}\) century, when attacks with melee weapons, obstacles, and fire took a significant share. Moreover, the distribution of means is generally more heterogeneous. It is worth mentioning that in this last decade, the first missile attack happened. Also, suicide bombers started to appear at the turn of the century.
Figure 5 shows the composition of the attackers. We find that in the early decades, most attackers are unknown. This changed around the middle of the 20\({}^{\text{th}}\) century. This fact is due to better documentation of state-backed attacks and terrorists striving for recognition of their deeds. Except for the 1940s and 1950s, most known attacks come from non-state actors. Also, in most decades, the majority of attacks come from groups. Notable exceptions are the 1960s due to multiple
Fig. 1: Development of the number of physical attacks per decade. Attacks up until 1900 are summarized as one bar.
Fig. 3: Share of different means of attack of the total events in a decade. Events with more than one means are described as ‘Multiple’ while events without a known means are described as ‘Unknown.’
Fig. 2: Development of the number of casualties per decade separated between injured and dead. Attacks up until 1900 are summarized as one bar.
attacks in Germany from Alexander Bordan Hembluck and the 2010s from a single Daesh supporter, also in Germany. State-backed non-state actors are rare and only comprise Confederate guerrillas in the American Civil War, the Irish Republican Army, and a group backed by the Pakistani secret service.
As a last distribution, we consider the various attack targets. Figure 6 visualizes these results. In the early decades, most attacks with known targets focused on the infrastructure side (tracks and bridges). The outlier in the 1910s results from this decade only comprising a single attack. From the 1960s, the share of attacks on rolling stock grew and became the majority of known attack targets until the 2010s. Also, post-war, attacks against stations and communication infrastructure emerged. As for the means of attack, the 2010s show a rather heterogeneous composition.
Besides the distribution over all attacks regarding the target, means, and attacker type, we also considered the geographical distribution. Figure 7 gives a world map comprising all countries and the number of attacks.
The map shows that most attacks occurred in Germany. Further points of frequent attacks are Russia and India. In general, the majority of attacks happen in Europe and Asia.
Further, Figure 8 presents the casualties per country on a world map. While the number of attacks might indicate that Germany would also suffer from many casualties, the opposite is the case. We attribute the difference between attack frequency and impact to many sabotage events, especially
Fig. 4: Development of the number of casualties per event separated between injured and dead. Attacks up until 1900 are summarized as one bar. Since the whiskers for the outliers massively reduce the readability, we show a version of the same data with outliers on the left in Subfigure 3(a) and without on the right in Subfigure 3(b)
Fig. 5: Share of different attacker types of the total events in a decade. Events without a known attacker are described as ’Unknown.’ The coloring represents state or non-state actors while the hatches indicate groups and state-backing.
Fig. 6: Share of different attack targets of the total events in a decade. Events with more than one target are described as ’Multiple’ while events without a known target are described as ’Unknown.’
from left-wing extremists. While Russia and India have many attacks and casualties, Spain and Japan have high casualties with fewer attacks. This effect results from very few high-intensity attacks. Again, the majority of casualties come from Europe and Asia.
### _Representative Attack Examples_
In this section, we want to give some representative examples of observed attacks. Due to space limitations, we cannot give a complete rundown of every event.
#### IV-B1 29\({}^{\text{th}}\) of October 1888, Russia, Birky
An example of an event with little available data is the attack on the 29\({}^{\text{th}}\) of October 1888. On this date, the train of Russian Tsar Alexander III derailed. Russian authorities considered it to be an attack. Casualties comprised 24 injured and 23 dead. No further information on this attack is available. Thus, we considered the attacker, the attack target, and the used means as unknown [6].
#### IV-B2 9\({}^{\text{th}}\) of September 1991, India, Mumbai
This example is typical for many attacks in the 1990s, resulting in a mid-two-digit number of injured (60) and a small two-digit number of dead (10) people. On the 9\({}^{\text{th}}\) of September 1991, an attack was carried out by Islamist terrorist groups on a suburban train in Mumbai, India. Here, the sources [7, 8, 9] allowed to designate the rolling stock as the target and explosives as the mean used.
Fig. 8: The amount of casualties per country since the invention of railways. The given amount is the sum of deceased and injured persons.
Fig. 7: The number of attacks per country since the invention of railways.
#### IV-B3 11\({}^{\text{th}}\) of March 2004, Spain, Madrid
The attack in Madrid is relevant as a maximum-casualty event, with 191 dead and 2021 injured. Here, the sources allow for a detailed accounting of the relevant information with a minute-by-minute description. On the 11\({}^{\text{th}}\) of March 2004, between 7:39 and 7:42, ten bombs simultaneously exploded in four fully occupied suburban trains and at Atocha station. Thus, we marked rolling stock and the station as the attack targets and explosives as the primary means of attack. Furthermore, reports attribute responsibility to a terror cell financed by al-Qaida. Therefore, we marked the attacker as a non-state actor group without state support [10].
#### IV-B4 29\({}^{\text{th}}\) of March 2010, Russia, Moscow
An attack close to the median casualties of its decade occurred on the 29\({}^{\text{th}}\) of March 2010 in Moscow. That day, 100 people were injured (12 lightly and 88 severely), and 40 died. Two explosions occurred on the platforms of metro stations, thus targeting the stations and the trains on the platform using the means of explosives. The attack was credited to the Chechen Black Widows, a non-state group actor without state support [11, 12].
#### IV-B5 19\({}^{\text{th}}\) of June 2017, Germany, nationwide
A last example is an attack without casualties. On the 19\({}^{\text{th}}\) of June 2017, 13 arson attacks were carried out simultaneously on cable installations of railway infrastructure in five German states, causing significant damage and severely degrading signaling and communication. Thus, the means of attack was fire, and the target was the signaling and communication infrastructure. Left-wing extremists claimed responsibility for the attacks, leading to a classification as a non-state actor without state support. Furthermore, due to the simultaneity of the attack, we consider the actor to be a group actor [13].
## V Discussion and Limitations
Our research suggests that the total number of attacks and the average impact per attack have increased, especially since the end of the last century. The means of attack and the targets have become more heterogeneous. This increased diversity of attack properties requires a holistic, broad strategy for suitable safety measures. Moreover, the recent attack on the communication infrastructure in Germany indicates better system knowledge on the part of potential attackers. At the same time, more classical means of attack, such as knives or fire, still play a significant role today.
The derived quantitative estimates of attack trends and characteristics naturally depend on the search strategy and the defined data structure. This results in two main reasons for the incompleteness of the researched attacks. First, we only searched in English and German, so we missed attacks reported only in other languages. However, the distribution of the researched attacks covers the whole world, indicating that we could cover at least a collection of the most relevant attacks. Second, we focused on digital references, which may have biased the number of attacks covered in earlier years, even though we incorporated older text sources such as [6, 13] where available. As a consequence, our analysis likely underestimates the number of attacks until the late 1990s. Despite this limitation, the references suggest that we covered the most significant attacks in this period. Furthermore, the general inferred trend of an increase in the number of attacks during the 2000s holds.
## VI Related Work
Concerning related work for our quantitative analysis, a few other publications should be mentioned: Hartong et al. [14], Sarkar & Sarkar [15], and De Cillis et al. [16]. They all provide an overview of the development of physical attacks and incidents in railway infrastructures.
Hartong et al. [14] deal with the significant role of railroads in the United States and highlight their weaknesses with respect to past attacks and incidents. By analyzing attacks with a comparatively straightforward structure that nevertheless led to the disruption of the entire railway system, the paper focuses mainly on the importance of passenger rail in the United States. In conclusion, even if holistic protection against all kinds of attacks is not a realistic scenario, concrete measures for reducing the risk of attacks, derived by analyzing the vulnerabilities of the entire railroad security system, are presented.
Sarkar & Sarkar present a study [15] that is akin to our analysis. It shows how the numbers of injured and dead in Indian railway accidents (natural and man-made) varied from the mid-20th century to 2022. In general, the numbers of injured and dead people have declined in recent decades. In particular, during the 1970s, railway accidents were at their lowest point since 1947. From the 1990s to 2020, the share of accidents provoked by natural causes (17%), man-made causes such as terrorism (17%), and technology and mechanics (66%) has been steady. While the number of accidents has been constant over the last decades, the numbers of injured and killed people were relatively high between 1981 and 2010; a reason for this is seen in the growing Indian population. From 2011 until 2022, the number of people negatively affected declined because of technological and medical improvements.
De Cillis et al. provide a third publication related to our work [16]. The data, collected by identifying around 540 attacks initiated by terrorists and security incidents in international railways from the early 1970s to 2011, is stored in a purpose-built open-source database called RISTAD (Railway Infrastructure Systems Terrorist Attacks Database). By applying the database, an analysis of counterintuitive correlations between the attacks and the characteristics of a target becomes possible, for instance, relating lethality to the number of tracks available within a particular station. The general focus of this research lies in identifying infrastructural or environmental facets of railroad facilities that make a target more attractive to attack.
## VII Conclusions and future work
In this paper, we collected data about physical attacks on railway infrastructure in a structured data set. We analyzed the collected data concerning means of attack, attacker types, attack targets, casualties, and global distribution. Our findings allow us to discern trends. Especially the continuous growth in total attacks over multiple decades leads to the expectation that even more attacks might occur in the future. Also, especially in the 2010s, the means and targets of the observed attacks became more diverse. Thus, it is no longer possible to focus solely on protecting against attacks with explosives, as in the previous decades; instead, it becomes necessary to anticipate more complex attacks. The casualty data gives an optimistic outlook, since the number of casualties shrank vastly in the 2010s compared to the previous two decades. Combined with the knowledge about more frequent attacks, this leads to the assumption that attacks are either becoming less lethal or that security mechanisms have become efficient enough to prevent attacks like those observed in the 1990s and 2000s.
The recording of attacks is part of the first work package of the REAVRS project. Further packages deal with the definition of relevant attack characteristics, their threat assessment, and the derivation and evaluation of countermeasures. In addition, the project also includes the recording and analysis of cyber attacks, which will be carried out in a methodologically similar way.
The DZSF plans to make the described data set available on its website. Furthermore, in the project context, it will become possible to report errors in the data set, suggest corrections, and contribute new attacks. It is currently being examined how this can be implemented within the current and future IT infrastructure of the Federal Railway Authority.
Last, we take a look at the attacks of the current decade. It is noticeable that these attacks increasingly relate to communication and to facilities of control and safety technology, and that they weaken the system's availability. Since technological development is advancing rapidly in this area in particular, the DZSF is currently investigating, in another project entitled "Forecast of security requirements and evaluation of possible security concepts for the railway system," which IT security concepts will be necessary and suitable for adequately protecting rail transport against cyber attacks in the future. The report on the technology forecast has already been published [17].
## Data Availability
We provide the data set in the form of an Excel sheet under [18]. Future updates of the data set will be published on the same platform.
## Acknowledgement
We want to thank our DZSF student interns Henriette Kobl and Mark Essegern for their support in migrating the data from plain text to structured lists.
|
2304.01344 | End-to-End Models for Chemical-Protein Interaction Extraction: Better
Tokenization and Span-Based Pipeline Strategies | End-to-end relation extraction (E2ERE) is an important task in information
extraction, more so for biomedicine as scientific literature continues to grow
exponentially. E2ERE typically involves identifying entities (or named entity
recognition (NER)) and associated relations, while most RE tasks simply assume
that the entities are provided upfront and end up performing relation
classification. E2ERE is inherently more difficult than RE alone given the
potential snowball effect of errors from NER leading to more errors in RE. A
complex dataset in biomedical E2ERE is the ChemProt dataset (BioCreative VI,
2017) that identifies relations between chemical compounds and genes/proteins
in scientific literature. ChemProt is included in all recent biomedical natural
language processing benchmarks including BLUE, BLURB, and BigBio. However, its
treatment in these benchmarks and in other separate efforts is typically not
end-to-end, with few exceptions. In this effort, we employ a span-based
pipeline approach to produce a new state-of-the-art E2ERE performance on the
ChemProt dataset, resulting in $> 4\%$ improvement in F1-score over the prior
best effort. Our results indicate that a straightforward fine-grained
tokenization scheme helps span-based approaches excel in E2ERE, especially with
regards to handling complex named entities. Our error analysis also identifies
a few key failure modes in E2ERE for ChemProt. | Xuguang Ai, Ramakanth Kavuluru | 2023-04-03T20:20:22Z | http://arxiv.org/abs/2304.01344v1 | End-to-End Models for Chemical-Protein Interaction Extraction: Better Tokenization and Span-Based Pipeline Strategies
###### Abstract
End-to-end relation extraction (E2ERE) is an important task in information extraction, more so for biomedicine as scientific literature continues to grow exponentially. E2ERE typically involves identifying entities (or named entity recognition (NER)) and associated relations, while most RE tasks simply assume that the entities are provided upfront and end up performing relation classification. E2ERE is inherently more difficult than RE alone given the potential snowball effect of errors from NER leading to more errors in RE. A complex dataset in biomedical E2ERE is the ChemProt dataset (BioCreative VI, 2017) that identifies relations between chemical compounds and genes/proteins in scientific literature. ChemProt is included in all recent biomedical natural language processing benchmarks including BLUE, BLURB, and BigBio. However, its treatment in these benchmarks and in other separate efforts is typically not end-to-end, with few exceptions. In this effort, we employ a span-based pipeline approach to produce a new state-of-the-art E2ERE performance on the ChemProt dataset, resulting in \(>4\%\) improvement in F1-score over the prior best effort. Our results indicate that a straightforward fine-grained tokenization scheme helps span-based approaches excel in E2ERE, especially with regards to handling complex named entities. Our error analysis also identifies a few key failure modes in E2ERE for ChemProt.
end-to-end relation extraction, chemical-protein relations, span-based relation extraction
## I Introduction
Although it is amazing to see rapid progress in biomedical research as we saw during the ongoing COVID-19 pandemic, the associated general explosion of peer-reviewed literature in life sciences can be daunting for researchers to keep up with on a regular basis. As shown by Lu [1], the exponential growth in scientific literature makes it generally untenable to stay abreast of all the exciting outcomes in a field. To mitigate this situation, natural language processing (NLP) methods that automatically extract relational information reported in literature have been on the rise. Popular relational information of this type includes protein-protein interactions (to understand disease etiology and progression), gene-disease associations (to identify potential drug targets), drug-disease treatment relations (to spot off-label usage or assess potential for repositioning), and drug-gene interactions (to design targeted therapies). Normalizing the extracted relations and storing them in a structured database enables researchers to quickly search for existing research outcomes to arrive at new hypotheses and expedite the knowledge discovery process.
### _Introduction to the ChemProt task_
Toward reliable benchmarking of NLP methods, over the past decade, there has been a general push to create expert-annotated datasets that are used in shared tasks and are subsequently made publicly available for the wider community. The BioCreative series is one such popular venue, which has led to many public datasets in BioNLP. The ChemProt extraction shared task that was part of the BioCreative VI series [2] is a popular task, included in well-known BioNLP benchmarks such as BLUE [3], BLURB [4], and BigBio [5]. The task deals with identifying relations between chemical compounds and proteins (gene products) from scientific literature. For instance, consider the sentence: "Contribution of the **Na+-K+-2Cl- cotransporter** NKCC1 to **Cl-** secretion in rat OMCD." Experts annotated this sentence with the _subject_ chemical entity **Cl-** being a _substrate of_ the _object_ gene entity **Na+-K+-2Cl- cotransporter**. The resulting relation (called "interaction" in the ChemProt task) is often expressed as a triple: (**Cl-**, _substrate of_, **Na+-K+-2Cl- cotransporter**), where the relation label (here, _substrate of_) is typically called a _predicate_. In the ChemProt shared task, the spans of both the subject and object entity within the sentence were provided and the participants were asked to predict the type of chemical-protein relation between them from a pre-determined set of such predicates shown in Table I. (The _substrate of_ predicate is coded as **CPR:9** in the dataset.) Long chemical names involving non-alphabetic characters along with overlapping/nested entities complicate named entity recognition (NER) in ChemProt; relation extraction (RE) is also hampered by long sentences with complex syntactic structures.

| **Group** | **Eval** | **ChemProt relations belonging to the group** |
| --- | --- | --- |
| CPR:1 | N | PART_OF |
| CPR:2 | N | REGULATOR \| DIRECT_REGULATOR \| INDIRECT_REGULATOR |
| **CPR:3** | Y | UPREGULATOR \| ACTIVATOR \| INDIRECT_UPREGULATOR |
| **CPR:4** | Y | DOWNREGULATOR \| INHIBITOR \| INDIRECT_DOWNREGULATOR |
| **CPR:5** | Y | AGONIST \| AGONIST-ACTIVATOR \| AGONIST-INHIBITOR |
| **CPR:6** | Y | ANTAGONIST |
| CPR:7 | N | MODULATOR \| MODULATOR-ACTIVATOR \| MODULATOR-INHIBITOR |
| CPR:8 | N | COFACTOR |
| **CPR:9** | Y | SUBSTRATE \| PRODUCT_OF \| SUBSTRATE_PRODUCT_OF |
| CPR:10 | N | NOT |

TABLE I: ChemProt relations grouped based on biological semantic classes. The five bold CPR groups are used in evaluation.
The original ChemProt shared task formulation in 2017 and many efforts that used it in the following years (including latest BioNLP benchmarks BLUE, BLURB, and BigBio) assume that the chemical and protein names were already spotted in the text. That is, the exact locations of the entities within the sentence are disclosed as part of the input and the task boils down to predicting which chemical-protein pairs participate in an interaction. While this non end-to-end (E2E) setting is important to isolate and evaluate the ability to correctly predict interactions when their spans are available, a more realistic setting for end-user applications is when only raw input is provided. To automatically parse literature to obtain new interactions, there is no scope for nicely spotted entities. Hence there has been a rise in E2ERE methods that incorporate NER as part of the RE process. There are few efforts that handled ChemProt in the E2E setting and that is the focus of our manuscript.
### _Prior efforts on the ChemProt task_
As indicated earlier, several prior efforts on ChemProt are modeled as relation classification, and hence not E2E. Researchers experimented with convolutional/recurrent neural networks [6] and tree long short-term memory neural networks (LSTMs) [7]. We also participated in the shared task and used ensembles of SVMs and neural models to achieve the best performance at the time [8], which was subsequently improved upon by Sun et al [9]. Recently, Choi et al. [10] used calibration methods and self-training using contextualized language models to further improve upon prior results.
Among the few efforts that attempted E2ERE on ChemProt, Luo et al. [11] used a sequence labeling approach with BiLSTMs and a conditional random field (CRF) layer, with additional contextualized features to identify entities, and used rules to extract relations. Zuo and Zhang [12] developed a span-based method, SpanMB_BERT, that considers all span representations up to a maximum length, subsequently determining span types and potential relations between spans with valid types. They also used another span-based model, DyGIE++ [13], that jointly models NER and RE using span representations built with a novel idea of span graph propagation, where a graph structure is imposed on spans via different heuristics. The most recent effort by Sun et al. [14] also conducts an E2E study on ChemProt. However, after carefully perusing their manuscript, we found that they inadvertently appear to evaluate their model only on test sentences that contain at least one relation. As pointed out by Taille et al. [15], this can inflate the eventual performance, as sentences that do not contain any relations could have produced false positives. In fact, 83% of sentences in the ChemProt test set do not lead to any relations.
### _Our contributions_
We believe it is important to continue building E2ERE models for biomedicine, especially on existing publicly available datasets. Using a span-based RE method, we do this for the ChemProt dataset with the following contributions.
* While span-based methods help with overlapping and nested entities that are common in ChemProt, tokenization has a major effect on which entities can be captured, no matter how sophisticated the NER model is. For example, the **Na+-K+-2Cl- cotransporter** span (from Section I-A) has "Na+" and "K+" as gold chemical entities besides the full span encoded as a gene name. The popular ScispaCy biomedical tokenizer [16] outputs only two tokens: "Na+-K+-2Cl-" and "cotransporter". The NLTK tokenizer [17] results in seven tokens: "Na", "+-", "K", "+-", "2Cl", "-", and "cotransporter", and all spans composed of them will still miss the gold spans "Na+" and "K+". We employ a simpler, more fine-grained tokenizer to ensure that we don't lose many entities in the pre-processing phase.
* Pipeline models typically have an NER model and a separate RE model, where the NER model output is fed to the RE model. Due to the snowball effect of errors in pipelines, where NER errors lead to substantial losses in RE performance, they have fallen out of favor in E2ERE, with researchers looking more toward joint extraction methods [18, 19, 20]. However, Zhong and Chen [21] showed that clever pipeline designs, specifically using typed markers for entities, can help achieve better results than joint models. We use their PURE approach [21] along with different relation-context representations in combination with our tokenization scheme to achieve new state-of-the-art performance for the ChemProt task.
* We analyze different E2ERE error types, those caused by NER errors and those that have to do with the inherent complexity of the ChemProt interaction types.
Although ChemProt is available from the creators of the dataset2, to enable fair end-to-end comparisons by other researchers, we make the exact spans (based on our tokenization) publicly available along with the associated pre-processing and modeling code: [https://github.com/bionlproc/end-to-end-ChemProt](https://github.com/bionlproc/end-to-end-ChemProt).
Footnote 2: [https://biocreative.bioinformatics.udel.edu/news/corpora/chemprot-corpus-biocreative-vi/](https://biocreative.bioinformatics.udel.edu/news/corpora/chemprot-corpus-biocreative-vi/)
## II Methods
### _The ChemProt dataset and task_
The ChemProt corpus contains 4,966 PubMed abstracts in total: 1,020 training documents, 612 development documents, and 800 test documents. Gold annotations for entities include the exact character start and end positions and entity types: CHEMICAL or GENE. A CHEMICAL and GENE entity can be connected by one of the ten relation types indicated in Table I. However, only five of them are used for evaluation in the task (CPR:3, CPR:4, CPR:5, CPR:6, and CPR:9) because the others are not as interesting [2]. The numbers of entities and relations in the training, development, and test sets of ChemProt are shown in Table II. The original annotation task involves an input sentence that contains the pertinent GENE and CHEMICAL names already annotated. The output is expected to be the relations between all possible CHEMICAL-GENE pairs in it. In the E2E setting, we extend the original task to also spot the entity spans, determine entity types, and extract relations. Overall, the task is at the sentence level. That is, one can assume that the entities will be spelled out directly in the input sentence. However, the full PubMed abstract containing the input sentence is also provided. The annotators were only encouraged to refer to the full abstract when the sentence did not conclusively help with annotating the relations. Hence models are also allowed to look into the full abstract along with the input sentence.

| **Dataset** | **Training** | **Development** | **Test** |
| --- | --- | --- | --- |
| CHEMICAL | 13,017 | 8,004 | 10,810 |
| GENE | 12,735 | 7,563 | 10,018 |
| CPR:3 | 768 | 550 | 665 |
| CPR:4 | 2,254 | 1,094 | 1,661 |
| CPR:5 | 173 | 116 | 195 |
| CPR:6 | 235 | 199 | 293 |
| CPR:9 | 727 | 457 | 644 |

TABLE II: The counts of entities and relations in the ChemProt dataset.
### _Preprocessing_
Sentence segmentation and tokenization are the first steps in any NLP model and are essential here too. We used the Stanza [22] program for sentence segmentation. In Section I-C, we demonstrated how the ChemProt dataset needs more fine-grained tokenization than what comes naturally with the default tokenizers typically available in NLP software. This arises mostly because of the complexity of chemical and gene names, which can be a mix of alphanumeric characters and special symbols. We use tokenization based on spaces and special symbols using the standard regular expression package in Python [23]. Precisely, the full regex is

"[A-Za-zα-ωΑ-Ω]+|\d+|[^\s]".
Note that the regex captures groups of alphabetical or Greek characters as single tokens and treats groups of digits similarly, while considering all other non-space characters as singleton tokens. For example, with this regex, our running example "Na+-K+-2Cl- cotransporter" will be tokenized into "Na", "+", "-", "K", "+", "-", "2", "Cl", "-", and "cotransporter". Unlike the ScispaCy and NLTK tokenizers, these ten tokens fully capture all gold spans, including the full string and the entities "Na+" (combination of "Na" and "+") and "K+" (combination of "K" and "+"). Note how this differs from the extreme tokenization of treating each character as a singleton token, which could lead to too many candidate spans.
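To make the rule concrete, here is a minimal Python sketch of this tokenizer (the Greek-letter ranges in the character class are our reading of the regex above):

```python
import re

# Runs of Latin or Greek letters, runs of digits, or any single
# non-space character -- the fine-grained rule described above.
TOKEN_RE = re.compile(r"[A-Za-zα-ωΑ-Ω]+|\d+|[^\s]")

def tokenize(text: str) -> list[str]:
    return TOKEN_RE.findall(text)

print(tokenize("Na+-K+-2Cl- cotransporter"))
# ['Na', '+', '-', 'K', '+', '-', '2', 'Cl', '-', 'cotransporter']
```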
We noticed Wadden et al. [13] used the ScispaCy method to preprocess the ChemProt dataset but claim to have lost about 10% of the named entities and 20% of the relations during the tokenization process.3 Zuo and Zhang [12] use ScispaCy tokens, subsequently split on '-' and '/' symbols, and disclose that they lost 2% of entities and relations. Our tokenization misses only 0.4% of total entities and 1.37% of total relations in the training and development datasets, improving over default strategies. There are still some entities we cannot identify even using our fine-grained approach. For example, consider a ChemProt string "KITD816V", which will be tokenized to "KITD", "816", and "V" as per our heuristic; however, these tokens cannot be combined in any way to form the gold spans "KIT" and "D816V" for that string.
Footnote 3: Wadden et al.’s pre-processing scripts and results are available online: [https://github.com/dwadden/dygiepp](https://github.com/dwadden/dygiepp).
We also found that this preprocessing method is at times helpful in finding obvious annotation errors in the training dataset. For example, in the sentence fragment "...differentiated with retinoic acid and 12-O-tetradecanoyl-phorbol-13-acetate," a clearly incorrect gold entity was provided as the span "tinoic acid a". Since our tokenization will miss this, we manually examined it and noticed that it was in fact an erroneous annotation, while the correct one ought to be "retinoic acid". Hence we corrected the gold entity start and end character positions to reflect this new span. This was found in the training abstract with doc_key = 23194825, in which 47 out of 48 entities were annotated incorrectly by one or two spaces, all of which were corrected. Besides this particular example, no other corrections were made in the training or development datasets.
### _The PURE approach: NER and relation models_
As indicated in the introduction section, we use the Princeton University Relation Extraction (PURE) method [21] (and some variations). This method is a pipeline of two models, an NER model whose output is passed on to a relation model. PURE's NER model is span-based, in that all possible spans (up to a fixed length) of the input sequence of tokens are considered, one at a time, and tagged with an entity type (including the null type, akin to the O tag in the IOB tagging scheme). PURE uses a contextualized language model (such as SciBERT [24] or PubMedBERT [4] in our case) to process the input sequence and obtain contextualized embeddings for each token. For each span, the concatenation of the embeddings of the first and last tokens of the span along with a special token that represents the span length is taken as the input to a softmax layer to predict the entity type. Figure 1 illustrates this span-based NER model with an example ChemProt sentence: "Contribution of the Na+-K+-2Cl- cotransporter NKCC1 to Cl- secretion in rat OMCD". For example, \([h_{1};h_{2};\phi(2)]\) is the span representation of the candidate entity "Contribution of". The figure also shows the gold entities "Na+" (CHEMICAL), "Na+-K+-2Cl- cotransporter" (GENE), "K+" (CHEMICAL), "NKCC1" (GENE), and "Cl-" (CHEMICAL).
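The span scorer itself is small once the contextual embeddings are computed; the following PyTorch sketch shows the representation \([h_{i};h_{j};\phi(l)]\) followed by a classification head (the hidden sizes and the two-layer head are illustrative assumptions, not PURE's exact configuration):

```python
import torch
import torch.nn as nn

class SpanNERHead(nn.Module):
    """Classify each candidate span from [h_start; h_end; phi(length)]."""
    def __init__(self, hidden=768, width_dim=150, max_len=16, n_types=3):
        super().__init__()
        self.width_emb = nn.Embedding(max_len + 1, width_dim)  # phi(l)
        self.ffn = nn.Sequential(
            nn.Linear(2 * hidden + width_dim, 150), nn.ReLU(),
            nn.Linear(150, n_types))  # CHEMICAL, GENE, null

    def forward(self, h, spans):
        # h: (seq_len, hidden) contextual embeddings; spans: list of (i, j)
        reps = [torch.cat([h[i], h[j],
                           self.width_emb(torch.tensor(j - i + 1))])
                for i, j in spans]
        return self.ffn(torch.stack(reps))  # (num_spans, n_types) logits
```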
Fig. 1: Span-based NER model illustrated with a ChemProt sentence, where for the yellow span representations, the span boundary token embeddings (the \(h_{i}\)) are concatenated with the special length embedding \(\phi(l)\) for each span length \(l\geq 1\).

Once the chemicals and proteins are identified, a relation model takes the corresponding spans and predicts if there is a specific relation between each chemical-protein pair. A natural way to do this is to simply take the entity span embeddings from the entity model and use them in combination to predict relations (e.g., via concatenation input to a softmax layer). Note that this would mean that entity span embeddings do not change regardless of which chemical-protein pair is being assessed for a potential relation. Zhong and Chen [21] argue that entity representations ought to be tailored to each specific pair being considered. To this end, they introduce so-called "entity markers," which are special start and end tokens placed on either side of the entity spans. In the ChemProt task, chemicals are always subjects and genes are objects. Thus we have four special tags [S:CHEM], [/S:CHEM], [O:GENE], and [/O:GENE] that encapsulate the corresponding entity spans in the modified representation. For the candidate chemical-protein pair ("Cl-", "Na+-K+-2Cl- cotransporter"), the modified input sequence to the relation model for the earlier example sentence would be: "Contribution of the [O:GENE] Na+-K+-2Cl- cotransporter [/O:GENE] NKCC1 to [S:CHEM] Cl- [/S:CHEM] secretion in rat OMCD". Next, this sequence is input to a pretrained language model to obtain contextualized embeddings for all tokens, including the entity marker tokens. The plan is to use the embeddings of these special tokens as representations of the entities they encapsulate to predict a potential relation between them. In this setting, given how attention modules work in BERT-like language models, it is straightforward to see that the representation of the same chemical (protein) span in a sentence changes based on which protein (chemical) span is being considered to form the candidate pair. In PURE, these entity marker embeddings are passed on to a two-layer feed-forward network with ReLU activations followed by a softmax layer that predicts the relation type. (We use the null \(\epsilon\) class to indicate that there is no relation, along with the CPR:3, 4, 5, 6, and 9 classes from Table I.) This entity marker scheme, enclosing candidate subject and object spans before passing them to a pre-trained language model, is demonstrated in Figure 2.

Fig. 2: Relation model with an example segment considering the candidate subject "Cl-" (CHEMICAL) and object "Na+-K+-2Cl- cotransporter" (GENE) enclosed by corresponding entity markers. Here, the relation representation is composed of the start entity marker embeddings along with the intervening text representation (\(h_{M}\)).
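Constructing the marked input is a pure token-list manipulation; a small sketch (marker spellings follow the paper, with "/" denoting the closing tags):

```python
def insert_typed_markers(tokens, chem_span, gene_span):
    """Wrap the candidate CHEMICAL subject and GENE object in typed markers."""
    (cs, ce), (gs, ge) = chem_span, gene_span  # inclusive token indices
    inserts = sorted([(cs, "[S:CHEM]"), (ce + 1, "[/S:CHEM]"),
                      (gs, "[O:GENE]"), (ge + 1, "[/O:GENE]")],
                     reverse=True)  # insert right-to-left so indices stay valid
    out = list(tokens)
    for pos, marker in inserts:
        out.insert(pos, marker)
    return out

tokens = ("Contribution of the Na+-K+-2Cl- cotransporter NKCC1 "
          "to Cl- secretion in rat OMCD").split()
print(" ".join(insert_typed_markers(tokens, chem_span=(7, 7), gene_span=(3, 4))))
# Contribution of the [O:GENE] Na+-K+-2Cl- cotransporter [/O:GENE] NKCC1
# to [S:CHEM] Cl- [/S:CHEM] secretion in rat OMCD
```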
The entity model is trained with multi-class cross-entropy loss using the estimated probabilities of the gold tags corresponding to each span (of length up to \(L\)). The relation model is trained assuming gold entities are available (given the pipeline approach), with multi-class cross-entropy using the probability estimates of the gold relation type for each candidate chemical-protein pair. At test time, chemicals and proteins from the entity model are passed on to the relation model to infer the interactions.
### _Relation representations_
In Section II-C, we conveyed that the entity marker embeddings are used to predict the relation type between a chemical and a protein. We did not, however, elaborate on how exactly this is done. Essentially, a relation representation built on top of the entity marker embeddings is needed. As per the original PURE paper [21], the concatenation of the entity start contextual embeddings of [S:CHEM] and [O:GENE], denoted \(r_{A}=[h_{[\text{S:CHEM}]}:h_{[\text{O:GENE}]}]\), is passed on to the feed-forward layers for relation prediction. However, we can consider other pieces of evidence too, including \(h_{[/\text{S:CHEM}]}\) and \(h_{[/\text{O:GENE}]}\), the entity end marker tokens. Furthermore, \(h_{[\text{CLS}]}\) and the tokens that occur between the two entities could also contribute signal to the eventual prediction. Inspired by prior efforts ([25] and [26]), we consider the different relation representations shown in Table III, where \(h_{M}\) refers to the average of the contextualized embeddings of all tokens occurring between the chemical and protein spans. \(h_{M}\) is set to the \(\mathbf{0}\) vector when there are no tokens between the chemical and protein spans.

| Notation | Relation representation based on entity markers |
| --- | --- |
| \(r_{A}\) | \([h_{[\text{S:CHEM}]}:h_{[\text{O:GENE}]}]\) |
| \(r_{B}\) | \([h_{[\text{CLS}]}:r_{A}]\) |
| \(r_{C}\) | \([h_{[\text{S:CHEM}]}:h_{M}:h_{[\text{O:GENE}]}]\) |
| \(r_{D}\) | \([h_{[\text{CLS}]}:r_{C}]\) |
| \(r_{E}\) | \([h_{[\text{S:CHEM}]}:h_{[/\text{S:CHEM}]}:h_{M}:h_{[\text{O:GENE}]}:h_{[/\text{O:GENE}]}]\) |
| \(r_{F}\) | \([h_{[\text{CLS}]}:r_{E}]\) |

TABLE III: Relation representations used to predict relation types.
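The six variants only differ in which embeddings are concatenated, so they can share one assembly routine; a sketch (the marker-position bookkeeping and key names are our own):

```python
import torch

def relation_rep(h, idx, variant="r_C"):
    """Assemble a Table III representation from contextual embeddings.

    h: (seq_len, hidden); idx: positions of CLS and the four markers.
    """
    s0, s1, o0, o1 = idx["S"], idx["/S"], idx["O"], idx["/O"]
    mid = h[min(s1, o1) + 1:max(s0, o0)]  # tokens strictly between the spans
    h_m = mid.mean(dim=0) if mid.numel() else h.new_zeros(h.size(1))
    base = {"A": [h[s0], h[o0]],
            "C": [h[s0], h_m, h[o0]],
            "E": [h[s0], h[s1], h_m, h[o0], h[o1]]}
    key = {"A": "A", "B": "A", "C": "C", "D": "C", "E": "E", "F": "E"}
    parts = base[key[variant[-1]]]
    if variant[-1] in "BDF":  # r_B, r_D, r_F prepend h_[CLS]
        parts = [h[idx["CLS"]]] + parts
    return torch.cat(parts)  # fed to the two-layer FFN + softmax
```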
### _Cross-sentence context_
As demonstrated by Zhong and Chen [21], cross-sentence context can help language models perform better at E2ERE, especially if pronominal entities are involved. As indicated earlier, ChemProt task is predominantly designed as a sentence level task, while annotators were allowed to look at the full abstract when needed. So although for many scenarios, cross-sentence signal may not be necessary, it might help in some. To accommodate such situations, we extend each input sentence to the left and right by a fixed number of words denoted by hyperparameters \(W_{NER}\) for NER and \(W_{RE}\) for RE.
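Operationally, this is a simple windowing step over the abstract's token sequence; a sketch:

```python
def add_context(doc_tokens, sent_start, sent_end, window):
    """Extend a sentence by up to `window` words on each side of it,
    drawn from the surrounding abstract (the W_NER / W_RE extension)."""
    left = max(0, sent_start - window)
    right = min(len(doc_tokens), sent_end + window)
    # return the widened token window and the sentence's new offset in it
    return doc_tokens[left:right], sent_start - left
```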
### _Evaluation metrics_
We use precision, recall, and F-scores for both NER performance and E2ERE performance. For NER, predicted entities are considered correct only if the entity boundaries and the entity type are both correct. For RE, predicted relations are considered correct only if the boundaries of the subject and object entities and the relation type are all correct. Please note that tokenization already leads to a performance dip (as discussed in Section II-B) because some entities (and associated relations) cannot be recovered. For the ChemProt test set, our tokenization approach loses 0.33% of entities and 0.98% of relations. When we determine recall, we consider these missed entities and relations as false negatives, in addition to those arising from recoverable entities/relations.
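For reproducibility, a sketch of this strict micro-averaged scoring, where items lost to tokenization are added as false negatives (entities and relations are assumed to be hashable tuples of boundaries and types):

```python
def strict_prf(pred, gold, unrecoverable=0):
    """Micro P/R/F1 with exact boundary-and-type matching."""
    pred, gold = set(pred), set(gold)
    tp = len(pred & gold)
    fp = len(pred) - tp
    fn = len(gold) - tp + unrecoverable  # add tokenization-lost items
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```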
## III Experiments and Results
### _Model configurations and hyperparameters_
In our experiments, we combined the original training and development datasets in ChemProt to create a combined training dataset and selected 20% of this dataset as our new development dataset to tune hyperparameters. This way of splitting the dataset is consistent with other efforts [11, 14], as the test set is never involved. For pre-trained language models to be used as encoders for the entity and relation models, we used SciBERT, PubMedBERT (Abstracts), and PubMedBERT (Abstracts+PMC). SciBERT was trained on scientific texts: 18% of papers were from the computer science domain and 82% were from the broad biomedical domain. PubMedBERT (Abstracts) was trained from scratch (custom vocabulary) using abstracts from PubMed and PubMedBERT (Abstracts+PMC) was trained using abstracts from PubMed and full-text articles from PubMedCentral.
For all experiments, we trained the entity model for 50 epochs and relation model for 10 epochs. The batch size is 16 for both NER and RE. We set the context window sizes parameters \(W_{NER}=300\) and \(W_{RE}=100\) and maximum span length \(L=16\) based on empirical assessments with training data. All other model settings (including learning rates) are identical to the PURE model code [21]. There are very few entities that are composed of more than 16 tokens; since the model fails to capture them, they will be included as false negatives. The performances reported are averages determined across model runs with 5 different seeds.
### _Main results_
Regardless of the base language model used, the NER performances are very similar to each other, with F-scores ranging from 90.3 to 91.2 across different seeds. Coming to RE performance, the SciBERT relation model F1-scores were in the 65.8 to 66.1 range across the different relation representations from Table III. Relation models with PubMedBERT (Abstracts) and PubMedBERT (Abstracts+PMC) had similar performances across different relation representations, with PubMedBERT (Abstracts+PMC) achieving a top RE F-score of 68.8 and PubMedBERT (Abstracts) a top F-score of 69.0. We believe SciBERT did not perform as well because it was not fully focused on biomedical text. In Table IV, we show the performances of the relation model across different relation representations only for the PubMedBERT (Abstracts) base model, due to its slight edge over PubMedBERT (Abstracts+PMC). From the table, we see that \(r_{A}\), \(r_{B}\), and \(r_{C}\) have the same F-score but slightly different combinations of precision and recall; \(r_{C}\) trades off recall for a small gain in precision, while \(r_{A}\) achieves the best recall. Including the end tags did not seem to help much (\(r_{E}\) and \(r_{F}\)). We compare our best results with prior E2ERE efforts on ChemProt in Table VI, showing a 2% improvement in NER F-score and over 4% improvement in RE F-score compared to the prior best results.

Footnote 5: All performances reported in this paper (including those in the tables) are rounded to the nearest single decimal point.

| Relation representations | P | R | F |
| --- | --- | --- | --- |
| \(r_{A}\) | 69.9 | **68.3** | **69.0** |
| \(r_{B}\) | 70.3 | 67.8 | **69.0** |
| \(r_{C}\) | **70.8** | 67.2 | **69.0** |
| \(r_{D}\) | 69.9 | 67.6 | 68.7 |
| \(r_{E}\) | 70.2 | 66.9 | 68.5 |
| \(r_{F}\) | 70.4 | 66.4 | 68.3 |

TABLE IV: Results with different relation representations with the PubMedBERT (Abstracts) encoder.

| **Model** | NER P | NER R | NER F | RE P | RE R | RE F |
| --- | --- | --- | --- | --- | --- | --- |
| Att-BiLSTM-CRF+ELMo [11] | 82.5 | 79.8 | 81.1 | 59.5 | 51.2 | 55.1 |
| DyGIE++ [12] | 89.7 | 87.6 | 88.7 | 65.4 | 60.5 | 62.9 |
| SpanMB_BERT [12] | 89.3 | 88.3 | 88.8 | 68.0 | 61.5 | 64.6 |
| **Ours** (\(r_{C}\)) | **91.0** | **90.9** | **91.0** | **70.8** | **67.2** | **69.0** |

TABLE VI: A comparison of different end-to-end relation extraction methods for ChemProt.
### _Ablation of extra context_
We ran a simple experiment where the extra context outside the sentence boundaries designated by \(W_{NER}=300\) and \(W_{RE}=100\) is removed, to see if the performance would dip. We perform this ablation for the \(r_{C}\) model with results shown in Table V, where the top row includes additional context in both the NER and RE models. When the RE context is removed, the F-score only dips by 0.4% (row 2). When both the NER and RE contexts are taken away, the NER performance dips by 0.5%. Compared to the full-context model, the RE F-score decreases by 0.8% without the NER and RE contexts. Considering these dips are all \(<1\)%, we conclude that the extra context may not have added a significant performance boost. This is similar to the small gains observed in the PURE paper [21].

| **Context window sizes** | NER P | NER R | NER F | RE P | RE R | RE F |
| --- | --- | --- | --- | --- | --- | --- |
| \(W_{NER}=300\), \(W_{RE}=100\) | 91.0 | 90.9 | 91.0 | 70.8 | 67.2 | 69.0 |
| \(W_{NER}=300\), \(W_{RE}=0\) | 91.0 | 90.9 | 91.0 | 70.5 | 66.8 | 68.6 |
| \(W_{NER}=0\), \(W_{RE}=0\) | 90.5 | 90.4 | 90.4 | 69.2 | 67.3 | 68.2 |

TABLE V: Performances of the \(r_{C}\) model with PubMedBERT (Abstracts), with and without extra context for NER and RE.
## IV Error Analysis
Our main focus in this section is on relation errors in the E2ERE pipeline. Although NER performance is over 90 (in F-score), NER false positives (FPs) and false negatives (FNs) lead to nearly 40% of RE errors. This is surprising because it implies that the 10% error rate in NER led to two out of every five RE errors. Potentially, a few missed entities are involved in several gold relations, or a few incorrectly spotted entities are leading to many relation FPs. Unfortunately, partial matches lead to both FPs and FNs (for both the NER and RE phases). For example, consider the sentence segment -- "Since this compound retains good **AChE** inhibitory activity and its hexahydrochromeno[4,3-b]pyrrole moiety is reminiscent of the **hexahydropyrrolo[2,3-b]indole of physostigmine** (3),...". Here "AChE" is the gold protein and "hexahydropyrrolo[2,3-b]indole of physostigmine" is the gold chemical. However, we predict a substring of the chemical, **hexahydropyrrolo[2,3-b]indole**, leading to an FN for the gold chemical span but also an FP for the substring tagged as a chemical. For this specific example, a gold chemical-protein relation between "AChE" and "hexahydropyrrolo[2,3-b]indole of physostigmine" is missed due to the chemical FN, causing a relation FN. However, the model actually predicts a new relation between "AChE" and the partially matched substring "hexahydropyrrolo[2,3-b]indole", leading to a relation FP. Thus, a partial match for a single entity leads to two NER errors and two RE errors. This particular entity is very long (it consists of 11 tokens under our pre-processing approach) and complex, and could have been missed by the NER model for that reason. There were also occasions where short abbreviated entities were missed by the NER model, especially if they are similar to commonly occurring words or if they are homonymous.
Next, we move on to errors that are not caused by NER errors. These are errors specific to the RE model, where the chemical and protein are correctly tagged by the NER model. We begin with FNs, where we find that more than 80% of the errors occur when the model incorrectly predicted the \(\epsilon\) (null) relation although the gold label is one of the five valid relations. So this is not an FN owing to confusion between two different relation types, but one caused simply by not being able to detect any interaction at all. We calculated the proportion of such FN errors for each relation type, as shown in Figure 3. We clearly see that this happens for one in four CPR:9 (substrate or product of) gold relations. As an example, we had an FN for a gold CPR:9 relation connecting the bold entities in this sentence -- "The hypothesis of the present study was that differences among **dopamine transporter** (DAT) ligands in potency and effectiveness as positive reinforcers were related to potency and effectiveness as **DA** uptake inhibitors." We speculate that here the substrate relation is not clearly asserted but rather implied, as part of a long complicated sentence, via an indirect relationship between the chemical and protein. We believe such long sentences with nuanced expressions may have caused these FNs for CPR:9.

Fig. 3: Proportion of gold relations predicted as the null relation.
A remaining small proportion of FN RE errors occurs when the model predicted a different non-null relation than the gold relation, the case where both an FN and an FP are created. We counted all such errors and found this only happened between certain relation type pairs. We show our results in Figure 4, where the most often confused relations are CPR:3 (upregulator or activator) and CPR:4 (downregulator or inhibitor). This type of error corresponds to what Sun et al. [9] found in their DS-LSTM model too. Consider the following sentence, where the gold relation was CPR:3 -- "Reductions in striatal dopamine and **tyrosine hydroxylase** content were also less pronounced with **EHT** treatment." The model predicted this as CPR:4. While the phrases "reductions in" and "less pronounced" in isolation may indicate an inhibitor interaction, the double negative that they induce together seems to indicate an activator link. Since upregulation and downregulation share similar term usage regarding regulation, the model has a hard time telling them apart, especially when complex constructs are used as in the example. We also analyzed the relative proportions of FPs arising from each relation type and found that a quarter of the relations predicted as CPR:9 (substrate or product of) are actually FPs (Figure 5).
## V Concluding Remarks
Chemical-protein interactions are central to drug mechanisms, leading to both therapeutic potential and side effects. The ChemProt shared task set out to create an NLP benchmark for this important task, and since its introduction, most attempts have treated the task as a relation classification problem where the entities are already annotated. Very few attempts were made to address the ChemProt extraction task in an end-to-end manner. In this paper, we improve over the prior state of the art in E2ERE for ChemProt using a span-based pipeline approach that additionally uses entity markers in the RE step. We also employ a fine-grained tokenization scheme that retains the ability to extract more entities than the default tokenizers in standard NLP packages. Our improvements are substantial enough (4.4% in F-score) to have not resulted purely from the better tokenization scheme, because the prior best result's tokenization loses 2.0% of relations while ours loses 1.37%. Ablation experiments show that the extra sentential context adds \(<\)1% in performance.
Although our model improves over the prior best scores to a nontrivial extent, the final F-score is still \(<70\%\), more than 20 points away from the 90% range where models are typically considered powerful and nearing human-level performance. Error analyses showed that long entity spans can hurt NER performance, which stands at 91%. Since 40% of RE errors are due to NER errors, last-mile gains in NER performance could greatly improve the E2ERE scores. The substrate relation type (CPR:9) is involved in many FNs and FPs compared to other types and needs to be examined more carefully to potentially design customized strategies for that type. While fine-grained tokenization helped the NER step, it could have hurt the RE model, because too many tokens within each entity may not capture the semantic representation of the span as well as fewer but longer tokens. Once the NER step is completed, reverting to a simpler tokenization scheme could help the RE model better leverage the semantic priors in the base language models. Models based on generative approaches such as BioGPT [27] ([https://huggingface.co/docs/transformers/model_doc/biogpt](https://huggingface.co/docs/transformers/model_doc/biogpt)) and BioMedLM ([https://huggingface.co/stanford-crfm/BioMedLM](https://huggingface.co/stanford-crfm/BioMedLM)) could also be adapted to improve the end-to-end performance and need further exploration.
|
2308.06926 | OpenGCD: Assisting Open World Recognition with Generalized Category
Discovery | A desirable open world recognition (OWR) system requires performing three
tasks: (1) Open set recognition (OSR), i.e., classifying the known (classes
seen during training) and rejecting the unknown (unseen$/$novel classes)
online; (2) Grouping and labeling these unknown as novel known classes; (3)
Incremental learning (IL), i.e., continual learning these novel classes and
retaining the memory of old classes. Ideally, all of these steps should be
automated. However, existing methods mostly assume that the second task is
completely done manually. To bridge this gap, we propose OpenGCD that combines
three key ideas to solve the above problems sequentially: (a) We score the
origin of instances (unknown or specifically known) based on the uncertainty of
the classifier's prediction; (b) For the first time, we introduce generalized
category discovery (GCD) techniques in OWR to assist humans in grouping
unlabeled data; (c) For the smooth execution of IL and GCD, we retain an equal
number of informative exemplars for each class with diversity as the goal.
Moreover, we present a new performance evaluation metric for GCD called
harmonic clustering accuracy. Experiments on two standard classification
benchmarks and a challenging dataset demonstrate that OpenGCD not only offers
excellent compatibility but also substantially outperforms other baselines.
Code: https://github.com/Fulin-Gao/OpenGCD. | Fulin Gao, Weimin Zhong, Zhixing Cao, Xin Peng, Zhi Li | 2023-08-14T04:10:45Z | http://arxiv.org/abs/2308.06926v1 | # OpenGCD: Assisting Open World Recognition with Generalized Category Discovery
###### Abstract
A desirable open world recognition (OWR) system requires performing three tasks: (1) Open set recognition (OSR), _i.e._, classifying the known (classes seen during training) and rejecting the unknown (unseen/novel classes) _online_; (2) Grouping and labeling these unknown as novel known classes; (3) Incremental learning (IL), _i.e._, continual learning these novel classes and retaining the memory of old classes. Ideally, all of these steps should be automated. However, existing methods mostly assume that the second task is completely done manually. To bridge this gap, we propose OpenGCD that combines three key ideas to solve the above problems sequentially: (a) We score the origin of instances (unknown or specifically known) based on the uncertainty of the classifier's prediction; (b) For the first time, we introduce generalized category discovery (GCD) techniques in OWR to assist humans in grouping unlabeled data; (c) For the smooth execution of IL and GCD, we retain an equal number of informative exemplars for each class with diversity as the goal. Moreover, we present a new performance evaluation metric for GCD called harmonic clustering accuracy. Experiments on two standard classification benchmarks and a challenging dataset demonstrate that OpenGCD not only offers excellent compatibility but also substantially outperforms other baselines. Code: [https://anonymous.4open.science/r/OpenGCD-61F6/](https://anonymous.4open.science/r/OpenGCD-61F6/).
## 1 Introduction
Human cognition is the process of transforming, storing, learning, and using continually received information. For example, a child born in Asia will naturally recognize pandas, elephants, and rhinoceroses. If he arrives in Australia, although he cannot recognize kangaroos and koalas, he can still identify them as unseen and as two different animals based on his prior knowledge and their characteristics. After learning from his parents or others, he knows what species both are. In order not to forget these animals, he also takes pictures. In this way, the child can find and distinguish unseen animals according to his prior knowledge and their characteristics, later recognize them through learning, and permanently remember these seen animals through photos. Inspired by this, several recent studies [1; 2; 3; 4; 5] have attempted to theorize this human mindset and formulated an architecture called open world recognition (OWR).
A desirable OWR system requires performing three main tasks: (1) Open set recognition (OSR), _i.e._, classifying the known (classes seen during training) and rejecting the unknown (unseen/novel classes) _online_; (2) Grouping and labeling these unknown as novel known classes; (3) Incremental learning (IL), _i.e._, continually learning these novel classes and retaining the memory of old classes [1; 2]. In this paper, we propose an approach called _assisting **open** world recognition with **g**eneralized **c**ategory **d**iscovery_ (OpenGCD) that combines three key ideas to address the above tasks sequentially.
For the first task, _i.e._, OSR, thresholding the closed-set predictions of a classifier and evaluating the likelihood that an instance is from an unknown class based on the marginal distribution are two popular options [6]. The former is lightweight, while the latter is intuitive. Inspired by this, our first idea is to develop an OSR method that combines the advantages of both. To this end, we evaluate the likelihood that an instance is from an unknown class based on the uncertainty of the classifier's closed-set prediction. It is not only as computationally lightweight as thresholding methods, but also allows visualizing the probability distribution of an instance over the unknown and all known classes, as evaluation methods do.
For the second task, since Bendale and Boult [1] first formalized the OWR problem, the vast majority of subsequent work has followed their setting to solve this task exclusively manually, _e.g_., [2; 3; 4; 5]. It is laborious and expensive. Furthermore, we find that it is essentially a task to classify all data in the unlabeled set (rejected instance set) given a labeled dataset (available training set). Ideally, labeled and unlabeled datasets are class-disjoint. At this point, this task coincides with novel category discovery (NCD), with the difference that the latter only requires clustering unlabeled data, while the former requires further specifying explicit classes. However, the fact is that there are always some instances from known classes that are falsely rejected, _i.e_., labeled and unlabeled datasets may be class-intersecting. As an extension to NCD, generalized category discovery (GCD) takes this into account. Inspired by this, our second idea is to introduce GCD techniques to assist humans in grouping unlabeled data. To this end, we employ the semi-supervised \(k\)-means++ (ss-\(k\)-means++) algorithm [7] to filter and group instances from novel classes in the unlabeled dataset. Thus, the labeler simply picks out the obviously incompatible instances from each group, rather than struggling to label the messy data directly.
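The core of this grouping step is easy to sketch: labeled features stay clamped to their class clusters while unlabeled features move freely. The following is a simplified stand-in for ss-\(k\)-means++ [7] (for brevity, the extra centroids are seeded randomly rather than with \(k\)-means++):

```python
import numpy as np

def ss_kmeans(z_lab, y_lab, z_unlab, k_total, iters=100, seed=0):
    """Semi-supervised k-means sketch for grouping rejected instances."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y_lab)
    extra = rng.choice(len(z_unlab), k_total - len(classes), replace=False)
    # known-class centroids from labels; extra centroids for novel classes
    cents = np.vstack([z_lab[y_lab == c].mean(0) for c in classes] +
                      [z_unlab[i] for i in extra])
    for _ in range(iters):
        d = ((z_unlab[:, None, :] - cents[None]) ** 2).sum(-1)
        assign = d.argmin(1)              # unlabeled points move freely
        for j in range(k_total):
            pts = [z_unlab[assign == j]]
            if j < len(classes):          # labeled points stay clamped
                pts.append(z_lab[y_lab == classes[j]])
            pts = [p for p in pts if len(p)]
            if pts:
                cents[j] = np.vstack(pts).mean(0)
    return assign, cents
```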
However, this approach requires knowledge of the total number of classes across both known and novel classes, which is not realistic in the open world. To this end, we fine-tune the class number estimation protocol proposed by Han _et al._ [8] to allow it to accelerate the search process with Brent's algorithm, as in [9]. Moreover, we find that the average clustering accuracy (ACC) [10], an evaluation metric still widely used in NCD and GCD [8; 9; 11; 12; 13], fails to distinguish explicitly between known and novel classes, resulting in improper evaluation. Thus, we extend ACC to the harmonic clustering accuracy (HCA), which measures known and novel classes with classification accuracy and ACC, respectively, and then harmonizes the two.
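A sketch of HCA as described above (our reading: classification accuracy on instances whose ground truth is a known class, Hungarian-matched clustering accuracy on the rest, harmonically averaged):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hca(y_true, y_pred, known):
    """Harmonic clustering accuracy: harmonic mean of known-class
    classification accuracy and novel-class clustering accuracy (ACC)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    is_known = np.isin(y_true, list(known))
    acc_k = (y_true[is_known] == y_pred[is_known]).mean()
    yt, yp = y_true[~is_known], y_pred[~is_known]
    # ACC: best one-to-one map from predicted clusters to novel classes
    clusters, classes = np.unique(yp), np.unique(yt)
    cost = np.array([[-np.logical_and(yp == i, yt == j).sum()
                      for j in classes] for i in clusters])
    row, col = linear_sum_assignment(cost)
    acc_n = -cost[row, col].sum() / len(yt)
    return 2 * acc_k * acc_n / (acc_k + acc_n)
```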
For the third task, _i.e_., IL, the challenge lies in acquiring novel knowledge while avoiding catastrophic forgetting of old knowledge. After all, in an open dynamic world, full training data may only be temporarily available due to storage constraints or privacy concerns [14]. Furthermore, GCD cannot function smoothly without informative labeled data from known classes. Fortunately, we find the popular replay technique to be a straightforward yet effective solution. Inspired by this, our third idea is to select some informative exemplars when data are available and save them for subsequent GCD and IL. To this end, we employ the dissimilarity-based sparse subset selection (DS3) algorithm [15] for exemplar selection to ensure the diversity of pre-stored instances. Compared to methods that aim at selecting exemplars for representativeness, _e.g_., [11; 16], it preserves as much spatial information as possible from the original data thus reducing open space risk, _i.e_., the risk of classifying known instances into unknown.
Overall, the contributions of this work can be highlighted as follows: (i) A highly compatible OWR scheme dubbed OpenGCD is provided, which is independent of classifier, so any well-designed closed set classifier can be easily embedded in it for OWR; (ii) GCD is first introduced to assist the task of filtering and grouping unlabeled data in OWR to reduce labor costs, which drives OWR another small step towards automation; (iii) A new performance evaluation metric called HCA is presented for NCD and GCD, which solves the problem that ACC fails to distinguish explicitly between known and novel classes resulting in improper evaluation; (iv) A thorough empirical evaluation of OpenGCD is reported, showing significant performance improvements in various tasks of OWR.
## 2 Related work
We visually illustrate the similarities and differences between OWR and related settings in Fig. 1. Next, we briefly review the most representative related works.
**Open set recognition.** As shown in Fig. 1(c), in the OSR scenario, incomplete knowledge of the world exists in the training set, and unknown classes can be submitted to the system during testing. It requires the _online_ model not only to _classify_ the known/seen classes, but also to _reject_ the unknown/unseen/novel ones [17; 18]. The 1-vs-all principle, thresholding, and unknown probability estimation are the three most popular OSR strategies [6]. 1-vs-all principle-based methods [19; 20] are the earliest in origin but relatively cumbersome. Threshold-based methods [21; 22] offer high compatibility and low computational overhead. Methods that estimate unknown probabilities [23; 24] are the most intuitive.
**Generalized category discovery.** As shown in Fig. 1(d), in the GCD scenario, the unlabeled test set (available for training) may contain both classes that have been seen during training and classes that have not. It requires the model not only to _classify_ known/seen classes, but also to _cluster_ unknown/unseen/novel ones [12]. Its three major differences from OSR are whether it supports online runs, whether the unlabeled test set is available for training, and whether the unknown classes should be rejected or clustered. Furthermore, if the test and training sets are class-disjoint, the problem degenerates into NCD, which can be illustrated by Fig. 1(d) with the white-emitting animals removed. As an emerging technology, representative works are [8; 9; 11; 13].
**Incremental learning.** As shown in Fig. 1(b), in the IL scenario, novel known classes, rather than unseen classes, are submitted to the system during testing. Moreover, the full training data from old known classes may only be temporarily available due to storage constraints or privacy concerns. It requires the model to continuously learn knowledge of novel known classes while avoiding catastrophic forgetting of old known classes [25]. Regularization, parameter isolation, and replay are the three most popular techniques [14]. The former two [26; 27] offer low compatibility due to their strong dependence on neural network classifiers. The last one [28; 16] is simple but effective, and essential for GCD.
**Open world recognition.** As shown in Fig. 1(e), in the OWR scenario, the settings of OSR and IL are perfectly followed. The GCD setting will also be catered for if the replay IL scheme is adopted. It requires the model to perform OSR, group and label unlabeled data, and IL in sequence. As a challenging task, representative works are [1; 2; 3; 4; 5]. Interestingly, they all adopted thresholding methods for OSR and processed unlabeled data manually. Inspired by this, we developed OpenGCD, whose flow can be illustrated by Fig. 1(e), in which white-emitting workers are replaced by GCD.
## 3 Assisting open world recognition with generalized category discovery
**Problem Formulation.** A solution to OWR is a tuple \([\mathcal{O},\mathcal{L},\mathcal{I}]\) with:
1. An OSR function \(\mathcal{O}\!:\!\mathbb{R}^{3\times H\times W}\mapsto\mathbb{N}\). Given an _online_ instance set \(\mathcal{X}\!\subset\!\mathbb{R}^{3\times H\times W}\), \(\mathcal{O}\) should assign \(\boldsymbol{x}\!\in\!\mathcal{X}\) to either \(\mathcal{C}_{t}^{l}\!\subset\!\mathbb{N}^{+}\) (known classes at phase \(t\)) or \(0\) (unknown classes). See Secs. 3.1-3.4 for our \(\mathcal{O}\).
Figure 1: Schematic of various problems in the open world.
2. A labeling process \(\mathcal{L}\!:\!\mathbb{R}^{3\times H\times W}\mapsto\mathbb{N}^{+}\). Given an unlabeled unknown instance set \(\mathcal{X}^{0}\!\subset\!\mathcal{X}\), \(\mathcal{L}\) should assign ground-truth labels to \(\mathbf{x}^{0}\!\in\!\mathcal{X}^{0}\). Assuming that the novel classes discovered are \(\mathcal{C}_{t}^{n}\!\subset\!\mathbb{N}^{+}\) where \(\mathcal{C}_{t}^{n}\cap\mathcal{C}_{t}^{l}=\varnothing\), then it yields \(\mathcal{C}_{t+1}^{l}=\mathcal{C}_{t}^{l}\cup\mathcal{C}_{t}^{n}\). See Sec. 3.5 for our \(\mathcal{L}\).
3. An IL function \(\mathcal{I}_{t}:\mathcal{H}^{|\mathcal{C}_{t}^{l}|}\!\mapsto\!\mathcal{H}^{| \mathcal{C}_{t+1}^{l}|}\). Given a labeled instance set \(\mathcal{X}^{n}\!\subset\!\mathcal{X}^{0}\) of novel classes, \(\mathcal{I}_{t}\) should allow \(\mathcal{O}\) to learn \(\mathcal{C}_{t}^{n}\) and retain the ability to recognize \(\mathcal{C}_{t}^{l}\). See Sec. 3.6 for our \(\mathcal{I}_{t}\).
### Feature embedding
Given an instance \(\mathbf{x}\!\in\!\mathbb{R}^{3\times H\times W}\), the goal of feature embedding is to convert it into a flat feature \(\mathbf{z}\!\in\!\mathbb{R}^{D}\). The benefit is that it provides an interface allowing us to design subsequent models as we wish: one can attach a classification head or plug in any other type of classifier, _e.g._, a support vector machine (SVM) or XGBoost.
The features generated by the vision transformer (ViT) [29] with self-supervised contrastive learning offer discriminative spatial representations. Thus, as in [9], we employ ViT trained on the _unlabeled_ ImageNet with DINO self-supervision as the feature extractor \(f\!:\!\mathbb{R}^{3\times H\times W}\!\mapsto\!\mathbb{R}^{D}\). We can get the feature representation \(\mathbf{z}\) of the instance \(\mathbf{x}\) via \(f\). All our subsequent procedures are executed on features extracted from the frozen ViT.
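For concreteness, the feature embedding step can be sketched in a few lines. The `torch.hub` entry point below is our assumption about the most convenient route to the DINO checkpoint released by [29]; any DINO-pretrained ViT would serve equally well, and the function name is ours.

```python
import torch

# Load a DINO self-supervised ViT-B/16 and freeze it as the extractor f.
model = torch.hub.load('facebookresearch/dino:main', 'dino_vitb16')
model.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of normalized images (B, 3, H, W) to flat features (B, D)."""
    return model(images)  # the [CLS] embedding, D = 768 for ViT-B/16

z = extract_features(torch.randn(4, 3, 224, 224))  # toy batch
print(z.shape)  # torch.Size([4, 768])
```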
### Exemplar selection
At phase \(t\), given a temporarily available labeled feature set \(\mathcal{Z}_{t}^{l}\!=\!\{\mathbf{z}_{i}^{l},y_{i}^{l}\}_{i=1}^{N_{0}}\) where \(y_{i}^{l}\!\in\!\mathcal{C}_{t}^{l}\!\subset\!\mathbb{N}^{+}\), the goal of the exemplar selection is to retain informative instances of it for GCD (Sec. 3.5.1) and IL (Sec. 3.6).
We apply the DS3 algorithm [15] to select exemplars \(\mathcal{E}_{t}\) from \(\mathcal{Z}_{t}^{l}\) and store them in buffer \(\mathcal{M}_{r}\). DS3 defines an objective function based on the dissimilarity between instances and solves it by the alternating direction method of multipliers (ADMM). DS3 is only our default choice because of its ability to preserve diverse and informative instances. Exemplars selected with the goal of diversity retain as much spatial information as possible from the original data, avoiding the expansion of unknown space and thus reducing the open space risk, _i.e._, the risk of categorizing known instances as unknown. In fact, any similar exemplar selection approach is an alternative. Moreover, to avoid running out of memory, \(|\mathcal{M}_{r}|\) is always fixed at \(N_{0}\), the size of the memory occupied by the data in the initial phase. To ensure class balance, DS3 is executed once on the feature subset of each known class, so that \(|\mathcal{M}_{r}|/|\mathcal{C}_{t}^{l}|\) exemplars are retained for each known class.
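As a minimal stand-in for DS3, the sketch below uses greedy farthest-point selection, which shares DS3's goal of spatially diverse exemplars but not its ADMM formulation; both function names are ours.

```python
import numpy as np

def select_diverse(Z: np.ndarray, m: int) -> np.ndarray:
    """Greedily pick m row indices of Z that are maximally spread out."""
    idx = [int(np.argmax(np.linalg.norm(Z - Z.mean(0), axis=1)))]
    d = np.linalg.norm(Z - Z[idx[0]], axis=1)  # distance to the chosen set
    while len(idx) < m:
        i = int(np.argmax(d))                  # farthest point so far
        idx.append(i)
        d = np.minimum(d, np.linalg.norm(Z - Z[i], axis=1))
    return np.array(idx)

def build_buffer(Z, y, buffer_size):
    """Class-balanced buffer M_r: run the selection once per known class."""
    classes = np.unique(y)
    per_class = buffer_size // len(classes)
    keep = np.concatenate([
        np.where(y == c)[0][select_diverse(Z[y == c],
                                           min(per_class, int((y == c).sum())))]
        for c in classes])
    return Z[keep], y[keep]
```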
### Classifier (re)fitting
Given the labeled exemplar set \(\mathcal{E}_{t}\!=\!\{\mathbf{z}_{i}^{e},y_{i}^{e}\}_{i=1}^{N_{0}}\!\subseteq\!\mathcal{Z}_{t}^{l}\) where \(y_{i}^{e}\!\in\!\mathcal{C}_{t}^{l}\!\subset\!\mathbb{N}^{+}\), the goal of (re)fitting the classifier is to allow the classifier to learn the existing knowledge of the known classes. Regardless of the current phase, this is a process from scratch. It is uncomplicated, since the total number of exemplars is constant and the classifier is (re)fitted directly on features rather than raw instances.
We choose an appropriate classifier \(\varphi\!:\!\mathbb{R}^{D}\!\mapsto\!\mathbb{R}^{|\mathcal{C}_{t}^{l}|}\) to fit on \(\mathcal{E}_{t}\). Since OpenGCD has no dependency on classifier type, any well-designed classifier is an alternative. Considering that this is not the focus of this study, we take the multilayer perceptron (MLP, which can be considered as a classification head) or the SVM and XGBoost with default parameters as candidates.
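Since any probabilistic closed set classifier can be plugged in, the (re)fitting step reduces to a few lines; below is a sketch with two scikit-learn candidates (probability outputs are needed by Sec. 3.4), with our own naming.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def fit_classifier(Z_e, y_e, kind='svm'):
    """(Re)fit a closed set classifier from scratch on the exemplar features."""
    clf = MLPClassifier(max_iter=500) if kind == 'mlp' else SVC(probability=True)
    return clf.fit(Z_e, y_e)  # predict_proba then yields P_t for OSR
```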
### Uncertainty-based open set recognition
Given an unlabeled feature set \(\mathcal{Z}_{t}^{u}\!=\!\{\mathbf{z}_{i}^{u}\}_{i=1}^{M_{t}}\) (allowing continuous online delivery), the goal of OSR is to assign _label 0_ to features from unknown classes and assign other features to \(\mathcal{C}_{t}^{l}\).
By feeding \(\mathcal{Z}_{t}^{u}\) into \(\varphi\), the predicted probability distribution \(\mathcal{P}_{t}\!=\!\{\mathbf{p}_{i}\}_{i=1}^{M_{t}}\) over the known classes is obtained. Let the label set of \(\mathcal{Z}_{t}^{u}\) be \(\mathcal{C}_{t}^{u}\!\subset\!\mathbb{N}^{+}\). If \(\mathcal{C}_{t}^{u}\!\subseteq\!\mathcal{C}_{t}^{l}\) is known, the goal degenerates to closed set recognition (CSR) and the predicted labels can be assigned by \(y_{i}^{*}\!=\!\arg\max_{j\in\mathcal{C}_{t}^{l}}p_{ij}\). In the open world, however, \(\mathcal{Z}_{t}^{u}\) may contain unknown classes, on which the classifier tends to produce a relatively flat probability distribution, _i.e._, a prediction with high uncertainty. Thus, we approximate the uncertainty of the classifier's prediction \(\mathbf{p}_{i}\) by:
\[u_{i}=1-\max_{j\in\mathcal{C}_{t}^{l}}p_{ij} \tag{1}\]
Then, we define the unknown probability as:
\[p_{i0}=\alpha\times u_{i} \tag{2}\]
where \(\alpha\) is a regulatory factor to control the temperature of the uncertainty.
Next, we can get a new probability distribution by:
\[p^{\prime}_{ij}=\frac{e^{p_{ij}}}{\sum_{k=0}^{|\mathcal{C}_{t}^{l}|}e^{p_{ik}}} \tag{3}\]
So far, we have obtained the probability distribution of the instance over the unknown class (\(0\)) and all known classes (\(\mathcal{C}_{t}^{l}\)). Moreover, it is clear from Eqs. (1)-(3) that the related operations are quite lightweight, so the computational overhead is low. Essentially, it is along the same lines as the approach of thresholding \(\mathbf{p}_{i}\), which is to reject instances with a low maximum prediction probability, while our approach is more intuitive. Specifically, in the case of \(\mathbf{p}_{i}=[0.1,0.2,0.6]\), the thresholding approach can reject \(\mathbf{z}_{i}^{u}\) by setting the threshold to be greater than \(0.6\), but without quantitatively characterizing the likelihood of \(\mathbf{z}_{i}^{u}\) falling into the unknown. Our approach, in contrast, describes this likelihood by \(p^{\prime}_{i0}\) and rejects \(\mathbf{z}_{i}^{u}\) whenever \(p^{\prime}_{i0}>p^{\prime}_{ij}\) for all \(j\in\mathcal{C}_{t}^{l}\).
Finally, the predicted labels can be assigned by \(y_{i}^{*}\!=\!\arg\max_{j\in\{0\}\cup\mathcal{C}_{t}^{l}}p^{\prime}_{ij}\). Let the feature subset in \(\mathcal{Z}_{t}^{u}\) with predicted label \(0\) be \(\mathcal{Z}_{t}^{0}\).
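Eqs. (1)-(3) translate directly into code; a minimal numpy sketch (our naming), where the known classes occupy columns \(1,\dots,K\) after prepending the unknown score:

```python
import numpy as np

def open_set_predict(P: np.ndarray, alpha: float) -> np.ndarray:
    """P: (M, K) probabilities over K known classes. Returns labels in {0..K},
    where 0 rejects the instance as unknown."""
    u = 1.0 - P.max(axis=1)                    # Eq. (1): prediction uncertainty
    p0 = alpha * u                             # Eq. (2): unknown probability
    scores = np.concatenate([p0[:, None], P], axis=1)
    p_new = np.exp(scores)                     # Eq. (3): softmax renormalization
    p_new /= p_new.sum(axis=1, keepdims=True)
    return p_new.argmax(axis=1)

# The example from the text: p_i = [0.1, 0.2, 0.6] is rejected for alpha = 2,
# since p_i0 = 2 * (1 - 0.6) = 0.8 exceeds every known-class score.
print(open_set_predict(np.array([[0.1, 0.2, 0.6]]), alpha=2.0))  # [0]
```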
### Assisting manual annotation with generalized category discovery
Given the labeled exemplar set \(\mathcal{E}_{t}\!=\!\{\mathbf{z}_{i}^{e},y_{i}^{e}\}_{i=1}^{N_{0}}\!\subseteq\! \mathcal{Z}_{t}^{l}\) where \(y_{i}^{e}\!\in\!\mathcal{C}_{t}^{l}\!\subset\!\mathbb{N}^{+}\) and the rejected unlabeled feature set \(\mathcal{Z}_{t}^{0}\!=\!\{\mathbf{z}_{i}^{0}\}_{i=1}^{M_{0}^{0}}\!\subseteq\! \mathcal{Z}_{t}^{u}\) (available for training), the goal of assisting manual annotation with GCD is to first automatically filter and group features from novel classes in \(\mathcal{Z}_{t}^{0}\) using GCD, followed by manual correction and labeling.
#### 3.5.1 Generalized category discovery
As with the input of Sec. 3.5, the goal of GCD is to put the falsely rejected features in \(\mathcal{Z}_{t}^{0}\) back to \(\mathcal{C}_{t}^{l}\) and cluster other features in \(\mathcal{Z}_{t}^{0}\). We introduce GCD to assist in manually filtering and grouping features.
Let the novel classes in \(\mathcal{Z}_{t}^{0}\) be \(\mathcal{C}_{t}^{n}\!\subset\!\mathbb{N}^{+}\), and all classes in \(\mathcal{E}_{t}\!\cup\!\mathcal{Z}_{t}^{0}\) be \(\mathcal{C}_{t}\!=\!\mathcal{C}_{t}^{l}\cup\mathcal{C}_{t}^{n}\) where \(\mathcal{C}_{t}^{l}\cap\mathcal{C}_{t}^{n}\!=\!\varnothing\). However, we usually have no prior knowledge of \(|\mathcal{C}_{t}^{n}|\) or \(|\mathcal{C}_{t}|\). Here, we use the estimate \(|\hat{\mathcal{C}}_{t}|\) instead of \(|\mathcal{C}_{t}|\) (the estimation problem is solved in Sec. 3.5.3). Afterwards, we employ ss-\(k\)-means++ [7] with \(\mathcal{E}_{t}\) as supervision to filter and group features from novel classes in \(\mathcal{Z}_{t}^{0}\) as in [9]. ss-\(k\)-means++ determines the centroids of the \(|\mathcal{C}_{t}^{l}|\) known classes by \(\mathcal{E}_{t}\) and selects the remaining \(|\hat{\mathcal{C}}_{t}^{n}|\) (\(|\hat{\mathcal{C}}_{t}^{n}|\!=\!|\hat{\mathcal{C}}_{t}|\!-\!|\mathcal{C}_{t}^{l}|\)) centroids with a probability proportional to the distance from the feature to the nearest centroid. At each iteration, we force the data in \(\mathcal{E}_{t}\) to map to their ground-truth labels. Likewise, ss-\(k\)-means++ is only our default choice, and any approach with semi-supervised clustering capabilities is an option.
Finally, let the labels assigned to \(\mathcal{Z}_{t}^{0}\) by ss-\(k\)-means++ be \(\{\hat{y}_{i}^{0}\}_{i=1}^{M_{0}^{0}}\) where \(\hat{y}_{i}^{0}\in\mathcal{C}_{t}^{l}\cup\mathcal{N}_{t}\) and \(\mathcal{N}_{t}\) are \(|\mathcal{\hat{C}}_{t}^{n}|\) novel groups that are clustered. Then, let the features from \(\mathcal{N}_{t}\) in \(\mathcal{Z}_{t}^{0}\) and the corresponding predicted labels be combined into \(\mathcal{\hat{Z}}_{t}^{n}\).
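A compact sketch of this clustering step is given below; it follows the ss-\(k\)-means++ recipe described above (known-class centroids from exemplar means, \(k\)-means++-style seeding of novel centroids, labeled points clamped to their ground truth), though the actual implementation of [7; 9] may differ in details.

```python
import numpy as np

def ss_kmeans(Z_e, y_e, Z_u, k, n_iter=100, seed=0):
    """Cluster Z_u into k groups with labeled exemplars (Z_e, y_e) as supervision.
    Returns cluster ids for Z_u; ids 0..n_known-1 coincide with the known classes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y_e)
    cents = [Z_e[y_e == c].mean(0) for c in classes]      # known-class centroids
    while len(cents) < k:                                 # seed novel centroids
        d2 = np.min([((Z_u - c) ** 2).sum(1) for c in cents], axis=0)
        cents.append(Z_u[rng.choice(len(Z_u), p=d2 / d2.sum())])
    cents = np.stack(cents)
    Z = np.vstack([Z_e, Z_u])
    y_idx = np.searchsorted(classes, y_e)                 # ground-truth cluster ids
    for _ in range(n_iter):
        assign = ((Z[:, None] - cents[None]) ** 2).sum(-1).argmin(1)
        assign[:len(Z_e)] = y_idx                         # clamp labeled data
        for j in range(k):
            if (assign == j).any():
                cents[j] = Z[assign == j].mean(0)
    return assign[len(Z_e):]
```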
#### 3.5.2 Manual annotation
Given the feature set \(\mathcal{\hat{Z}}_{t}^{n}\!=\!\{\mathbf{z}_{i}^{n},\hat{y}_{i}^{n}\}_{i=1}^{\hat{M} _{t}^{n}}\) with predicted cluster labels where \(\mathbf{z}_{i}^{n}\!\in\!\mathcal{Z}_{t}^{0}\) and \(\hat{y}_{i}^{n}\!\in\!\mathcal{N}_{t}\), the goal of manual annotation is to correct and label each cluster.
Engineers can fetch the instances corresponding to \(\mathcal{\hat{Z}}_{t}^{n}\), then visually locate the distinctive images in each cluster without much effort and put them into other appropriate clusters, and finally assign
ground-truth labels. Of course, it is also necessary to remove features from \(\mathcal{C}_{t}^{l}\) in \(\hat{\mathcal{Z}}_{t}^{n}\) since there is no perfect model. Moreover, if \(|\mathcal{C}_{t}^{n}|>|\hat{\mathcal{C}}_{t}^{n}|\) or \(|\mathcal{C}_{t}^{n}|<|\hat{\mathcal{C}}_{t}^{n}|\), the corresponding clusters need to be added or removed. Nevertheless, compared with processing instances one by one, filtering and grouping data by GCD technology can still significantly reduce labor costs.
Let the combination of the features in \(\hat{\mathcal{Z}}_{t}^{n}\) from novel classes \(\mathcal{C}_{t}^{n}\) and the corresponding ground-truth labels be \(\mathcal{Z}_{t}^{n}\).
#### 3.5.3 Estimating the number of classes
As with the input of Sec. 3.5, the goal of estimating the number of classes is to determine \(k\) in ss-\(k\)-means++, _i.e._, \(|\mathcal{C}_{t}|\).
We observe that the class number estimation protocol in [9] improves the search efficiency via Brent's algorithm but is prone to falling into a greedy trap by using ACC as the only evaluation metric. Conversely, [8] circumvents this problem by evaluating labeled and unlabeled predictions separately but executes inefficiently due to traversal search. Thus, we fine-tune the protocol in [8] to allow it to accelerate the search process by Brent's algorithm as in [9].
Specifically, we first split \(\mathcal{E}_{t}\) into an anchor set \(\mathcal{E}_{t}^{a}\) with classes \(\mathcal{C}_{t}^{a}\) and a validation set \(\mathcal{E}_{t}^{v}\) with classes \(\mathcal{C}_{t}^{v}\) where \(\mathcal{C}_{t}^{a}\cup\mathcal{C}_{t}^{v}=\mathcal{C}_{t}^{l}\), \(\mathcal{C}_{t}^{a}\cap\mathcal{C}_{t}^{v}=\varnothing\), and \(|\mathcal{C}_{t}^{a}|:|\mathcal{C}_{t}^{v}|=2:1\). Then, we launch Brent's algorithm to execute ss-\(k\)-means++ on \(\mathcal{E}_{t}^{a}\cup\mathcal{E}_{t}^{v}\cup\mathcal{Z}_{t}^{0}\) bounded by \((|\mathcal{C}_{t}^{a}|,|\mathcal{C}_{t}^{\max}|)\) until convergence. \(|\mathcal{C}_{t}^{\max}|\) is the expected maximum number of total classes, and it is allowed to set a large value if no such knowledge is available. During semi-supervised learning, features in \(\mathcal{E}_{t}^{a}\) are forced to follow ground-truth labels and features in \(\mathcal{E}_{t}^{v}\) are treated as additional "unlabeled" data. The clustering performance on \(\mathcal{E}_{t}^{v}\) and \(\mathcal{Z}_{t}^{0}\) is evaluated using ACC and the silhouette coefficient (SC), given below, respectively, and Brent's algorithm takes maximizing ACC+SC as the optimization objective. Finally, Brent's algorithm terminates at the optimal estimate \(|\hat{\mathcal{C}}_{t}|\).
The two main differences between our protocol and the original one of [8] are whether the centroids of novel classes are initialized by \(k\)-means++ and whether the search process is accelerated by Brent's algorithm.
**Cluster quality indices.** The first index is ACC, which is applicable to the \(\mathcal{C}_{t}^{v}\) labeled classes in the validation set \(\mathcal{E}_{t}^{v}\) and is given by:
\[\text{ACC}=\max_{g\in\mathcal{G}(\mathcal{C}_{t}^{v})}\frac{1}{L}\sum_{i=1}^{ L}\mathds{1}\{y_{i}^{v}=g(\hat{y}_{i}^{v})\} \tag{4}\]
where \(y_{i}^{v}\) and \(\hat{y}_{i}^{v}\) denote the ground-truth label and clustering assignment for each feature \(\mathbf{z}_{i}^{v}\) in \(\mathcal{E}_{t}^{v}\), \(L=|\mathcal{E}_{t}^{v}|\), and \(\mathcal{G}(\mathcal{C}_{t}^{v})\) is the group of permutations of \(|\mathcal{C}_{t}^{v}|\) elements (this accounts for the fact that the cluster indices may not be in the same order as the ground-truth labels). Permutations are optimized using the Hungarian algorithm [30].
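In practice, Eq. (4) is evaluated by maximum-weight matching rather than brute-force enumeration of permutations; a sketch using scipy's Hungarian solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_acc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """ACC of Eq. (4): accuracy under the best permutation of cluster indices."""
    D = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((D, D), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                        # co-occurrence counts
    row, col = linear_sum_assignment(-cost)    # maximize matched instances
    return cost[row, col].sum() / len(y_true)

print(clustering_acc(np.array([0, 0, 1, 1]), np.array([1, 1, 0, 0])))  # 1.0
```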
The other index is SC, which is applicable to the unlabeled features \(\mathcal{Z}_{t}^{0}\) and is given by:
\[\text{SC}=\frac{1}{M_{t}^{0}}\sum_{i=1}^{M_{t}^{0}}\frac{b(\mathbf{z}_{i}^{0})-a(\mathbf{z}_{i}^{0})}{\max\{a(\mathbf{z}_{i}^{0}),b(\mathbf{z}_{i}^{0})\}} \tag{5}\]
where \(a(\mathbf{z}_{i}^{0})\) is the average distance between \(\mathbf{z}_{i}^{0}\) and all other features within the same cluster, and \(b(\mathbf{z}_{i}^{0})\) is the smallest average distance of \(\mathbf{z}_{i}^{0}\) to all features in any other cluster (of which \(\mathbf{z}_{i}^{0}\) is not a member).
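Putting the pieces together, the fine-tuned protocol can be sketched as follows, reusing `ss_kmeans` and `clustering_acc` from the sketches above. `minimize_scalar(method='bounded')` is scipy's Brent-type bounded optimizer; rounding its continuous iterate to an integer \(k\) is our simplification of the accelerated search.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.metrics import silhouette_score

def estimate_num_classes(Z_a, y_a, Z_v, y_v, Z_u, k_min, k_max):
    """Estimate |C_t| by maximizing ACC (on E_v, Eq. (4)) + SC (on Z_u, Eq. (5))."""
    def neg_objective(k_float):
        k = int(round(k_float))
        assign = ss_kmeans(Z_a, y_a, np.vstack([Z_v, Z_u]), k)
        acc = clustering_acc(y_v, assign[:len(Z_v)])
        labels_u = assign[len(Z_v):]
        sc = silhouette_score(Z_u, labels_u) if len(set(labels_u)) > 1 else -1.0
        return -(acc + sc)
    res = minimize_scalar(neg_objective, bounds=(k_min, k_max), method='bounded')
    return int(round(res.x))
```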
### Exemplar-based incremental learning
Given the labeled novel class feature set \(\mathcal{Z}_{t}^{n}=\{\mathbf{z}_{i}^{n},y_{i}^{n}\}_{i=1}^{M_{t}^{n}}\) where \(\mathbf{z}_{i}^{n}\in\hat{\mathcal{Z}}_{t}^{n}\) and \(y_{i}^{n}\in\mathcal{C}_{t}^{n}\subset\mathbb{N}^{+}\) and the labeled old class exemplar set \(\mathcal{E}_{t}=\{\mathbf{z}_{i}^{e},y_{i}^{e}\}_{i=1}^{N_{0}}\subseteq\mathcal{Z} _{t}^{l}\) where \(y_{i}^{e}\in\mathcal{C}_{t}^{l}\subset\mathbb{N}^{+}\), the goal of IL is for the classifier to continuously learn knowledge of novel classes \(\mathcal{C}_{t}^{n}\) and retain memory of old classes \(\mathcal{C}_{t}^{l}\).
Thus, we merge \(\mathcal{E}_{t}\) and \(\mathcal{Z}_{t}^{n}\) to get the labeled feature set \(\mathcal{Z}_{t+1}^{l}=\{\mathbf{z}_{i}^{l},y_{i}^{l}\}_{i=1}^{N_{t+1}}\) where \(N_{t+1}=N_{0}+M_{t}^{n}\) and \(y_{i}^{l}\in\mathcal{C}_{t+1}^{l}=\mathcal{C}_{t}^{l}\cup\mathcal{C}_{t}^{n}\), for stage \(t+1\). Since \(\mathcal{Z}_{t+1}^{l}\) is also temporarily available and the number of
features of the novel classes may differ significantly from that of the old classes, exemplar selection is required. Before that, we let \(t\!=\!t\!+\!1\) to enter Sec. 3.2 to formally launch the next stage.
The schematic of the formulated OpenGCD is shown in Appendix A.
## 4 Experiments
### Experimental setup
#### 4.1.1 Data
We evaluate OpenGCD on two standard benchmark datasets CIFAR10 [31], CIFAR100 [31], and a challenging dataset CUB [32]. CIFAR10\(/\)CIFAR100\(/\)CUB contains 50,000\(/\)50,000\(/\)5,994 training images and 10,000\(/\)10,000\(/\)5,794 test images from 10\(/\)100\(/\)200 classes. Since ViT is self-supervised trained on ImageNet [33], we do not involve ImageNet in our test experiments as it is not completely unknown to OpenGCD.
#### 4.1.2 Metrics
We adopt accuracy, harmonic normalized accuracy (HNA) [20], and HCA to evaluate the performance of IL, OSR, and GCD, respectively.
**Accuracy.** A widely used metric for evaluating CSR performance, given by:
\[\text{Acc}=\frac{1}{M}\sum_{i=1}^{M}\mathds{1}\{y_{i}=\hat{y}_{i}\} \tag{6}\]
where \(M\) is the number of test instances, \(y_{i}\) and \(\hat{y}_{i}\) are the ground-truth and predicted labels. We adopt it to show the performance degradation of the same test set at different phases. The more severe the degradation, the more the IL fails.
**Harmonic normalized accuracy.** A widely used metric for evaluating OSR performance, given by:
\[\text{HNA}=\begin{cases}0,&\text{if AKS}=0\ \text{or AUS}=0\\ \frac{2}{\frac{1}{\text{AKS}}+\frac{1}{\text{AUS}}},&\text{otherwise}\end{cases} \tag{7}\]
where AKS and AUS are the accuracy of known and unknown classes calculated by Eq. 6, respectively. It harmonizes AKS and AUS, and its higher score indicates more successful OSR.
**Harmonic clustering accuracy.** A new metric for evaluating NCD or GCD performance, yielded by extending ACC, given by:
\[\text{HCA}=\begin{cases}0,&\text{if AKS}=0\ \text{or ANS}=0\\ \frac{2}{\frac{1}{\text{AKS}}+\frac{1}{\text{ANS}}},&\text{otherwise}\end{cases} \tag{8}\]
where AKS and ANS are the classification accuracy of known classes and the ACC of novel classes calculated by Eqs. 6 and 4, respectively. Inspired by HNA, we harmonize AKS and ANS to yield HCA, and its higher score indicates more successful NCD or GCD. Its rationale and differences from ACC are elaborated in Appendix B.
#### 4.1.3 Implementation details
To simulate the open world scenarios, we randomly pick \(4/40/80\) classes from CIFAR10\(/\)CIFAR100\(/\)CUB as the initial known classes, and then randomly pick \(2/20/40\) classes from the remaining classes at each incremental step (3 steps in total). Each OWR phase requires sequential evaluation of the IL, OSR, and GCD performance of various methods with accuracy, HNA, and HCA as metrics, respectively. Specifically, the performance of IL can be evaluated using the current (novel known classes) and all previous (old classes) test sets. The performance of OSR can be evaluated using the current and all previous test sets (known classes) as well as the next training and test sets (unknown classes). Since OSR is an online process, the next training set that the model has not seen can also participate in the evaluation. Typically, we should evaluate the performance of GCD using all instances rejected by OSR. However, instances rejected by different methods are not identical. To be fair, we still only perform GCD on the rejected
instances, but evaluate the performance of GCD with the same dataset as when evaluating OSR performance. Although this performance may be affected by the OSR results, convincing conclusions can still be drawn from comprehensive analysis and comparison.
For our method, ViT's DINO self-supervised pre-trained weights are provided by [29]. For CIFAR10/CIFAR100/CUB, \(|\mathcal{M}_{r}|\) is set to \(20\text{k}/20\text{k}/2.4\text{k}\). Considering that the default parameters in the original work [15] of DS3 have proven to be well inclusive, we keep the same configuration. To be lightweight, closed set classifiers are selected from MLP, SVM, and XGBoost with default parameters. The only hyperparameter \(\alpha\) in OpenGCD is determined in \(\{10^{-10},\cdots,10^{10}\}\) using the open set grid search protocol [20] with HNA as the criterion. ss-\(k\)-means++ is a non-parametric algorithm, and \(|\mathcal{C}_{t}^{\max}|\) is set to a larger number, 500, for all datasets.
For other methods, we employ the standard/open set grid search protocol with accuracy/HNA as the criterion to determine parameters about IL/OSR. \(|\mathcal{M}_{r}|\) is the same setting as our method. Given that this work is the first attempt to assist OWR with GCD, we embed the proposed GCD approach into existing baselines to give them GCD capability. Likewise, \(|\mathcal{C}_{t}^{\max}|\) is set to 500 for all datasets. As with our method, manual annotation is mandatory before the next phase begins.
We implement our method using PyTorch 1.13.1 and run experiments on an RTX 3090 GPU. Our results are averaged over 5 runs for all datasets.
### Experimental results
#### 4.2.1 Comparison with the baselines
We compare OpenGCD armed with MLP, SVM, and XGBoost against the state-of-the-art baselines for OWR, starting from CIFAR10, CIFAR100, and CUB in Tab. 1.
Table 1: OWR performance comparison on CIFAR10, CIFAR100, and CUB, reporting IL (accuracy), OSR (HNA), and GCD (HCA) results at each phase; gray zones mark the OpenGCD variants.

For IL (columns 3-7, 11-15, and 19-23 in Tab. 1), the best accuracies are almost all in the gray zones, which indicates that the proposed exemplar-based IL scheme substantially outperforms the other baselines. For column \(7/15/23\), the least decrease in average accuracy is found in OpenGCD\({}_{\text{SVM}}\)/OpenGCD\({}_{\text{SVM}}\)/L2AC with \(0.8\%/8.7\%/1.6\%\), which indicates that the proposed IL scheme excels in both learning novel knowledge and retaining old knowledge. L2AC's close win
on CUB indicates that trading time and space for performance is costly but effective in the case of small sample size and large class number. Longitudinally, the same method shows a decreasing trend in recognition ability for the same test set at different phases. This is the dual effect of increasing difficulty due to increasing number of classes and decreasing number of instances in each class due to constant buffer size.
For OSR (columns 8, 16, and 24 in Tab. 1), the best HNAs are also concentrated in the gray zones, except for the first and second phases where EVM is slightly better on CIFAR10, which indicates that the proposed uncertainty-based OSR scheme significantly outperforms the other rivals. The proposed IL scheme's endeavor to avoid catastrophic forgetting and preserve the original spatial information is the magic bullet for OpenGCD to turn the tables. Moreover, the proposed OSR scheme is not only computationally lightweight, but also visualizes the probability distribution of instances over the unknown and all known classes, which is not available in the other methods. We empirically found that HNA did not show a continuous downward trend over time, as accuracy did, but rather fluctuated downward. This is reasonable: while OSR performance is strongly dependent on model accuracy, it also hinges on whether the difference between unknown and known classes is significant.
For GCD (columns 9-10, 17-18, and 25-26 in Tab. 1), all results are generated using the proposed class number estimation and GCD schemes. The average estimation errors on the three datasets are 6.6%, 4.5%, and 8.0%, respectively, which indicates the effectiveness of the fine-tuned class number estimation protocol. It is worth mentioning that Brent's algorithm converges within at most 12 epochs (occurring at the second phase of OpenGCD\({}_{\text{MLP}}\) on CUB), which improves the search efficiency by 30.7 times compared to the original protocol. Almost all the best HCAs are also located within the gray zones, benefiting from the excellent performance of the proposed IL and OSR schemes. If a method is slightly inferior on HNA but catches up on HCA, it means that the method offers a better fit with the proposed GCD, as is the case for OpenGCD\({}_{\text{XGB}}\) at the third phase on CIFAR100. The comparison of EVM with NNO, DeepNNO and B-DOC reveals the importance of the exemplar selection strategy. The latter three focus excessively on representativeness rather than diversity of exemplars, resulting in inadequate retention of original information and hence poor performance. L2AC has a silver lining only by virtue of its multiple utilization of exemplars. Although the performance of the proposed GCD scheme is unsatisfactory in the case of a large number of classes, the results still demonstrate the feasibility of the attempt to assist OWR with GCD.
Overall, almost all best performance is concentrated in the gray zones, which well demonstrates the technical advancement and excellent compatibility of OpenGCD.
#### 4.2.2 Ablation study
We inspect the contributions of the various components of OpenGCD. Given that OpenGCDs armed with various classifiers all exhibit similar variations, we only present the ablation results for the more efficient OpenGCD\({}_{\text{XGB}}\) in Tab. 2. As we can see, all components contribute significantly, and removing any of them can result in significant performance degradation or even loss of functionality.
Table 2: Ablation study of OpenGCD\({}_{\text{XGB}}\): the full method versus its CSR, IL-E, and OWR-UE variants, evaluated by IL (accuracy), OSR (HNA), and GCD (HCA).
The reason why OWR-UE still scores a little on HCA although it lacks GCD capability lies in the fact that it classifies all novel classes as unknown, which is equivalent to clustering them into one class. The slightly inferior performance of OpenGCD for IL, especially on the latter two datasets, is due to the fact that OWR-UE labels the data one by one at a high labor cost, while OpenGCD only corrects clusters of instances recognized as novel classes. Compared to OWR-UE, IL-E completely loses its OSR capability. This is a nightmare for an online recognition system towards the open world, as it cannot detect anomalies or isolate foreign intrusions promptly. CSR, which lost its IL capability, maintains a consistent knowledge of \(\mathrm{T_{S1}}\), which allows it to stay well ahead on \(\mathrm{T_{S1}}\) at different phases. At the first phase, there is no difference between CSR (training on the full training set) and the other three methods for IL performance, which further indicates the appropriateness of targeting diversity for exemplar selection in response to catastrophic forgetting.
We analyse the effects of \(|\mathcal{M}_{r}|\) and \(\alpha\) on performance in Appendix C.
## 5 Conclusion
In this paper, we proposed OpenGCD to address the three main tasks in OWR by combining a few new ideas. Firstly, we rejected the unknown based on the uncertainty of the classifier's prediction, which is lightweight and intuitive. Secondly, we clustered unlabeled unknown instances using ss-\(k\)-means++, which is the first attempt to assist manual grouping in OWR with GCD techniques, driving OWR a small step closer to automation. Besides, we fine-tuned an existing class number estimation protocol, which achieves efficiency gains using optimization instead of traversal. Further, we proposed a new metric called HCA to evaluate the performance of GCD, which achieves more reasonable results in a harmonic fashion. Finally, we selected informative exemplars with the goal of diversity to ensure smooth implementation of IL and GCD.
Remarkably, all procedures in OpenGCD are independent of the classifier type, which gives it excellent compatibility, _i.e._, it opens the gate towards the open world for any well-designed closed set classifier. Moreover, OpenGCD is also extremely scalable, and its OWR performance can be further improved by introducing classifier calibration technology, more advanced semi-supervised clustering and classification models, memory management strategies, etc. We consider the implementation of OWR in limited data scenarios, such as few-shot OWR, and its further automation as potential future research directions.
|
2305.09431 | Chiral and trace anomalies in Deeply Virtual Compton Scattering II: QCD
factorization and beyond | We extend the discussion of the recently discovered 'anomaly poles' in QCD
Compton scattering. We perform the complete one-loop calculation of the Compton
amplitude using momentum transfer $t$ as the regulator of collinear
divergences. In the gluon channel, we confirm the presence of poles $1/t$ in
both the real and imaginary parts of the amplitude. In the quark channel, we
find unexpected infrared single $1/\epsilon$ and double $1/\epsilon^2$ poles.
We then perform the one-loop calculation of the leading-twist quark generalized
parton distributions (GPDs) for quark and gluon external states with the same
regulators and find that all these singular terms can be systematically
absorbed into the GPDs, showing that QCD factorization is restored to this
order. Having established this, we discuss the fate of the $1/t$ poles. We
argue that they become the nonperturbative building blocks of GPDs that encode
the chiral and trace anomalies of QCD, in a way consistent with the known
constraints these anomalies impose on the nucleon axial and gravitational form
factors. The scope of research on GPDs can therefore be expanded to address the
manifestation and implications of quantum anomalies in high-energy exclusive
processes. | Shohini Bhattacharya, Yoshitaka Hatta, Werner Vogelsang | 2023-05-16T13:42:59Z | http://arxiv.org/abs/2305.09431v2 | # Chiral and trace anomalies in Deeply Virtual Compton Scattering II:
###### Abstract
We extend the discussion of the recently discovered 'anomaly poles' in QCD Compton scattering. We perform the complete one-loop calculation of the Compton amplitude using momentum transfer \(t\) as the regulator of collinear divergences. In the gluon channel, we confirm the presence of poles \(1/t\) in both the real and imaginary parts of the amplitude. In the quark channel, we find unexpected infrared single \(1/\epsilon\) and double \(1/\epsilon^{2}\) poles. We then perform the one-loop calculation of the leading-twist quark generalized parton distributions (GPDs) for quark and gluon external states with the same regulators and find that all these singular terms can be systematically absorbed into the GPDs, showing that QCD factorization is restored to this order. Having established this, we discuss the fate of the \(1/t\) poles. We argue that they become the nonperturbative building blocks of GPDs that encode the chiral and trace anomalies of QCD, in a way consistent with the known constraints these anomalies impose on the nucleon axial and gravitational form factors. The scope of research on GPDs can therefore be expanded to address the manifestation and implications of quantum anomalies in high-energy exclusive processes.
## I Introduction
The past several years have witnessed significant progress in the higher-order calculation of Deeply Virtual Compton Scattering (DVCS). In the flavor-nonsinglet channel, the three-loop evolution equation for the generalized parton distributions (GPDs) has been derived [1] together with the two-loop coefficient functions [2]. In the flavor-singlet channel, the two-loop coefficient functions for DVCS have been recently calculated [3] and even higher order resummation effects have been studied [4]. These developments are on a steady path toward achieving the NNLO accuracy in DVCS that is required for precision GPD studies at the future Electron-Ion Collider (EIC) [5].
Meanwhile, in a previous paper [6], we have explored a new approach to compute the NLO corrections in DVCS, following an earlier suggestion in polarized deep inelastic scattering (DIS) [7; 8]. The key ingredient is to use momentum transfer \(t=(P_{1}-P_{2})^{2}\) as an infrared cutoff to regulate the collinear divergence, instead of the usual dimensional regularization. Previously in the calculation of the Compton amplitude in the Bjorken limit, the variable \(t\) had always been neglected when computing partonic amplitudes. Naively, one would expect that the only new effect of introducing nonzero \(t\) would be to generate higher twist corrections of order \(|t|/Q^{2}\ll 1\) where \(Q^{2}\) is the photon virtuality. However, our explicit calculations with nonzero \(t\) have revealed 'anomaly poles' \(1/t\) which had not been detected in the previous calculations performed at \(t=0\)[9; 10; 11; 12; 13], but are consistent with the result in [7]. Moreover, these poles are accompanied by certain twist-_four_ GPDs but without an expected suppression factor \(1/Q^{2}\). (Rather, \(1/Q^{2}\) has been replaced by \(1/t\).) In fact, they can be interpreted as the manifestations of the QCD chiral [7; 8; 14] and trace [6] anomalies in high energy scattering. The finding thus points towards a novel connection between the study of GPDs and phenomena associated with quantum anomalies such as chiral symmetry breaking and confinement.
At face value, the emergence of poles is in apparent contradiction with the QCD factorization theorem [9; 15] which states that the QCD Compton scattering amplitude factorizes into the perturbatively calculable coefficient functions and the nonperturbative twist-two GPDs up to higher-twist corrections of order \(1/Q^{2}\). However, we have already suggested in [6] that the poles found in the one-loop calculation may be absorbed into the twist-two GPDs as a part of the infrared subtraction procedure. The purpose of this paper is to fully demonstrate that this is indeed the case.
We first perform a complete calculation of the Compton amplitude with nonzero \(t\) at one loop, both in the quark and gluon channels, in both the polarized and unpolarized sectors, and for the real and imaginary parts of the amplitude. (In [6], we only calculated the imaginary part in the gluon channel.) In the gluon channel, we find \(1/t\) poles also in the real part. Surprisingly, in the quark channel, we find uncancelled infrared single \(1/\epsilon\) and double \(1/\epsilon^{2}\) poles. We next perform the corresponding one-loop calculation of the unpolarized and polarized quark GPDs for free quark and gluon external states at finite \(t\) and show that all the singular terms can be systematically absorbed.
Therefore, at least to one loop, the emergence of \(1/t\) poles does not contradict the QCD factorization theorem. The calculation with nonzero \(t\) may be regarded as an alternative factorization scheme. Having established this, we shift our focus to the fate of the \(1/t\) poles absorbed into the twist-two GPDs. It is well known that the chiral and trace anomalies impose constraints on the nucleon axial and gravitational form factors, respectively. Since these form factors are certain moments of the twist-two GPDs, there must be corresponding constraints directly on GPDs [6]. A preliminary discussion of this has been already presented in [6]. Our extended treatment here will lend more support to the idea that this new scheme can uniquely address such profound aspects of QCD in GPD studies.
## II Compton scattering
The amplitude for QCD Compton scattering off a proton target, \(\gamma^{*}(q_{1})p(P_{1})\to\gamma^{*}(q_{2})p(P_{2})\), is given by
\[T^{\mu\nu} = i\int d^{4}ye^{iq\cdot y}\langle P_{2}|{\rm T}\{j^{\mu}(y/2)j^{ \nu}(-y/2)\}|P_{1}\rangle, \tag{1}\]
where \(j^{\mu}=\sum_{q}e_{q}\bar{\psi}_{q}\gamma^{\mu}\psi_{q}\) is the electromagnetic current and \(q^{\mu}=\frac{q_{1}^{\mu}+q_{2}^{\mu}}{2}\) is the average of the incoming and outgoing photon momenta. The momentum transfer is denoted as \(t=l^{2}\) where \(l^{\mu}=P_{2}^{\mu}-P_{1}^{\mu}=q_{1}^{\mu}-q_{2}^{\mu}\). We introduce the generalized Bjorken variable \(x_{B}\) and the skewness parameter \(\xi\),
\[x_{B}=\frac{Q^{2}}{2P\cdot q},\quad\xi=\frac{q_{2}^{2}-q_{1}^{2}}{2P\cdot q} \approx-\frac{l^{+}}{2P^{+}}, \tag{2}\]
where \(Q^{2}=-q^{2}\) is the photon virtuality and \(P^{\mu}=\frac{P_{1}^{\mu}+P_{2}^{\mu}}{2}\). In DVCS, \(q_{2}^{2}=0\) and \(x_{B}\approx\xi\), but we shall keep general \(x_{B}\) and \(\xi\) throughout the paper.
In the generalized Bjorken limit \(Q^{2}\), \(2P\cdot q\to\infty\) with \(x_{B}\), \(t\) fixed and \(Q^{2}\gg|t|\), the Compton amplitude can be expanded as [16; 17]
\[T^{\mu\nu}=\frac{g_{\perp}^{\mu\nu}}{2P^{+}}\bar{u}(P_{2})\left[\gamma^{+}\mathcal{H}+\frac{i\sigma^{+\alpha}l_{\alpha}}{2M}\mathcal{E}\right]u(P_{1})-i\frac{\epsilon_{\perp}^{\mu\nu}}{2P^{+}}\bar{u}(P_{2})\left[\gamma^{+}\gamma_{5}\tilde{\mathcal{H}}+\frac{\gamma_{5}l^{+}}{2M}\tilde{\mathcal{E}}\right]u(P_{1})+\cdots, \tag{3}\]
where \(M\) is the proton mass. \(g_{\perp}^{\mu\nu}\) and \(\epsilon_{\perp}^{\mu\nu}\) are transverse projectors such that \(g_{\perp}^{ij}=-\delta^{ij}\) and \(\epsilon_{\perp}^{ij}=\epsilon^{ij}\) for transverse indices \(i,j=1,2\), while the other components are zero. Our convention is \(\gamma_{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\) and \(\epsilon^{0123}=\epsilon^{-+12}=\epsilon^{12}=1\). The ellipsis in (3) stands for the contributions from the (generalized) longitudinal structure function and the so-called gluon transversity GPD. As observed in [6], they are not sensitive to anomalies and are thus left for future work.
According to QCD factorization, the Compton form factors \(\mathcal{H}\), \(\mathcal{E}\), \(\tilde{\mathcal{H}}\) and \(\tilde{\mathcal{E}}\) can be written as convolutions of nonperturbative GPDs and partonic hard-scattering amplitudes. The latter are commonly calculated in dimensional regularization in \(d=4-2\epsilon\) dimensions, with \(\epsilon\) regularizing both ultraviolet (UV) and infrared (IR) divergences. We shall also work in \(d\) dimensions, since individual diagrams contain UV divergences. But we regularize the collinear singularity by introducing the physical variable \(t\) at the partonic level. Such a calculation is safely justified when \(\sqrt{|t|}\gg\Lambda_{\rm QCD}\) (still keeping \(Q\gg\sqrt{|t|}\)),1 but we shall eventually be interested in the region \(\sqrt{|t|}\sim\Lambda_{\rm QCD}\). The result
of the one-loop calculation can be summarized in the form
\[\begin{pmatrix}\mathcal{H}(x_{B},\xi,t)\\ \mathcal{E}(x_{B},\xi,t)\end{pmatrix} = \sum_{q}e_{q}^{2}\int_{0}^{1}dx\Bigg[\Big(C_{0}(x,x_{B})+\frac{\alpha_{s}}{2\pi}C_{1}^{q}(x,x_{B},\xi)\Big)\begin{pmatrix}H_{q}(x,\xi,t)-H_{q}(-x,\xi,t)\\ E_{q}(x,\xi,t)-E_{q}(-x,\xi,t)\end{pmatrix} \tag{4}\] \[+\frac{\alpha_{s}}{2\pi}C_{1}^{g}(x,x_{B},\xi)\begin{pmatrix}H_{g}(x,\xi,t)\\ E_{g}(x,\xi,t)\end{pmatrix}\] \[+\frac{\alpha_{s}}{2\pi}\frac{M^{2}}{t}A(x,x_{B},\xi)\begin{pmatrix}\mathcal{F}(x,\xi,t)\\ -\mathcal{F}(x,\xi,t)\end{pmatrix}\Bigg]+\mathcal{O}(1/Q^{2})+\mathcal{O}(\alpha_{s}^{2}),\]
\[\begin{pmatrix}\tilde{\mathcal{H}}(x_{B},\xi,t)\\ \tilde{\mathcal{E}}(x_{B},\xi,t)\end{pmatrix} = \sum_{q}e_{q}^{2}\int_{0}^{1}dx\Bigg{[}\Big{(}\tilde{C}_{0}(x,x_ {B})+\frac{\alpha_{s}}{2\pi}\tilde{C}_{1}^{q}(x,x_{B},\xi)\Big{)}\begin{pmatrix} \tilde{H}_{q}(x,\xi,t)+\tilde{H}_{q}(-x,\xi,t)\\ \tilde{E}_{q}(x,\xi,t)+\tilde{E}_{q}(-x,\xi,t)\end{pmatrix} \tag{5}\] \[+\frac{\alpha_{s}}{2\pi}\tilde{C}_{1}^{g}(x,x_{B},\xi)\begin{pmatrix} \tilde{H}_{g}(x,\xi,t)\\ \tilde{E}_{g}(x,\xi,t)\end{pmatrix}\] \[+\frac{\alpha_{s}}{2\pi}\frac{M^{2}}{t}\tilde{A}(x,x_{B},\xi) \begin{pmatrix}0\\ \tilde{\mathcal{F}}(x,\xi,t)\end{pmatrix}\Bigg{]}+\mathcal{O}(1/Q^{2})+ \mathcal{O}(\alpha_{s}^{2}),\]
where the notations for the twist-two quark and gluon GPDs \(H_{q,g}\), \(E_{q,g}\), \(\tilde{H}_{q,g}\), \(\tilde{E}_{q,g}\) are standard [16; 17]. (The gluon GPDs are normalized as \(H_{g}(x)=xG(x)\) and \(\tilde{H}_{g}(x)=x\Delta G(x)\) in the forward limit, where \(G\), \(\Delta G\) are the unpolarized and polarized gluon PDFs.) Note that (4) and (5) are still in their 'unsubtracted' forms in the sense that some of the coefficients contain divergences in the formal limit \(t\to 0\). Their subtraction is rather unconventional, and to elaborate on this is one of the main objectives of our paper.
The leading-order coefficient functions are well known:
\[C_{0}\left(x,x_{B}\right) = \frac{1}{x-x_{B}+i\epsilon}+\frac{1}{x+x_{B}-i\epsilon},\] \[\tilde{C}_{0}\left(x,x_{B}\right) = \frac{1}{x-x_{B}+i\epsilon}-\frac{1}{x+x_{B}-i\epsilon}. \tag{6}\]
The one-loop corrections, \(C_{1}^{q}\) etc., have the following generic structure:
\[C_{1}^{q}(x,x_{B},\xi) = \frac{C_{F}}{x}\left(\kappa_{qq}(\hat{x},\hat{\xi})\ln\frac{Q^{2}}{-l^{2}}+\delta C_{1}^{q}(\hat{x},\hat{\xi})\right),\qquad\tilde{C}_{1}^{q}(x,x_{B},\xi)=\frac{C_{F}}{x}\left(\tilde{\kappa}_{qq}(\hat{x},\hat{\xi})\ln\frac{Q^{2}}{-l^{2}}+\delta\tilde{C}_{1}^{q}(\hat{x},\hat{\xi})\right), \tag{7}\] \[C_{1}^{g}(x,x_{B},\xi) = \frac{2T_{R}}{x^{2}}\left(\kappa_{qg}(\hat{x},\hat{\xi})\ln\frac{Q^{2}}{-l^{2}}+\delta C_{1}^{g}(\hat{x},\hat{\xi})\right),\qquad\tilde{C}_{1}^{g}(x,x_{B},\xi)=\frac{2T_{R}}{x^{2}}\left(\tilde{\kappa}_{qg}(\hat{x},\hat{\xi})\ln\frac{Q^{2}}{-l^{2}}+\delta\tilde{C}_{1}^{g}(\hat{x},\hat{\xi})\right), \tag{8}\]
where \(C_{F}=\frac{4}{3}\) and \(T_{R}=\frac{1}{2}\) are the usual color factors. We have introduced the partonic variables \(\hat{x}=\frac{x_{B}}{x}\) and \(\hat{\xi}=\frac{\xi}{x}\), and set the \(\overline{\rm MS}\) renormalization scale to be \(4\pi e^{-\gamma_{E}}\mu^{2}=Q^{2}\). The logarithm \(\ln\frac{Q^{2}}{-l^{2}}\) originates from the collinear singularity and replaces the \(-\frac{1}{\epsilon_{\rm IR}}\) pole in the usual calculation in dimensional regularization with \(t=0\). The coefficients \(\kappa\), \(\tilde{\kappa}\) are fixed by the evolution equation of GPDs and must agree with the known results in the literature. On the other hand, the coefficient functions \(\delta C_{1}^{q}\), \(\delta C_{1}^{g}\), \(\delta\tilde{C}_{1}^{q}\) and \(\delta\tilde{C}_{1}^{g}\) are potentially scheme dependent. The results in the \(\overline{\rm MS}\) scheme can be found in [9; 10; 11; 17]. Note that, somewhat at variance with the previous literature, we have used the reflection symmetry in \(x\) to restrict the \(x\)-integral to the region \(0<x<1\). Namely, \(\tilde{C}_{0}\), \(\tilde{C}_{1}^{q}\), \(C_{1}^{g}\), \(H_{g}\), \(E_{g}\) are even functions and \(C_{0}\), \(C_{1}^{q}\), \(\tilde{C}_{1}^{g}\), \(\tilde{H}_{g}\), \(\tilde{E}_{g}\) are odd functions, respectively, under \(x\to-x\). This is convenient for the discussion below.
Eqs. (4) and (5) resemble the usual structure dictated by the QCD factorization theorem except for the 'anomaly pole' terms proportional to \(1/t\). These poles are accompanied by the twist-four gluon GPDs \(\mathcal{F}\) and \(\tilde{\mathcal{F}}\) defined as [7; 19; 20; 21; 22]
\[\mathcal{F}(x,\xi,t) \equiv \frac{-4xP^{+}}{M}\int\frac{dz^{-}}{2\pi}e^{ixP^{+}z^{-}}\frac{ \langle P_{2}|F^{\mu\nu}(-z^{-}/2)WF_{\mu\nu}(z^{-}/2)|P_{1}\rangle}{\bar{u}(P _{2})u(P_{1})}\,, \tag{9}\] \[\tilde{\mathcal{F}}(x,\xi,t) \equiv \frac{iP^{+}}{M}\int\frac{dz^{-}}{2\pi}e^{ixP^{+}z^{-}}\frac{ \langle P_{2}|F^{\mu\nu}(-z^{-}/2)W\tilde{F}_{\mu\nu}(z^{-}/2)|P_{1}\rangle}{ \bar{u}(P_{2})\gamma_{5}u(P_{1})}\,, \tag{10}\]
where \(W\) is the straight Wilson line between \([-z^{-}/2,z^{-}/2]\). We have changed the normalization with respect to [6] in order to make these distributions dimensionless. Despite involving twist-four GPDs, these terms are not suppressed by \(1/Q^{2}\) and apparently cause problems in the forward limit \(t\to 0\). As discussed in [6] and will be further elaborated later, the emergence of poles and their fate may shed new light on the nonperturbative structure of GPDs, connecting to deep issues such as chiral symmetry breaking and the origin of hadron masses.
## III Calculations
In this section, we outline our calculation of the perturbative corrections to Compton scattering at one-loop order. The relevant Feynman diagrams for the subprocess initiated by the gluons are shown in Fig. 1 and the ones initiated by the quarks are shown in Fig. 2. For the latter case, we choose to work in Feynman gauge. (We have also worked in a general covariant gauge and confirmed that the final results are independent of the gauge as it should be.) As in Ref. [6], we parameterize the incoming and outgoing momenta as
\[q_{1}=q+\frac{l}{2},\quad q_{2}=q-\frac{l}{2},\quad p_{1}=p-\frac{l}{2},\quad p _{2}=p+\frac{l}{2}. \tag{11}\]
We also define the partonic versions of the Bjorken and skewness variables (2) as
\[\hat{x}=\frac{Q^{2}}{2p\cdot q}=\frac{x_{B}}{x},\qquad\hat{\xi}=\frac{\xi}{x} =-\hat{x}\frac{q\cdot l}{Q^{2}}. \tag{12}\]
The incoming and outgoing partons are assumed to be massless, \(p_{1}^{2}=p_{2}^{2}=0\), which leads to the conditions \(p^{2}=-l^{2}/4\) and \(p\cdot l=0\). The virtuality of the photons can be written as
\[q_{1}^{2}=-Q^{2}\frac{\hat{x}+\hat{\xi}}{\hat{x}}+\frac{l^{2}}{4},\qquad q_{2} ^{2}=-Q^{2}\frac{\hat{x}-\hat{\xi}}{\hat{x}}+\frac{l^{2}}{4}. \tag{13}\]
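For orientation, (13) follows directly from (11) and (12): using \(q^{2}=-Q^{2}\) and \(q\cdot l=-\hat{\xi}Q^{2}/\hat{x}\) from (12), one finds
\[q_{1}^{2}=\left(q+\frac{l}{2}\right)^{2}=q^{2}+q\cdot l+\frac{l^{2}}{4}=-Q^{2}\frac{\hat{x}+\hat{\xi}}{\hat{x}}+\frac{l^{2}}{4},\]
and \(q_{2}^{2}\) is obtained by flipping the sign of \(l\), i.e., \(\hat{\xi}\to-\hat{\xi}\).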
We will abbreviate the polarization vectors for the gluons in Fig. 1 as \(\epsilon^{\mu}(p_{1})\equiv\epsilon_{1}^{\mu}\) and \(\epsilon^{*\mu}(p_{2})\equiv\epsilon_{2}^{*\mu}\).
The collinear singularity in the above diagrams will be regularized by \(t=l^{2}\). We emphasize that, in the present 'handbag' approximation, \(t=(p_{2}-p_{1})^{2}=(P_{2}-P_{1})^{2}\) is the same at the hadronic and partonic levels. However, we still have to work in \(d=4-2\epsilon\) dimensions because the individual diagrams will contain UV divergences in the real part. At the same time, working in \(d\)-dimensions also helps to check if there are any leftover IR divergences that are not regularized by nonzero \(t\) alone. This point will be relevant for the quark-channel diagrams in Fig. 2. Our
convention is that, if \(\epsilon\) is used for the UV divergences, then \(\epsilon\to\epsilon_{\textsc{uv}}>0\), while if it is used for the IR divergences then \(\epsilon\to\epsilon_{\textsc{ir}}<0\).
In the following, we shall refer to the two terms in (3) as the symmetric and antisymmetric parts of the Compton amplitude, respectively. The symmetric part can be extracted with the help of the projector
\[g_{\perp}^{\mu\nu}=g^{\mu\nu}+\frac{1}{q^{2}(1+\gamma^{2})}\left(q^{\mu}- \frac{q^{2}}{p\cdot q}p^{\mu}\right)\left(q^{\nu}-\frac{q^{2}}{p\cdot q}p^{\nu }\right)-\frac{q^{\mu}q^{\nu}}{q^{2}}\,,\qquad\gamma^{2}=-\frac{p^{2}q^{2}}{(p \cdot q)^{2}}=\frac{l^{2}q^{2}}{4(p\cdot q)^{2}}, \tag{14}\]
such that
\[g_{\perp\mu}^{\mu}=d-2=2(1-\epsilon),\qquad\mathcal{H},\mathcal{E}\sim\frac{1 }{2(1-\epsilon)}g_{\mu\nu}^{\perp}T^{\mu\nu}\,. \tag{15}\]
For the antisymmetric part, we use the projector \(\epsilon^{\alpha p\mu\nu}\equiv\epsilon^{\alpha\beta\mu\nu}p_{\beta}\).
We evaluate the above diagrams with the help of the Mathematica package 'Package-X' [23]. Below we first discuss the main features of our results specific to the gluon and quark channels. The complete results will then be presented in Sec. IV.
### Gluon channel
Our results feature (i) a \(1/l^{2}\) pole and (ii) a logarithm \(\ln(Q^{2}/-l^{2})\), both arising from the first and third diagrams in Fig. 1. For the symmetric case, the UV poles from the first and third diagrams add up to cancel the one arising from the second diagram. For the antisymmetric case, the UV poles from the first and third diagrams cancel. There are no leftover \(1/\epsilon_{\textsc{ir}}\) divergences, meaning that \(l^{2}\neq 0\) functions as a genuine regulator of collinear divergences.
In the symmetric case, the result for the one-loop Compton scattering amplitude with external gluon polarization vectors \(\epsilon_{1}\), \(\epsilon_{2}^{*}\) (Fig. 1) has the following generic structure:
\[-\epsilon_{1}\cdot\epsilon_{2}^{*}\left(A\ln\frac{Q^{2}}{-l^{2}}+B\right)+ \frac{C}{l^{2}}\epsilon_{1}\cdot l\epsilon_{2}^{*}\cdot l=-\epsilon_{1}\cdot \epsilon_{2}^{*}\left(A\ln\frac{Q^{2}}{-l^{2}}+B-\frac{C}{2}\right)+\frac{C}{l ^{2}}\left(\epsilon_{1}\cdot l\epsilon_{2}^{*}\cdot l-\frac{\epsilon_{1}\cdot \epsilon_{2}^{*}}{2}l^{2}\right), \tag{16}\]
where \(A,B,C\) are coefficients that depend on \(\hat{x}\) and \(\hat{\xi}\). In the antisymmetric case, we find instead
\[i\epsilon^{\alpha p\epsilon_{2}^{*}\epsilon_{1}}\left(\tilde{A}\ln\frac{Q^{2}}{-l^{2}}+\tilde{B}\right)+\tilde{C}\frac{l^{\alpha}}{l^{2}}i\epsilon^{\epsilon_{1}\epsilon_{2}^{*}lp}, \tag{17}\]

where \(\epsilon^{\epsilon_{1}\epsilon_{2}^{*}lp}\equiv\epsilon^{\mu\nu\rho\lambda}\epsilon_{1\mu}\epsilon_{2\nu}^{*}l_{\rho}p_{\lambda}\). The first terms in (16) and (17) can be interpreted as the usual one-loop corrections to the Compton amplitude \(\sim C_{1}^{g}H_{g},\tilde{C}_{1}^{g}\tilde{H}_{g}\) through the identifications
\[-\epsilon_{1}\cdot\epsilon_{2}^{*}\sim\frac{\langle p_{2}|F^{+\mu}F_{\mu}{}^ {+}|p_{1}\rangle}{1-\hat{\xi}^{2}},\qquad i\epsilon^{+p\epsilon_{2}^{*} \epsilon_{1}}\sim\frac{\langle p_{2}|iF^{+\mu}\tilde{F}_{\mu}{}^{+}|p_{1} \rangle}{1-\hat{\xi}^{2}}. \tag{18}\]
However, the second terms in (16) and (17) cannot be attributed to twist-two GPDs. Their structures can only arise from the twist-four operators \(F^{\mu\nu}F_{\mu\nu}\) and \(F^{\mu\nu}\tilde{F}_{\mu\nu}\)
\[\epsilon_{1}\cdot l\,\epsilon_{2}^{*}\cdot l-\frac{\epsilon_{1}\cdot\epsilon_{2}^{*}}{2}l^{2}\sim\langle p_{2}|F^{\mu\nu}F_{\mu\nu}|p_{1}\rangle,\qquad 2i\epsilon^{\epsilon_{1}\epsilon_{2}^{*}lp}\sim\langle p_{2}|iF^{\mu\nu}\tilde{F}_{\mu\nu}|p_{1}\rangle, \tag{19}\]
and this is how the twist-four GPDs (9), (10) come into play. It should be mentioned, however, that the present argument only concerns the two-parton matrix element of the operators \(FF\) and \(F\tilde{F}\). Further justifications from other approaches are desirable.
### Quark channel
In this case, our results do not contain any terms \(1/l^{2}\). This is consistent with the expectation that the anomalies, being of purely gluonic nature, should not affect the quark sector, at least at this order. Quite unexpectedly though,
we find (i) double IR poles \(1/\epsilon_{\rm IR}^{2}\), (ii) single IR poles \(1/\epsilon_{\rm IR}\), apart from (iii) a logarithm \(\ln(Q^{2}/-l^{2})\). Besides, we also find (iv) UV poles \(1/\epsilon_{\rm UV}\). It is interesting to discuss the origin of these poles. The UV poles arise from all the diagrams in Fig. 2 excluding the first. To cancel them, we need to include the square root of the self-energy corrections \(0=\frac{1}{\epsilon_{\rm UV}}-\frac{1}{\epsilon_{\rm IR}}\) on the incoming and outgoing (massless) quark lines. This converts the UV poles into single IR poles that add to the ones in (ii). The double IR poles arise from the first diagram while the single IR poles arise from all diagrams except the fourth. It is interesting to note that, in inclusive DIS, the second and third diagrams also give rise to double poles, canceling the one from the first diagram, but in the present case they do not because the quark lines (after re-absorption of gluons) are massive. Another interesting feature is that, in the usual \(\overline{\rm MS}\) scheme, one obtains the evolution kernel of GPDs as the coefficient of single IR poles for which all the aforementioned diagrams contribute. In our case, we reproduce the kernel as the coefficient of the logarithm (iii) for which only the first diagram contributes.
In the antisymmetric case, we project the result onto \(\bar{u}(p_{2})\gamma^{+}\gamma_{5}u(p_{1})\) in order to extract the twist-two component. This makes it necessary to specify the treatment of the Dirac matrix in \(d\neq 4\) dimensions. We have used both the 'naive' fully anticommuting \(\gamma_{5}\) and the HVBM scheme [24; 25]. For the latter we have computed the Dirac traces using the Mathematica package 'Tracer' [26]. The HVBM scheme provides the preferred scheme [27; 28] because, unlike the one with the fully anticommuting \(\gamma_{5}\), it is known to be algebraically consistent. Remarkably, however, in the present case the result is the same for both schemes. The reason is that all \(1/\epsilon\) pole terms we find enter with a part of the Dirac trace that manifestly gives the same answer for both treatments of \(\gamma_{5}\). All further collinear singularities are regularized by the logarithm \(\ln(Q^{2}/-l^{2})\), so that this part of the calculation can essentially be carried out in four dimensions where of course both schemes coincide. The same is true for the gluonic coefficient function which has no poles in \(1/\epsilon\) and can be obtained in four dimensions. Hence there are no ambiguities related to the Levi-Civita tensor being an entirely four-dimensional object. The issue of the scheme nevertheless will show up later when we discuss the one-loop calculation of the GPDs.
## IV Results
### One-loop evolution kernels
The coefficients of the logarithm \(\ln(Q^{2}/-l^{2})\) in (7), (8) are dictated by the evolution of the twist-two GPDs and therefore must agree with the known results in the literature [9; 10; 11]. We find that this is indeed the case, meaning that the physical parameter \(t\) does the job of regularizing the collinear singularity associated with GPDs. For completeness, here we reproduce the results:
\[\kappa_{qq}(\hat{x},\hat{\xi}) = \frac{3}{2(1-\hat{x})}+\frac{\hat{x}^{2}+1-2\hat{\xi}^{2}}{(1- \hat{\xi}^{2})(1-\hat{x})}\ln\frac{\hat{x}-1}{\hat{x}}+\frac{(\hat{x}-\hat{ \xi})(1-\hat{x}^{2}-2\hat{x}\hat{\xi}-2\hat{\xi}^{2})}{(1-\hat{x}^{2})\hat{ \xi}(1-\hat{\xi}^{2})}\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}+(\hat{x}\to-\hat{x}), \tag{20}\] \[\tilde{\kappa}_{qq}(\hat{x},\hat{\xi}) = \frac{3}{2(1-\hat{x})}+\frac{\hat{x}^{2}+1-2\hat{\xi}^{2}}{(1- \hat{\xi}^{2})(1-\hat{x})}\ln\frac{\hat{x}-1}{\hat{x}}-\frac{(\hat{x}-\hat{ \xi})(1+\hat{x}^{2}+2\hat{x}\hat{\xi})}{(1-\hat{x}^{2})(1-\hat{\xi}^{2})}\ln \frac{\hat{x}-\hat{\xi}}{\hat{x}}-(\hat{x}\to-\hat{x}),\] (21) \[\kappa_{qg}(\hat{x},\hat{\xi}) = \frac{1-2\hat{x}+2\hat{x}^{2}-\hat{\xi}^{2}}{(1-\hat{\xi}^{2})^{ 2}}\ln\frac{\hat{x}-1}{\hat{x}}+\frac{(\hat{x}-\hat{\xi})(1-2\hat{\xi}\hat{x}- \hat{\xi}^{2})}{\hat{\xi}(1-\hat{\xi}^{2})^{2}}\ln\frac{\hat{x}-\hat{\xi}}{ \hat{x}}+(\hat{x}\to-\hat{x}),\] (22) \[\tilde{\kappa}_{qg}(\hat{x},\hat{\xi}) = \frac{2\hat{x}-1-\hat{\xi}^{2}}{(1-\hat{\xi}^{2})^{2}}\ln\frac{ \hat{x}-1}{\hat{x}}-2\frac{\hat{x}-\hat{\xi}}{(1-\hat{\xi}^{2})^{2}}\ln\frac{ \hat{x}-\hat{\xi}}{\hat{x}}-(\hat{x}\to-\hat{x}), \tag{23}\]
where as before \(\hat{x}=\frac{x_{B}}{x}\), \(\hat{\xi}=\frac{\xi}{x}\). Note that \(\hat{x},\hat{\xi}\) are always positive because we restricted to \(0<x<1\) in (4) and (5). Also, an infinitesimal, negative imaginary part is understood in \(\hat{x}\), namely, \(\hat{x}\to\hat{x}-i\epsilon\) and
\[\ln(\hat{x}-1)=\ln(1-\hat{x})-i\pi. \tag{24}\]
### Coefficient functions
The 'coefficient functions' in (7), (8) are given as follows:
\[\delta C_{1}^{q}(\hat{x},\hat{\xi}) =-\frac{\left(\frac{Q^{2}}{-l^{2}}\right)^{\epsilon_{\text{IR}}}}{ \epsilon_{\text{IR}}^{2}(1-\hat{x})}-\frac{3\bigg{(}\frac{Q^{2}}{-l^{2}}\bigg{)} ^{\epsilon_{\text{IR}}}}{2\epsilon_{\text{IR}}(1-\hat{x})}+\frac{1-2\hat{x}-2 \hat{x}^{2}+3\hat{\xi}^{2}}{2(1-\hat{x})(1-\hat{\xi}^{2})}\ln\frac{\hat{x}-1}{ \hat{x}}+\frac{(\hat{x}-\hat{\xi})(-1+\hat{x}^{2}+3\hat{x}\hat{\xi}+3\hat{\xi}^ {2})}{(1-\hat{x}^{2})(1-\hat{\xi}^{2})\hat{\xi}}\ln\frac{\hat{x}-\hat{\xi}}{ \hat{x}}\] \[+\frac{1+\hat{x}^{2}-2\hat{\xi}^{2}}{2(1-\hat{x})(1-\hat{\xi}^{2 })}\ln^{2}\frac{\hat{x}-1}{\hat{x}}+\frac{\hat{x}}{2(1-\hat{\xi}^{2})\hat{\xi} }\ln^{2}\frac{\hat{x}-\hat{\xi}}{\hat{x}}+\frac{-1-\hat{x}^{2}+2\hat{\xi}^{2} }{2(1-\hat{x}^{2})(1-\hat{\xi}^{2})}\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}\ln \frac{\hat{x}+\hat{\xi}}{\hat{x}}+\frac{\pi^{2}-54}{12(1-\hat{x})}\] \[+\frac{\hat{x}}{(1-\hat{\xi}^{2})\hat{\xi}}\text{Li}_{2}\frac{-2 \hat{\xi}}{\hat{x}-\hat{\xi}}+\frac{1+\hat{x}^{2}-2\hat{\xi}^{2}}{(1-\hat{x}) (1-\hat{\xi}^{2})}\left(\text{Li}_{2}\frac{1-\hat{\xi}}{1-\hat{x}}+\text{Li}_ {2}\frac{1+\hat{\xi}}{1-\hat{x}}\right)\,+(\hat{x}\rightarrow-\hat{x}), \tag{25}\]
\[\delta\tilde{C}_{1}^{q}(\hat{x},\hat{\xi}) =-\frac{\left(\frac{Q^{2}}{-l^{2}}\right)^{\epsilon_{\rm IR}}}{\epsilon_{\rm IR}^{2}(1-\hat{x})}-\frac{3\left(\frac{Q^{2}}{-l^{2}}\right)^{\epsilon_{\rm IR}}}{2\epsilon_{\rm IR}(1-\hat{x})}+\frac{-1+2\hat{x}-4\hat{x}^{2}+3\hat{\xi}^{2}}{2(1-\hat{x})(1-\hat{\xi}^{2})}\ln\frac{\hat{x}-1}{\hat{x}}+\frac{(\hat{x}-\hat{\xi})(1+2\hat{x}^{2}+3\hat{x}\hat{\xi})}{(1-\hat{x}^{2})(1-\hat{\xi}^{2})}\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}\]
\[+\frac{1+\hat{x}^{2}-2\hat{\xi}^{2}}{2(1-\hat{x})(1-\hat{\xi}^{2})}\ln^{2}\frac{\hat{x}-1}{\hat{x}}+\frac{\hat{\xi}}{2(1-\hat{\xi}^{2})}\ln^{2}\frac{\hat{x}-\hat{\xi}}{\hat{x}}-\frac{\hat{x}(1+\hat{x}^{2}-2\hat{\xi}^{2})}{2(1-\hat{x}^{2})(1-\hat{\xi}^{2})}\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}\ln\frac{\hat{x}+\hat{\xi}}{\hat{x}}+\frac{\pi^{2}-54}{12(1-\hat{x})}\]
\[+\frac{\hat{\xi}}{1-\hat{\xi}^{2}}\,{\rm Li}_{2}\frac{-2\hat{\xi}}{\hat{x}-\hat{\xi}}+\frac{1+\hat{x}^{2}-2\hat{\xi}^{2}}{(1-\hat{x})(1-\hat{\xi}^{2})}\left({\rm Li}_{2}\frac{1-\hat{\xi}}{1-\hat{x}}+{\rm Li}_{2}\frac{1+\hat{\xi}}{1-\hat{x}}\right)-(\hat{x}\rightarrow-\hat{x}), \tag{26}\]
\[\delta C_{1}^{g}(\hat{x},\hat{\xi}) =-\frac{1-2\hat{x}+2\hat{x}^{2}-\hat{\xi}^{2}}{(1-\hat{\xi}^{2})^{2}}\ln\frac{\hat{x}-1}{\hat{x}}+\frac{1-2\hat{x}+2\hat{x}^{2}-\hat{\xi}^{2}}{2(1-\hat{\xi}^{2})^{2}}\ln^{2}\frac{\hat{x}-1}{\hat{x}}\]
\[-\frac{(\hat{x}-\hat{\xi})(1-2\hat{x}\hat{\xi}-\hat{\xi}^{2})}{\hat{\xi}(1-\hat{\xi}^{2})^{2}}\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}+\frac{\hat{x}(1+\hat{\xi}^{2})}{2\hat{\xi}(1-\hat{\xi}^{2})^{2}}\ln^{2}\frac{\hat{x}-\hat{\xi}}{\hat{x}}-\frac{1+2\hat{x}^{2}-\hat{\xi}^{2}}{2(1-\hat{\xi}^{2})^{2}}\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}\ln\frac{\hat{x}+\hat{\xi}}{\hat{x}}\]
\[+\frac{\hat{x}(1+\hat{\xi}^{2})}{\hat{\xi}(1-\hat{\xi}^{2})^{2}}\,{\rm Li}_{2}\frac{-2\hat{\xi}}{\hat{x}-\hat{\xi}}+\frac{1-2\hat{x}+2\hat{x}^{2}-\hat{\xi}^{2}}{(1-\hat{\xi}^{2})^{2}}\left({\rm Li}_{2}\frac{1-\hat{\xi}}{1-\hat{x}}+{\rm Li}_{2}\frac{1+\hat{\xi}}{1-\hat{x}}\right)+(\hat{x}\rightarrow-\hat{x}), \tag{27}\]
\[\delta\tilde{C}_{1}^{g}(\hat{x},\hat{\xi}) =-\frac{2\hat{x}-1-\hat{\xi}^{2}}{(1-\hat{\xi}^{2})^{2}}\ln\frac{\hat{x}-1}{\hat{x}}+\frac{2\hat{x}-1-\hat{\xi}^{2}}{2(1-\hat{\xi}^{2})^{2}}\ln^{2}\frac{\hat{x}-1}{\hat{x}}\]
\[+2\frac{\hat{x}-\hat{\xi}}{(1-\hat{\xi}^{2})^{2}}\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}+\frac{\hat{\xi}}{(1-\hat{\xi}^{2})^{2}}\ln^{2}\frac{\hat{x}-\hat{\xi}}{\hat{x}}-\frac{\hat{x}}{(1-\hat{\xi}^{2})^{2}}\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}\ln\frac{\hat{x}+\hat{\xi}}{\hat{x}}\]
\[+\frac{2\hat{\xi}}{(1-\hat{\xi}^{2})^{2}}\,{\rm Li}_{2}\frac{-2\hat{\xi}}{\hat{x}-\hat{\xi}}+\frac{2\hat{x}-1-\hat{\xi}^{2}}{(1-\hat{\xi}^{2})^{2}}\left({\rm Li}_{2}\frac{1-\hat{\xi}}{1-\hat{x}}+{\rm Li}_{2}\frac{1+\hat{\xi}}{1-\hat{x}}\right)-(\hat{x}\rightarrow-\hat{x}), \tag{28}\]
where \({\rm Li}_{2}\) is the dilogarithm function. As mentioned before, in the quark channel we find single \(1/\epsilon_{\rm IR}\) and double \(1/\epsilon_{\rm IR}^{2}\) infrared poles. Note that, since \(\epsilon_{\rm IR}<0\), \((Q^{2}/(-l^{2}))^{\epsilon_{\rm IR}}\to 0\) if one takes the \(l^{2}\to 0\) limit first. However, if one keeps \(l^{2}\) finite and expands in \(\epsilon_{\rm IR}\), the first term \(\frac{3}{2(1-\hat{x})}\) in (20) and (21) gets canceled. We discuss below how these problematic terms eventually disappear. As also mentioned, the result (26) is independent of the scheme for \(\gamma_{5}\).
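To make the bookkeeping explicit, recall the standard expansion (spelled out here for clarity):

\[\frac{1}{\epsilon_{\rm IR}}\left(\frac{Q^{2}}{-l^{2}}\right)^{\epsilon_{\rm IR}}=\frac{1}{\epsilon_{\rm IR}}+\ln\frac{Q^{2}}{-l^{2}}+{\cal O}(\epsilon_{\rm IR}),\]

so the single-pole terms of (25), (26) produce finite pieces \(-\frac{3}{2(1-\hat{x})}\ln\frac{Q^{2}}{-l^{2}}\) which cancel the corresponding \(\frac{3}{2(1-\hat{x})}\) contribution of the kernels (20), (21) to the logarithm in (7), (8).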
### Anomaly pole terms
The coefficients of the 'anomaly poles' \(1/t\) in (4) and (5) are found to be
\[A(x,x_{B},\xi) = \frac{2T_{R}}{x}\left(1+\frac{\hat{x}(1-\hat{x})\ln\frac{\hat{x}-1}{\hat{x}}+\hat{x}(\hat{x}-\hat{\xi})\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}+(\hat{x}\to-\hat{x})}{1-\hat{\xi}^{2}}\right), \tag{29}\] \[\tilde{A}(x,x_{B},\xi) = \frac{8T_{R}}{x}\frac{(1-\hat{x})\ln\frac{\hat{x}-1}{\hat{x}}+(\hat{x}-\hat{\xi})\ln\frac{\hat{x}-\hat{\xi}}{\hat{x}}-(\hat{x}\to-\hat{x})}{1-\hat{\xi}^{2}}. \tag{30}\]
The imaginary parts of these expressions agree with our results in [6]. Clearly, there are \(1/t\) poles also in the real part of the Compton amplitude.2 While (29) and (30) look unfamiliar and complicated, remarkably the \(x\)-integrals in (4) and (5) can be exactly rewritten in the following form
Footnote 2: In Ref. [6] we computed only the imaginary part of the Compton amplitude by directly extracting the discontinuity across the variables \(s=(p+q)^{2}\) and \(q_{2}^{2}\). In this paper, we compute the full amplitude. We have checked that the imaginary parts of (29), (30) and all the other results in this paper are consistent with the corresponding results in [6].
\[\int_{0}^{1}dxA(x,x_{B},\xi){\cal F}(x,\xi,t)\] \[=2T_{R}\int_{0}^{1}dxC_{0}(x,x_{B})\left[\int_{x}^{1}\frac{dx^{ \prime}}{x^{\prime}}K\left(\frac{x}{x^{\prime}},\frac{\xi}{x^{\prime}}\right){ \cal F}(x^{\prime},\xi,t)-\theta(\xi-x)\int_{0}^{1}\frac{dx^{\prime}}{x^{ \prime}}L\left(\frac{x}{x^{\prime}},\frac{\xi}{x^{\prime}}\right){\cal F}(x^{ \prime},\xi,t)\right]\] \[\equiv 2T_{R}\int_{0}^{1}dxC_{0}(x,x_{B})C^{\rm anom}\otimes{\cal F }(x,\xi,t), \tag{31}\]
\[\int_{0}^{1}dx\tilde{A}(x,x_{B},\xi)\tilde{\cal F}(x,\xi,t)\]
\[=2T_{R}\int_{0}^{1}dx\tilde{C}_{0}(x,x_{B})\left[\int_{x}^{1}\frac{dx^{\prime}}{x^{\prime}}\tilde{K}\left(\frac{x}{x^{\prime}},\frac{\xi}{x^{\prime}}\right)\tilde{\cal F}(x^{\prime},\xi,t)-\theta(\xi-x)\int_{0}^{1}\frac{dx^{\prime}}{x^{\prime}}\tilde{L}\left(\frac{x}{x^{\prime}},\frac{\xi}{x^{\prime}}\right)\tilde{\cal F}(x^{\prime},\xi,t)\right]\]
\[\equiv 2T_{R}\int_{0}^{1}dx\tilde{C}_{0}(x,x_{B})\tilde{C}^{\rm anom}\otimes\tilde{\cal F}(x,\xi,t), \tag{32}\]
where
\[K(x,\xi) =\frac{x(1-x)}{1-\xi^{2}}\,,\qquad L(x,\xi)=\frac{x(\xi-x)}{1-\xi ^{2}}\,, \tag{33}\] \[\tilde{K}(x,\xi) =\frac{4(1-x)}{1-\xi^{2}},\qquad\tilde{L}(x,\xi)=\frac{4(\xi-x)} {1-\xi^{2}}. \tag{34}\]
That is, the leading-order kernels \(C_{0}\) and \(\tilde{C}_{0}\) can be factored out. The resulting convolution \(C^{\rm anom}\otimes{\cal F}\) agrees with what was anticipated in [6] following the general argument in [29], where actually the same integral (31) can be found. We now have the corresponding result with \(\xi\neq 0\) in the polarized sector. As mentioned already in [6], the two terms in \(C^{\rm anom}\) and \(\tilde{C}^{\rm anom}\) come from the first and third diagrams in Fig. 1. The latter is nonzero only when the outgoing photon becomes timelike, \(q_{2}^{2}>0\), see (13).
The identities (31), (32) guarantee that, if the \(1/t\) poles are cancelled in the imaginary part of the Compton amplitude [6], the same cancellation automatically occurs in the real part as well.
## V GPD at one loop
We have seen that the Compton scattering amplitudes at one loop contain three types of singular behaviors: (i) logarithms \(\ln(-t)\), (ii) anomaly poles \(1/t\), (iii) single \(1/\epsilon\) and double \(1/\epsilon^{2}\) infrared poles (only in the quark channel).
The logarithms are as expected, but the other two are unusual and potentially cause problems with factorization. We now demonstrate that all these singular structures can be absorbed into the quark GPDs in the leading-order terms of (4) and (5). Specifically, we compute the unpolarized and polarized quark GPDs3
Footnote 3: The variables \(x\) and \(\xi\) in this section (and also in the appendix) should better be written as \(\hat{x}\) and \(\hat{\xi}\) to be more consistent with the notation in the previous sections. We however abbreviate \(\hat{x},\hat{\xi}\to x,\xi\) for simplicity.
\[f_{q}(x,\xi,t) = \int\frac{dz^{-}}{4\pi}e^{ixP^{+}z^{-}}\langle p_{2}|\bar{q}(-z^{-}/2)W\gamma^{+}q(z^{-}/2)|p_{1}\rangle, \tag{35}\] \[\tilde{f}_{q}(x,\xi,t) = \int\frac{dz^{-}}{4\pi}e^{ixP^{+}z^{-}}\langle p_{2}|\bar{q}(-z^{-}/2)W\gamma^{+}\gamma_{5}q(z^{-}/2)|p_{1}\rangle, \tag{36}\]
to one loop for on-shell \(p_{1}^{2}=p_{2}^{2}=0\) quark and gluon external states, keeping \(t=(p_{2}-p_{1})^{2}\neq 0\). We need to separately consider the DGLAP region \(0<\xi<x\leq 1\) and the Efremov-Radyushkin-Brodsky-Lepage (ERBL) region \(0<x<\xi\) [16]. We work in \(d=4-2\epsilon\) dimensions to regularize the UV divergences and any leftover IR divergences. As before, they are distinguished by \(1/\epsilon_{\rm UV}\) and \(1/\epsilon_{\rm IR}\), respectively. The \(\overline{\rm MS}\) scale is denoted by \(\tilde{\mu}^{2}=4\pi e^{-\gamma_{E}}\mu^{2}\).
### Quark matrix element
The divergent part of the quark matrix element in the DGLAP region \(\xi<x<1\) is, omitting the common prefactor \(\frac{\alpha_{s}C_{F}}{2\pi}\):

[Eqs. (37)–(41), giving the divergent and finite one-loop terms of the quark matrix elements in the DGLAP and ERBL regions, are not recoverable from this copy.]
and in the polarized case there is an additional term
\[4\frac{x+\xi}{2\xi(1+\xi)}, \tag{42}\]
in the HVBM scheme.
The coefficients of the UV pole in (37) and (40) constitute the \(q\to q\) evolution kernel of the GPDs. After expanding \(\left(\frac{\tilde{\mu}^{2}}{-l^{2}}\right)^{\epsilon}=1+\epsilon\ln\frac{ \tilde{\mu}^{2}}{-l^{2}}\) and convoluting with \(C_{0}\) and \(\tilde{C}_{0}\), we recover (20) and (21). In other words, the logarithmic terms in (7) can be absorbed into the GPDs, as expected. The same comment applies to the \(g\to q\) evolution kernel below.
### Gluon matrix element, unpolarized
For the gluon matrix elements \(\langle p_{2}\epsilon_{2}|...|p_{1}\epsilon_{1}\rangle\) of the quark GPD (35), (36), we find it convenient to work in the light-cone gauge \(\epsilon_{1}^{+}=\epsilon_{2}^{+}=0\). The result for the unpolarized GPD in the DGLAP region is
\[f_{q} = \frac{\alpha_{s}T_{R}}{2\pi}\Biggl{[}-(1-\xi^{2})\epsilon_{1} \cdot\epsilon_{2}^{*}\left(\frac{\left(\frac{\tilde{\mu}^{2}}{-l^{2}}\right) ^{\epsilon}}{\epsilon_{\rm UV}}\,\frac{2x^{2}-2x+1-\xi^{2}}{(1-\xi^{2})^{2}}- \frac{(2x^{2}-2x+1-\xi^{2})\ln\frac{(1-x)^{2}}{1-\xi^{2}}+2x(1-x)}{(1-\xi^{2} )^{2}}\right) \tag{43}\] \[\qquad\qquad-\frac{4}{l^{2}}\frac{x(1-x)}{1-\xi^{2}}\left( \epsilon_{1}\cdot l\epsilon_{2}^{*}\cdot l-\frac{l^{2}}{2}\epsilon_{1}\cdot \epsilon_{2}^{*}\right)\Biggr{]},\]
where we have factored out the structure that represents the twist-two GPD, see (18). Note the anomaly pole \(1/l^{2}\) which was absent in the quark matrix elements. Its coefficient matches \(K\) in (33). (The factor of 4 is from (9).)
In the ERBL region, we find, omitting the prefactor \(\frac{\alpha_{s}T_{R}}{2\pi}\),
\[-(1-\xi^{2})\epsilon_{1}\cdot\epsilon_{2}^{*}\frac{\left(\frac{ \tilde{\mu}^{2}}{-l^{2}}\right)^{\epsilon}}{\epsilon_{\rm UV}}\frac{(x+\xi)(1 +\xi-2x)}{2\xi(1+\xi)(1-\xi^{2})}-\left(\epsilon_{1}\cdot l\epsilon_{2}^{*} \cdot l-\frac{l^{2}}{2}\epsilon_{1}\cdot\epsilon_{2}^{*}\right)\frac{1}{l^{2} }\frac{2x(x+\xi)}{\xi(1+\xi)}-(x\rightarrow-x), \tag{44}\]
The first term comes from the same ladder diagram as in (43). The \(x\rightarrow-x\) term comes from a diagram in which the gluon legs are crossed. The latter contributes only in the ERBL region. The finite terms read, including the \(x\rightarrow-x\) contribution,
\[-(1-\xi^{2})\epsilon_{1}\cdot\epsilon_{2}^{*}\frac{1}{\xi(1-\xi^ {2})^{2}}\Biggl{[}-x(1+\xi^{2})\ln\frac{\xi^{2}-x^{2}}{4}-\xi(1+2x^{2}-\xi^{2 })\ln\frac{(x+\xi)(1-x)}{(\xi-x)(1+x)}\] \[\qquad\qquad\qquad\qquad+2x\Bigl{(}\xi\ln(1-x^{2})-2\xi\ln(1+\xi )+\xi^{2}-\xi+(1+\xi^{2})\ln\xi\Bigr{)}\Biggr{]}. \tag{45}\]
The coefficient of the anomaly pole is thus
\[\frac{1}{l^{2}}\left(\frac{2x(x+\xi)}{\xi(1+\xi)}-\frac{2x(x-\xi)}{\xi(1+\xi) }\right)=\frac{4}{l^{2}}\frac{x}{1+\xi}. \tag{46}\]
This agrees with
\[K(x,\xi)-L(x,\xi)=\frac{x}{1+\xi}, \tag{47}\]
which is the relevant linear combination in the ERBL region, see (31). These results, together with the observation (31), show that the \(1/t\) pole in (4) can be absorbed into the unpolarized quark GPDs \(H_{q}\), \(E_{q}\) in the leading order as a part of the infrared subtraction procedure.
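For the reader's convenience, the one-line algebra behind (47), and its polarized analog used in (52) below, reads

\[K(x,\xi)-L(x,\xi)=\frac{x(1-x)-x(\xi-x)}{1-\xi^{2}}=\frac{x(1-\xi)}{1-\xi^{2}}=\frac{x}{1+\xi},\qquad\tilde{K}(x,\xi)-\tilde{L}(x,\xi)=\frac{4(1-\xi)}{1-\xi^{2}}=\frac{4}{1+\xi}.\]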
### Gluon matrix element, polarized
In the polarized case (36), we find
\[\tilde{f}_{q}=\frac{\alpha_{s}T_{R}}{2\pi}\left[(1-\xi^{2})i\epsilon^{+p\epsilon_{2}^{*}\epsilon_{1}}\left(\frac{2x-1-\xi^{2}}{(1-\xi^{2})^{2}}\left(\frac{\left(\frac{\tilde{\mu}^{2}}{-l^{2}}\right)^{\epsilon}}{\epsilon_{\rm UV}}-\ln\frac{(1-x)^{2}}{1-\xi^{2}}\right)-2\frac{1-x}{(1-\xi^{2})^{2}}\right)+\frac{2il^{+}\epsilon^{\epsilon_{1}\epsilon_{2}^{*}lp}}{l^{2}}\frac{1-x}{1-\xi^{2}}\right], \tag{48}\]
in the DGLAP region. A simplified version of this result (with a different UV prescription) was already reported in [6]. The coefficient of the pole agrees with \(\tilde{K}\) in (34). Note that the pole is proportional to \(l^{\mu=+}\), meaning that it contributes to a shift in the GPD \(\tilde{E}_{q}\). In the ERBL region, the singular terms are, omitting the prefactor \(\frac{\alpha_{s}T_{R}}{2\pi}\),
\[\frac{x+\xi}{2\xi(1+\xi)}\left((1-\xi^{2})i\epsilon^{+p\epsilon_{2}^{*}\epsilon_{1}}\frac{\left(\frac{\tilde{\mu}^{2}}{-l^{2}}\right)^{\epsilon}}{\epsilon_{\rm UV}}\frac{-1}{1+\xi}+\frac{2il^{+}\epsilon^{\epsilon_{1}\epsilon_{2}^{*}lp}}{l^{2}}\right)+(x\to-x)\]
\[=(1-\xi^{2})i\epsilon^{+p\epsilon_{2}^{*}\epsilon_{1}}\frac{\left(\frac{\tilde{\mu}^{2}}{-l^{2}}\right)^{\epsilon}}{\epsilon_{\rm UV}}\frac{-1}{(1+\xi)^{2}}+\frac{2il^{+}\epsilon^{\epsilon_{1}\epsilon_{2}^{*}lp}}{l^{2}}\frac{1}{1+\xi}, \tag{49}\]
where again the \(x\to-x\) term comes from the crossed-leg diagram. The finite terms are, including the \(x\to-x\) contribution,
\[(1-\xi^{2})i\epsilon^{+p\epsilon_{2}^{*}\epsilon_{1}}\frac{1}{(1-\xi^{2})^{2}}\Biggl{[}-2\xi\ln(\xi^{2}-x^{2})+(1+\xi^{2})\ln(1-x^{2})-2x\ln\frac{(1-x)(x+\xi)}{(1+x)(\xi-x)}\]
\[-2(1+\xi^{2})\ln(1+\xi)+4\xi\ln(2\xi)+2\xi-2\Biggr{]}. \tag{50}\]
The coefficient of the UV pole \(\frac{-1}{(1+\xi)^{2}}\) is the correct evolution kernel in the ERBL region as can be seen by taking the imaginary part of (23):
\[\frac{2x-1-\xi^{2}}{(1-\xi^{2})^{2}}-\frac{2(x-\xi)}{(1-\xi^{2})^{2}}=\frac{- 1}{(1+\xi)^{2}}. \tag{51}\]
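Explicitly, the numerator algebra in (51) is \(2x-1-\xi^{2}-2(x-\xi)=2\xi-1-\xi^{2}=-(1-\xi)^{2}\), and dividing by \((1-\xi^{2})^{2}=(1-\xi)^{2}(1+\xi)^{2}\) indeed gives \(-1/(1+\xi)^{2}\).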
Again the coefficient of the anomaly pole agrees with
\[\tilde{K}(x,\xi)-\tilde{L}(x,\xi)=\frac{4}{1+\xi}, \tag{52}\]
from (34). With the help of (32), we can absorb the \(1/t\) pole in (5) into the polarized quark GPD \(\tilde{E}_{q}\).
### Relation to the \(\overline{\mbox{MS}}\) scheme
We have thus shown that all the singular structures \(1/t\), \(\ln(-t)\), \(1/\epsilon_{\rm IR}\) and \(1/\epsilon_{\rm IR}^{2}\) in the 'unsubtracted' expressions (4) and (5) can be systematically absorbed into the twist-two GPDs in the leading order. Since the matrix elements (35) and (36) contain non-singular terms, one might choose to perform this subtraction also for the finite terms (25)-(28). An interesting question then arises as to whether, after such a subtraction, (25)-(28) reduce to the known coefficient functions in the \(\overline{\mbox{MS}}\) scheme.4 Here we partially address this question by explicitly performing the subtraction in the imaginary part of the Compton amplitude.
Footnote 4: We thank Vladimir Braun for raising this question.
Let us first consider the DGLAP region \(0<\xi<x\). For simplicity, we assume \(x<1\) to avoid the delta function \(\delta(1-x)\). The imaginary part of (25) is
\[\frac{1-2x-2x^{2}+3\xi^{2}}{2(1-x)(1-\xi^{2})}+\frac{1+x^{2}-2\xi^{2}}{(1-x)(1- \xi^{2})}\ln\frac{1-\xi^{2}}{x(1-x)}, \tag{53}\]
where we used
\[{\rm Im\,Li}_{2}\frac{1\pm\xi}{1-x+i\epsilon}=-\pi\ln\frac{1\pm\xi}{1-x}. \tag{54}\]
On the other hand, the finite terms in the unpolarized quark GPD are, from (38),
\[-\frac{1+x^{2}-2\xi^{2}}{(1-x)(1-\xi^{2})}\ln\frac{(1-x)^{2}}{1-\xi^{2}}-\frac {1-x}{1-\xi^{2}}. \tag{55}\]
The convolution with the leading-order kernel (6) is trivial for the imaginary part since \({\rm Im\,}C_{0}\propto\delta(1-x)\). We just need to subtract (55) from (53) to obtain
\[\frac{1+x^{2}-2\xi^{2}}{(1-x)(1-\xi^{2})}\ln\frac{1-x}{x}+\frac{3(1-2x+\xi^{2} )}{2(1-x)(1-\xi^{2})}\to 2+x-\frac{3}{2(1-x)}+\frac{1+x^{2}}{1-x}\ln\frac{1-x}{x }+1-x, \tag{56}\]
where we have set \(\xi=0\) on the right-hand side. This agrees with the imaginary part of the \(q\to q\) coefficient function in the \(\overline{\rm MS}\) scheme [9; 10; 11; 12; 13]. In particular, the right-hand side is the familiar \(q\to q\) coefficient function for the \(F_{1}\) structure function in DIS [30] for \(x<1\). Similarly, the imaginary part of (26) reads
\[\frac{-1+2x-4x^{2}+3\xi^{2}}{2(1-x)(1-\xi^{2})}+\frac{1+x^{2}-2\xi^{2}}{(1-x)( 1-\xi^{2})}\ln\frac{1-\xi^{2}}{x(1-x)}. \tag{57}\]
The finite terms in the polarized quark GPD depend on the scheme adopted for the treatment of \(\gamma_{5}\). In the HVBM scheme, we find from (38) and (39),
\[-\frac{1+x^{2}-2\xi^{2}}{(1-x)(1-\xi^{2})}\ln\frac{(1-x)^{2}}{1-\xi^{2}}+3 \frac{1-x}{1-\xi^{2}}. \tag{58}\]
After the subtraction,
\[\frac{-7+14x-10x^{2}+3\xi^{2}}{2(1-x)(1-\xi^{2})}+\frac{1+x^{2}-2\xi^{2}}{(1- x)(1-\xi^{2})}\ln\frac{1-x}{x}\to 2+x-\frac{3}{2(1-x)}+\frac{1+x^{2}}{1-x}\ln \frac{1-x}{x}-4(1-x), \tag{59}\]
in agreement with the \(q\to q\) coefficient function for the \(g_{1}\) structure function in the HVBM prescription. As was discussed in Refs. [28; 31], it is necessary to subtract this term in order to avoid a conflict with helicity conservation and with the known first-order correction to the Bjorken sum rule. Incidentally, in the present case, the result obtained after this finite subtraction coincides with that found for a fully anticommuting \(\gamma_{5}\). Either way, instead of (59) the correct result becomes
\[\frac{1-2x-2x^{2}+3\xi^{2}}{2(1-x)(1-\xi^{2})}+\frac{1+x^{2}-2\xi^{2}}{(1-x)( 1-\xi^{2})}\ln\frac{1-x}{x}\to 2+x-\frac{3}{2(1-x)}+\frac{1+x^{2}}{1-x}\ln \frac{1-x}{x}. \tag{60}\]
As for the \(g\to q\) coefficients, the imaginary part of (27) is
\[\frac{1-2x+2x^{2}-\xi^{2}}{(1-\xi^{2})^{2}}\left(\ln\frac{1-\xi^{2}}{x(1-x)}- 1\right). \tag{61}\]
From this, we subtract the finite terms in (43),
\[-\frac{1-2x+2x^{2}-\xi^{2}}{(1-\xi^{2})^{2}}\ln\frac{(1-x)^{2}}{1-\xi^{2}}- \frac{2x(1-x)}{(1-\xi^{2})^{2}}, \tag{62}\]
to obtain
\[\frac{1-2x+2x^{2}-\xi^{2}}{(1-\xi^{2})^{2}}\ln\frac{1-x}{x}+\frac{-1+4x-4x^{2} +\xi^{2}}{(1-\xi^{2})^{2}}\rightarrow(1-2x+2x^{2})\left(\ln\frac{1-x}{x}-1 \right)+2x(1-x). \tag{63}\]
This agrees with the \(g\to q\) coefficient function for the \(F_{1}\) structure function in the \(\overline{\rm MS}\) scheme. Finally, the imaginary part of (28) is
\[\frac{2x-1-\xi^{2}}{(1-\xi^{2})^{2}}\left(\ln\frac{1-\xi^{2}}{x(1-x)}-1\right). \tag{64}\]
Subtracting from this the finite terms in (48),
\[-\frac{2x-1-\xi^{2}}{(1-\xi^{2})^{2}}\ln\frac{(1-x)^{2}}{1-\xi^{2}}-\frac{2(1-x)} {(1-\xi^{2})^{2}}, \tag{65}\]
we find
\[\frac{2x-1-\xi^{2}}{(1-\xi^{2})^{2}}\ln\frac{1-x}{x}+\frac{3-4x+\xi^{2}}{(1- \xi^{2})^{2}}\to(2x-1)\left(\ln\frac{1-x}{x}-1\right)+2(1-x), \tag{66}\]
in agreement with the \(\overline{\rm MS}\) \(g\to q\) coefficient function for the \(g_{1}\) structure function. It is interesting to recall that the last term \(2(1-x)\otimes\Delta G(x)\) in (66) caused a lot of discussion (see, e.g., [32; 33]) in the wake of the proton 'spin crisis' in the late 80s. In the standard \(\overline{\rm MS}\) calculation in the forward limit \(t=0\), this term arises from the IR region of the loop diagram, and therefore does not seem to qualify as a part of the 'hard' coefficient. In our calculation of the Compton amplitude, this term is replaced by the pole term \(\frac{1}{l^{2}}(1-x)\otimes\tilde{\cal F}(x)\) and gets absorbed into the GPD \(\tilde{E}_{q}\). Nevertheless, the \(2(1-x)\) term is restored after the subtraction because the polarized GPD (36) generates it from the UV region of the loop momentum. Therefore, even though the final result (66) is the same, from our perspective the term \(2(1-x)\) is legitimately considered a 'hard' contribution.
We have further performed the subtraction of the constant terms (41), (45) and (50) in the ERBL region \(x<\xi\) from the imaginary part of (25)-(28) and observed consistent agreement with the \(\overline{\rm MS}\) coefficient functions [17]. We have thus partially verified that the "off-forward" regularization is equivalent to the \(\overline{\rm MS}\) scheme after the subtraction of finite terms is made. Extending this to the real part of the Compton amplitude is left for future work. On the other hand, since the treatment of finite terms is a matter of scheme choice, one can choose to subtract only the singular terms. Eqs. (25)-(28), with the single and double poles omitted, are then the coefficient functions in such a scheme.
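The \(\xi=0\) reductions quoted in (56), (60), (63) and (66) are simple rational-function identities and can be checked symbolically. The following minimal SymPy sketch (our own verification aid, not part of the original calculation) confirms them:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
L = sp.log((1 - x) / x)  # the common logarithm ln((1-x)/x)

# Left- and right-hand sides of the xi = 0 reductions in (56), (60), (63), (66)
checks = {
    "(56)": ((1 + x**2) / (1 - x) * L + 3 * (1 - 2 * x) / (2 * (1 - x)),
             2 + x - sp.Rational(3, 2) / (1 - x) + (1 + x**2) / (1 - x) * L + 1 - x),
    "(60)": ((1 - 2 * x - 2 * x**2) / (2 * (1 - x)) + (1 + x**2) / (1 - x) * L,
             2 + x - sp.Rational(3, 2) / (1 - x) + (1 + x**2) / (1 - x) * L),
    "(63)": ((1 - 2 * x + 2 * x**2) * L + (-1 + 4 * x - 4 * x**2),
             (1 - 2 * x + 2 * x**2) * (L - 1) + 2 * x * (1 - x)),
    "(66)": ((2 * x - 1) * L + (3 - 4 * x),
             (2 * x - 1) * (L - 1) + 2 * (1 - x)),
}

for tag, (lhs, rhs) in checks.items():
    assert sp.simplify(lhs - rhs) == 0, f"mismatch in {tag}"
print("xi = 0 reductions of (56), (60), (63), (66) verified")
```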
## VI Imprints of anomalies on GPD
Let us discuss the implications of our results. Superficially, it may seem as if nothing has happened in the end. After absorbing all the singular terms into the twist-two GPDs, the Compton amplitude will be given by the usual factorized form with possibly different coefficient functions due to a different scheme choice. The common attitude is that one does not care about these singular terms once they have been 'discarded' into a parton distribution, as they will be taken care of by the nonperturbative QCD dynamics. One can also take the view that the \(1/t\) poles should disappear in the limit \(t\to 0\), because nonperturbative effects must intervene when \(\sqrt{|t|}\sim\Lambda_{\rm QCD}\). However, from the point of view of factorization, technically speaking one is allowed to choose any infrared regulator that can isolate the collinear divergences, as long as they can be eventually absorbed into parton distributions when the latter are computed with the same IR regulator. In this sense, the use of \(t\) is no different from other regulators such as the current quark mass \(m_{q}\) and the dimensionality \(d\neq 4\). One may even argue that it is a more physical scheme, since \(t\neq 0\) in actual experiments and naturally cuts off collinear divergences.
Technicalities aside, the real reason we are pursuing the off-forward calculation with nonzero \(t\) is that this approach has the potential to uncover nonperturbative connections between GPDs and QCD anomalies. Indeed, the very idea that twist-four GPDs are absorbed into twist-two GPDs is quite non-standard and needs to be investigated further, rather than dismissed as a routine infrared subtraction procedure. This is all the more so because, as discussed in [6; 8] and elaborated further below, the results we shall get are consistent with what we know about the axial and gravitational form factors which are certain moments of the twist-two GPDs.
### Axial and gravitational form factors
#### vi.1.1 Isovector axial form factors
In order to motivate our discussion, let us first recall the familiar example of the isovector axial current \(J^{\alpha}_{5a}=\sum_{q}\bar{q}\gamma^{\alpha}\gamma_{5}\frac{\tau^{a}}{2}q\) where \(\tau^{a=1,2,3}\) are the Pauli matrices. Its nucleon matrix element is parameterized by the axial form factors,
\[\langle P_{2}|J^{\alpha}_{5a}|P_{1}\rangle=\bar{u}(P_{2})\left[\gamma^{\alpha} \gamma_{5}F_{A}(t)+\frac{l^{\alpha}\gamma_{5}}{2M}F_{P}(t)\right]\frac{\tau^ {a}}{2}u(P_{1})\,. \tag{67}\]
In QCD with \(n_{f}=2\) massless flavors, the current is exactly conserved, \(\partial_{\alpha}J^{\alpha}_{5a}=0\), due to chiral symmetry. This imposes a constraint among the form factors
\[2MF_{A}(t)+\frac{tF_{P}(t)}{2M}=0. \tag{68}\]
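Solving (68) for \(F_{P}\) makes this explicit:

\[F_{P}(t)=-\frac{4M^{2}F_{A}(t)}{t}.\]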
Clearly, \(F_{P}(t)\) has a pole at \(t=0\):
\[F_{P}(t)\approx\frac{-4M^{2}g_{A}^{(3)}}{t}\qquad(t\to 0), \tag{69}\]
where \(g_{A}^{(3)}=F_{A}(0)\approx 1.3\) is the isovector axial coupling constant. The pole is generated by the exchange of the massless pion which is the Nambu-Goldstone boson of spontaneously broken chiral symmetry. This requirement leads to the well-known Goldberger-Treiman relation
\[g_{A}^{(3)}=\frac{f_{\pi}g_{\pi NN}}{M}, \tag{70}\]
where \(f_{\pi}\) is the pion decay constant and \(g_{\pi NN}\) is the pion-nucleon coupling. Now recall that \(F_{P}(t)\) is the first moment of the isovector GPD \(\tilde{E}\),
\[F_{P}(t)=\int_{-1}^{1}dx\left(\tilde{E}_{u}(x,\xi,t)-\tilde{E}_{d}(x,\xi,t) \right). \tag{71}\]
Barring the unlikely possibility that the pole \(1/t\) is generated by the \(x\)-integral, the GPDs themselves must therefore have a massless pole at \(t=0\):
\[\tilde{E}_{u}(x,\xi,t)-\tilde{E}_{d}(x,\xi,t)\sim\theta(\xi-|x|)\frac{g_{A}^{ (3)}}{t}\qquad(t\to 0). \tag{72}\]
Indeed, such a pole has been discussed in the GPD literature (see, e.g., [34]) where it has been argued that the pole exists only in the ERBL region \(\xi>x\) where GPDs probe the mesonic degrees of freedom inside the nucleon. In actual QCD with massive quarks, the pole is shifted to the physical pion mass, \(\frac{1}{t}\to\frac{1}{t-m_{\pi}^{2}}\).
#### vi.1.2 Isoscalar axial form factors
The story is more complicated for the singlet axial current \(J^{\alpha}_{5}=\sum_{q}\bar{q}\gamma^{\alpha}\gamma_{5}q\). The associated form factors \(g_{A}\), \(g_{P}\), appearing in the nucleon matrix element via
\[\langle P_{2}|J^{\alpha}_{5}|P_{1}\rangle=\bar{u}(P_{2})\left[\gamma^{\alpha} \gamma_{5}g_{A}(t)+\frac{l^{\alpha}\gamma_{5}}{2M}g_{P}(t)\right]u(P_{1})\,, \tag{73}\]
are related to the flavor-singlet polarized GPDs as
\[g_{A}(t)=\sum_{q}\int_{-1}^{1}dx\tilde{H}_{q}(x,\xi,t)=\sum_{q} \int_{0}^{1}dx(\tilde{H}_{q}(x,\xi,t)+\tilde{H}_{q}(-x,\xi,t))\,, \tag{74}\] \[g_{P}(t)=\sum_{q}\int_{-1}^{1}dx\tilde{E}_{q}(x,\xi,t)=\sum_{q} \int_{0}^{1}dx(\tilde{E}_{q}(x,\xi,t)+\tilde{E}_{q}(-x,\xi,t))\,. \tag{75}\]
In contrast to the isovector current above, \(J^{\alpha}_{5}\) is not conserved due to the chiral (\(\mathrm{U}_{A}(1)\)) anomaly,
\[\partial_{\alpha}J^{\alpha}_{5}=-\frac{n_{f}\alpha_{s}}{4\pi}F^{\mu\nu}\tilde{ F}_{\mu\nu}. \tag{76}\]
This leads to the following exact relation:
\[2Mg_{A}(t)+\frac{tg_{P}(t)}{2M}=i\frac{\langle P_{2}|\frac{n_{f}\alpha_{s}}{4 \pi}F\tilde{F}|P_{1}\rangle}{\bar{u}(P_{2})\gamma_{5}u(P_{1})}\,. \tag{77}\]
We see that, in the absence of the anomaly (i.e., if the right-hand side were zero), \(g_{P}(t)\) would have a pole at \(t=0\),
\[\frac{g_{P}(t)}{2M}\approx-\frac{2M\Delta\Sigma}{t}\qquad(t\to 0), \tag{78}\]
where \(\Delta\Sigma=g_{A}(0)\) is the quark helicity contribution to the nucleon spin. Such a pole can be interpreted as due to the exchange of the massless ninth Nambu-Goldstone boson, the 'primordial' \(\eta_{0}\) meson. Moreover, (75) suggests that already the flavor-singlet GPD \(\sum_{q}\tilde{E}_{q}\) would have a pole \(1/t\), just like (72).
In reality, however, the U\({}_{A}\)(1) axial symmetry is explicitly broken by the anomaly, and \(g_{P}(t)\) exhibits a pole at the physical \(\eta^{\prime}\) meson mass \(t=m_{\eta^{\prime}}^{2}\). The exact mechanism behind this scenario was the subject of great debate in the late 70s, culminating in the works of Witten [35] and Veneziano [36]. In a nutshell, \(\eta_{0}\) acquires mass via a resummation [36]
\[\frac{1}{t}+\frac{m_{\eta^{\prime}}^{2}}{t^{2}}+\frac{m_{\eta^{\prime}}^{4}}{ t^{3}}+\cdots=\frac{1}{t-m_{\eta^{\prime}}^{2}}=-\left(\frac{1}{t}\frac{m_{ \eta^{\prime}}^{2}}{m_{\eta^{\prime}}^{2}-t}-\frac{1}{t}\right), \tag{79}\]
due to its coupling with the gluonic topological fluctuations \(m_{\eta^{\prime}}^{2}\propto\langle(F\tilde{F})^{2}\rangle\). On the right-hand side, we have deliberately expressed the resulting propagator as the difference of two poles at \(t=0\). Now let us compare this with (77) which can be identically rewritten in the form
\[\frac{g_{P}(t)}{2M} = \frac{1}{t}\left(i\frac{\langle P_{2}|\frac{n_{f}\alpha_{s}}{4\pi}F\tilde{F}|P_{1}\rangle}{\bar{u}(P_{2})\gamma_{5}u(P_{1})}-2Mg_{A}(t)\right) \tag{80}\]
\[= \frac{1}{t}\left(i\frac{\langle P_{2}|\frac{n_{f}\alpha_{s}}{4\pi}F\tilde{F}|P_{1}\rangle}{\bar{u}(P_{2})\gamma_{5}u(P_{1})}-i\left.\frac{\langle P_{2}|\frac{n_{f}\alpha_{s}}{4\pi}F\tilde{F}|P_{1}\rangle}{\bar{u}(P_{2})\gamma_{5}u(P_{1})}\right|_{t=0}\right)+2M\frac{g_{A}(0)-g_{A}(t)}{t}\,.\]
We neglect the last term assuming \(g_{A}(t)\approx g_{A}(0)=\Delta\Sigma\) to be varying only slowly with \(t\).5 The right-hand side can then be interpreted as a cancellation of two poles at \(t=0\), just like (79), between the 'anomaly pole' (first term) and the naive pole (78) from the massless \(\eta_{0}\) meson exchange (second term). Eqs. (79) and (80) are actually identical in the single-pole approximation where (80) is saturated by
Footnote 5: A partial justification of this comes from the large-\(N_{c}\) approximation where \(m_{\eta^{\prime}}\sim{\cal O}(1/\sqrt{N_{c}})\) is considered as small, at least parametrically, compared to the singlet axial vector meson masses \(m_{A}\sim{\cal O}(N_{c}^{0})\). Thus, as long as one is interested in the region \(|t|\sim m_{\eta^{\prime}}^{2}\), the variation of \(g_{A}(t)\sim 1/(t-m_{A}^{2})\) can be neglected. In practice, however, the \(\eta^{\prime}(957)\) is only slightly lighter than the \(f_{1}(1285)\).
\[\frac{g_{P}(t)}{2M}\approx\frac{2M\Delta\Sigma}{m_{\eta^{\prime}}^{2}-t},\qquad i\frac{\langle P_{2}|\frac{n_{f}\alpha_{s}}{4\pi}F\tilde{F}|P_{1}\rangle}{\bar{u}(P_{2})\gamma_{5}u(P_{1})}\approx 2M\Delta\Sigma\frac{m_{\eta^{\prime}}^{2}}{m_{\eta^{\prime}}^{2}-t}. \tag{81}\]
In the context of polarized DIS, the cancellation of poles just described had been originally envisaged in [14] and further elaborated in [8] to resolve issues with the \(g_{1}\) structure function. Compton scattering and GPDs offer a more general setup to explore the physics of the anomaly to its full extent.
#### vi.1.3 Gravitational form factors
We now point out that one can repeat the same story for the QCD energy momentum tensor \(\Theta^{\alpha\beta}\) and its nucleon matrix element that defines the gravitational form factors,
\[\langle P_{2}|\Theta^{\alpha\beta}|P_{1}\rangle = \bar{u}(P_{2})\left[A(t)\frac{P^{\alpha}P^{\beta}}{M}+(A(t)+B(t)) \frac{P^{(\alpha}i\sigma^{\beta)\lambda}l_{\lambda}}{2M}+D(t)\frac{l^{\alpha}l ^{\beta}-g^{\alpha\beta}t}{4M}\right]u(P_{1}), \tag{82}\]
where \(a^{(\mu}b^{\nu)}=\frac{1}{2}(a^{\mu}b^{\nu}+a^{\nu}b^{\mu})\). Taking the trace, we find an exact constraint among the form factors:
\[\langle P_{2}|(\Theta)^{\alpha}_{\alpha}|P_{1}\rangle=M\left(A(t)+\frac{B(t)}{4 M^{2}}t-\frac{3D(t)}{4M^{2}}t\right)\bar{u}(P_{2})u(P_{1})=\langle P_{2}|\frac{ \beta(g)}{2g}F^{\mu\nu}F_{\mu\nu}|P_{1}\rangle. \tag{83}\]
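For orientation, here is the trace computation behind (83); it uses only the Gordon identity \(\bar{u}(P_{2})i\sigma^{\alpha\lambda}l_{\lambda}u(P_{1})=2M\bar{u}\gamma^{\alpha}u-2P^{\alpha}\bar{u}u\) and the kinematic relation \(P^{2}=M^{2}-t/4\). Contracting with \(P_{\alpha}\),

\[P_{\alpha}\,\bar{u}(P_{2})i\sigma^{\alpha\lambda}l_{\lambda}u(P_{1})=2M\bar{u}\not{P}u-2P^{2}\bar{u}u=2(M^{2}-P^{2})\bar{u}u=\frac{t}{2}\,\bar{u}u,\]

so the three structures in (82) contribute \(A(t)(M-\frac{t}{4M})\), \((A(t)+B(t))\frac{t}{4M}\) and \(-\frac{3D(t)t}{4M}\) to the trace, and the \(\pm t/4M\) pieces of the first two combine to give the left-hand side of (83).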
The right-hand side, where \(\beta(g)\) is the QCD beta function, is the trace anomaly, which signifies the explicit breaking of conformal symmetry. If one naively neglects it, one finds a massless pole in \(D(t)\) at \(t=0\):
\[\frac{3D(t)}{4}\approx\frac{M^{2}}{t}\qquad(t\to 0), \tag{84}\]
where the conditions \(A(0)=1\) and \(B(0)=0\) have been used (so that one can omit \(tB(t)\) as \(t\to 0\)). By analogy with the massless \(\eta_{0}\) pole in (78), one might interpret the pole in (84) as due to the exchange of spin-0 glueballs which would couple to the operator \(\Theta^{\alpha\beta}\) and which would have been massless in the absence of the trace anomaly [6]. In reality, however, the anomaly modifies (84) to
\[\frac{3D(t)}{4} \approx \frac{M^{2}}{t}\left(A(t)-\frac{\langle P_{2}|\frac{\beta(g)}{2g} F^{2}|P_{1}\rangle}{M\bar{u}(P_{2})u(P_{1})}\right) \tag{85}\] \[= -\frac{M}{t}\left(\frac{\langle P_{2}|\frac{\beta(g)}{2g}F^{2}|P _{1}\rangle}{\bar{u}(P_{2})u(P_{1})}-\left.\frac{\langle P_{2}|\frac{\beta(g) }{2g}F^{2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})}\right|_{t=0}\right)+M^{2} \frac{A(t)-A(0)}{t}.\]
Note the similarity to (80). The \(D(t)\)-form factor can be interpreted as the difference of two poles at \(t=0\), between the 'anomaly pole' (first term in the brackets) and the naive glueball pole (84) (second term in the brackets). As a result of this cancellation, the pole in \(D(t)\) is shifted from \(t=0\) to physical glueball masses \(t=m_{G}^{2}\), presumably in a way similar to (79). However, unlike the situation in (80), in the present case the last term \(A(t)-A(0)\) of (85), which is related to spin-2 glueballs [6], is likely important, at least from the large-\(N_{c}\) perspective. Since the trace anomaly cannot be turned off in the large-\(N_{c}\) limit, the masses of \(2^{++}\) and \(0^{++}\) glueballs are both \({\cal O}(N_{c}^{0})\). Moreover, the analysis in [37] suggests that the single-pole approximation (cf. (81)) may not be reliable. The \(D(t)\)-form factor thus exhibits 'glueball dominance'
\[D(t)=\sum_{i}^{0^{++}}\frac{a_{i}}{m_{G_{i}}^{2}-t}+\sum_{j}^{2^{++}}\frac{b_{ j}}{m_{G_{j}}^{2}-t}, \tag{86}\]
where the two contributions come from the \(\langle F^{2}\rangle\) and \(A(t)\) terms in (85), respectively. Incidentally, by taking the \(t\to 0\) limit of (85), one finds [38]
\[\frac{3D(0)}{4}=-M\left.\frac{d}{dt}\frac{\langle P_{2}|\frac{\beta(g)}{2g}F^ {2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})}\right|_{t=0}+\left.M^{2}\frac{dA(t) }{dt}\right|_{t=0}. \tag{87}\]
The slope of a form factor at \(t=0\) defines a 'radius' of the hadron. Eq. (87) shows that the D-term \(D(t=0)\) is related to the difference between two radii, one defined by the scalar form factor \(\langle F^{2}\rangle\) (related to the \(0^{++}\) glueball masses) and the other by the \(A\)-form factor (related to the \(2^{++}\) glueball masses), see recent discussions in [37; 39; 40; 41].
As we have seen in the above three examples, the existence or not of a massless pole in form factors teaches us fundamental insights into the nonperturbative dynamics of QCD. However, despite the known connections between form factors and GPDs, the corresponding discussion at the GPD level has been limited to the isovector sector (72) in the literature (see, however, [42]). Our main purpose is to extend this argument to the singlet sector.
### Anomaly poles in GPDs
Let us now return to our context. We have argued in Section V that the pole \(1/t\) in the one-loop Compton amplitude (5) should be absorbed into \(\tilde{E}_{q}\). This means that \(\tilde{E}_{q}\) acquires a component related to the twist-four GPD \(\tilde{\cal F}\):
\[\sum_{q}(\tilde{E}_{q}(x,\xi,t)+\tilde{E}_{q}(-x,\xi,t))=\frac{T_{R}n_{f} \alpha_{s}}{\pi}\frac{M^{2}}{t}\tilde{C}^{\rm anom}\otimes\tilde{\cal F}(x, \xi,t)+\cdots, \tag{88}\]
where \(\tilde{C}^{\rm anom}\) is defined in (32). Integrating over \(x\), we exactly reproduce the first term of (80). Moreover, (80) suggests that there is another, 'primordial' pole in \(\tilde{E}_{q}\) which exactly cancels the \(1/t\) pole to make \(\tilde{E}_{q}\) finite for all values of \(x\) and \(\xi\) in the limit \(t\to 0\). A simple, yet ad-hoc fix consistent with (80) is to add a 'counterterm'
\[\sum_{q}(\tilde{E}_{q}(x,\xi,t)+\tilde{E}_{q}(-x,\xi,t))\approx\frac{T_{R}n_{f} \alpha_{s}}{\pi}\frac{M^{2}}{t}\tilde{C}^{\rm anom}\otimes(\tilde{\cal F}(x, \xi,t)-\tilde{\cal F}(x,\xi,0)). \tag{89}\]
This may be viewed as the non-local version of the local relation (80). The second, added term is an analog of (72), but interestingly, in the present case the pole is not limited to the ERBL region \(x<\xi\). We postulate (89) as a nonperturbative relation between the twist-two and twist-four GPDs mediated by the chiral anomaly.
The fate of the \(1/t\) pole in the unpolarized sector and its connection to the trace anomaly are more involved. This is partly because the QCD energy momentum tensor consists of a quark and a gluon part, \(\Theta^{\alpha\beta}=\sum_{q}\Theta_{q}^{\alpha\beta}+\Theta_{g}^{\alpha\beta}\), in contrast to \(J_{5}^{\alpha}\) which is purely a quark operator. Accordingly, one can define gravitational form factors separately for quarks and gluons [43]:
\[\langle P_{2}|\Theta_{q,g}^{\alpha\beta}|P_{1}\rangle = \bar{u}(P_{2})\left[A_{q,g}(t)\frac{P^{\alpha}P^{\beta}}{M}+(A_{q,g}(t)+B_{q,g}(t))\frac{P^{(\alpha}i\sigma^{\beta)\lambda}l_{\lambda}}{2M}+D_{q,g}(t)\frac{l^{\alpha}l^{\beta}-g^{\alpha\beta}t}{4M}+\bar{C}_{q,g}(t)Mg^{\alpha\beta}\right]u(P_{1}). \tag{90}\]
They are related to the second moments of the unpolarized quark GPDs,
\[\int_{-1}^{1}dxxH_{q}(x,\xi,t) = \int_{0}^{1}dxx(H_{q}(x,\xi,t)-H_{q}(-x,\xi,t))=A_{q}(t)+\xi^{2} D_{q}(t)\,, \tag{91}\] \[\int_{-1}^{1}dxxE_{q}(x,\xi,t) = \int_{0}^{1}dxx(E_{q}(x,\xi,t)-E_{q}(-x,\xi,t))=B_{q}(t)-\xi^{2} D_{q}(t)\,, \tag{92}\]
and similarly for the gluon GPDs. Taking the trace of (82), we find
\[\langle P_{2}|\sum_{q}(\Theta_{q})_{\alpha}^{\alpha}|P_{1}\rangle = \sum_{q}M\left(A_{q}(t)+4\bar{C}_{q}(t)+\frac{B_{q}(t)}{4M^{2}}t- \frac{3D_{q}(t)}{4M^{2}}t\right)\bar{u}(P_{2})u(P_{1}) \tag{93}\] \[= \langle P_{2}|\frac{\beta_{q}(g)}{2g}F^{2}|P_{1}\rangle\approx \langle P_{2}|\frac{T_{R}n_{f}\alpha_{s}}{6\pi}F^{2}|P_{1}\rangle,\]
where \(\frac{\beta_{q}}{2g}\) is the quark part of the trace anomaly that can be systematically calculated in perturbation theory [44; 45; 46; 47]. To lowest order, it is simply the \(n_{f}\) term of the beta function:
\[(\Theta_{q})_{\alpha}^{\alpha}+(\Theta_{g})_{\alpha}^{\alpha}= \frac{\beta(g)}{2g}F^{\mu\nu}F_{\mu\nu}=-\frac{\alpha_{s}}{8\pi}\left(\frac{1 1N_{c}}{3}-\frac{4T_{R}n_{f}}{3}\right)F^{2}+\cdots. \tag{94}\]
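Explicitly, keeping only the \(n_{f}\) piece of (94),

\[\frac{\beta_{q}(g)}{2g}=\frac{\alpha_{s}}{8\pi}\,\frac{4T_{R}n_{f}}{3}=\frac{T_{R}n_{f}\alpha_{s}}{6\pi},\]

which is the lowest-order coefficient appearing on the right-hand side of (93).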
Clearly, (93) is not as constraining as (83) because of the new form factors \(B_{q}(t),\bar{C}_{q}(t)\). (Note that \(B_{q}(0),\bar{C}_{q}(0)\neq 0\) although \(B_{q}(0)+B_{g}(0)=\bar{C}_{q}(t)+\bar{C}_{g}(t)=0\).) Nevertheless we may try to rewrite it in a way similar to (85)
\[\sum_{q}\frac{3D_{q}(t)-B_{q}(t)}{4}=-\frac{M}{t}\left(\frac{ \langle P_{2}|\frac{\beta_{q}}{2g}F^{2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})} -\left.\frac{\langle P_{2}|\frac{\beta_{q}}{2g}F^{2}|P_{1}\rangle}{\bar{u}(P_{ 2})u(P_{1})}\right|_{t=0}\right)+\frac{M^{2}}{t}\sum_{q}\Bigl{(}A_{q}(t)+4 \bar{C}_{q}(t)-A_{q}(0)-4\bar{C}_{q}(0)\Bigr{)}. \tag{95}\]
Let us now discuss how the constraint (95) from the trace anomaly is encoded in the GPDs. We have argued that the anomaly poles in (4) should be absorbed into the unpolarized GPDs,
\[\sum_{q}(H_{q}(x,\xi,t)-H_{q}(-x,\xi,t)) = \frac{T_{R}n_{f}\alpha_{s}}{\pi}\frac{M^{2}}{t}C^{\rm anom} \otimes\mathcal{F}(x,\xi,t)+\cdots, \tag{96}\] \[\sum_{q}(E_{q}(x,\xi,t)-E_{q}(-x,\xi,t)) = -\frac{T_{R}n_{f}\alpha_{s}}{\pi}\frac{M^{2}}{t}C^{\rm anom} \otimes\mathcal{F}(x,\xi,t)+\cdots, \tag{97}\]
where \(C^{\rm anom}\) is defined in (31). Taking the second moment and comparing with (91), we find
\[\sum_{q}A_{q}(t)\Bigg{|}_{\rm pole}=-\sum_{q}B_{q}(t)\Bigg{|}_{ \rm pole}=-\frac{M}{t}\frac{\langle P_{2}|\frac{T_{R}n_{f}\alpha_{s}}{6\pi}F^ {\mu\nu}(i\overleftrightarrow{D}^{+})^{2}F_{\mu\nu}|P_{1}\rangle}{(P^{+})^{2} \bar{u}(P_{2})u(P_{1})}, \tag{98}\]
\[\sum_{q}D_{q}(t)\Bigg{|}_{\rm pole}=-\frac{M}{t}\frac{\langle P_{ 2}|\frac{T_{R}n_{f}\alpha_{s}}{6\pi}F^{2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})}. \tag{99}\]
Eq. (99) seems to reproduce the first term of (95) after \(\beta_{q}\) is expanded to lowest order. However, apparently there is a factor \(\frac{3}{4}\) mismatch in the normalization. Besides, the previous argument around (85) did not hint at the possible existence of an anomaly pole in the \(A_{q},B_{q}\) form factors.
In order to understand these differences, we quote the one-loop result for the energy momentum tensor matrix element between on-shell gluon (not nucleon) states [48]:
\[\langle p_{2}|\Theta^{\alpha\beta}_{q}|p_{1}\rangle=-\frac{T_{R}\alpha_{s}}{6 \pi}\left(\frac{p^{\alpha}p^{\beta}}{t}+\frac{l^{\alpha}l^{\beta}-tg^{\alpha \beta}}{4t}\right)\langle p_{2}|F^{\mu\nu}F_{\mu\nu}|p_{1}\rangle+\cdots. \tag{100}\]
A superficial comparison with (90) suggests that poles of equal magnitude are induced in the \(A_{q},B_{q},D_{q}\) form factors
\[A_{q}(t)\approx-B_{q}(t)\approx D_{q}(t)\sim\frac{\langle\alpha_{s}F^{2} \rangle}{t}, \tag{101}\]
and the issue of the factor \(\frac{3}{4}\) goes away because \(\frac{3D_{q}(t)-B_{q}(t)}{4}\approx D_{q}(t)\) on the left hand side of (95). Taking the trace of (100), we find
\[\langle p_{2}|(\Theta_{q})^{\alpha}_{\alpha}|p_{1}\rangle=\langle p_{2}|\frac {T_{R}\alpha_{s}}{6\pi}F^{2}|p_{1}\rangle, \tag{102}\]
which is the correct trace anomaly relation to this order. To obtain (102), it is important to use the on-shell condition \(p^{2}=-t/4\) of the external states, so that the two terms in (100) contribute \(\frac{1}{4}\) and \(\frac{3}{4}\) of the total anomaly, respectively. Going from gluon to nucleon targets, we see that the way the trace anomaly relation (93) is fulfilled among various form factors is highly nontrivial. A different, spin-2 operator \(F(D^{+})^{2}F\) is involved in the \(A_{q},B_{q}\) form factor (98) due to the convolution integral in \(x\). Moreover, a naive identification \(p^{\alpha}p^{\beta}\to P^{\alpha}P^{\beta}\) is precarious because the nucleon is massive \(P^{2}=M^{2}-t/4\). While the difference is negligible when \(\sqrt{|t|}\gg M\), this obscures the fate of the poles in (98) as \(t\) gets smaller.
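For completeness, the \(\frac{1}{4}\)–\(\frac{3}{4}\) split quoted above follows from contracting (100) with \(g_{\alpha\beta}\) in four dimensions, using \(p^{2}=-t/4\) and \(l^{2}=t\):

\[g_{\alpha\beta}\left(\frac{p^{\alpha}p^{\beta}}{t}+\frac{l^{\alpha}l^{\beta}-tg^{\alpha\beta}}{4t}\right)=\frac{p^{2}}{t}+\frac{t-4t}{4t}=-\frac{1}{4}-\frac{3}{4}=-1,\]

which turns the overall factor \(-\frac{T_{R}\alpha_{s}}{6\pi}\) of (100) into \(+\frac{T_{R}\alpha_{s}}{6\pi}\) in (102).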
On the other hand, the tensor \(l^{\alpha}l^{\beta}\) is formally identical for both the nucleon and gluon targets, \(l^{\alpha}=p^{\alpha}_{2}-p^{\alpha}_{1}=P^{\alpha}_{2}-P^{\alpha}_{1}\). We may therefore expect that the anomaly relation at the partonic level is better reflected in the \(D_{q}(t)\) form factor even at the hadronic level, just like the \(g_{P}(t)\) form factor which is the coefficient of \(l^{\alpha}\). Indeed, the opposite signs in (96) and (97) suggest that the pole terms mainly feed into the so-called Polyakov-Weiss D-term [49] of the unpolarized GPDs,
\[H_{q}^{\rm PW}(x,\xi,t)=-E_{q}^{\rm PW}(x,\xi,t)=\theta(\xi-|x|)D_{q}(x/\xi,t). \tag{103}\]
The distribution \(D_{q}(z,t)\) is odd in \(z\) and is solely responsible for the highest power of \(\xi\) in the moments of GPDs. In order to extract it, we take the \(n\)-th moment of (96) with odd integers \(n\),
\[\sum_{q}\int_{-1}^{1}dxx^{n}H_{q}(x,\xi,t) \approx \frac{T_{R}n_{f}\alpha_{s}}{\pi}\frac{M^{2}}{t}\int_{0}^{1}dx \frac{x^{n}}{(n+2)(n+3)}\frac{1-\left(\frac{\xi}{x}\right)^{n+3}}{1-\frac{\xi ^{2}}{x^{2}}}\left({\cal F}(x,\xi,t)-{\cal F}(x,\xi,0)\right) \tag{104}\] \[\equiv \sum_{i=0}^{n+1}\sum_{q}h_{qn}^{i}\xi^{i},\]
where we have minimally subtracted the pole at \(t=0\) as in (89). The highest power \(h_{qn}^{n+1}\) is related to \(D_{q}(z,t)\) as
\[\sum_{q}\int_{-1}^{1}dzz^{n}D_{q}(z,t) \approx \sum_{q}h_{qn}^{n+1}(t) \tag{105}\] \[= \frac{T_{R}n_{f}\alpha_{s}}{\pi}\frac{M^{2}}{t}\frac{1}{(n+2)(n+3 )}\int_{0}^{1}\frac{dx}{x}\left({\cal F}(x,\xi,t)-{\cal F}(x,\xi,0)\right)\] \[= -2\frac{T_{R}n_{f}\alpha_{s}}{\pi}\frac{M}{t}\frac{1}{(n+2)(n+3) }\left(\frac{\langle P_{2}|F^{2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})}-\left. \frac{\langle P_{2}|F^{2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})}\right|_{t=0} \right).\]
By definition, the \(n=1\) moment is the gravitational form factor \(\int_{-1}^{1}dzzD_{q}(z,t)=D_{q}(t)\). Inverting the Mellin transform (105) and noting that \(D_{q}(z,t)\) is an odd function of \(z\), we obtain
\[\sum_{q}D_{q}(z,t)\approx-\frac{T_{R}n_{f}\alpha_{s}}{\pi}z(1-|z|)\frac{M}{t} \left(\frac{\langle P_{2}|F^{2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})}-\left. \frac{\langle P_{2}|F^{2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})}\right|_{t=0} \right), \tag{106}\]
and in particular,
\[\sum_{q}D_{q}(t)\approx-\frac{M}{t}\left(\frac{\langle P_{2}|\frac{T_{R}n_{f}\alpha_{s}}{6\pi}F^{2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})}-\left.\frac{\langle P_{2}|\frac{T_{R}n_{f}\alpha_{s}}{6\pi}F^{2}|P_{1}\rangle}{\bar{u}(P_{2})u(P_{1})}\right|_{t=0}\right). \tag{107}\]
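The Mellin inversion leading to (106) rests on the elementary moments (for odd \(n\))

\[\int_{-1}^{1}dz\,z^{n}\,z(1-|z|)=2\int_{0}^{1}dz\,z^{n+1}(1-z)=\frac{2}{(n+2)(n+3)},\]

which reproduce the \(n\)-dependence of (105); the case \(n=1\) supplies the factor \(\frac{1}{6}\) that converts the prefactor \(\frac{T_{R}n_{f}\alpha_{s}}{\pi}\) of (106) into \(\frac{T_{R}n_{f}\alpha_{s}}{6\pi}\) in (107).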
Since \(\langle P|F^{2}|P\rangle<0\) in QCD and the form factor \(\langle P_{2}|F^{2}|P_{1}\rangle\) is a decreasing function of \(|t|\), the right-hand side of (106) is positive, whereas \(D_{q}(t)\) is usually believed to be negative. While we expect that eventually the leading-order coefficient \(\frac{T_{R}n_{f}\alpha_{s}}{6\pi}\) will be replaced by \(\frac{\beta_{q}}{2g}\) after including higher-order corrections, according to the three-loop analyses in [45; 46; 47; 50] the sign does not flip, \(\frac{\beta_{q}}{2g}>0\). This suggests that the other terms in (95) that were neglected in the above minimal subtraction procedure may be numerically important, as we already suspected in the argument below (85). Note also that the sign does flip if one includes the gluon contribution to recover the full beta function of QCD, \(\frac{\beta_{q}}{2g}\rightarrow\frac{\beta}{2g}<0\).
## VII Conclusions
In this work, we have performed a complete one-loop calculation of the Compton scattering amplitude using momentum transfer \(t\) as the regulator of the collinear singularity. Our approach differs from all the previous calculations in the GPD literature where one typically uses dimensional regularization to isolate the collinear singularity and sets \(t=0\) right from the start, assuming that nonzero \(t\) only generates higher-twist corrections of order \(t/Q^{2}\). In practice, the introduction of an additional variable \(t\) makes the calculation more cumbersome and brings in unusual features. In the gluon initiated channel, we have found anomaly poles \(1/t\) (29), (30) accompanied by twist-four GPDs (9), (10) in both the real and imaginary parts of the Compton amplitude, confirming and extending our previous finding [6]. In the quark initiated channel, we have unexpectedly found uncancelled single and double IR poles in the 'coefficient functions' (25), (26). Each of these features potentially implies the violation of factorization. However, we have also performed the one-loop calculation of GPDs for quark and gluon states with the same set of regulators and showed how all these poles can be systematically absorbed into the GPDs themselves. This shows that QCD factorization is restored at least to this order.
This is however not the end of the story. We have also explored connections between GPDs and anomalies, as a natural and necessary consequence of the known connections between form factors and anomalies. We have argued that once the poles \(1/t\) have been absorbed into GPDs, they become a part of the GPDs. In other words, anomalies nonperturbatively relate twist-two and twist-four GPDs. Such relations, once integrated over \(x\), are expected to reproduce the constraints among the corresponding form factors. This scenario seems to be working for the polarized GPD \(\bar{E}_{q}\) and its connection to the chiral anomaly. Relation (89), partly supported by the large-\(N_{c}\) argument, can be viewed as the \(x\)-dependent generalization of the form factor relation (77). The situation is more complicated (and more interesting) for the unpolarized GPDs \(H_{q},E_{q}\) and their relation to the trace anomaly. We have argued that the anomaly mostly constrains the \(D_{q}(t)\) form factor and its GPD analog, the Polyakov-Weiss D-term. The results we have arrived at (106) (107) are roughly consistent with the anomaly relation (95), but they differ in detail. Further investigation in this direction is necessary.
In conclusion, we have proposed finite-\(t\) regularization as an alternative factorization scheme that elucidates the physics of anomalies. This is a scheme where we are able to unravel novel connections between twist-two and twist-four GPDs mediated by the anomalies of QCD. Admittedly, the calculation is more cumbersome than the standard dimensional regularization with \(t=0\). Still, the chiral and trace anomalies are among the most fascinating phenomena of QCD with far-reaching consequences, and we believe that research on GPDs is enriched by incorporating such fundamental problems. There are a number of directions along which the current work can be refined or extended, in addition to the aforementioned tension between (95) and (107). First, we strongly suspect that anomaly poles are present in higher order perturbation theory. Especially in the symmetric case, we expect that each additional loop provides the corresponding term in the expansion of the (quark part of the) beta function. A related question is whether there are anomaly poles in the _gluon_ GPDs that complement the quark ones to restore the full beta function \(\beta=\beta_{q}+\beta_{g}\)[44]. Another important question which has not been addressed at all in this paper is how to understand the new relations from a renormalization group point of view. In the present scheme, the mixing between the twist-two and twist-four GPDs occurs as a result of a finite subtraction rather than the DGLAP evolution of GPDs. The UV properties of the twist-four GPDs (9) and (10) have been studied in [19; 20; 21; 22], but more work is certainly needed. Furthermore, it is well known that at twist-3 accuracy, the amplitude for DVCS off the nucleon contains twist-3 GPDs apart from the usual twist-2 GPDs. It is also interesting to pursue whether or not there are imprints of anomalies on twist-3 GPDs and related observables. Finally, constraints from anomalies should be implemented in the modeling
of GPDs. In particular, the specific functional form given in (106) might be helpful to model this poorly constrained distribution.
###### Acknowledgements.
We are very grateful to Vladimir Braun and Anatoly Radyushkin for many useful discussions. We also thank Kornelija Passek-Kumericki, Swagato Mukherjee, Kazuhiro Tanaka, Raju Venugopalan and Christian Weiss for discussions. S. B. and Y. H. are supported by the U.S. Department of Energy under Contract No. DE-SC0012704, and Laboratory Directed Research and Development (LDRD) funds from Brookhaven Science Associates. Y. H. is also supported by the framework of the Saturated Glue (SURGE) Topical Theory Collaboration. W. V. has been supported by Deutsche Forschungsgemeinschaft (DFG) through the Research Unit FOR 2926 (project 409651613).
## Appendix A Derivation of Eq. (37)
In this appendix we give an outline of the derivation of (37) and (38). The other results in Section V can be derived similarly. For the quark matrix elements, we work in the Feynman gauge. The ladder diagram reads, up to a prefactor,
\[\mu^{2\epsilon}\int\frac{dk^{-}d^{2-2\epsilon}k_{\perp}}{(2\pi)^{3-2\epsilon}}\bar{u}(p+l/2)\frac{\gamma_{\mu}(\not{k}+\not{l}/2)\gamma^{+}(\not{k}-\not{l}/2)\gamma^{\mu}}{(p-k)^{2}(k-l/2)^{2}(k+l/2)^{2}}u(p-l/2), \tag{A1}\]
where
\[k^{+}=xp^{+},\quad l^{+}=-2\xi p^{+},\quad l^{-}=-\frac{\xi l^{2}}{4p^{+}},\quad\overline{l}_{\perp}^{2}=(\xi^{2}-1)l^{2},\quad p^{-}=-\frac{l^{2}}{8p^{+}}. \tag{A2}\]
In the DGLAP region \(\xi<x<1\), the \(k^{-}\) integral can be done by picking up the pole of the propagator \(1/[(p-k)^{2}+i\epsilon]\), located at
\[k^{-}=-\frac{k_{\perp}^{2}+\frac{1-x}{4}l^{2}}{2(1-x)p^{+}}, \tag{A3}\]
in the upper half plane. The remaining propagators can be combined as
\[\frac{1}{1-x}\int_{0}^{1}da\frac{A(k_{\perp})}{(k^{2}+(1-2a)k\cdot l+\frac{l^{2}}{4})^{2}}=\int_{0}^{1}da\,\frac{1-x}{(1-\xi(1-2a))^{2}}\,\frac{A\big{(}k_{\perp}^{\prime}-\frac{(1-2a)(1-x)}{2(1-\xi(1-2a))}l_{\perp}\big{)}}{\left(k_{\perp}^{\prime 2}-\frac{(1-a)a(1-x)^{2}l^{2}}{(1-\xi(1-2a))^{2}}\right)^{2}}, \tag{A4}\]
where in the denominator we have shifted momentum \(k_{\perp}\to k_{\perp}^{\prime}\) to complete the square. In the numerator we have projected onto the twist-two component,
\[\bar{u}(p+l/2)\cdots u(p-l/2)\rightarrow\left[(1-\epsilon)k_{\perp}^{\prime 2}+(B-\epsilon C)l^{2}\right]\bar{u}\gamma^{+}u\equiv A\bar{u}\gamma^{+}u, \tag{A5}\]
with
\[B=\frac{(1-\xi+a(x+2\xi-1))(x-\xi-a(x-2\xi-1))}{(1-\xi(1-2a))^{2}},\quad C=\frac{(1-a)a(1-x)^{2}}{(1-\xi(1-2a))^{2}}. \tag{A6}\]
The terms linear in \(k_{\perp}^{\prime}\) have been dropped in (A5) since they vanish after the \(k_{\perp}^{\prime}\) integral:
\[\mu^{2\epsilon}\int_{0}^{1}da\frac{1-x}{(1-\xi(1-2a))^{2}}\int\frac{d^{2-2\epsilon}k_{\perp}^{\prime}}{(2\pi)^{2-2\epsilon}}\frac{(1-\epsilon)k_{\perp}^{\prime 2}+(B-\epsilon C)l^{2}}{(k_{\perp}^{\prime 2}-Cl^{2})^{2}}\] \[\approx\left(\frac{\mu^{2}}{-l^{2}}\right)^{\epsilon}\int_{0}^{1}da\frac{1-x}{(1-\xi(1-2a))^{2}}\frac{1}{(4\pi)^{1-\epsilon}}\left(\frac{(1-2\epsilon)\Gamma(\epsilon)}{C^{\epsilon}}-\frac{B\Gamma(1+\epsilon)}{C^{1+\epsilon}}\right). \tag{A7}\]
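For reference, the two standard \(d=2-2\epsilon\) master integrals used here are (with \(\Delta\equiv-Cl^{2}>0\))

\[\mu^{2\epsilon}\int\frac{d^{2-2\epsilon}k_{\perp}^{\prime}}{(2\pi)^{2-2\epsilon}}\frac{1}{(k_{\perp}^{\prime 2}-Cl^{2})^{2}}=\frac{\mu^{2\epsilon}}{(4\pi)^{1-\epsilon}}\frac{\Gamma(1+\epsilon)}{\Delta^{1+\epsilon}},\qquad\mu^{2\epsilon}\int\frac{d^{2-2\epsilon}k_{\perp}^{\prime}}{(2\pi)^{2-2\epsilon}}\frac{k_{\perp}^{\prime 2}}{(k_{\perp}^{\prime 2}-Cl^{2})^{2}}=\frac{\mu^{2\epsilon}}{(4\pi)^{1-\epsilon}}\frac{(1-\epsilon)\Gamma(\epsilon)}{\Delta^{\epsilon}};\]

the \((1-\epsilon)\) from the numerator of (A7) combines with the \((1-\epsilon)\Gamma(\epsilon)\) of the second integral into the \((1-2\epsilon)\Gamma(\epsilon)\) shown, up to terms of \(\mathcal{O}(\epsilon)\).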
The first integral gives a UV pole:
\[\frac{\Gamma(\epsilon_{\rm UV})}{4\pi}\left(\frac{4\pi\mu^{2}}{-l^{2}}\right)^{\epsilon}\int_{0}^{1}da\frac{(1-x)(1-2\epsilon)}{(1-\xi(1-2a))^{2}C^{\epsilon}}\] \[=\frac{1}{4\pi}\left(\frac{\tilde{\mu}^{2}}{-l^{2}}\right)^{\epsilon}\left(\frac{1}{\epsilon_{\rm UV}}\frac{1-x}{1-\xi^{2}}-\frac{(1-x)(2\ln(1-x)-\ln(1-\xi^{2}))}{1-\xi^{2}}\right), \tag{A8}\]
while the second integral gives double and single IR poles:
\[\left(\frac{4\pi\mu^{2}}{-l^{2}}\right)^{\epsilon}\int_{0}^{1}da\frac{1-x}{(1-\xi(1-2a))^{2}}\,\frac{-B\Gamma(1+\epsilon)}{C^{1+\epsilon}} \tag{A9}\] \[=\left(\frac{\tilde{\mu}^{2}}{-l^{2}}\right)^{\epsilon}\frac{1}{(1-x)^{1+2\epsilon}}\left(\frac{2(x-\xi^{2})}{1-\xi^{2}}\frac{1}{\epsilon_{\rm IR}}-\frac{(1-x)^{2}-2(x-\xi^{2})\ln(1-\xi^{2})}{1-\xi^{2}}+\epsilon f(x)\right)\] \[=\left(\frac{\tilde{\mu}^{2}}{-l^{2}}\right)^{\epsilon}\Bigg{[}-\delta(1-x)\left(\frac{1}{\epsilon_{\rm IR}^{2}}+\frac{\ln(1-\xi^{2})}{\epsilon_{\rm IR}}\right)+\frac{2(x-\xi^{2})}{(1-x)_{+}(1-\xi^{2})}\frac{1}{\epsilon_{\rm IR}}-\frac{4(x-\xi^{2})}{1-\xi^{2}}\left(\frac{\ln(1-x)}{1-x}\right)_{+}\] \[-\frac{(1-x)^{2}-2(x-\xi^{2})\ln(1-\xi^{2})}{(1-\xi^{2})(1-x)_{+}}-\frac{1}{2}\delta(1-x)\left(\ln^{2}(1-\xi^{2})-\frac{\pi^{2}}{6}\right)\Bigg{]}.\]
Here \(f(x)\) is a certain function whose value at \(x=1\) is the only thing we need.
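The expansion from the second to the third line of (A9) uses the standard distributional identity

\[\frac{1}{(1-x)^{1+2\epsilon}}=-\frac{1}{2\epsilon}\,\delta(1-x)+\frac{1}{(1-x)_{+}}-2\epsilon\left(\frac{\ln(1-x)}{1-x}\right)_{+}+\mathcal{O}(\epsilon^{2}),\]

which converts the overall factor \(1/(1-x)^{1+2\epsilon}\) into the \(\delta(1-x)\) and plus-distribution terms shown there.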
In Feynman gauge, there are two other diagrams, giving
\[\int\frac{d^{d}k}{(2\pi)^{d-1}}\bar{u}(p+l/2)\frac{\gamma^{+}\not{k}\gamma^{+}\big{(}\delta(k^{+}-(x+\xi)p^{+})-\delta((1-x)p^{+})\big{)}}{k^{2}(p-l/2-k)^{2}((1+\xi)p^{+}-k^{+})}u(p-l/2), \tag{A10}\]
and the corresponding contribution for its mirror diagram. They can be similarly evaluated. The result is
\[\left(\frac{2(x-\xi^{2})}{(1-\xi^{2})(1-x)_{+}}+\delta(1-x)\left(2-\ln(1-\xi^{2})\right)\right)\frac{\left(\frac{\tilde{\mu}^{2}}{-l^{2}}\right)^{\epsilon}}{4\pi}\left(\frac{1}{\epsilon_{\rm UV}}-\frac{1}{\epsilon_{\rm IR}}\right). \tag{A11}\]
Adding also the quark self-energy diagrams on the external legs, we arrive at (37) and (38). We note that the calculation can also be performed in Landau gauge, which has the advantage that the self-energy diagrams vanish identically.
In the ERBL region \(\xi>x\), the pole of \((k+l/2)^{2}\) in (A1) moves to the upper half plane. We thus pick up the pole of \((k-l/2)^{2}\) at
\[k_{c}^{-}=\frac{k_{\perp}^{2}-(x\xi+1)\frac{l^{2}}{4}-\vec{k}_{\perp}\cdot\vec{l}_{\perp}}{2(x+\xi)p^{+}}, \tag{A12}\]
in the lower half plane. The first term in (A10) also contributes (but not its mirror diagram).
|
2310.13807 | Learning to (Learn at Test Time) | We reformulate the problem of supervised learning as learning to learn with
two nested loops (i.e. learning problems). The inner loop learns on each
individual instance with self-supervision before final prediction. The outer
loop learns the self-supervised task used by the inner loop, such that its
final prediction improves. Our inner loop turns out to be equivalent to linear
attention when the inner-loop learner is only a linear model, and to
self-attention when it is a kernel estimator. For practical comparison with
linear or self-attention layers, we replace each of them in a transformer with
an inner loop, so our outer loop is equivalent to training the architecture.
When each inner-loop learner is a neural network, our approach vastly
outperforms transformers with linear attention on ImageNet from 224 x 224 raw
pixels in both accuracy and FLOPs, while (regular) transformers cannot run. | Yu Sun, Xinhao Li, Karan Dalal, Chloe Hsu, Sanmi Koyejo, Carlos Guestrin, Xiaolong Wang, Tatsunori Hashimoto, Xinlei Chen | 2023-10-20T20:42:00Z | http://arxiv.org/abs/2310.13807v2 | # Learning to (Learn at Test Time)
###### Abstract
We reformulate the problem of supervised learning as learning to learn with two nested loops (_i.e._ learning problems). The inner loop learns on each individual instance with self-supervision before final prediction. The outer loop learns the self-supervised task used by the inner loop, such that its final prediction improves. Our inner loop turns out to be equivalent to linear attention when the inner-loop learner is only a linear model, and to self-attention when it is a kernel estimator. For practical comparison with linear or self-attention layers, we replace each of them in a transformer with an inner loop, so our outer loop is equivalent to training the architecture. When each inner-loop learner is a neural network, our approach vastly outperforms transformers with linear attention on ImageNet from \(224\times 224\) raw pixels in both accuracy and FLOPs, while (regular) transformers cannot run.1
Footnote 1: Code release: [https://github.com/test-time-training/mttt](https://github.com/test-time-training/mttt).
## 1 Introduction
Test-time training (TTT) is an algorithmic framework for machine learning. The core idea is that each test instance defines its own learning problem, with its own target of generalization (Sun et al., 2020). Since the test instance comes without its label, TTT is performed with a self-supervised task such as reconstruction. Performance should improve on this particular instance for the self-supervised task, because that is the objective optimized by TTT. But will such a process lead to better performance for the main task we actually care about?
If improvement for a self-supervised task transfers to a given main task, we say the two tasks are _aligned_(Sun et al., 2020). In prior work, task alignment has been an art, combining ingenuity with trial and error (Gandelsman et al., 2022; Wang et al., 2023). Crucially, the amount of ingenuity in task design does not scale with more data and compute. Our main approach is to learn an aligned self-supervised task from data, instead of handcrafting it from human priors. Specifically, we learn a self-supervised task such that TTT on it actually improves performance on the main task.
Since TTT already defines a learning problem, learning its self-supervised task is a form of _learning to learn, i.e._ meta-learning or bi-level optimization (Schmidhuber, 1987). The literature refers to the two nested learning problems as the inner and outer loop. At training time, the _inner loop_ learns with self-supervision on each training instance individually, as if it were a test instance. The _outer loop_ learns to align the self-supervised task with the main task on the entire training set. At test time, we only invoke the inner loop, _i.e._ TTT. We name our algorithm MTTT, with M for meta.
To better understand MTTT, we look at its simplest nontrivial instantiation, where all components are linear models, and the inner loop takes only one gradient step. Given fixed outer-loop parameters, the inner loop turns out to be equivalent to forward inference with linear attention, _i.e._ self-attention without softmax (Katharopoulos et al., 2020). For a linear transformer, _i.e._ transformer with only linear attention layers, we can replace each with an inner loop. Nesting multiple such inner loops into one outer loop, the most naive case of MTTT is equivalent to training a linear transformer.
It also turns out that our inner loop with a particular kernel estimator is theoretically equivalent to self-attention (with softmax), so MTTT with multiple such inner loops is equivalent to training a transformer. This suggests that our framework is compatible with existing, successful architectures.
To extend beyond existing equivalences, we investigate TTT with neural networks. This performs much better than TTT with linear models (_i.e._ linear transformers), in settings where transformers run out of memory and time. Given the freedom inside our inner loop, we can augment it with heuristics like output normalization and stochastic gradient descent that improve results even more.
Our inner loop _mirrors_ regular (non-meta) learning in design, because it breaks each instance into pieces, _i.e._ tokens, that are explicitly treated as data. This perspective is further validated by our empirical evidence, which is not explained through any existing perspective for architecture design. Given the historic success of deep learning over kernels and linear models, we conjecture that such success can potentially be replicated in our inner loop, with more compute and data under MTTT.
## 2 Inner Loop: Test-Time Training with Reconstruction
The architecture for TTT has a shared feature extractor with two output heads. The self-supervised task has a head \(g\), and the main task has a head \(h\). At test time, the model can only learn from the self-supervised task, so the heads share a feature extractor \(f\). This way, TTT can update the shared features, thus helping the main task if it uses the same kind of features as the self-supervised task. Altogether, this architecture looks like the letter 'Y', where \(f\) is the stem, \(g\) and \(h\) are the branches.
In principle, TTT is compatible with any choice of self-supervised task. Here we focus on one general-purpose and domain-agnostic family of self-supervised tasks - reconstruction, since it has been highly effective in prior work (Vincent et al., 2008; Pathak et al., 2016; Brown et al., 2020; Bao et al., 2021; He et al., 2021). For reconstruction, the feature extractor \(f\) is also known as the encoder, and the self-supervised head \(g\) as the decoder; \(g\circ f\) together is called an autoencoder.
Following a standard process called tokenization, each instance is always broken into a sequence of \(n\) tokens, so we denote both the instance and sequence by \(X=(x_{1},\dots,x_{n})\), with token \(x_{i}\in\mathbb{R}^{d}\).2 Our basic unit of reconstruction is each individual token \(x_{i}\). The reconstruction target is \(x_{i}\) itself, but the input is transformed by a given function \(\phi\), such as adding noise (Vincent et al., 2008) and random masking (He et al., 2021). For each \(X\), we optimize the parameters of \(f\), denoted by \(W\). Overall, the self-supervised loss is
Footnote 2: To be precise, \(x_{i}\in\mathbb{R}^{d}\) is actually the token’s embedding, not the token itself. For \(X\) a paragraph of text, each token is usually a (sub-)word; for \(X\) an image, each token is usually a patch or pixel. While the type of tokens can potentially be non-numeric, standard techniques are available to embed them into vectors.
\[\ell(W;X)=\frac{1}{2n}\sum_{i=1}^{n}\big{\|}g\circ f\left(\phi(x_{i});W\right)- x_{i}\big{\|}^{2}. \tag{1}\]
Note that the decoder \(g\) is also considered given within the scope of TTT, which only updates \(W\).3 Optimization is performed with \(T\) gradient steps. For each \(t=1,\dots,T\),
Footnote 3: While the decoder \(g\) also contains learnable parameters, we do not optimize them during TTT in this paper. Our choice, although nonstandard for autoencoders, makes learning to learn conceptually easier in Section 3. Moreover, Sun et al. (2020) and Gandelsman et al. (2022) have shown that whether or not \(g\) is optimized during TTT makes little empirical difference. In fact, for \(T=1\) (using notations defined for Equation 2), whether or not a gradient step is taken on \(g\) does not matter at all, because \(g\) affects the final prediction only through \(W_{1}\).
\[W_{t}=W_{t-1}-\eta\nabla\ell(W_{t-1};X), \tag{2}\]
where the initial value \(W_{0}\) and the learning rate \(\eta\) are given, like \(\phi\) and \(g\).
For the main task, we also transform its input \(x_{i}\) by a given function \(\psi\), in the spirit of symmetry to \(\phi\) for the self-supervised task. In prior work, \(\psi\) has mostly been the identity transform, but Section 3 will make \(\psi\) nontrivial, adding expressiveness to the outer loop. Next, we produce the main task outputs by applying \(h\circ f\) individually on each \(\psi(x_{i})\). For convenience, we overload \(h,f\) and \(\phi\) so they can produce an output sequence from an input sequence:
\[X_{\text{out}}=h\circ f\left(\psi(X);W_{T}\right)=\bigg{(}h\circ f\left(\psi( x_{1});W_{T}\right),\dots,h\circ f\left(\psi(x_{n});W_{T}\right)\bigg{)}. \tag{3}\]
Equation 3 could be the last step for main tasks that require \(n\) predictions (_e.g._ language modeling), but for other tasks that require a single prediction (_e.g._ object recognition), it is standard to apply an aggregation function across the output sequence, predicting \(\hat{y}=\texttt{aggregate}(X_{\text{out}})\) in the end.
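To make the inner loop concrete, here is a minimal JAX sketch of Equations 1 and 2 (our own illustration, not the authors' released code; `f`, `g` and `phi` are placeholder callables for the components described above):

```python
# Minimal sketch of the TTT inner loop: T gradient steps on the per-token
# reconstruction loss of Eq. (1), following the update rule of Eq. (2).
import jax
import jax.numpy as jnp

def ttt(W0, X, f, g, phi, T=1, eta=1.0):
    def ell(W):                                        # Eq. (1)
        recon = jax.vmap(lambda x: g(f(phi(x), W)))(X)
        return 0.5 * jnp.mean(jnp.sum((recon - X) ** 2, axis=-1))
    W = W0
    for _ in range(T):                                 # Eq. (2)
        W = W - eta * jax.grad(ell)(W)
    return W

# Toy usage: a linear f with identity g and phi, on 16 tokens of dimension 4.
X = jax.random.normal(jax.random.PRNGKey(0), (16, 4))
W_T = ttt(jnp.zeros((4, 4)), X, f=lambda x, W: W @ x,
          g=lambda z: z, phi=lambda x: x, T=3)
```

The main-task outputs of Equation 3 are then obtained by applying `h(f(psi(x), W_T))` to each token with the returned `W_T`.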
### Context Window as a Dataset
In standard terminology, \(X=(x_{1},\ldots,x_{n})\) is called the context window, and \(n\) the window length. But for TTT, \(X\) is a dataset of size \(n\), where each token \(x_{i}\) is actually a non-independent and non-identically distributed piece of data. This intuition is consistent with our algorithm: Equation 1 simply sums the losses individually across tokens, just like across pieces of data; Equation 3 also processes each \(x_{i}\) individually as a "test token", like how a fixed model processes each test instance.
Tokenization enables us to reuse \(f\) on \(n\) different parts (tokens) of \(X\), by treating them as pieces of data, and \(X\) as a dataset. It brings the units of operation for TTT "one level below" their traditional sense in machine learning, where \(X\) is a piece of data, and a collection of \(X\)s is a dataset. TTT can be applied without tokenization, but then \(X\) would be singleton, unless augmentations are used to create an artificial batch like in Sun et al. (2020).
## 3 Outer Loop: Learning the Self-Supervised Task for TTT
As noted above, TTT does not modify the initialization \(W_{0}\) for encoder \(f\), the transformations \(\phi\) and \(\psi\), or the decoder \(g\) and main task head \(h\). Altogether, these important components must be determined outside of the scope of TTT. Prior work has tried various heuristics, discussed in Subsection 6.2. Here we take the more principled approach of directly optimizing the final prediction loss on the main task after \(T\) steps of TTT.
We first explicitly express the learnable parameters that were hidden in Section 2 because they were considered given within the scope of the inner loop. These are the parameters of \(g,h\), \(\phi\) and \(\psi\), denoted by \(\theta_{g}\), \(\theta_{h}\), \(\theta_{\phi}\) and \(\theta_{\psi}\). We group them together with \(W_{0}\) into \(\boldsymbol{\theta}=(\theta_{g},\theta_{h},\theta_{\phi},\theta_{\psi},W_{0})\), since they will all be learned in the outer loop. Technically, \(\boldsymbol{\theta}\) should also contain the learnable parameters of aggregate, which we omit for convenience.
Now we derive the outer-loop objective \(\mathcal{L}_{T}\). Denote the main task loss by \(\mathcal{L}\), _e.g._ the cross-entropy loss. In the trivial case, for \(T=0\), _i.e._ without TTT, the final prediction loss is exactly \(\mathcal{L}\). To be precise, for each instance \(X\) with unknown label \(y\),
\[\mathcal{L}_{0}\big{(}\boldsymbol{\theta};X,y\big{)}=\mathcal{L}\big{(}h \circ f(\psi(X);W_{0}),y\big{)}. \tag{4}\]
For \(T=1\), the parameters of \(f\) become \(W_{1}=W_{0}-\eta\nabla\ell(W_{0};X)\), as defined in Equation 1. Therefore, the final prediction loss for the main task is
\[\mathcal{L}_{1}\big{(}\boldsymbol{\theta};X,y\big{)}=\mathcal{L}\big{(}h \circ f\left(\psi(X);W_{1}\right),y\big{)}=\mathcal{L}\big{(}h\circ f\left( \psi(X);W_{0}-\eta\nabla\ell(W_{0};X)\right),y\big{)}. \tag{5}\]
For any \(T\geq 1\), \(\theta_{g}\) and \(\theta_{\phi}\) implicitly determine the inner-loop loss function \(\ell\) defined in Equation 1, therefore affect \(\mathcal{L}_{T}\) through \(\nabla\ell\). In other words, \(\theta_{g}\) and \(\theta_{\phi}\) parameterize the self-supervised task.4 Going further, for \(T\geq 2\),
Footnote 4: Note that even though \(\theta_{g}\) and \(\theta_{\phi}\) are included as arguments of \(\mathcal{L}_{T}\) for all values of \(T\), they do not actually matter for \(\mathcal{L}_{0}\). When the inner loop is trivial, _i.e_ runs for 0 iteration, learning to learn collapses to regular (non-meta) learning, and the self-supervised task does not matter.
\[\mathcal{L}_{T}\big{(}\boldsymbol{\theta};X,y\big{)}=\mathcal{L}\big{(}h \circ f\left(\psi(X);W_{T}\right),y\big{)} \tag{6}\]
would be cumbersome to write out in terms of \(W_{0}\), but can be expressed recursively, with \(W_{t}\) defined in Equation 2 for each \(t=1,\ldots,T\).
At training time, the outer loop calculates \(\mathcal{L}_{T}\) individually for each labeled training instance \(X\), then optimizes the average \(\mathcal{L}_{T}\) on the entire training set with (a variant of) stochastic gradient descent. Calculating \(\nabla\mathcal{L}_{T}(\boldsymbol{\theta};X,y)\) requires taking gradients through \(\nabla\ell(W_{t};X)\) for \(t=0,\ldots,T-1\), since the latter is implicitly a function of \(W_{0}\), \(\theta_{g}\) and \(\theta_{\phi}\). This turns out to be easily programmable in JAX, and surprisingly efficient in practice, as we will show in Section 5.
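As a concreteness check, the following self-contained toy sketch differentiates \(\mathcal{L}_{T}\) through the unrolled inner loop in JAX, instantiating all components as linear maps (anticipating Section 4). The shapes, the squared-error main-task loss, and all names are our illustrative choices, not the paper's code:

```python
# Toy outer loop: jax.grad differentiates through the inner-loop gradient
# steps, so the task parameters theta["g"], theta["phi"] receive gradients.
import jax
import jax.numpy as jnp

d, n = 4, 8
keys = jax.random.split(jax.random.PRNGKey(0), 5)
theta = {name: jax.random.normal(k, (d, d))
         for name, k in zip(["g", "phi", "psi", "h"], keys[:4])}
X = jax.random.normal(keys[4], (n, d))     # one instance = n tokens
y = jnp.ones(d)                            # toy regression target

def inner_loss(W, theta, X):               # Eq. (1) with linear components
    recon = (theta["g"].T @ W @ theta["phi"] @ X.T).T
    return 0.5 * jnp.mean(jnp.sum((recon - X) ** 2, axis=-1))

def outer_loss(theta, X, y, T=2, eta=1.0):
    W = jnp.zeros((d, d))                  # W_0
    for _ in range(T):                     # Eq. (2), unrolled
        W = W - eta * jax.grad(inner_loss)(W, theta, X)
    out = (W @ theta["psi"] @ X.T).T       # Eq. (3)
    return jnp.mean((out.mean(axis=0) @ theta["h"] - y) ** 2)  # aggregate + loss

grads = jax.grad(outer_loss)(theta, X, y)  # nonzero for every entry of theta
```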
## 4 Choice of Learner for Inner Loop
While our inner loop is a sequence of forward and backward operations, it can also be represented as a single forward operation on its unrolled computation graph, so the outer loop becomes regular (non-meta) learning using this graph as a fixed model. It turns out that for simple choices of the inner-loop learner, this equivalent graph can be interpreted through the lens of architecture design.
### TTT with Linear Models: Equivalence to Linear Attention
The simplest choice for the feature extractor \(f\) is a linear model:
\[f(x;W)=Wx. \tag{7}\]
And the outer-loop components \(g\), \(h\), \(\phi\) and \(\psi\) are linear as well. Specifically,
\[g(x;\theta_{g})=\theta_{g}^{T}x,\;\;h(x;\theta_{h})=\theta_{h}x,\;\;\phi(x; \theta_{\phi})=\theta_{\phi}x,\;\;\psi(x;\theta_{\psi})=\theta_{\psi}x. \tag{8}\]
To make the math even simpler, we always initialize the feature extractor with \(W_{0}=0\). Under this construction, the self-supervised loss in Equation 1 becomes
\[\ell\big{(}W;X\big{)}=\frac{1}{2n}\sum_{i=1}^{n}\|g\circ f\left(\phi(x_{i});W \right)-x_{i}\|^{2}=\frac{1}{2n}\sum_{i=1}^{n}\|\theta_{g}^{T}W\theta_{\phi}x_ {i}-x_{i}\|^{2}. \tag{9}\]
For \(W_{0}=0\), one gradient step with learning rate \(\eta=1\) produces
\[W_{1}=W_{0}-\nabla\ell\left(W_{0};X\right)=\frac{1}{n}\sum_{i=1}^{n}(\theta_{ g}x_{i})(\theta_{\phi}x_{i})^{T}. \tag{10}\]
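Indeed, differentiating (9) with respect to \(W\) gives

\[\nabla\ell(W;X)=\frac{1}{n}\sum_{i=1}^{n}\theta_{g}\big{(}\theta_{g}^{T}W\theta_{\phi}x_{i}-x_{i}\big{)}(\theta_{\phi}x_{i})^{T},\]

which at \(W=W_{0}=0\) reduces to \(-\frac{1}{n}\sum_{i=1}^{n}(\theta_{g}x_{i})(\theta_{\phi}x_{i})^{T}\) and yields (10).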
Using \(W_{1}\) as the updated weights for the feature extractor, the updated features for each token \(x_{j}\), \(j=1,\ldots,n\), becomes
\[f\left(\psi(x_{j});W_{1}\right)=\frac{1}{n}\sum_{i=1}^{n}(\theta_{g}x_{i})( \theta_{\phi}x_{i})^{T}\theta_{\psi}x_{j}. \tag{11}\]
This happens to be linear attention (explained in Appendix A), where \(\theta_{\phi}\), \(\theta_{\psi}\), \(\theta_{g}\) are the key, query, value weights. \(h\) is the projection operation used for multi-head attention, discussed in Appendix B.
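As a numerical sanity check of this equivalence, the following JAX snippet (our own sketch; dimensions and variable names are arbitrary) runs one inner-loop step and compares it against non-causal linear attention:

```python
# One TTT gradient step on the linear reconstruction loss (Eq. 9), from
# W_0 = 0 with eta = 1, reproduces linear attention (Eq. 11) numerically.
import jax
import jax.numpy as jnp

d, n = 8, 16
kg, kp, kq, kx = jax.random.split(jax.random.PRNGKey(0), 4)
theta_g = jax.random.normal(kg, (d, d))     # "value" weights
theta_phi = jax.random.normal(kp, (d, d))   # "key" weights
theta_psi = jax.random.normal(kq, (d, d))   # "query" weights
X = jax.random.normal(kx, (n, d))           # tokens as rows

def inner_loss(W):                          # Eq. (9)
    recon = (theta_g.T @ W @ theta_phi @ X.T).T
    return 0.5 * jnp.mean(jnp.sum((recon - X) ** 2, axis=-1))

W1 = -jax.grad(inner_loss)(jnp.zeros((d, d)))   # Eq. (10): W_0 = 0, eta = 1
ttt_out = (W1 @ theta_psi @ X.T).T              # Eq. (11)

K, Q, V = X @ theta_phi.T, X @ theta_psi.T, X @ theta_g.T
lin_attn = Q @ (K.T @ V) / n                    # linear attention, no softmax

print(jnp.max(jnp.abs(ttt_out - lin_attn)))     # agreement up to float error
```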
### TTT with Kernels: Equivalence to Self-Attention
So far, we have considered \(f\) with explicit parameters. But machine learning is more than just parametric models and gradient-based optimization. Here we consider \(f\) as a non-parametric learner.
Recall that non-parametric learning produces an algorithmic function controlled by the training data \(x_{1},\ldots,x_{n}\), without explicit parameters of a fixed shape. So our notation for the encoder changes from \(f(x;W)\) to \(f(x;x_{1},\ldots,x_{n})\). For example, the nearest neighbor \(f(x;x_{1},\ldots,x_{n})\) simply looks for the most similar piece of training data. Some other non-parametric learners are: support vector machines (SVMs), radial basis function networks, and kernel ridge regression.
But unlike most cases of non-parametric learning, our data for TTT come without labels, since \(x_{1},\ldots,x_{n}\) are just tokens of an unlabeled test instance \(X\). Analogous to parametric learners, non-parametric ones can also learn with self-supervision to produce better features for a main task downstream. So for each \(i=1,\ldots,n\), we create each label \(z_{i}=\theta_{V}x_{i}\) from the unlabeled input \(x_{i}\) itself, where \(\theta_{V}\) is an outer-loop parameter like \(\theta_{g}\) in the parametric case.
The popular self-attention (with softmax) is equivalent to TTT with \(f\) as the time-honored Nadaraya-Watson estimator (Bierens, 1988; Cai, 2001), which outputs a locally weighted average of labels \(z_{i}\), \(i=1,\ldots,n\), using a kernel \(\kappa\) as the weighting function:
\[f(x;x_{1},\ldots,x_{n})=\frac{1}{\sum_{i=1}^{n}\kappa(x,x_{i})}\sum_{i=1}^{n} \kappa(x,x_{i})\;z_{i}. \tag{12}\]
See Appendix C for a detailed derivation of this estimator. We choose the kernel \(\kappa\) to be
\[\kappa(x,x^{\prime};\theta_{K},\theta_{Q})\propto e^{(\theta_{K}x)^{T}\theta_ {Q}x^{\prime}} \tag{13}\]
where \(\theta_{K}\) and \(\theta_{Q}\) are known as bandwidth hyper-parameters for kernels. But for MTTT, they are outer-loop parameters like \(\theta_{V}\). As detailed in Appendix C, asymmetric kernels like our \(\kappa\) above have enjoyed a long tradition (Breiman et al., 1977; Chen, 2017). Altogether, Equations 12 and 13 combined are the same as self-attention, where \(\theta_{K},\theta_{Q}\), \(\theta_{V}\) are the key, query, value weights.
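The equivalence can again be checked numerically (our own sketch; note that Equation 13 carries no \(1/\sqrt{d}\) scaling, which can be absorbed into \(\theta_{K}\) or \(\theta_{Q}\)):

```python
# The Nadaraya-Watson estimator of Eq. (12) with the asymmetric kernel of
# Eq. (13) coincides with softmax self-attention, with labels z_i = theta_V x_i.
import jax
import jax.numpy as jnp

d, n = 8, 16
kk, kq, kv, kx = jax.random.split(jax.random.PRNGKey(1), 4)
theta_K = jax.random.normal(kk, (d, d))
theta_Q = jax.random.normal(kq, (d, d))
theta_V = jax.random.normal(kv, (d, d))
X = jax.random.normal(kx, (n, d))              # tokens as rows

def nadaraya_watson(x):
    logits = (X @ theta_Q.T) @ (theta_K @ x)   # (theta_K x)^T theta_Q x_i, Eq. (13)
    w = jax.nn.softmax(logits)                 # normalization of Eq. (12)
    return w @ (X @ theta_V.T)                 # weighted average of labels z_i

nw_out = jax.vmap(nadaraya_watson)(X)

scores = (X @ theta_K.T) @ (X @ theta_Q.T).T   # same bilinear form, all pairs
attn = jax.nn.softmax(scores, axis=-1) @ (X @ theta_V.T)

print(jnp.max(jnp.abs(nw_out - attn)))         # ~0 up to float error
```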
Unlike the parametric case, TTT with kernels does not solve an optimization problem, therefore does not produce a different implementation from self-attention. While our equivalence here only provides an alternative interpretation, the fact that both linear models and kernels are empirically effective as inner-loop learners suggests that other learners might also be effective.
### TTT with Neural Networks
From the past three decades of progress in machine learning, we observe that the performance of
\[\textit{deep learning}\ >\ \textit{kernels}\ >\ \textit{linear models}\]
given enough data and compute. In Subsection 2.1, we discussed the perspective that our inner loop mirrors regular (non-meta) learning, at least in terms of algorithmic design. To collect empirical evidence for this perspective, we investigate if the ordering above is preserved within our inner loop.
It is well known that transformers with self-attention (TTT with kernels) often outperform those with linear attention (TTT with linear models), _i.e._ linear transformers (Katharopoulos et al., 2020). This validates the rightmost link of the ordering within our inner loop. But TTT with neural networks has no existing equivalence, so we devote the rest of the paper to taking a small step in this huge search space. We delay implementation details such as architecture and optimization to Section 5, and end this subsection with one remaining conceptual implication.
TTT with neural networks and linear models, or any parametric learner, has complexity linear in \(n\) for each test instance \(X=(x_{1},\dots,x_{n})\), since complexity for each token is constant in \(n\), and only proportional to the number of parameters. TTT with any non-parametric learner, however, cannot have linear complexity by definition, since its complexity for each token cannot be constant in \(n\), _i.e._ amount of training data. For Nadaraya-Watson, complexity for each token happens to be linear. This serves as an alternative explanation for the quadratic complexity of self-attention.
## 5 Experiments
The goal of our experiments is not to be the top on leaderboards, but to evaluate our key perspective, that the inner loop mirrors regular (non-meta) learning, in terms of three qualities. 1) _Descriptive_: Does our equivalence to linear attention hold in practice? 2) _Prescriptive_: Does our perspective show a path for new methods with better performance? 3) _Predictive_: Does our perspective accurately explain the empirical behaviors of new methods?
TTT layers.The cleanest and most practical way to answer these questions is to replace every attention layer in an architecture with a TTT inner loop, because ultimately, attention layers are only used as parts of an architecture. Since the inner loop here functions as a _drop-in replacement_ for attention, we call it a _TTT layer_, which can also be thought of as an equivalent computation graph (discussed in Section 4). After dropping in the TTT layers, the entire architecture can be trained with MTTT, using the same recipe as that with attention layers, without TTT.
Variants of MTTT.We call our method _MTTT-Linear_ when encoder \(f\) is linear in each TTT layer, and _MTTT-MLP_ when \(f\) is a multi-layer perceptron (MLP). We always keep \(g,h,\phi,\psi\) linear following Subsection 4.1. For MTTT-Linear, we always keep \(W_{0}=0\) fixed to ensure equivalence to linear attention, since MTTT-Linear is only used to investigate descriptiveness. For MTTT-MLP, we experiment with the two design choices below, to investigate the prescriptive power of our perspective. For simplicity, we always set the inner-loop learning rate \(\eta=1\).
Inner-loop architecture.For MTTT-MLP, the MLP architecture simply follows standard design in transformers. Concretely, our MLP has 2 linear layers with GELU activation in between; the input and output dimension are the same, and the hidden dimension is \(4\times\) as large. The only architectural change, called _Decoder LN_, is that we add a layer norm (LN) after the output of \(g\), to normalize the reconstruction outputs, in the spirit of He et al. (2021). We explain this design choice in Figure 2, deferred to the appendix due to space constraints.
Inner-loop optimization.When the inner loop takes \(T>1\) steps, each gradient step, by default, uses the average loss over all the tokens, defined in Equation 1. But \(T\) steps make the inner loop \(T\times\) slower. Given the popularity of stochastic gradient descent (SGD) in deep learning, we use it for our inner loop. Specifically, we randomly split the \(n\) tokens into \(T\) mini-batches, each of size \(n/T\), and take one inner-loop step per mini-batch. Therefore, \(T\) steps of SGD combined consume the same amount of compute as one full-batch gradient step over all \(n\) tokens together.
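A minimal sketch of this inner-loop SGD variant (our illustration; `inner_loss` stands for any per-token loss as in Equation 1, and \(n\) is assumed divisible by \(T\)):

```python
# T mini-batch SGD steps over a random split of the n tokens: together they
# cost the same FLOPs as a single full-batch gradient step.
import jax
import jax.numpy as jnp

def ttt_sgd(W0, X, inner_loss, T, eta=1.0, key=jax.random.PRNGKey(0)):
    perm = jax.random.permutation(key, X.shape[0])  # random token order
    W = W0
    for batch in jnp.split(perm, T):                # T mini-batches of size n // T
        W = W - eta * jax.grad(inner_loss)(W, X[batch])
    return W
```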
### ImageNet
We first experiment with the standard setting of ImageNet object recognition (Deng et al., 2009). Our benchmark architecture is Vision Transformer (ViT) (Dosovitskiy et al., 2020). We adopt the well-known recipe of Beyer et al. (2022) by the ViT authors, and their recommended setup for fast research turnaround - training ViT-Small for 90 epochs. With an accuracy of 76.5%, it is often regarded as a fast and competitive baseline. Its recipe splits each image into \(14\times 14\) patches, then embeds each patch with a learned projection. So each \(X\) becomes \(n=196\) tokens.
Thinking of the context window as training data for TTT, a dataset of size 196 is not nearly enough for deep learning, even if adequate for a linear model. Since over-parameterized neural networks are known to be able to regularize themselves (Zhang et al., 2021), MTTT-MLP should not do poorly, but might not justify the extra compute. In addition, small \(n\) means our linear complexity is less of an advantage, in comparison to self-attention (with softmax).
Our results in Table 1 confirm those expectations. MTTT-MLP outperforms MTTT-Linear by a small margin, but uses more FLOPs. If MTTT-MLP was using a smaller architecture that matches the FLOPs of MTTT-Linear, it would have performed worse. Self-attention, for which the training recipe was originally designed, performs the best.
In terms of descriptiveness, MTTT-Linear almost exactly matches linear attention (identity map) - the 0.2% difference is likely due to random noise and loss of numeric precision. However, MTTT-Linear uses \(0.1\times\) more FLOPs than linear attention. This extra factor exists because the JAX compiler is unaware that the compiled inner loop will receive \(W_{0}=0\) so all those terms involved can be eliminated. We manually calculated the total number of FLOPs for those terms involving \(W_{0}\), and found that it matches the difference in FLOPs between MTTT-Linear and linear attention.
Taking more gradient steps in the inner loop significantly improves accuracy of MTTT-MLP up to \(T=4\), as shown in the left panel of Figure 1. However, \(T\) steps on the full batch cost \(T\times\) the number of FLOPs. So this improvement is predictive but not practically useful. We have experimented with SGD and found that it does not help here. Since \(n=196\) is already a small batch size, splitting 196 tokens into even smaller mini-batches for SGD is usually considered bad practice for deep learning.
The right panel of Figure 1 shows the average \(\ell(W_{t};X)\) across the test set, for TTT layer 6 (out of 12 in total). The plot for all layers is deferred to Figure 3 in the appendix due to space constraints, but the overall behavior is essentially the same across layers. The five lines are for \(t=0,\dots,T\), where \(T=4\), _i.e._ the optimal choice of \(T\) according to the left panel. For every epoch of outer-loop learning, average inner-loop loss decreases monotonically with more steps. The behavior of this novel inner loop matches that of regular learning with successful optimization.
While MTTT has not been practically useful in this setting, its behavior matches our expectations, indicating that our perspective is predictive on top of descriptive. Note that every hyper-parameter
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline \hline Drop-in layer & Acc. (\%) & Params. (M) & FLOPs \\ \hline Linformer (Wang et al., 2020) & 71.9 & 22.2 & 0.9\(\times\) \\ Longformer (Beltagy et al., 2020) & 76.3 & 27.4 & 1.1\(\times\) \\ SOFT (Lu et al., 2021) & 74.6 & 23.5 & 0.9\(\times\) \\ Hyena (Poli et al., 2023) & 74.8 & 23.5 & 1.0\(\times\) \\ \hline Self-attn. (Beyer et al., 2022) & 76.5 & 22.1 & 1.1\(\times\) \\ Linear attn. (Katharopoulos et al.) & 73.2 & 22.1 & 1.0\(\times\) \\ Linear attn. identity map & 73.0 & 22.1 & 1.0\(\times\) \\ \hline MTTT-Linear & 72.8 & 22.1 & 1.1\(\times\) \\ MTTT-MLP & 74.6 & 24.6 & 1.5\(\times\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on ImageNet. FLOPs are presented as relative to linear attention. Our inner-loop dataset is tiny, with \(n=196\). MTTT-Linear matches linear attention with identity map, as expected. MTTT-MLP outperforms both by a nontrivial margin, but is \(1.5\times\) slower than linear attention. Also as expected, self-attention, _i.e._ the original ViT, performs the best. See Subsection 5.1 for details.
is set according to Beyer et al. (2022), and we have not changed any to get the expected behavior. Our inner-loop learning rate \(\eta\) has always been 1, derived from equivalence to linear attention.
In Table 2, we ablate MTTT-MLP with the four combinations of whether or not to use Decoder LN and train \(W_{0}\) in the outer loop. We choose these two factors since Decoder LN is our own design, and training \(W_{0}\) goes a step further from equivalence to linear attention, which requires fixing \(W_{0}=0\). Empirically, both components prove to be important for good performance. Therefore, we always keep them for future experiments, without spending more resources to ablate them.
For additional context around our results, we run a few baselines that also have linear complexity. Linear attention as proposed by Katharopoulos et al. (2020) uses manually engineered features of the input tokens, instead of the input tokens themselves. We label the former with citation, and the latter with _identity map_. Other baselines have roughly the same accuracy as MTTT-MLP. Longformer stands out with the same accuracy as self-attention, but we find that the default window size for its sliding attention is \(512>196\), so it happens to be the same as self-attention for \(n=196\).
### ImageNet from \(224\times 224\) Raw Pixels
To better evaluate our perspective that the inner loop mirrors regular (non-meta) learning, we need a setting where the sequence length \(n\), _i.e._ amount of training data for the inner loop, is actually comparable to the amount in typical applications of deep learning. Inspired by Chen et al. (2020), we experiment with ImageNet object recognition using raw pixels instead of patches as input tokens. This gives us \(n=224\times 224=50,176\).
For Chen et al. (2020), the point of using pixels is to eliminate image-specific prior knowledge.5 At a high level, the progress in deep learning over the past decade can be seen as gradually eliminating human priors, in favor of general methods that take advantage of data and compute. Following their setting, we use learned positional embeddings, instead of engineered positional encoding. Therefore, our entire system is permutation invariant.
Footnote 5: While transformers have already eliminated the locality prior in convolutions, most papers on ImageNet still use patches instead of pixels as input tokens. This is equivalent to a first layer of convolutions where the filter size and stride size both equal to the patch size, and is in fact often implemented as such. Using raw pixels as input tokens eliminates locality prior completely.
While Chen et al. (2020) do not use any data augmentation, they use a much larger collection of images. We have been able to remove all augmentations except one - random resize crop (Szegedy et al., 2015), without which all methods fail to get more than 40% accuracy. Since random resize crop does not add any synthetic artifact to natural images, we justify it as using more data without actually using another dataset. We always use random resize crop for the rest of the subsection.
Experiments in this subsection are conducted with ViT-Tiny unless noted otherwise, because training with 50k tokens per instance is very compute-intensive. Every other aspect of our recipe follows Beyer et al. (2022), like in Subsection 5.1. Our results are in Table 3. Self-attention, which performed the best with patches, cannot fit in memory. Even if memory was not an issue, it would still need at least \(200\times\) more FLOPs than linear attention according to our estimations.
We highlight two results. First, taking \(T=4\) steps of SGD improves accuracy by 3.3% on top of MTTT-MLP with \(T=1\), without costing extra FLOPs. To the best of our knowledge, this improvement cannot be explained through any existing perspective without an explicit inner loop.
Like in Figure 1, our inner-loop loss with SGD steps also behaves like regular learning, as shown in Figure 4 of the appendix. Second, MTTT-MLP with SGD improves almost 10% on top of even a ViT-Small with linear attention, which uses more than \(3\times\) the parameters and \(2\times\) the FLOPs. For SGD, \(T=4\) was simply chosen according to the optimal value on patches.
These pieces of empirical evidence indicate that our perspective is prescriptive, by showing a path to new methods with better performance. It is also predictive, since expectations derived from regular learning accurately explain novel behaviors of the inner loop, without any hyper-parameter tuning. In terms of descriptiveness, MTTT-Linear matches linear attention (identity map) within 0.1%.
## 6 Related Work
### In-Context Learning as Explicit Learning
To the best of our knowledge, three pieces of prior work (Akyurek et al., 2022; Dai et al., 2022; Von Oswald et al., 2023) have independently proposed the idea that linear transformers can simulate some variant of linear regression on in-context data, as an explanation for in-context learning. Take Von Oswald et al. (2023) as an example. Given a labeled dataset, their work first trains a linear regression model with \(T\) gradient steps, then constructs the weights of a \(T\)-layer linear transformer to produce the same output as the trained linear model.
Our work differs in two main aspects: self-supervision and direction of claims. First, prior work focuses on showing that (linear) transformers can simulate learning on specific, supervised objectives, _e.g._ ridge regression, so their constructions rely on labeled pairs of in-context training data. If there is a meta-learning component, it is restricted to specific hyper-parameters, _e.g._ the learning rate. On the other hand, our inner loop implements a general objective that itself is mostly learned, so it does not need labeled data. This makes our inner loop less interpretable but more practical.
At a higher level, transformers are complex models, and linear models are simple. Prior work uses the complex to construct the simple. Our construction takes the converse direction. In prior work, empirical performance of meta-learning with linear regression has been significantly worse than linear transformers, even on labeled in-context data. Again, with the goal of explaining transformers, their claims often indicate that linear transformers are superior to meta-learning. Our experiments also point towards the converse.
Recently, Mahankali et al. (2023); Zhang et al. (2023); Ahn et al. (2023) and Tarzanagh et al. (2023) have further extended the arguments in prior work, therefore inheriting their two aspects above. Tarzanagh et al. (2023), in particular, argues that transformers implement non-parametric learners (SVMs) on labeled data, supporting our intuition in the converse direction. In summary, our paper complements prior work, with the different goal of inspiring potentially more powerful systems.
\begin{table}
\begin{tabular}{|l|l|c|c|c|} \hline \hline Model & Drop-in layer & Acc. (\%) & Params. (M) & FLOPs \\ \hline \multirow{6}{*}{ViT-Tiny} & Self-attn. (Beyer et al., 2022) & - & 5.6 & \(200\times\) \\ & Linear attn. (Katharopoulos et al.) & 53.7 & 5.6 & \(1.0\times\) \\ & Linear attn. identity map & 49.9 & 5.6 & \(1.0\times\) \\ & MTTT-Linear & 50.0 & 5.6 & \(1.1\times\) \\ & MTTT-MLP & 61.9 & 6.8 & \(1.8\times\) \\ & MTTT-MLP SGD \(T=4\) & 65.2 & 6.8 & \(1.8\times\) \\ \hline \multirow{2}{*}{ViT-Small} & Linear attn. (Katharopoulos et al.) & 54.4 & 21.8 & \(3.9\times\) \\ & Linear attn. identity map & 55.7 & 21.8 & \(3.9\times\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on ImageNet from pixels. FLOPs are presented as relative to linear attention. MTTT-MLP with SGD outperforms MTTT-MLP without SGD by 3.3%, without costing extra FLOPs. It improves almost 10% on top of a ViT-Small with linear attention, which uses more than \(3\times\) the parameters and \(2\times\) the FLOPs. See Subsection 5.2 for details.
### Learning at Test Time
The idea of learning at test time has a long history in machine learning. One of the earliest instantiations of this idea is Bottou and Vapnik (1992): For each test input, train on its neighbors before making a prediction. This idea continues to be effective for SVMs (Zhang et al., 2006) and large language models (Hardt and Sun, 2023). In computer vision, the general idea of learning at test time has also been applied to specific applications (Jain and Learned-Miller, 2011; Shocher et al., 2018; Mullapudi et al., 2018; Luo et al., 2020; Nitzan et al., 2022).
_Transductive learning_(Gammerman et al., 1998) is the first to articulate our philosophy in Section 1. As stated by Vapnik (2013): "Try to get the answer that you really need, but not a more general one." Implementation-wise, it uses test data to add constraints to the margin of SVMs (Joachims, 2002; Collobert et al., 2006). This is an example of non-parametric learning at test time, similar to our kernel estimator in Subsection 4.2. However, transductive learning usually needs multiple test instances to be practically effective, unlike our method, which only needs a single instance at a time.
Next we have an in-depth discussion of two particular relevant lines of work: TTT and fast weights.
#### 6.2.1 Test-Time Training with Self-Supervision
Our inner loop performs TTT with self-supervision, discussed in Section 2. This general framework was first proposed by Sun et al. (2020), with results for supervised learning under distribution shifts. Unlike previous lines of work, TTT can be used in principle with any self-supervised task, on any type of data, for any application, making it particularly suitable for deep learning. Follow-up work has applied TTT to batches of data (Wang et al., 2020; Liu et al., 2021), and other main tasks like robot manipulation (Hansen et al., 2020) and locomotion (Sun et al., 2021), among others.
Particularly relevant to our inner loop, Gandelsman et al. (2022) performs TTT with reconstruction as the self-supervised task, and Wang et al. (2023) applies this method online to video streams. The biggest difference is that our reconstruction task is parameterized for meta-learning. In addition, our inner loop obtains multiple units of learning, \(x_{1},\dots,x_{n}\), out of a single test instance through tokenization. In prior work, each unit of learning is created through either data augmentations or a randomized \(\phi\), such as masking random patches (He et al., 2021).
#### 6.2.2 Fast Weights
The general idea of _fast weights_ is to update the parameters of a "fast" model on the most relevant data, as opposed to a "slow" model on all data (Hinton and Plaut, 1987; Tieleman and Hinton, 2009), which most people today simply refer to as training or learning. The most relevant data can be the test instance itself, where the update is performed without human supervision at test time. Our work shares the same general idea, but formulates an explicit learning problem for each inner-loop update, with the goal of generalizing to that test instance.
To make fast weights "fast", _i.e._ efficient, their update rules avoid forming an optimization problem with explicit objectives on the training data, _i.e._ a learning problem. For example, given each input \(x\), one popular update rule for fast weights is to add \(xx^{T}\) (or some variant thereof) (Ba et al., 2016) like in Hebbian learning and Hopfield networks (Hopfield, 1982). In contrast, our update rule for TTT is an explicit training process as its name suggests.
_Fast weight programmers_ (FWPs) (Schmidhuber, 1992) produce the updates to fast weights with a "slow" model. MTTT's outer loop can be seen as training the "slow" model, if its inner loop is viewed as updating fast weights. In particular, FWPs with the Hebbian update rule above are equivalent to linear transformers (Schlag et al., 2021), therefore also to MTTT with linear models. Clark et al. (2022) add a final layer of fast weights to a transformer and train its initialization with a FWP to improve performance on language modeling.
Given the broadest definition of FWPs, MTTT with parametric models can be seen as a special case (Kirsch and Schmidhuber, 2021). But the difference in update rules between TTT and fast weights, as discussed, carries over to MTTT and FWPs. Irie et al. (2021) have tried "fast" networks with weights directly produced as the output of a "slow" network, without forming a learning problem. In contrast, our inner loop mirrors regular (non-meta) learning. This helps us with empirical intuitions like in Figure 1, and heuristics like output normalization and stochastic gradient descent.
### Learning to Learn
For decades, researchers have been arguing that learning to learn should be an important component of intelligence (Schmidhuber, 1987; Bengio et al., 1990; Thrun and Pratt, 1998; Lake et al., 2017).
Most prior work on learning to learn, such as Andrychowicz et al. (2016); Finn et al. (2017) and Metz et al. (2018), try to generalize across datasets or tasks instead of instances, since meta-learning lifts its units "one level above". Their inner loop learns from an entire dataset at a time, instead of a single instance, so the outer loop needs a collection of datasets or tasks. Since it is hard to collect millions of datasets or tasks, their outer loop is hard to scale.
But for TTT, each instance itself is a task and defines its own generalization problem, so MTTT is only a solution to the canonical problem of supervised learning, reformulated as learning to learn. It does not propose a new problem setting like generalization across datasets or tasks. Its inner loop only needs a single instance, so the outer loop only needs a collection of instances - a dataset in the traditional (non-meta) sense, _e.g._ the ImageNet training set. This makes it easier to scale.
## 7 Limitations and Future work
The search space for practically effective instantiations of MTTT is huge, and our paper has only taken a baby step. We believe this search needs to be a community effort, and our biggest motivation for writing this paper is to inspire future steps. Fortunately, if our perspective holds, then successful heuristics for regular learning can transfer to our inner loop, and search can be much more efficient. Next we outline some especially promising directions for future work, given our current limitations.
Multi-level learning to learn.We have already discussed the possibility of a more ambitious architecture for \(f\) (_e.g._ larger MLP or CNN), but when \(f\) is a transformer, it can be interpreted as yet another inner loop nested inside the existing one. In this fashion, we can potentially build many levels of nested learning problems, instead of the existing two-level paradigm for learning to learn. This possibility has been mentioned in Irie et al. (2021), but might become practically useful under MTTT, given the functional programming capabilities of JAX.
Better infrastructure.Since learning to learn has been relatively under-explored as a practical solution to supervised learning, its support in JAX is still primitive. For example, MTTT-Linear only costs \(0.1\times\) more FLOPs than linear attention (already unnecessary in principle), but turns out to be \(2\times\) slower in wall-clock time. SGD is slower by an additional factor of \(2\times\) regardless of \(T\), even though it costs the same number of FLOPs. We believe these systems-level inefficiencies will eventually disappear once the community builds a better infrastructure.
Autoregressive language modeling.For this application, \(T=n\) because each \(W_{t}\) only updates on the gradient from \(x_{t}\) in an online fashion, like in Wang et al. (2023). In this paper, we have not been able to try autoregressive tasks because of a rather technical reason: JAX saves every intermediate \(W_{t}\), taking \(O(TD)\) memory when \(W\) is of size \(D\). But in principle, only \(O(D)\) memory is needed. Implementing this solution turns out to be a large engineering effort beyond the scope of our paper.
Outer-loop parameterization and optimization.There are many other ways to parameterize a family of reconstruction tasks, or even a more general family. For clean comparison with attention, our way has been simple but probably far from optimal. For the same reason, we have also refrained from searching the outer-loop optimization recipe, even though the best recipe with neural networks as inner loop is almost certainly different from that with kernels.
Gradient-free optimization.Optimizing \(\mathcal{L}_{T}\) with zeroth-order techniques (Salimans et al., 2017; Hinton, 2022; Malladi et al., 2023) might sound radical, but can bring many practical benefits for MTTT. It frees us from taking gradients of gradients, and the engineering challenges that follow, such as backpropagation through time. We simply avoid the aforementioned problem with recovering intermediate weights \(W_{t}\), and all the systems-level inefficiencies for learning to learn in JAX. Altogether, we believe that MTTT could become a killer application for zeroth-order techniques.
## Acknowledgements
We are grateful to Guandao Yang and Beidi Chen for helpful discussions, also to Yossi Gandelsman and Yutong Bai for their help at an early stage of this project. Yu Sun is grateful to his PhD advisors, Alexei A. Efros and Moritz Hardt, for their many insights that eventually became part of this paper. Yu Sun is supported in part by Oracle Cloud credits and related resources, generously provided by the Oracle for Research program.
|
2305.06702 | Deployment of an Online Feedback Optimization Controller for Reactive
Power Flow Optimization in a Distribution Grid | Optimization is an essential part of power grid operation and lately, Online
Optimization methods have gained traction. One such method is Online Feedback
Optimization (OFO) which uses measurements from the grid as feedback to
iteratively change the control inputs until they converge to the solution of
the optimization problem. Such algorithms have been applied to many power
system problems and experimentally validated in lab setups. This paper
implements an OFO controller in a real distribution grid for 24/7 operation
using off-the-shelf hardware and software. The proposed control strategy
optimizes the reactive power flow at the substation while satisfying voltage
constraints. As part of an existing coordination scheme between
(sub)transmission grid operator (TSO) and distribution grid operator (DSO),
this comes with a financial reward and simultaneously it virtually reinforces
the grid by regulating the voltage on the feeder and therefore allowing higher
levels of distributed generation/consumption. We present how a distribution
grid is retrofitted such that we can use existing inverters, we analyze the
controller's interaction with legacy infrastructure, and investigate its
overall control behavior. Finally, we demonstrate the successful deployment of
an OFO controller in an operational environment which corresponds to Technology
Readiness Level (TRL) 7. | Lukas Ortmann, Christian Rubin, Alessandro Scozzafava, Janick Lehmann, Saverio Bolognani, Florian Dörfler | 2023-05-11T10:24:45Z | http://arxiv.org/abs/2305.06702v2 | Deployment of an Online Feedback Optimization Controller for Reactive Power Flow Optimization in a Distribution Grid
###### Abstract
Optimization is an essential part of power grid operation and lately, Online Optimization methods have gained traction. One such method is Online Feedback Optimization (OFO) which uses measurements from the grid as feedback to iteratively change the control inputs until they converge to the solution of the optimization problem. Such algorithms have been applied to many power system problems and experimentally validated in lab setups. This paper implements an OFO controller in a real distribution grid for 24/7 operation using off-the-shelf hardware and software. The proposed control strategy optimizes the reactive power flow at the substation while satisfying voltage constraints. As part of an existing coordination scheme between (sub)transmission grid operator (TSO) and distribution grid operator (DSO), this comes with a financial reward and simultaneously it virtually reinforces the grid by regulating the voltage on the feeder and therefore allowing higher levels of distributed generation/consumption. We present how a distribution grid is retrofitted such that we can use existing inverters, we analyze the controller's interaction with legacy infrastructure, and investigate its overall control behavior. Finally, we demonstrate the successful deployment of an OFO controller in an operational environment which corresponds to Technology Readiness Level (TRL) 7.
## I Introduction
The operation of power systems comprises many tasks that can be formulated as optimization problems. A famous example is Optimal Power Flow. Defining a control objective as an optimization problem is powerful, flexible, and versatile. Often, optimization problems even arise naturally, e.g. when constraints, like voltage limits, need to be satisfied. It is therefore important to develop, deploy, and evaluate different methods that can solve optimization problems under real operating conditions in a grid. More precisely, methods are needed that are fast and robust, and, especially in distribution grids, need to be able to work with little model information. If an exact model is available, these optimization problems are solved offline on a computer by using an optimization algorithm and the model. The solution of the optimization is then deployed onto the grid, see Figure 1(a). However, solving these optimization problems offline can be computationally intense, and they need to be robustified to be able to deal with model mismatch. Otherwise, a model mismatch could lead to a constraint violation. Unfortunately, such robustification prohibits utilizing the grid to its full capacity because some margin needs to be included to deal with model mismatch. In distribution grids, no good system model might exist in the first place.
To circumvent these problems, Online Optimization methods have been developed that take feedback into account, see Figure 1(b), and consult [1] for a detailed review. One such method is called OFO. This method allows one to steer a system to the solution of an optimization problem by taking decisions that are not based on an available model of the grid but on measurements collected in real-time. It is computationally light, robust to model mismatch, can utilize a grid to its full capacity, and needs very limited model information, see [2] for a review paper. In simulations, it has been applied to a vast number of power system problems [3, 4, 5, 6, 7, 8, 9, 10] and it has also been experimentally tested with hardware-in-the-loop simulations [11, 12]. Experiments using a real power grid setup have also been done; however, those tests were either done in dedicated lab environments [13, 14, 15] or on microgrids [16, 17] using a specialized hardware and communication setup. In contrast, this paper presents the deployment of an OFO controller in a real distribution grid for 24/7 operation utilizing existing hardware. The distribution grid we chose is operated by AEW Energie AG, is located in the north of Switzerland, and supplies 100,000 people. The objective of the controller is twofold: On the one hand, it is tasked to optimize the reactive power flow at the substation, based on a TSO-DSO coordination scheme that yields financial rewards for the distribution grids. On the other hand, it is used to regulate the voltage inside the distribution feeder. Such voltage support virtually reinforces the grid through automation and has the potential to mitigate or postpone grid reinforcements [18]. The potential of such virtual grid reinforcement through coordinated reactive power sources was analyzed in [19], and the authors concluded that 9% more active power can be conducted before voltage constraints limit the possible active power flow.
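To fix ideas, one common OFO variant from the literature takes the projected-gradient form (a generic sketch following the survey [2], not necessarily the exact controller deployed here)

\[u_{k+1}=\Pi_{\mathcal{U}}\Big{[}u_{k}-\alpha\big{(}\nabla_{u}\Phi(u_{k},y_{k})+H^{\top}\nabla_{y}\Phi(u_{k},y_{k})\big{)}\Big{]},\]

where \(u\) collects the control inputs (here, reactive power setpoints), \(y_{k}\) is the grid measurement at iteration \(k\), \(\Phi\) is the objective, \(H\approx\partial y/\partial u\) is an approximate input-output sensitivity, and \(\Pi_{\mathcal{U}}\) projects onto the actuator limits. Replacing the model-predicted grid response with the measurement \(y_{k}\) is what makes the closed loop robust to model mismatch.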
The paper documents an example of successful TSO-DSO coordination in the Swiss power system and provides a demonstration of the effectiveness of OFO for real-time optimization problems in the power grid. Our contributions can be structured as follows: 1) we present how we retrofitted the distribution grid infrastructure both in terms of the communication and hardware setup, 2) we investigate the consequences of using off-the-shelf hardware and the interaction of a real distribution grid with an OFO controller, which serves as a robustness test of OFO, 3) we give a tutorial on OFO, including potential extensions of the controller, that will assist the power system community in using this new technology for other control and optimization problems.
Overall, with this deployment for 24/7 operation in a real distribution grid, OFO has reached TRL 7 ("system prototype demonstration in an operational environment") [20].
## II Reactive Power Prices in Switzerland
The Swiss transmission grid operator, Swissgrid, controls its voltage with the help of generators and distribution grids that are connected to the transmission grid. Generators connected to the transmission grid have to participate in so-called active voltage support, while subtransmission grid operators have to participate in so-called semi-active voltage support and can opt in for active voltage support. The basis for this voltage support scheme is that Swissgrid calculates a voltage reference for every bus in the transmission grid. This is done every 15 minutes through an Optimal Power Flow solver. All entities connected to the transmission grid are incentivized to adjust their reactive power flow such that it helps to drive the voltage at their connection point to the provided reference. The incentive scheme works as follows: Reactive power flows that help to drive the voltage to the reference are considered conform, whereas reactive power flows that have the wrong sign and drive the voltage away from the reference are considered non-conform. In both active and semi-active voltage support, the generators and subtransmission grid operators are financially rewarded when they provide conform reactive power flows, and they pay penalties when their reactive power flows are non-conform. The prices and penalties differ between active and semi-active voltage support. Furthermore, in semi-active voltage support, a tolerance band exists within which neither reward nor penalty is billed. See [21] for more information.
The subtransmission grid operators forward this pricing scheme to the distribution grid operators and charge or pay the distribution grids depending on the reactive power flow at the connection points between their subtransmission and the distribution grid. Hence, the distribution grid operators have a financial incentive to control their reactive power flows as well. This can be done with inverters and generators connected to the distribution grid, as they can serve as reactive power sources. However, their reactive power injections have lower and upper limits (\(q_{min}\) and \(q_{max}\)) due to the hardware limits of the inverters and generators. Furthermore, reactive power flows also affect the voltages in the grid, and one needs to make sure that all voltages \(v\) stay within their lower and upper limits (\(v_{min}\) and \(v_{max}\)). Therefore, an optimization problem arises: How can reactive power injections \(q\) be used to minimize the cost and maximize the reward from the subtransmission grid operator while satisfying the voltage and hardware limits? Mathematically speaking, we define the constrained optimization problem:
\[\begin{split}\min_{q}\quad&cost(q)-reward(q)\\ \text{s.t.}\quad&q_{min}\leq q\leq q_{max}\\ &v_{min}\leq v\leq v_{max}\\ &v=h(q,d)\end{split} \tag{1}\]
We will describe the relationship between \(q\) and \(v\) as \(v=h(q,d)\) where \(h(\cdot)\) represents the power flow equations and \(d\) is a vector of all active and reactive injections in the grid. The goal of our OFO controller and its implementation in the distribution grid will be to iteratively change the reactive power injections \(q\) until they converge to \(q^{*}\) that optimally solves the optimization problem (1).
## III Online Feedback Optimization
OFO is a method to solve optimization problems using measurements instead of models. This means it is a feedback control method instead of a model-based feedforward approach, compare Figure 1. The advantage is that a system model to evaluate \(v=h(q,d)\) is not needed. Therefore, no cable and line parameters nor the topology need to be known, and no active and reactive generation and consumption \(d\) need to be measured or estimated. The only model information needed is \(\nabla_{q}h(q,d)\), where \(\nabla\) is the gradient operator. It describes how a small change in the reactive power injections \(q\) will change the voltage \(v\). Note that this is not the same as knowing which voltage \(v\) will result for a specific \(q\); we only need to know the derivative of \(v\) with respect to \(q\). This is very similar to power transfer distribution factors, which describe how a change in active power injections will change the line flows. From now on, we refer to this relationship, i.e., the change in \(v\) caused by a small change in \(q\), as the sensitivity.
Fig. 1: Comparison of Offline Optimization and Online Feedback Optimization.

Now, we explain how to drive a power system to the optimal solution of an optimization problem using feedback. To do so, we turn an optimization algorithm into a feedback controller, which is the core idea of OFO. This enables us to profit from closed-loop feedback control advantages such as robustness to disturbances \(d\) and model mismatch in the sensitivity. This has been done with several different optimization algorithms, which all lead to a different system behavior with specific advantages. For an overview see [2]. Here we explain the idea with an illustrative example, i.e. an optimization problem with no constraints:
\[\min_{q}f(q) \tag{2}\]
and the optimization algorithm is gradient descent. This means to minimize a function \(f(q)\) one takes gradient steps with step size \(\alpha\). The gradient of \(f(q)\) is \(\nabla f(q)\) and a gradient step with step size \(\alpha\) is:
\[q(k+1)=q(k)-\alpha\nabla f(q(k)). \tag{3}\]
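To make this concrete, the following is a minimal sketch of update (3) run as a feedback loop in Python. The function `measure_gradient` is a hypothetical stand-in for assembling the gradient from grid measurements, and the quadratic cost and its optimum are illustrative values only, not quantities from the deployment.

```python
import numpy as np

# Hypothetical plant interface: in a real deployment the gradient would be
# assembled from measurements, not evaluated from an analytical cost function.
def measure_gradient(q):
    q_opt = np.array([0.3, -0.1])   # illustrative optimum, not a real grid quantity
    return 2.0 * (q - q_opt)        # gradient of the stand-in cost ||q - q_opt||^2

alpha = 0.2                          # step size, acting as an integral gain
q = np.zeros(2)                      # initial reactive power setpoints
for k in range(50):
    grad = measure_gradient(q)       # feedback from the (here simulated) system
    q = q - alpha * grad             # gradient step (3)
print(q)                             # approaches q_opt for a wide range of alpha
```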
This is an integral controller which keeps changing \(q\) until the gradient of the cost function \(\nabla f(q(k))\) is zero, and therefore \(q\) is driven to a locally optimal solution of (2). Just as with standard integral controllers, and because feedback is used, this works for a wide range of gains \(\alpha\). An OFO controller based on standard gradient descent like (3) does not satisfy any constraints. To ensure that the constraints on the reactive power injections \(q\) and voltages \(v\) are satisfied, we can use projected gradient descent. An OFO controller derived from projected gradient descent was presented in [22]. We tailor the update law to our specific use case and get
\[q(k+1)=q(k)+\alpha\sigma(q(k),v(k)) \tag{4}\]
\[\begin{split}\sigma(q,v)=\arg\min_{w\in\mathbb{R}^{p}}\quad&\|w+\nabla f(q)\|^{2}\\ \text{s.t.}\quad&q_{min}\leq(q+\alpha w)\leq q_{max}\\ &v_{min}\leq(v+\alpha\nabla_{q}h(q,d)w)\leq v_{max}\end{split} \tag{5}\]
with \(p\) being the number of reactive power setpoints. This is also an integral controller, and it drives \(\sigma(q,v)\) to zero, which is only zero when either \(\nabla f(q)\) is zero or the cost function \(f(q)\) cannot be decreased further because constraints on \(q\) or \(v\) are reached. In both cases, the controller has iteratively changed \(q\) until a local optimum has been reached, which is exactly what the controller is designed for. In our implementation in the distribution grid, we were not able to control the reactive power injections directly. However, we can control the power factor \(\cos(\phi)\) of the power injections instead. The commands we can send are 0.8 ind., 0.85 ind., 0.9 ind., 0.95 ind., 1, 0.95 cap., 0.85 cap., 0.8 cap. This means our control input has to take discrete values, which we enforce by adding integer constraints. We adapt the controller proposed in [23] which results in:
\[\cos(\phi)(k+1)=\cos(\phi)(k)+\alpha\sigma(\cos(\phi)(k),v(k)) \tag{6}\]
\[\begin{split}\sigma(\cos(\phi),v)=\arg\min_{w\in\mathbb{R}^{p}}\quad&\|w+\nabla f(q)\|^{2}\\ \text{s.t.}\quad&\cos(\phi)_{min}\leq(\cos(\phi)+\alpha w)\leq\cos(\phi)_{max}\\ &v_{min}\leq(v+\alpha\nabla_{\cos(\phi)}q(\cos(\phi),p)\nabla_{q}h(q,d)w)\leq v_{max}\\ &\frac{w}{0.05}\in\mathbb{Z},\end{split} \tag{7}\]
where \(\mathbb{Z}\) is the set of integers; the constraint \(w/0.05\in\mathbb{Z}\) means that each component of \(w\) can only take values that are multiples of 0.05, i.e. 0, \(\pm\)0.05, \(\pm\)0.1, etc. This is the control algorithm we implement on the distribution grid.
Problem (7) is a mixed integer quadratic optimization problem (MIQP) that needs to be solved at every time step. Without integer constraints, it is easy and fast to solve even for large systems. With integer constraints, the problem is harder to solve but easier than solving the overall problem (1) including these integer constraints that the hardware setup demands.
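As an illustration of how such a projection step can be set up, the following sketch formulates one instance of (7) with cvxpy. All numerical values (sensitivity, gradient, limits, signed power-factor encoding) are placeholders for illustration, not data from the deployment, and the composed sensitivity of the voltages with respect to \(\cos(\phi)\) is collapsed into a single matrix `S`.

```python
import cvxpy as cp
import numpy as np

p = 16                                  # number of inverter setpoints
alpha = 1.0
grad_f = np.random.randn(p)             # placeholder for the measured cost gradient
S = 0.01 * np.ones((3, p))              # placeholder composed sensitivity dv/dcos(phi)
cosphi = np.full(p, 0.95)               # current setpoints (signed encoding assumed)
v = np.array([1.01, 0.99, 1.00])        # measured voltage magnitudes in p.u.
cosphi_min, cosphi_max = 0.8, 1.0
v_min, v_max = 0.95, 1.05

z = cp.Variable(p, integer=True)        # w = 0.05 * z enforces the 0.05 step grid
w = 0.05 * z
constraints = [cosphi + alpha * w >= cosphi_min,
               cosphi + alpha * w <= cosphi_max,
               v + alpha * (S @ w) >= v_min,
               v + alpha * (S @ w) <= v_max]
prob = cp.Problem(cp.Minimize(cp.sum_squares(w + grad_f)), constraints)
prob.solve()                            # needs a mixed-integer-capable solver, e.g. ECOS_BB or SCIP
cosphi_next = cosphi + alpha * w.value  # update law (6)
```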
### _Necessary Model Information_
The controller used in the implementation needs to know how a change in the \(\cos(\phi)\) setpoint is going to affect the voltage. This can be split into two parts: first, how a change in \(\cos(\phi)\) changes the reactive power injections \(q\) (\(\nabla_{\cos(\phi)}q(\cos(\phi))\)), and second, how the reactive power injections affect the voltage (\(\nabla_{q}h(q,d)\)). Such sensitivities can either be derived experimentally, by changing an input and observing the change in the output, or in the same way using a simulation model of the grid. Furthermore, the sensitivity can be derived mathematically using the admittance matrix of the grid and the power flow equations [24].
The sensitivity \(\nabla_{q}h(q,d)\) depends on both the topology and line impedances as well as \(q\) and \(d\). Therefore, the sensitivity changes over time and is generally hard to compute exactly. One has to work with approximations which poses a challenge to any kind of optimization. In such conditions of uncertainty, OFO controllers are particularly effective due to their feedback nature. Approximating the sensitivity with a constant matrix has proven to work well [13] and most importantly, even with an approximate sensitivity, the controller will enforce the constraints on both the input (\(q\) or \(\cos(\phi)\)) and output (\(v\)) in steady-state. Also, temporary constraint violations [22] and the suboptimality are bounded [25]. Last but not least, the sensitivity can be learned online from measurements [26] and OFO controllers exist that rely on zeroth order optimization algorithms and therefore do not need any sensitivity [27].
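A minimal sketch of the experimental route is given below: perturb one setpoint at a time and record the steady-state voltage response. The functions `apply_setpoints` and `measure_voltages` are hypothetical interfaces to the inverters and the SCADA system, not names from the actual deployment.

```python
import numpy as np

def estimate_sensitivity(apply_setpoints, measure_voltages, cosphi0, delta=0.05):
    """Finite-difference estimate of the constant sensitivity matrix dv/dcos(phi)."""
    cosphi0 = np.asarray(cosphi0, dtype=float)
    v0 = measure_voltages()
    S = np.zeros((len(v0), len(cosphi0)))
    for i in range(len(cosphi0)):
        perturbed = cosphi0.copy()
        perturbed[i] += delta            # step one setpoint by one discrete increment
        apply_setpoints(perturbed)
        S[:, i] = (measure_voltages() - v0) / delta
        apply_setpoints(cosphi0)         # restore the original operating point
    return S
```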
### _Possible Extensions_
OFO controllers offer great flexibility and possible extensions. We show some that are helpful in power grids.
#### III-B1 State estimation

Instead of feeding the raw voltage measurements into the OFO controller, one can run the measurements through a state estimation and provide the result of the state estimation to the controller instead. The convergence of this feedback system was proven in [28]. This also makes it possible to control voltages that are not directly measured but estimated instead.
#### III-B2 Time-varying constraints
The constraints in the control law (7) can be different at every time step. This makes it possible to include time-varying constraints of the overarching optimization problem (1). For example, with certain inverters, one can directly command reactive power injections \(q\). Given that an inverter has a current limit, the available capacity for reactive power injections would depend on the active power injections, which change over time. In other applications, time-varying constraints could be dynamic line ratings, or they could be used to temporarily block tap changers, make the controller reduce the power flow on a line, or lower the voltage angle over a circuit breaker.
#### III-B3 Updating the sensitivity
The sensitivity depends on the topology, tap changer positions, line parameters, generation, and consumption. These may change over time, and therefore the sensitivity can also change over time. If, for example, the topology has changed, the sensitivity could be recomputed, or the results of a new state estimation could be used to update the sensitivity.
## IV Distribution Grid Deployment
### _Hardware and Communication Setup_
The area of the grid under control by the OFO controller is visualized in Figure 2. We control 16 inverters located at point 2, which is approximately 9.2 km away from the connection to the subtransmission grid. Their total rated apparent power is 800 kVA, and with our maximum power factor of 0.8, this corresponds to 640 kW and 480 kVAr. These inverters operate at 400 V and are located close to a transformer which steps the voltage up to the 16 kV radial grid whose topology is depicted in the figure. Voltage magnitude and power measurements are taken throughout the grid and communicated to the SCADA system of the distribution grid operator.
To implement our controller we retrofitted this infrastructure as depicted in Figure 3. Our controller gets measurements from the SCADA system through an existing Archive Data Server in a CSV file once every minute. It then calculates the new power factor setpoints for the inverters, which are collected in a data storage and given to a Modbus server. This Modbus server communicates the setpoints to a protocol converter which transmits the setpoints to the SCADA system over an IEC 60870-5-104 protocol. We equip the inverters with a Siemens Smartgrid-Box A8000 to be able to send them these setpoints; the SCADA system communicates with this A8000 through the same IEC 60870-5-104 protocol. Data logging at the inverters is done with an ADL-MXSpro from Meier-NT. A dashboard visualizes the measurements and setpoints, and it can be used to enable, disable, and reset the controller or to manually choose the setpoints. To enable these features the dashboard crawls data from the data storage and communicates with the Modbus server.
The OFO controller, the dashboard, and the data storage are implemented on a virtual machine inside the control room. Figure 4 shows an overview of the programs running and interacting on the virtual machine. The code was written in Python, and its execution is computationally very light, meaning no large computational power is needed.
### _Controller Setup_
Fig. 2: The part of the grid controlled by OFO with the connection to the subtransmission grid operator, the measurement points, and the grid topology.

Fig. 3: High-level overview of the system including interfaces and communication links.

Fig. 4: Overview of the programs inside the virtual machine.

The controller is implemented as follows. The SCADA system provides the controller with voltage magnitude measurements of the three points shown in Figure 2. At measurement point 1 we also get the reactive power flow, which is needed to calculate \(\nabla f(q)\). The goal is to optimize the reactive power flow at the connection point to the external grid (measurement point 1). The cost function \(f(q)\) is based on the pricing scheme of the subtransmission grid operator, and it is a piece-wise linear function, see Figure 5. Due to the linearity, the derivative \(\nabla f(q)\) is constant in each area. There is a high cost for capacitive reactive energy and a small reward for inductive reactive energy (MVArh). A deadband with neither cost nor reward exists and is of size 0.25% \(S_{n}\), where \(S_{n}\) is the sum of the apparent power of all transformers at the connection point to the subtransmission grid. Recall that \(\sigma=-\nabla f(q)\) when no constraints are active, and note that the derivative of the cost function \(\nabla f(q)\) is zero in the gray area. Hence, \(\sigma\) would be zero in the gray area (as long as there are no voltage violations), and the controller would not change the setpoints. To circumvent this, we artificially change the cost function to have a small gradient in the gray area, which ensures that the controller tries to drive the reactive power flows into the conform (green) area. The sensitivity \(\nabla_{q}h(q,d)\) was calculated based on a model and is kept constant.
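For concreteness, a sketch of such a region-wise gradient is shown below. The slopes and the artificial deadband gradient are placeholders, not the operator's actual tariff, and the sign convention (positive flow capacitive) is an assumption made only for this illustration.

```python
def cost_gradient(q_flow, deadband, slope_cap, slope_ind, eps=1e-3):
    """Region-wise derivative of the piece-wise linear cost of Fig. 5.

    q_flow    -- reactive power flow at the substation (assumed > 0 capacitive)
    deadband  -- half-width of the gray zone, 0.25% of S_n
    slope_cap -- steep positive slope: high penalty for capacitive flows
    slope_ind -- small positive slope: inductive flows (q_flow < 0) are rewarded
    eps       -- small artificial slope inside the deadband so the controller
                 keeps pushing the flow towards the conform (green) region
    """
    if q_flow > deadband:
        return slope_cap
    if q_flow < -deadband:
        return slope_ind
    return eps
```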
Given that our control approach relies on communication infrastructure, it is necessary to define a fallback strategy in case the communication breaks down. If the controller does not receive measurements for five minutes, it instructs the inverters to operate at a power factor of 1. If the inverters do not receive commands from the controller anymore, they also set their power factor to 1.
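A possible realization of the controller-side watchdog is sketched here; the function `send_setpoints` is a hypothetical interface to the setpoint chain described above, not code from the deployment.

```python
import time

FALLBACK_TIMEOUT = 5 * 60            # seconds without fresh measurements
last_measurement = time.monotonic()  # updated whenever a measurement arrives

def note_measurement():
    """Call this whenever a new measurement file is successfully read."""
    global last_measurement
    last_measurement = time.monotonic()

def watchdog(send_setpoints, n_inverters=16):
    """Command unity power factor if measurements have been missing too long."""
    if time.monotonic() - last_measurement > FALLBACK_TIMEOUT:
        send_setpoints([1.0] * n_inverters)
```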
## V Results and Data Analysis
In this section, we analyze the consequences of using off-the-shelf hardware, the interaction of an OFO controller with a real distribution grid, and the behavior of the controller. The controller went live in December 2022 and the data analysis of the first months revealed the following.
The controller gives power factor setpoints \(\cos(\phi)\) to the inverters. Figure 6 shows the active and reactive power injections of the inverters for a power factor setpoint of 0.8 inductive. The figure shows that the inverters do not track the setpoint well. Especially for low active power injections, the reactive power injection is capacitive even though the controller asked for a power factor of 0.8 inductive. This happens because old norms only required reactive power tracking for active power generation larger than 5% of the rated power. These measurements highlight how important it is to utilize measurements as feedback, because not only can a grid model be wrong, but actuators might also fail to follow their reference.
In 2022, the pricing scheme of the subtransmission grid operator was different from the one in Figure 5 and was dependent on the time of day. Therefore, the optimization problem changed at certain times, and the controller automatically adjusted the setpoints to drive them to the optimal solution of the new optimization problem. Figure 7 shows the behavior of the controller when the cost function changes. Note that the controller iteratively changes the setpoints over several steps. This iterative behavior is at the core of OFO, as it makes it possible to work with minimal model information. It can also be seen that there is a time delay of approximately four minutes between the setpoints being changed and the inverters adjusting their reactive power injections. The presence of this time delay means that voltage violations could persist for up to four minutes before the controller is able to mitigate them. Currently, the VDE 4105 norm allows temporary voltage violations for up to one minute [29]. We conclude that a sampling time of the controller of fewer than 10 seconds should be chosen for future implementations to guarantee that voltage violations are cleared within one minute.
The sensors in the grid only send a new measurement to the control room when the measured value has changed by more than 1%. The gathered data suggests that the controller has no problem working with such event-triggered measurements.
Note that the distribution grid is currently not experiencing voltage violations, and hence the virtual reinforcement feature of the controller could not be evaluated.
## VI Conclusion
Fig. 5: Cost function for reactive power flows into the subtransmission grid based on the pricing scheme of the subtransmission grid operator. The distribution grid operator has to pay a high penalty for capacitive flows and gets a small amount of money for inductive flows.

Fig. 6: Active and reactive power injections of the inverters in the month of January together with a line indicating a power factor of 0.8 inductive.

In this paper, we presented the retrofitting of existing grid infrastructure to optimize reactive power flows in a distribution grid using an OFO controller that controls reactive power injections of PV inverters. The controller also enforces voltage limits by adjusting the reactive power injections, which means there exists potential to virtually reinforce the grid by mitigating voltage limit violations. The implementation shows that the controller is robust against model mismatch, is compatible with the legacy grid infrastructure, and can work with triggered measurements. We consider this 24/7 implementation to be a system prototype demonstration in an operational environment and conclude that the OFO control method has therefore reached technology readiness level 7.
Further investigations are needed to quantify the monetary value of the virtual grid reinforcement that voltage control through reactive power can provide. Also, given the high technology readiness level, OFO might be considered for commercial control room software. Finally, the principle of defining a control problem as an optimization problem and then using an OFO controller to solve the optimization and therefore the control problem could be applied to more problems, e.g. active power curtailment, curative actions and automatic redispatch, disaggregation of flexibility commands onto several resources.
|
2301.05924 | Higher order degenerations of Fay's identities and applications to
integrable equations | Higher order degenerated versions of Fay's trisecant identity are
presented. It is shown that these lead to solutions for
Schwarzian Kadomtsev-Petviashvili equations. | C. Klein, J. Pillet | 2023-01-14T14:15:20Z | http://arxiv.org/abs/2301.05924v1 | # Higher order degenerations of Fay's identities and applications to integrable equations
###### Abstract.
Higher order degenerated versions of Fay's trisecant identity are presented. It is shown that these lead to solutions for Schwarzian Kadomtsev-Petviashvili equations.
This work is partially supported by the isite BFC, the EIPHI Graduate School (contract ANR-17-EURE-0002) and by the European Union Horizon 2020 research and innovation program under the Marie Sklodowska-Curie RISE 2017 grant agreement no. 778010 IPaDEGAN. We thank M. Pavlov for helpful discussions and hints.
## 1. Introduction
Solutions to integrable partial differential equations (PDEs) in terms of multi-dimensional theta functions on compact Riemann surfaces appeared in the 1970s in the search for quasi-periodic solutions, see for instance [5, 1] for a historic account. These solutions were constructed via the _Baker-Akhiezer_ function, a function with an essential singularity on the Riemann surface first introduced by Clebsch and Gordan. Mumford and coworkers introduced in [19] a complementary approach based on Fay's celebrated _trisecant identity_ for theta functions [7],
\[\Theta^{*}_{ad}\Theta^{*}_{cb}\Theta_{ac}\Theta_{bd}+\Theta^{*}_{ca}\Theta^{*}_{db}\Theta_{bc}\Theta_{ad}=\Theta^{*}_{cd}\Theta^{*}_{ab}\Theta\Theta_{a+b,c+d},\]
where we have introduced the notation
\[\Theta^{*}_{ab}=\Theta^{*}\left(\int_{a}^{b}\right),\quad\Theta_{ab}=\Theta \left(\mathrm{z}+\int_{a}^{b}\right); \tag{1}\]
here \(\Theta(\mathrm{z})\), \(\mathrm{z}\in\mathbb{C}^{g}\), is the \(g\)-dimensional Riemann theta function, \(\Theta^{*}(\mathrm{z})\) is a theta function with an odd non-singular characteristic, see the definitions (6), (7), \(\Theta=\Theta(\mathrm{z})\), \(\Theta^{*}=\Theta^{*}(0)=0\), and \(a,b,c,d\) are points on a Riemann surface \(\mathcal{R}\) with genus \(g\). The Abel map \(\int_{a}^{b}\) between two points \(a\) and \(b\) on \(\mathcal{R}\) is defined at the beginning of section 2. Note that the name trisecant identity refers to secants on the so-called Kummer variety, see [22] for a comprehensive review.
Since Fay's identity (9) holds for arbitrary points \(a\), \(b\), \(c\), \(d\) on the Riemann surface \(\mathcal{R}\), it is possible to consider the identity in the limit that two or more points coincide. This leads to identities between
derivatives of theta functions, making it possible to identify solutions to certain PDEs from degenerated identities. In [19] this was done for the Sine-Gordon equation and the Kadomtsev-Petviashvili (KP) equation. On special Riemann surfaces (hyperelliptic, trigonal) the latter solutions lead to algebro-geometric solutions for the Korteweg-de Vries (KdV) [19] and the Boussinesq equation [1]. In [14] previously known solutions to the Ernst equation [17] were reconstructed via Fay's identity, see also [15]; in [12] known solutions to the Camassa-Holm equation [9] were obtained with Mumford's approach. In [11] Kalla presented a new degenerated identity that makes it possible to identify known solutions to the nonlinear Schrodinger [10, 20] and Davey-Stewartson equations [18] and to construct solutions to vector nonlinear Schrodinger equations in terms of theta functions. For a recent review on completely integrable dispersive PDEs, we refer to [16]. In this paper we generalize Kalla's approach to higher order in the local parameter near the point \(a\). We obtain with the above notation
**Main theorem Part I**
Let \(a\), \(b\) be points on a compact Riemann surface \(\mathcal{R}\), and let \(U:=D_{b}\ln\Theta\Theta^{*}_{ba}\). Then \(U\) satisfies
\[\begin{split} 0=&2(D_{a}U)^{2}D_{a}^{\prime\prime}D_{a}U-2D_{a}UD_{a}^{\prime\prime}UD_{a}^{2}U-(D_{a}U)^{2}D_{a}^{4}U+4D_{a}UD_{a}^{3}UD_{a}^{2}U\\ &-3(D_{a}^{2}U)^{3}+3(D_{a}^{\prime}U)^{2}D_{a}^{2}U-3(D_{a}U)^{2}(D_{a}^{\prime})^{2}U. \end{split} \tag{2}\]
This identity has similarities to the classical identity (14) by Fay in the sense that it involves the derivatives \(D_{a}^{\prime\prime}\), \(D_{a}^{\prime}\) and \(D_{a}\) of \(\Theta\)(z), but appears to be new. In contrast to the potential in (14), the function \(U\) also depends on a point \(b\) on the Riemann surface \(\mathcal{R}\) which is distinct from \(a\), but otherwise arbitrary.
We also prove
**Main theorem Part II**
The function \(\phi(x,y,t):=\mathrm{D}_{b}\ln\Theta^{*}_{ab}\Theta(x\mathbf{v}_{0}(a)+y \mathbf{v}_{1}(a)+t\mathbf{v}_{2}(a)+\mathbf{d})\), \((x,y,t)\in\mathbb{R}^{3}\), solves the Schwarzian KP equation:
\[\Big{(}\frac{\phi_{t}}{\phi_{x}}-\frac{1}{2}\{\phi;x\}\Big{)}_{x}-\frac{3}{2} \Big{(}\frac{\phi_{y}}{\phi_{x}}\Big{)}_{y}-\frac{3}{4}\Big{(}\frac{\phi_{y}^ {2}}{\phi_{x}^{2}}\Big{)}_{x}=0 \tag{3}\]
where \(\{\phi;x\}\) denotes the Schwarzian derivative along \(x\), \(\{\phi;x\}:=\frac{\phi_{xxx}}{\phi_{x}}-\frac{3}{2}\Big{(}\frac{\phi_{xx}}{\phi_{x}}\Big{)}^{2}\), the indices denote partial derivatives with respect to the respective variable, and the vectors \(\mathbf{v}_{j}\), \(j=0,1,2\), have the components \(v_{ij}\), \(i=1,\ldots,g\), defined in (4).
The solution \(\phi\) in terms of multi-dimensional theta functions for the Schwarzian KP equation seems to be new. The Schwarzian KP equation (3) appeared first in the Painleve analysis of the KP equation in [23] as a singularity manifold equation. Its integrability was established
in [3]. As in the case of the KP equation, a reduction to a Schwarzian KdV and Boussinesq equation is possible.
The paper is organised as follows: in section 2 we collect some basic definitions of quantities defined on a compact Riemann surface and known facts on Fay's identities. In section 3 we rederive identities (12) and (16) from identity (10) and prove the first part of the main theorem. In section 4 this is applied to integrable PDEs. We add some concluding remarks in section 5.
## 2. Preliminaries
In this section, we will collect some basic definitions and known facts on Fay's identities and applications.
### Basic definitions
In this paper we always consider a Riemann surface \(\mathcal{R}\) of genus \(g\in\mathbb{N}\) equipped with a canonical basis of cycles \(a_{1},\dots,a_{g},b_{1},\dots,b_{g}\) satisfying the intersection conditions
\[a_{i}\circ b_{j}=\delta_{ij},\quad a_{i}\circ a_{j}=0,\quad b_{i}\circ b_{j}=0,\quad i,j=1,\dots,g.\]
The \(g\)-dimensional vector of holomorphic \(1\)-forms is denoted by \(\mathrm{d}\omega\) and normalized by \(\int_{a_{i}}\mathrm{d}\omega_{j}=\delta_{ij}\), \(i,j=1,\dots,g\). The matrix of \(b\)-periods \(\mathbb{B}_{ij}=\int_{b_{i}}\mathrm{d}\omega_{j}\), \(i,j=1,\dots,g\), is a Riemann matrix, i.e., it is symmetric and has a positive definite imaginary part. The Abel map \(\omega:P\mapsto\int_{P_{0}}^{P}\mathrm{d}\omega\) is an injective map from the Riemann surface \(\mathcal{R}\) into the _Jacobian_ \(Jac(\mathcal{R}):=\mathbb{C}^{g}/\Lambda\) where \(\Lambda\) is the lattice formed by the periods of the holomorphic \(1\)-forms,
\[\Lambda=\left\{\mathrm{m}+\mathbb{B}\mathrm{n}:m,n\in\mathbb{Z}^{g}\right\}.\]
The expansion of the Abel map at a point \(P\in\mathcal{R}\) near a point \(a\in\mathcal{R}\) is written in the form,
\[\omega_{i}(P)=\sum_{j=0}^{\infty}v_{ij}\frac{\tau^{j}}{j!},\quad i=1,\dots,g, \tag{4}\]
where \(\tau\) is a local parameter in the vicinity of \(a\) containing also \(P\). We define the derivatives acting on a function \(f(z)\), \(z\in\mathbb{C}^{g}\) as
\[\begin{split} D_{a}&:=\sum_{i=1}^{g}v_{i0}\partial_ {z_{i}},\quad D_{a}^{\prime}:=\sum_{i=1}^{g}v_{i1}\partial_{z_{i}},\quad D_{a} ^{\prime\prime}:=\sum_{i=1}^{g}\frac{v_{i2}}{2}\partial_{z_{i}},\\ D_{a}^{(n)}&:=\sum_{i=1}^{g}\frac{v_{in}}{n!} \partial_{z_{i}},\quad n\in\mathbb{N}.\end{split} \tag{5}\]
Multi-dimensional theta functions are the building blocks of meromorphic functions on Riemann surfaces. The theta function with characteristic \([\mathrm{p},\mathrm{q}]\) is defined as an infinite series,
\[\Theta_{\mathrm{pq}}(\mathrm{z},\mathbb{B})=\sum_{\mathrm{N}\in\mathbb{Z}^{g}} \exp\left\{\mathrm{i}\pi\left\langle\mathbb{B}\left(\mathrm{N}+\mathrm{p} \right),\mathrm{N}+\mathrm{p}\right\rangle+2\pi\mathrm{i}\left\langle\mathrm{z }+\mathrm{q},\mathrm{N}+\mathrm{p}\right\rangle\right\}\;, \tag{6}\]
with \(\mathrm{z}\in\mathbb{C}^{g}\) and \(\mathrm{p}\), \(\mathrm{q}\in\mathbb{R}^{g}\), where \(\left\langle\cdot,\cdot\right\rangle\) denotes the Euclidean scalar product \(\left\langle\mathrm{N},\mathrm{z}\right\rangle=\sum_{i=1}^{g}N_{i}z_{i}\). The properties of the Riemann matrix ensure that the series converges absolutely and that the theta function is an entire function on \(\mathbb{C}^{g}\). A characteristic is called _singular_ if the corresponding theta function vanishes identically. Half-integer characteristics with \(2\mathrm{p},2\mathrm{q}\in\mathbb{Z}^{g}\) are called _even_ if \(4\langle\mathrm{p},\mathrm{q}\rangle=0\bmod 2\) and _odd_ otherwise. Theta functions with odd (even) characteristic are odd (even) functions of the argument \(\mathrm{z}\). The theta function with characteristic is related to the Riemann theta function \(\Theta\), the theta function with zero characteristic \(\Theta:=\Theta_{00}\), via
\[\Theta_{\mathrm{pq}}(\mathrm{z},\mathbb{B})=\Theta(\mathrm{z}+\mathbb{B} \mathrm{p}+\mathrm{q})\exp\left\{\mathrm{i}\pi\left\langle\mathbb{B}\mathrm{p},\mathrm{p}\right\rangle+2\pi\mathrm{i}\left\langle\mathrm{p},\mathrm{z}+ \mathrm{q}\right\rangle\right\}\;. \tag{7}\]
A theta function with a nonsingular half-integer characteristic is denoted by \(\Theta^{*}\).
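Since the series (6) converges absolutely, it can be evaluated numerically by truncating the lattice sum. The following is a minimal sketch (the truncation bound \(K\), the Riemann matrix and the test point are arbitrary choices made for illustration) that also checks the parity properties stated above.

```python
import itertools
import numpy as np

def theta(z, B, p=None, q=None, K=5):
    """Truncated lattice sum for the theta function with characteristic [p, q], eq. (6).
    Im(B) positive definite makes the full series converge; K controls the truncation."""
    z = np.asarray(z, dtype=complex)
    g = z.size
    p = np.zeros(g) if p is None else np.asarray(p, dtype=float)
    q = np.zeros(g) if q is None else np.asarray(q, dtype=float)
    B = np.asarray(B, dtype=complex)
    val = 0j
    for N in itertools.product(range(-K, K + 1), repeat=g):
        n = np.asarray(N, dtype=float) + p
        val += np.exp(1j * np.pi * (n @ B @ n) + 2j * np.pi * ((z + q) @ n))
    return val

# sanity checks for g = 2
Bm = np.array([[1j, 0.1], [0.1, 1.5j]])
z0 = np.array([0.2 + 0.1j, -0.3j])
print(theta(z0, Bm) - theta(-z0, Bm))                   # ~0: Theta is even
ph, qh = np.array([0.5, 0.5]), np.array([0.5, 0.0])     # odd: 4<p,q> = 1 mod 2
print(theta(z0, Bm, ph, qh) + theta(-z0, Bm, ph, qh))   # ~0: odd characteristic
```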
### Fay's identities
Theta functions on Jacobians satisfy Fay's celebrated trisecant identity [7]. It can be seen as a generalization of the classical relation between cross-ratio functions for four arbitrary points \(a\), \(b\), \(c\), \(d\) in the euclidean plane

\[\lambda_{abcd}=\frac{\Theta^{*}(\int_{a}^{b})\Theta^{*}(\int_{c}^{d})}{\Theta^{*}(\int_{a}^{d})\Theta^{*}(\int_{c}^{b})}\,, \tag{8}\]
which is a function on \(\mathcal{R}\) that vanishes for \(a=b\) and \(c=d\) and has poles for \(a=d\) and \(b=c\).
**Theorem 2.1** (Fay [7]).: _Let \(a\), \(b\), \(c\), \(d\) be four points on the Riemann surface \(\mathcal{R}\). Then with the above definitions the following identity holds_
\[\begin{split}\lambda_{cabd}\,\Theta(\mathrm{z}+\int\limits_{b}^{c})\,\Theta(\mathrm{z}+\int\limits_{a}^{d})+\lambda_{cbad}\,\Theta(\mathrm{z}+\int\limits_{a}^{c})\,\Theta(\mathrm{z}+\int\limits_{b}^{d})&\\ =\Theta(\mathrm{z})\;\Theta(\mathrm{z}+\int\limits_{b}^{c}+\int\limits_{a}^{d})&\;,\end{split} \tag{9}\]
\(\forall\mathrm{z}\in\mathbb{C}^{g}\)_. The integration paths in (9) have to be chosen in a way not to intersect the canonical cycles._
Degenerated versions of Fay's identity lead to identities for derivatives of theta functions. In the limit \(d\to b\), one finds for (9)
**Corollary 2.2** (Fay [7]).: _Let \(a\), \(b\), \(c\) be points on the Riemann surface \(\mathcal{R}\). Then the following identity holds,_
\[D_{b}\ln\frac{\Theta(\mathrm{z}+\int_{a}^{c})}{\Theta(\mathrm{z})}=p_{1}(a,b,c) +p_{2}(a,b,c)\frac{\Theta(\mathrm{z}+\int_{b}^{c})\Theta(\mathrm{z}+\int_{a}^{ b})}{\Theta(\mathrm{z}+\int_{a}^{c})\Theta(\mathrm{z})}, \tag{10}\]
_where_
\[p_{1}(a,b,c) =D_{b}\ln\frac{\Theta^{*}(\int_{a}^{b})}{\Theta^{*}(\int_{c}^{b})}, \tag{11}\] \[p_{2}(a,b,c) =\frac{\Theta^{*}(\int_{a}^{c})D_{b}\Theta^{*}}{\Theta^{*}(\int_{ b}^{c})\Theta^{*}(\int_{b}^{a})}.\]
In the limit \(c\to a\), equation (10) yields
**Corollary 2.3** (Fay [7]).: _Let \(a\), \(b\) be points on the Riemann surface \(\mathcal{R}\). Then the following identity holds,_
\[D_{a}D_{b}\ln\Theta(\mathrm{z})=q_{1}(a,b)+q_{2}(a,b)\frac{\Theta(\mathrm{z}+ \int_{b}^{a})\Theta(\mathrm{z}+\int_{a}^{b})}{\Theta(\mathrm{z})^{2}}, \tag{12}\]
_where_
\[q_{1}(a,b)=D_{a}D_{b}\ln\Theta^{*}\Big{(}\int_{a}^{b}\Big{)},\qquad q_{2}(a,b)=\frac{D_{a}\Theta^{*}\,D_{b}\Theta^{*}}{\Theta^{*}\Big{(}\int_{a}^{b}\Big{)}^{2}}. \tag{13}\]

A further degeneration \(b\to a\) of (12) yields Fay's classical identity (14), which relates the derivatives \(D_{a}\), \(D_{a}^{\prime}\) and \(D_{a}^{\prime\prime}\) of \(\Theta(\mathrm{z})\) and underlies the algebro-geometric solutions of the KP equation [19]; we do not need its explicit form in the following. Moreover, Kalla [11] proved that there is a quantity \(K(a,b)\), independent of \(\mathrm{z}\), such that

\[\begin{split} K(a,b)=&D_{a}^{\prime}\ln\frac{\Theta_{ab}}{\Theta}+D_{a}^{2}\ln\frac{\Theta_{ab}}{\Theta}+2\left(D_{a}\ln\Theta_{ba}^{*}-\frac{D_{a}^{\prime}\Theta^{*}}{2D_{a}\Theta^{*}}\right)D_{a}\ln\frac{\Theta_{ab}}{\Theta}\\ &+2D_{a}^{2}\ln\Theta\Theta_{ab}^{*}+\left(D_{a}\ln\frac{\Theta_{ab}}{\Theta}\right)^{2}. \end{split} \tag{16}\]
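In the simplest case \(g=1\), identity (12) with the constants (13) can be checked numerically: the Abel map is the identity, \(D_{a}=D_{b}=\mathrm{d}/\mathrm{d}\mathrm{z}\), and \(\int_{a}^{b}\) is an ordinary complex number \(e\). The following sketch (with arbitrarily chosen \(\tau\), \(e\) and \(\mathrm{z}\), and simple central differences for the derivatives) illustrates this; it is a sanity check, not part of the proofs.

```python
import numpy as np

def th(z, tau, p=0.0, q=0.0, K=60):
    """One-dimensional theta function with characteristic [p, q], truncated sum."""
    n = np.arange(-K, K + 1) + p
    return np.sum(np.exp(1j*np.pi*tau*n*n + 2j*np.pi*(z + q)*n))

tau, e, z, h = 1j, 0.3, 0.1, 1e-4
T  = lambda w: th(w, tau)              # Theta, zero characteristic
Ts = lambda w: th(w, tau, 0.5, 0.5)    # Theta^*, odd characteristic [1/2, 1/2]
dd = lambda f, w: (f(w + h) - 2*f(w) + f(w - h)) / h**2   # second derivative

lhs = dd(lambda w: np.log(T(w)), z)                       # D_a D_b ln Theta(z)
q1  = dd(lambda w: np.log(Ts(w)), e)                      # first constant in (13)
q2  = ((Ts(h) - Ts(-h)) / (2*h))**2 / Ts(e)**2            # second constant in (13)
rhs = q1 + q2 * T(z + e) * T(z - e) / T(z)**2             # right hand side of (12)
print(lhs, rhs)  # the two values agree up to the finite-difference error
```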
## 3. Proof of Main theorem Part I
In this section we will consider various identities following from Fay's identity (10) in the limit \(c\to a\). First we identify the known relations (12) and (16), then we prove the first part of the main theorem.
To this end we write (10) in the form
\[V^{2}D_{b}\frac{V(c)}{V}=D_{b}\Theta^{*}\Theta^{*}_{ac}\Theta_{ab}\Theta_{bc}, \tag{17}\]
where we have put
\[V(c)=\Theta_{ac}\Theta^{*}_{cb},\quad V=V(a)=\Theta\Theta^{*}_{ab}. \tag{18}\]
### Known identities
For identity (17) we consider a Taylor expansion in the limit \(c\to a\) in the local parameter \(\tau\) in (4). In lowest order we get (12) in the form
\[D_{a}D_{b}\ln V=\frac{1}{V^{2}}D_{b}\Theta^{*}D_{a}\Theta^{*}\Theta_{ab}\Theta _{ba}. \tag{19}\]
In order \(\tau^{2}\), we get for (17)
\[\frac{V^{2}}{2}D_{b}(D_{a}^{2}\ln V+D_{a}^{\prime}\ln V+(D_{a}\ln V)^{2})=D_{b }\Theta^{*}D_{a}\Theta^{*}\Theta_{ab}\Theta_{ba}\left(\frac{D_{a}^{\prime} \Theta^{*}}{2D_{a}\Theta^{*}}+D_{a}\ln\Theta_{ba}\right),\]
which can be written with (19) in the form
\[\frac{1}{2}D_{b}(D_{a}^{2}\ln V+D_{a}^{\prime}\ln V+(D_{a}\ln V)^{2})=D_{a}D_{ b}\ln V\left(\frac{D_{a}^{\prime}\Theta^{*}}{2D_{a}\Theta^{*}}+D_{a}\ln\Theta_{ba }\right). \tag{20}\]
Since this identity holds for all \(\mathrm{z}\in\mathbb{C}^{g}\), it also holds for \(\mathrm{z}\) replaced by \(\mathrm{z}+\int_{a}^{b}\). This means \(\Theta\mapsto\Theta_{ab}\), \(\Theta_{ba}\mapsto\Theta\). As shown by Kalla [11], the difference between identity (20) and (20) after this shift of \(\mathrm{z}\) reads
\[0= \frac{1}{2}D_{b}\left(D_{a}^{\prime}\ln\frac{\Theta_{ab}}{\Theta }+\frac{D_{a}^{2}\Theta_{ab}}{\Theta_{ab}}-\frac{D_{a}^{2}\Theta}{\Theta}+2D_ {a}\ln\Theta^{*}_{ba}D_{a}\ln\frac{\Theta_{ab}}{\Theta}\right)\] \[+D_{a}D_{b}\ln\Theta\Theta^{*}_{ab}D_{a}\ln\Theta_{ba}-D_{a}D_{b} \ln\Theta_{ab}\Theta^{*}_{ab}D_{a}\ln\Theta-\frac{D_{a}^{\prime}\Theta^{*}}{2 D_{a}\Theta^{*}}D_{a}D_{b}\ln\frac{\Theta_{ab}}{\Theta}. \tag{21}\]
With (12), we get
\[0= \frac{1}{2}D_{b}\left(D_{a}^{\prime}\ln\frac{\Theta_{ab}}{\Theta} +D_{a}^{2}\ln\frac{\Theta_{ab}}{\Theta}+\left(D_{a}\ln\frac{\Theta_{ab}}{ \Theta}\right)^{2}+2D_{a}^{2}\ln\Theta\Theta^{*}_{ba}\right)\] \[+\left(D_{a}\ln\Theta^{*}_{ba}-\frac{D_{a}^{\prime}\Theta^{*}}{2D _{a}\Theta^{*}}\right)D_{a}D_{b}\ln\frac{\Theta_{ab}}{\Theta} \tag{22}\]
Introducing the derivative \(\nabla_{b}:=\sum_{i=1}^{g}v_{i0}(b)\partial_{z_{i}}\) acting only on z, we can write (22) in the form
\[0= \nabla_{b}\left\{\frac{1}{2}\left(D_{a}^{\prime}\ln\frac{\Theta_{ ab}}{\Theta}+D_{a}^{2}\ln\frac{\Theta_{ab}}{\Theta}\right)+\left(D_{a}\ln \Theta_{ba}^{*}-\frac{D_{a}^{\prime}\Theta^{*}}{2D_{a}\Theta^{*}}\right)D_{a} \ln\frac{\Theta_{ab}}{\Theta}\right.\] \[\left.+D_{a}^{2}\ln\Theta\Theta_{ab}^{*}+\frac{1}{2}\left(D_{a} \ln\frac{\Theta_{ab}}{\Theta}\right)^{2}\right\} \tag{23}\]
This implies that relation (16) holds with
\[K(a,b)= D_{a}^{\prime}\ln\frac{\Theta_{ab}}{\Theta}+D_{a}^{2}\ln\frac{ \Theta_{ab}}{\Theta}+2\left(D_{a}\ln\Theta_{ba}^{*}-\frac{D_{a}^{\prime}\Theta ^{*}}{2D_{a}\Theta^{*}}\right)D_{a}\ln\frac{\Theta_{ab}}{\Theta}\] \[+2D_{a}^{2}\ln\Theta\Theta_{ab}^{*}+\left(D_{a}\ln\frac{\Theta_{ ab}}{\Theta}\right)^{2} \tag{24}\]
where \(K(a,b)\) just depends on \(a\), \(b\), but not on z. It can be computed for instance by putting \(\mathrm{z}=0\) on the right hand side of (24). This reproduces the proof of Theorem 2.5 from [11].
### Third order in \(\tau\)
In third order of the local parameter \(\tau\) we get for (10)
\[0= D_{b}\left(\frac{1}{6}D_{a}^{\prime\prime}\ln V+\frac{1}{2}D_{a}D _{a}^{\prime}\ln V+\frac{1}{2}D_{a}^{\prime}\ln VD_{a}\ln V+\frac{1}{6}D_{a}^ {3}\ln V\right.\] \[\left.+\frac{1}{2}D_{a}^{2}\ln VD_{a}\ln V+\frac{1}{6}(D_{a}\ln V )^{3}\right)\] \[-D_{a}D_{b}\ln V\left(\frac{D_{a}^{\prime\prime}\Theta^{*}}{6D_ {a}\Theta^{*}}+\frac{D_{a}^{3}\Theta^{*}}{6D_{a}\Theta^{*}}+\frac{D_{a}^{ \prime}\Theta^{*}}{2D_{a}\Theta^{*}}D_{a}\ln\Theta_{ba}+\frac{1}{2}D_{a}^{ \prime}\ln\Theta_{ba}+\frac{D_{a}^{2}\Theta_{ba}}{2\Theta_{ba}}\right), \tag{25}\]
where we have used (19). With (20), we can replace \(\Theta_{ba}\) in (25) to obtain a relation only involving \(V\). To this end we put
\[F:=\frac{1}{2}D_{b}\left(D_{a}^{\prime}\ln V+D_{a}^{2}\ln V+(D_{a}\ln V)^{2}\right) \tag{26}\]
as well as
\[C_{1}:=\frac{D_{a}^{\prime}\Theta^{*}}{2D_{a}\Theta^{*}} \tag{27}\]
which leads to (20) in the form
\[D_{a}\ln\Theta_{ba}=\frac{F}{D_{a}D_{b}\ln V}-C_{1}. \tag{28}\]
In addition we put
\[\begin{split} G:=& D_{b}\left(\frac{1}{6}D_{a}^{\prime \prime}\ln V+\frac{1}{2}D_{a}D_{a}^{\prime}\ln V+\frac{1}{2}D_{a}^{\prime}\ln VD _{a}\ln V\right.\\ &\left.+\frac{1}{6}D_{a}^{3}\ln V+\frac{1}{2}D_{a}^{2}\ln VD_{a} \ln V+\frac{1}{6}(D_{a}\ln V)^{3}\right)\end{split} \tag{29}\]
and
\[C_{2}=\frac{D_{a}^{\prime\prime}\Theta^{*}}{6D_{a}\Theta^{*}}+\frac{D_{a}^{3} \Theta^{*}}{6D_{a}\Theta^{*}}. \tag{30}\]
with which (25) takes the form
\[G=D_{a}D_{b}\ln V\left(C_{2}+C_{1}D_{a}\ln\Theta_{ba}+\frac{1}{2}D_{a}^{\prime }\ln\Theta_{ba}+\frac{1}{2}D_{a}^{2}\ln\Theta_{ba}+\frac{1}{2}(D_{a}\ln\Theta_{ ba})^{2}\right). \tag{31}\]
Eliminating \(\Theta_{ba}\) with (28) from (31) leads in a first step to
\[\frac{G}{D_{a}D_{b}\ln V}-\frac{1}{2}D_{a}\left(\frac{F}{D_{a}D_{b}\ln V} \right)-\frac{F^{2}}{2(D_{a}D_{b}\ln V)^{2}}=C_{2}-\frac{C_{1}^{2}}{2}+\frac{ 1}{2}D_{a}^{\prime}\ln\Theta_{ba}. \tag{32}\]
Differentiating with respect to \(D_{a}\) (note that the derivatives of the odd theta functions in \(C_{1}\), \(C_{2}\) vanish), we obtain with (28)
\[D_{a}\left(\frac{G}{D_{a}D_{b}\ln V}-\frac{1}{2}D_{a}\left(\frac{F}{D_{a}D_{b} \ln V}\right)-\frac{F^{2}}{2(D_{a}D_{b}\ln V)^{2}}\right)=\frac{1}{2}D_{a}^{ \prime}\left(\frac{F}{D_{a}D_{b}\ln V}\right). \tag{33}\]
We have with \(U:=D_{b}\ln V\)
\[\begin{split}&\frac{G}{D_{a}D_{b}\ln V}-\frac{1}{2}D_{a}\left( \frac{F}{D_{a}D_{b}\ln V}\right)-\frac{F^{2}}{2(D_{a}D_{b}\ln V)^{2}}\\ &=\frac{D_{a}^{\prime\prime}U}{6D_{a}U}+\frac{D_{a}^{\prime}D_{a} U}{4D_{a}U}+\frac{1}{2}D_{a}^{\prime}\ln V\\ &-\frac{D_{a}^{3}U}{12D_{a}U}+\frac{(D_{a}^{2}U)^{2}}{8(D_{a}U)^ {2}}-\frac{(D_{a}^{\prime}U)^{2}}{8(D_{a}U)^{2}},\end{split} \tag{34}\]
Thus we get for (33)
\[\begin{split} 0&=\frac{D_{a}^{\prime\prime}D_{a}U}{6D_{a}U}-\frac{D_{a}^{\prime\prime}UD_{a}^{2}U}{6(D_{a}U)^{2}}-\frac{D_{a}^{4}U}{12D_{a}U}+\frac{D_{a}^{3}UD_{a}^{2}U}{3(D_{a}U)^{2}}\\ &-\frac{(D_{a}^{2}U)^{3}}{4(D_{a}U)^{3}}+\frac{(D_{a}^{\prime}U)^{2}D_{a}^{2}U}{4(D_{a}U)^{3}}-\frac{(D_{a}^{\prime})^{2}U}{4D_{a}U},\end{split} \tag{35}\]
identical to equation (2), which concludes the proof. Note that there are terms in (33) without a derivative \(D_{a}\), but remarkably these terms all cancel, leaving (35) a relation between terms that all involve this derivative.
## 4. Applications to integrable PDEs
In this section we will apply relation (2) to integrable equations, prove the second part of the main theorem and consider various reductions on special Riemann surfaces as known for the KP case.
### Main theorem Part II
To prove the second part of the main theorem, we define the function \(\phi(x,y,t):=\mathrm{D}_{b}\ln\Theta_{ab}^{*}\Theta(x\mathbf{v}_{0}(a)+y \mathbf{v}_{1}(a)+t\mathbf{v}_{2}(a)+\mathbf{d})\) and show that it solves the Schwarzian KP equation (3).
With our previous notations \(\phi=D_{b}\ln V=U\), moreover we can identify: \(D_{a}=\partial_{x}\), \(D_{a}^{\prime}=\partial_{y}\) and \(D_{a}^{\prime\prime}=\partial_{t}\). Inserting the function \(\phi\) into (3) we get:
\[\begin{split}&\frac{D_{a}^{\prime\prime}D_{a}U}{D_{a}U}-\frac{D_{a}^{2}U}{(D_{a}U)^{2}}D_{a}^{\prime\prime}U-\frac{1}{2}\frac{D_{a}^{4}U}{D_{a}U}+\frac{2}{(D_{a}U)^{2}}D_{a}^{2}UD_{a}^{3}U\\ &-\frac{3}{2}\frac{(D_{a}^{2}U)^{3}}{(D_{a}U)^{3}}+\frac{3}{2}\frac{D_{a}^{2}U}{(D_{a}U)^{3}}(D_{a}^{\prime}U)^{2}-\frac{3}{2}\frac{(D_{a}^{\prime})^{2}U}{D_{a}U}=0\end{split}\]
which is equivalent to (2) thus proving this part of the theorem.
**Remark 4.1**.: _It is well known, see for instance [19, 1], that one way to represent meromorphic functions on a Riemann surface is in terms of second logarithmic derivatives of theta functions. The function \(D_{b}\ln\Theta_{ab}^{*}\Theta\) is a priori not independent of the path between \(a\) and \(b\) in \(\int_{a}^{b}\mathrm{d}\omega\). Both points have to be in the same fundamental polygon. A possibility to avoid this condition would be to consider \(U=\partial_{x}^{-1}u\), where \(u:=D_{a}D_{b}\ln V\) is path independent; the anti-derivative is defined as \(\partial_{x}^{-1}=\int_{a}^{x}dx^{\prime}\)._
The Schwarzian KP equation can also be written in the form
\[\left(\frac{U_{t}}{U_{x}}-\frac{1}{2}\frac{U_{xxx}}{U_{x}}+\frac{3}{4}\frac{U _{xx}^{2}}{U_{x}^{2}}\right)_{x}+\frac{3}{2}\frac{U_{xx}}{U_{x}^{3}}U_{y}^{2} -\frac{3}{2}\frac{U_{yy}}{U_{x}}=0. \tag{36}\]
Integrating with respect to \(x\), we get
\[U_{t}-\frac{1}{2}U_{xxx}+\frac{3}{4}\frac{U_{xx}^{2}-U_{y}^{2}}{U_{x}}+\frac{ 3}{2}U_{x}\partial_{x}^{-1}\left(\frac{U_{xy}U_{y}}{U_{x}^{2}}-\frac{U_{yy}}{U _{x}}\right), \tag{37}\]
which can be written in the form
\[U_{t}-\frac{1}{2}U_{xxx}+\frac{3}{4}\frac{U_{xx}^{2}-U_{y}^{2}}{U_{x}}-\frac{ 3}{2}U_{x}W_{y}, \tag{38}\]
where \(W_{x}:=U_{y}/U_{x}\). This is equation (13) in [3] after the change of time \(t\mapsto 2t\).
### Reductions on special Riemann surfaces
Let us restrict our attention to the special case of the Riemann surface \(\mathcal{R}\) being hyperelliptic, i.e., given by the zero locus of the polynomial \(P(\lambda,\mu)=\mu^{2}-\prod_{j=1}^{N}(\lambda-\lambda_{j})\) where \(\lambda_{j}\in\mathbb{C}\), \(j=1,\ldots,N\), and \(N=2g+1\) or \(N=2g+2\), and denote by \(\pi:\mathcal{R}\longrightarrow\mathbb{CP}^{1}\) the projection onto the
Riemann sphere.
If \(a\) is a branch point of \(\pi\) then the hyperelliptic involution locally reads \(\sigma:k_{a}\mapsto-k_{a}\) and its pullback on \(\omega_{j}\) is given by \(\sigma^{*}\omega_{j}=-\omega_{j}\); hence the Taylor expansion of \(\omega_{j}\) around \(a\) must be even in \(k_{a}\), i.e., \(\mathbf{v}_{1}=0\) and thus \(\mathrm{D}^{\prime}_{\mathrm{a}}=0\). Eliminating \(\Theta_{ba}\) from (31) via (26), we get
**Corollary 4.2**.: _Identity (2) reduces on hyperelliptic surfaces with \(a\) being a branch point to_
\[0=\frac{1}{6}D^{\prime\prime}_{a}D_{b}\ln V-\frac{1}{12}D^{3}_{a}D_{b}\ln V-C_ {2}D_{a}D_{b}\ln V+\frac{1}{8}\frac{(D^{2}_{a}D_{b}\ln V)^{2}}{D_{a}D_{b}\ln V}. \tag{39}\]
If we put again \(U=D_{b}\ln V\), \(D^{\prime\prime}_{a}=\partial_{t}\) and \(D_{a}=\partial_{x}\), we get for (39)
\[\frac{1}{6}U_{t}-\frac{1}{12}U_{xxx}-C_{2}U_{x}+\frac{1}{8}\frac{U^{2}_{xx}}{ U_{x}}=0. \tag{40}\]
This implies
**Corollary 4.3**.: _The function \(\phi(x,t):=\mathrm{D}_{b}\ln\Theta^{*}_{ab}\Theta(x\mathbf{v}_{0}(a)+t\mathbf{ v}_{2}(a)+\mathbf{d})\) solves the Schwarzian KdV equation:_
\[\frac{1}{6}\frac{\phi_{t}}{\phi_{x}}-\frac{1}{12}\{\phi;x\}=C_{2}. \tag{41}\]
As in Chapter 3.4 of [1] for KP, there is also a reduction to a Schwarzian Boussinesq equation if the surface \(\mathcal{R}\) is given by a trigonal curve, i.e., a curve on which a meromorphic function with a third order pole at a point \(a\in\mathcal{R}\) and no other singularities exists. A simple example of such a curve is
\[\mu^{4}=\prod_{i=1}^{4}(\lambda-E_{i}). \tag{42}\]
In this case \(D^{\prime\prime}_{a}=0\) which leads for (2) to
\[0=-\frac{D^{4}_{a}U}{12D_{a}U}+\frac{D^{3}_{a}UD^{2}_{a}U}{3(D_{a}U)^{2}}-\frac{(D^{2}_{a}U)^{3}}{4(D_{a}U)^{3}}+\frac{(D^{\prime}_{a}U)^{2}D^{2}_{a}U}{4(D_{a}U)^{3}}-\frac{(D^{\prime}_{a})^{2}U}{4D_{a}U}. \tag{43}\]
The function \(\phi(x,t):=\mathrm{D}_{b}\ln\Theta^{*}_{ab}\Theta(x\mathbf{v}_{0}(a)+t\mathbf{ v}_{1}(a)+\mathbf{d})\) then gives a solution to the Schwarzian Boussinesq equation [23]
\[\Big{(}-\frac{1}{2}\{\phi;x\}\Big{)}_{x}-\frac{3}{2}\Big{(}\frac{\phi_{t}}{ \phi_{x}}\Big{)}_{t}-\frac{3}{4}\Big{(}\frac{\phi_{t}^{2}}{\phi_{x}^{2}}\Big{)} _{x}=0 \tag{44}\]
## 5. Conclusion
In this paper we have studied degenerations of Fay's identities in higher order of the local parameter \(\tau\) near one of the points. The starting point was Fay's identity for 3 points on a Riemann surface in the form (17). The case of order \(\tau^{3}\) was studied in detail as well as its application to the Schwarzian KP equation. It appears straightforward to generalize this approach to higher orders of the parameter
\(\tau\). A standard Taylor expansion of the quantity \(V(c)\) yields for the left hand side of equation (17)
\[D_{b}\sum_{m=1}^{\infty}\frac{1}{m!V}\left(\tau D_{a}+\frac{\tau^{2}}{2}D_{a}^{ \prime}+\frac{\tau^{3}}{6}D_{a}^{\prime\prime}+\ldots\frac{\tau^{k}}{k!}D_{a}^ {(k)}+\ldots\right)^{m}V. \tag{45}\]
Thus one gets in order \(\tau^{n}\)
\[D_{b}\left(\frac{D_{a}^{n}V}{n!V}+\frac{nD_{a}^{n-1}D_{a}^{\prime}V}{2(n-1)!V}+ \ldots+\frac{D_{a}^{(n)}V}{n!V}\right). \tag{46}\]
The same expansion can be obtained on the right hand side of (17) for \(\Theta_{ac}^{*}\), where all even order derivatives vanish for symmetry reasons, and for \(\Theta_{bc}\) leading to derivatives of \(\Theta_{ba}\). As in section 3, the latter terms can be replaced via (28) by derivatives of \(\Theta\). As in (33), it will be necessary to differentiate with respect to \(D_{a}\) in general in order to eliminate all terms with \(\Theta_{ba}\). It is beyond the scope of the current paper to detail the resulting relations and to establish a potential relation to integrable PDEs and whether these are from a hierarchy of Schwarzian KP equations. An interesting question is also whether a similar approach can be applied to the degeneration of identity (12) in the limit \(b\to a\) in higher orders of the local parameter near \(a\), which would lead to a generalisation of relation (14). This will be the subject of future research.
Another interesting aspect would be to relate the present work to the bilinear approach studied in [4]. In this article the authors describe the tau and Baker-Akhiezer functions associated to some generalized KP hierarchy (the Schwarzian KP hierarchy being one of them). Their derivation rests on a generalized Hirota bilinear identity based on the equation:
\[\int_{\partial G}\chi(\nu,\mu;g_{1})g_{1}(\nu)g_{2}^{-1}(\nu)\chi(\lambda,\nu ;g_{2})d\nu=0\]
where (following the notations of [4]) \(G\) is the unit disk, \((\lambda,\mu)\in\mathbb{C}^{2}\) are spectral parameters, \(g_{1}(\nu)=g(\nu,\mathbf{x})=\exp\big{(}\sum_{i=1}^{\infty}x_{i}\nu^{-i}\big{)}\), \(g_{2}(\nu)=g(\nu,\mathbf{x}^{\prime})\), and \(\chi(\lambda,\mu)\) is an unknown function, meromorphic in both variables.
The associated tau function is given by:
\[\chi(\lambda,\mu,\mathbf{x})=\frac{1}{(\lambda-\mu)}\frac{\tau(\mathbf{x}-[ \lambda]+[\mu])}{\tau(\mathbf{x})}\]
where \(\mathbf{x}+[\mu]\) has the components \(x_{i}+[\mu]_{i}\), with \([\mu]_{i}=\frac{1}{i}\mu^{i}\), \(1\leq i<\infty\). They proved that this tau function satisfies the following addition formula for arbitrary complex numbers \(a\), \(b\), \(c\) and \(d\):
\[\begin{split}(a-c)(d-b)\,\tau(\mathbf{x}+[a]+[c])\,\tau(\mathbf{x}+[d]+[b])&+\\ (d-a)(b-c)\,\tau(\mathbf{x}+[d]+[a])\,\tau(\mathbf{x}+[b]+[c])&+\\ (b-a)(d-c)\,\tau(\mathbf{x}+[b]+[a])\,\tau(\mathbf{x}+[d]+[c])&=0\end{split}\]
As established in the seminal paper [21], this addition formula is nothing more than Fay's trisecant identity when a tau function can be written in terms of Riemann theta functions. All these considerations lead to the following conjecture:
**Conjecture 5.1**.: _There exists a quadratic form \(\mathbf{Q}(\mathbf{t}):=\sum_{i,j=1}^{4}Q_{ij}t_{i}t_{j}\), \(Q_{ij}\in\mathbb{C}\), such that the function_
\[\tau(\mathbf{t}):=\exp{(\mathbf{Q}(\mathbf{t}))}\Theta(t_{1}\mathbf{v}_{0}(a )+t_{2}\mathbf{v}_{1}(a)+t_{3}\mathbf{v}_{2}(a)+t_{4}\mathbf{v}_{0}(b)+ \mathbf{d})\]
_is a tau function for the Schwarzian KP equation (3) in the sense of [4], the variables \(t_{1}\), \(t_{2}\), \(t_{3}\) being respectively identified with \(x\), \(y\), \(t\), and \(t_{4}\) being an auxiliary parameter._
This conjecture relies on the fact that the Schwarzian KP hierarchy is generated in a very similar fashion as the classical KP hierarchy [4]. Hence a tau function for the full Schwarzian KP hierarchy could be of the form \(\tilde{\tau}:=\exp{(\mathbf{Q}(\mathbf{t}))}\Theta(\mathbb{V}\mathbf{t}+\mathbf{d})\), with \(\mathbb{V}\) a \(g\times\infty\) matrix whose columns are the vectors associated to the different "time" variables, and the higher order degenerations discussed above could have an explicit and compact form when expressed with Hirota's symbols.
## Appendix A Interesting links with other integrable systems
To allow for an independent reading, we collect in this appendix some facts on the Schwarzian PDEs appearing naturally in the context of higher order degenerations of Fay's identity. The Schwarzian KP, KdV and Boussinesq equations originally appear in a series of papers by Weiss [23], [24], in an extensive study of the Painleve property of many integrable PDEs. These three equations were shown to be in some sense prototypical PDEs satisfying this property. They are also linked to some integrable systems appearing in physics through Backlund and Miura transforms. We would like to explain these relationships in this section.
First we can observe that these equations are invariant under a Mobius transformation, in the sense that if the function \(\phi\) solves the Schwarzian KP, Boussinesq or KdV equations then \(\psi:=\frac{A\phi+B}{C\phi+D}\) with \(AD-BC\neq 0\) is also a solution of these equations. Moreover this transformation plays the role of the Backlund transform for these three equations.
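The Möbius invariance is a purely local identity in the derivatives of \(\phi\) and can be verified symbolically. The following minimal sympy sketch checks it for the Schwarzian derivative itself and for ratios of first derivatives; since the three equations depend on \(\phi\) only through \(\{\phi;x\}\) and ratios like \(\phi_{y}/\phi_{x}\), their invariance follows.

```python
import sympy as sp

x, A, B, C, D = sp.symbols('x A B C D')
phi = sp.Function('phi')(x)

def schwarzian(f):
    # {f; x} = f_xxx/f_x - (3/2) (f_xx/f_x)^2
    fx = sp.diff(f, x)
    return sp.diff(f, x, 3)/fx - sp.Rational(3, 2)*(sp.diff(f, x, 2)/fx)**2

psi = (A*phi + B)/(C*phi + D)          # Mobius transformation, A*D - B*C != 0
print(sp.simplify(schwarzian(psi) - schwarzian(phi)))   # 0
# psi_x is phi_x times a common factor, so ratios of first derivatives are invariant:
print(sp.simplify(sp.diff(psi, x)/sp.diff(phi, x)
                  - (A*D - B*C)/(C*phi + D)**2))        # 0
```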
If \(\phi(x,y,t)\) solves the Schwarzian KP equation (3), then the Backlund transformed function \(u(x,y,t):=12\frac{\partial^{2}}{\partial x^{2}}\ln\phi+v\) with \(v=3\frac{\phi_{xx}^{2}}{\phi_{x}^{2}}-4\frac{\phi_{xxx}}{\phi_{x}}-\frac{\phi_{t}}{\phi_{x}}-\frac{\phi_{y}^{2}}{\phi_{x}^{2}}\) is a solution of the KP equation \(u_{tx}+u_{x}^{2}+uu_{xx}+u_{xxxx}+u_{yy}=0\).
Similarly the Schwarzian KdV equation is related to the classical KdV equation via the same type of Backlund transformation: let \(\phi(x,t)\) be a solution of \(\frac{\phi_{t}}{\phi_{x}}-\{\phi;x\}=\lambda\) then \(u=12\frac{\partial^{2}}{\partial x^{2}}\ln\phi+v\) is a solution of \(u_{t}+u_{xxx}+uu_{x}=0\) where \(v\) is given by \(v=3\frac{\phi_{xx}^{2}}{\phi_{x}^{2}}-4\frac{\phi_{xxx}}{\phi_{x}}-\frac{\phi _{t}}{\phi_{x}}\) (the very same Backlund transformation relates solutions of the Schwarzian Boussinesq equation to solutions of the classical Boussinesq equation). Moreover if we set \(\lambda=0\) then \(w:=2\frac{\phi_{x}}{\phi}+\frac{\phi_{xx}}{\phi_{x}}\) solves the modified KdV equation \(w_{t}+\frac{\partial}{\partial x}\big{(}w_{xx}-\frac{w^{3}}{2}\big{)}=0\).
Finally, under the change of variables (Miura transform):
\[x\rightarrow\phi,\quad t\to t,\quad\phi\to x\]
which implies
\[\{\phi;x\}=-\phi_{x}^{2}\{x;\phi\},\quad\phi_{x}=\frac{1}{x_{\phi}},\quad x_{ t}=-\frac{\phi_{t}}{\phi_{x}}.\]
Therefore, under this change of variables, the Schwarzian KdV equation becomes
\[x_{\phi}^{2}x_{t}=\lambda x_{\phi}^{2}+\{x;\phi\},\]
or replacing the Schwarzian derivative \(\{x;\phi\}\) by its explicit expression,
\[x_{t}=\lambda-\frac{1}{2}\Big{(}\frac{1}{x_{\phi}}\Big{)}_{\phi\phi}+\frac{3} {2}\Big{(}\frac{1}{x_{\phi}}\Big{)}_{\phi}^{2}.\]
With \(v:=x_{\phi}^{-1}\), the previous equation is equivalent to
\[v_{t}=v^{3}v_{\phi\phi\phi}\]
i.e. the Dym equation.
|
2304.09718 | Sample-efficient Model-based Reinforcement Learning for Quantum Control | We propose a model-based reinforcement learning (RL) approach for noisy
time-dependent gate optimization with improved sample complexity over
model-free RL. Sample complexity is the number of controller interactions with
the physical system. Leveraging an inductive bias, inspired by recent advances
in neural ordinary differential equations (ODEs), we use an auto-differentiable
ODE parametrised by a learnable Hamiltonian ansatz to represent the model
approximating the environment whose time-dependent part, including the control,
is fully known. Control alongside Hamiltonian learning of continuous
time-independent parameters is addressed through interactions with the system.
We demonstrate an order of magnitude advantage in the sample complexity of our
method over standard model-free RL in preparing some standard unitary gates
with closed and open system dynamics, in realistic numerical experiments
incorporating single shot measurements, arbitrary Hilbert space truncations and
uncertainty in Hamiltonian parameters. Also, the learned Hamiltonian can be
leveraged by existing control methods like GRAPE for further gradient-based
optimization with the controllers found by RL as initializations. Our algorithm
that we apply on nitrogen vacancy (NV) centers and transmons in this paper is
well suited for controlling partially characterised one and two qubit systems. | Irtaza Khalid, Carrie A. Weidner, Edmond A. Jonckheere, Sophie G. Shermer, Frank C. Langbein | 2023-04-19T15:05:19Z | http://arxiv.org/abs/2304.09718v2 | # Sample-efficient Model-based Reinforcement Learning for Quantum Control
###### Abstract
We propose a model-based reinforcement learning (RL) approach for noisy time-dependent gate optimization with improved sample complexity over model-free RL. Sample complexity is the number of controller interactions with the physical system. Leveraging an inductive bias, inspired by recent advances in neural ordinary differential equations (ODEs), we use an auto-differentiable ODE parametrised by a learnable Hamiltonian ansatz to represent the model approximating the environment whose time-dependent part, including the control, is fully known. Control alongside Hamiltonian learning of continuous time-independent parameters is addressed through interactions with the system. We demonstrate an order of magnitude advantage in the sample complexity of our method over standard model-free RL in preparing some standard unitary gates with closed and open system dynamics, in realistic numerical experiments incorporating single shot measurements, arbitrary Hilbert space truncations and uncertainty in Hamiltonian parameters. Also, the learned Hamiltonian can be leveraged by existing control methods like GRAPE for further gradient-based optimization with the controllers found by RL as initializations. Our algorithm that we apply on nitrogen vacancy (NV) centers and transmons in this paper is well suited for controlling partially characterised one and two qubit systems.
## I Introduction
Control of quantum devices for practical applications requires overcoming a unique set of challenges [1]. One is to find robust controls for noisy systems where typical noise sources include control and feedback noise, system parameter mischaracterization, measurement and state preparation errors, decoherence and cross-talk [2]. In order to achieve scalable, fault-tolerant quantum devices [3; 4; 5], control algorithms must produce controls resilient to such noise. Reinforcement learning (RL) approaches have been shown to be more likely to find robust controls [6] at the cost of requiring large amounts of measurements from the quantum device (samples). We propose a model-based RL approach to address this problem.
Typically, a quantum control problem is formulated as an open-loop optimization problem based on a model [1; 7; 8; 9] where the model may be constructed ab-initio or obtained via a process tomography approach. During optimization there is no interaction between the physical system to be controlled and the control algorithm. The underlying assumption is that the model represents the system sufficiently accurately. This class of control algorithms has low sample complexity (high sample efficiency) represented by the number of optimization function calls until successful termination. The reason for this efficiency is generally that an analytical model, in particular gradient information, can be leveraged. This is a strong assumption, at least in the noisy intermediate scale quantum era where noise impedes perfect characterization of quantum devices. However, the approach has merit, since significant thought goes into modelling and engineering quantum devices [10].
Alternatively, RL seeks an optimal control via interaction with the physical system, building models to various degrees. It successfully addresses challenging, noisy quantum control problems with the promise of inherent robustness [6; 11; 12; 13; 14; 15]. There are also gradient-free approaches [16] and methods estimating gradients using variations of automatic differentiation [10; 17; 18; 19; 20].
RL approaches utilizing only measurements without prior information do not suffer from model bias. Moreover, they usually optimize an average controller performance over the noise in the system, yielding inherently robust controllers [12]. However, this means the number of optimization function calls becomes prohibitively large, and RL's high sample complexity is a core problem limiting its practical applicability [21]. This is not surprising: without a prior model, considerably less information is available to the optimization, and all of it must be obtained via measurements.
In classical RL, high sample complexity is typically addressed using model-based methods, which construct a model from the measurements from scratch. Such methods result in reduced sample complexity for benchmark problems [22]. They are successful if the model and the measurements (samples) obtained during training possess some generalizability [23; 24] that is captured by a function approximator (usually a neural network). However, methods involving universal function approximation of dynamic trajectories are unstable. This is because learning can be hindered by the very large space of trajectories, and interpolating from insufficient sample trajectories can be shallow or incorrect [25]. More importantly, for quantum data, it is known that a time-independent Hamiltonian can generate many unitary propagators, so estimating the model may imply learning the entire Hilbert space of propagators for a particular control problem, which is often intractable. This motivates learning the dynamical generator, i.e., the Hamiltonian, instead of the propagators.
In this paper, we propose a model-based RL method for time-dependent, noisy gate preparation where the model is an ordinary differential equation (ODE), differentiable with respect to model parameters [26]. ODE trajectories do not intersect [27; 28; 29] which constrains the space of potential models for learning and makes learning robust to noise. We parameterise the Hamiltonian by known time-dependent controls and unknown time-independent system parameters, which, in addition, makes the model interpretable. We show that combining the inductive bias from this ODE model with partially correct knowledge (assuming we know the controls, but not the time-independent system Hamiltonian) reduces the sample complexity compared to model-free RL by roughly at least an order of magnitude.
It has recently been shown that inductive biases, i.e. encoding the symmetries of the problem into the architecture of the model space, such as the translation equivariance of images in the convolution operation [30], leads to stronger out-of-distribution generalization by the learned model. This is because inductive biases impose strong priors on the space of models such that training involves exploring a smaller subset of the space to find an approximately correct model.
We demonstrate improvement over the sample efficient soft-actor critic (SAC) model-free RL algorithm for performing noisy gate control in three settings that correspond to leading quantum computing architectures: nitrogen vacancy (NV) centers (one and two qubits) [31], and transmons (two qubits) [32], subject to dissipation and single-shot measurement noise. We also show that the learned Hamiltonian can be leveraged to optimize the controllers found by our RL method further using GRAPE [7; 9].
We note that our approach is similar in spirit to Ref. [33] where a novel Hamiltonian learning protocol via quantum process tomography is proposed for the purpose of model-predictive control. The complete Hamiltonian (including the control and system parts) is identified term by term via a Zero-Order Hold (ZOH) method where only one term is turned on at a time, e.g., by setting the control parameters to zero, and it is learnt individually using optimization over the Stiefel manifold. The learnt Hamiltonian is then used to obtain a viable control sequence for a variety of state and gate preparation problems for closed (unitary) systems under the influence of initial state preparation errors. While it is possible for our Hamiltonian learning protocol to also learn the full Hamiltonian using the ZOH method, we instead solely focus on the problem of improving the sample complexity of RL in this paper through the incorporation of a partially known physics-inspired model. Furthermore, our focus is also directed on the interplay of concurrently learning the model and controlling the system in noisy closed and open system settings.
This paper is organised as follows: in Sec. II, we define the open and closed system control problems, including our setup to simulate single-shot measurements, and the RL control framework; Sec. III describes the model-based version of the RL control framework; and Sec. IV presents numerical studies for some example control problems on the system architectures described above, in noisy and ideal settings, as well as how to leverage the learnt system Hamiltonian using GRAPE.
## II The quantum control problem
We briefly introduce the quantum control problem for open and closed quantum systems and describe how we estimate the propagators from measurements, needed for our RL approach.
### Closed System Dynamics
Consider a quantum system that is represented by an effective Hamiltonian \(H(t)\) in the space of complex Hermitian \(n\times n\) matrices
\[H(\mathbf{u}(t),t)=H_{0}+H_{c}(\mathbf{u}(t),t), \tag{1}\]
where \(H_{0}\) is the time-independent system Hamiltonian and \(H_{c}\) is the control Hamiltonian parametrised by time-dependent controls \(\mathbf{u}(t)\). Its closed-system dynamics are governed by the Schrödinger equation,
\[\frac{\mathrm{d}U(\mathbf{u}(t),t)}{\mathrm{d}t}=-\frac{i}{\hbar}H(\mathbf{u} (t),t)U(\mathbf{u}(t),t),\quad U(t=0)=\mathds{1}, \tag{2}\]
where \(U(\mathbf{u}(t),t)\) is the unitary propagator representing the state evolution. Its fidelity to realize a target gate \(U_{\mathrm{target}}\) is
\[F(U_{\mathrm{target}},U(\mathbf{u}(t),t))=\frac{1}{n^{2}}\left|\mathrm{Tr} \Big{[}U_{\mathrm{target}}^{\dagger}U(\mathbf{u}(t),t)\Big{]}\right|^{2}. \tag{3}\]
The control problem to implement \(U_{\mathrm{target}}\) is
\[\mathbf{u}^{*}(t^{*})=\operatorname*{arg\,max}_{\mathbf{u}(t),\ t\leqslant T}F (U_{\mathrm{target}},U(\mathbf{u}(t),t)), \tag{4}\]
where \(\mathbf{u}^{*}(t^{*})\) are the optimised control parameters for an optimised final time \(t^{*}\leqslant T\).
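As a point of reference for the discretised computations below, Eq. (3) is a one-line numerical operation; the following is a minimal numpy sketch (an illustrative helper of ours, not code from the paper):

```python
import numpy as np

def gate_fidelity(U_target: np.ndarray, U: np.ndarray) -> float:
    """Eq. (3): normalised squared Hilbert-Schmidt overlap of two unitaries."""
    n = U.shape[0]
    return float(np.abs(np.trace(U_target.conj().T @ U))**2 / n**2)

# Example: a global phase leaves the fidelity invariant.
X = np.array([[0, 1], [1, 0]], dtype=complex)
print(gate_fidelity(X, np.exp(1j * 0.3) * X))  # ~1.0
```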
### Open System Dynamics
For open system dynamics consider an arbitrary state with density matrix \(\rho\) for \(\log_{d}n\) qudits evolving according to the master equation [34; 35]
\[\frac{\mathrm{d}\rho(t)}{\mathrm{d}t}=-\frac{i}{\hbar}[H(\mathbf{u}(t),t),\rho]+ \mathfrak{L}(\rho(t)), \tag{5}\]
where \(\mathfrak{L}(\rho(t))\) describes the Markovian decoherence and dephasing dynamics (i.e., the environment),
\[\mathfrak{L}(\rho(t))=\sum_{d}\gamma_{d}\left(l_{d}\rho l_{d}^{\dagger}-\frac{ 1}{2}\{l_{d}l_{d}^{\dagger},\rho\}\right)\!, \tag{6}\]
and \(l_{d}\) is a decoherence operator, which is crucially not unitary. To characterize the gate implemented by \(\mathbf{u}(t)\), we need to consider the evolution of a complete orthonormal basis of states, \(\{\rho_{k}\}_{k=1}^{N^{2}}\). For this we introduce the Liouville superoperator matrix \(\mathbf{X}\) that acts on an arbitrary vectorised state \(\mathbf{\rho}\) (e.g., obtained by stacking the matrix columns) to produce the evolution
\[\mathbf{\rho}(t)=\mathbf{X}(t)\mathbf{\rho}(t=0). \tag{7}\]
This is equivalent to the tensor-matrix evolution [36]
\[\rho(t)_{mn}=\sum_{\mu,\nu}X_{nm,\nu\mu}(t)\rho_{\mu\nu}(t=0). \tag{8}\]
\(X_{nm,\nu\mu}(t)\) is a fourth order tensor-matrix form of \(\mathbf{X}(t)\) that encodes the evolution of the state element \(\rho_{\mu\nu}\).
Thus, similar to Eq. (2), we define a superoperator \(X(\mathbf{u}(t),t)\) which encodes the evolution of \(\{\rho_{k}\}_{k=1}^{N^{2}}\) and follows the linear ODE
\[\frac{\mathrm{d}\mathbf{X}(\mathbf{u}(t),t)}{\mathrm{d}t}=-\frac{i}{\hbar}( \mathbf{L}_{0}+i\mathbf{L}_{1})\mathbf{X}(\mathbf{u}(t),t),\quad\mathbf{X}(t =0)=\mathds{1} \tag{9}\]
where \(\mathbf{L}_{0},\mathbf{L}_{1}\) represent the superoperator version of the commutator map \([H(\mathbf{u}(t),t),\cdot]\) and \(\mathfrak{L}(\cdot)\) the Markovian decoherence and dephasing dynamics.
Note that we factorize out an imaginary prefactor \(i\) to the left in Eq. (9) to unify the ODE for open and closed system dynamics. For \(\mathfrak{L}\equiv\mathbf{0}\), the above reduces to the closed system dynamics of Eq. (2). For open dynamics, to be faithful to experimental limitations, we implement single-shot noise when estimating the gate, i.e., process tomography. We transform the superoperator \(X_{nm,\nu\mu}\) to the Choi matrix \(\Phi/\operatorname{Tr}[\Phi]\) that is given by index reshuffling or partial transpose (and more formally a contravariant-covariant change of coordinates) [36; 37],
\[\Phi_{nm,\mu\nu}=X_{\nu m,\mu n}. \tag{10}\]
In Sec. IV, we use this for open and closed dynamics. Estimating \(\Phi\) is possible using ancilla-assisted quantum process tomography (AAPT) and the Choi-Jamiolkowski isomorphism [38; 39; 40] for \(2\log_{d}n\)-qudit states and \(\log_{d}n\)-qudit gates. Analogously to the above, \(\Phi\) has a matrix version \(\mathbf{\Phi}\). In this paper, we decompose \(\mathbf{\Phi}\) over a basis \(\{P_{k}\}_{k=1}^{n^{4}}\) of the generalised \(\mathrm{SU}(n^{2})\) algebra, e.g., generalised Gell-Mann matrices [41],
\[\frac{\mathbf{\Phi}}{\operatorname{Tr}[\mathbf{\Phi}]}=\frac{\mathds{1}}{n^{2}}+\sum _{k=2}^{n^{4}-1}q_{k}P_{k} \tag{11}\]
whose coefficients are
\[q_{k}=\frac{\operatorname{Tr}[P_{k}\mathbf{\Phi}]}{\operatorname{Tr}[\mathbf{\Phi}]} \in[-1,1]. \tag{12}\]
\(q_{k}\) can be modelled as a binomial random variable \(\mathrm{Bin}(M,p_{k})\) with probability \(p_{k}=\frac{1}{2}(1+q_{k})\) where \(M\) is the number of single-shot (Bernoulli) measurements [42].
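For illustration, the single-shot statistics of the coefficients \(q_{k}\) can be simulated directly from this binomial model; a minimal numpy sketch with illustrative values (our construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_q(q: np.ndarray, M: int) -> np.ndarray:
    """Estimate coefficients q_k in [-1, 1] from M single-shot measurements each."""
    p = 0.5 * (1.0 + q)            # Bernoulli success probability per shot
    counts = rng.binomial(M, p)    # one binomial draw per observable
    return 2.0 * counts / M - 1.0  # unbiased estimator of q

q_true = np.array([0.3, -0.7, 0.0])
print(noisy_q(q_true, M=10**6))    # close to q_true; error scales as 1/sqrt(M)
```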
We measure the faithfulness of the implemented gate \(\mathbf{\Phi}(\mathbf{u}(t),t)\) w.r.t. the target gate (as another Choi state) \(\mathbf{\Phi}_{\text{target}}\) using the generalised state-fidelity [43],
\[F(\mathbf{\Phi}(\mathbf{u}(t),t),\mathbf{\Phi}_{\text{target}}) =\operatorname{Tr}[\mathbf{\Phi}(\mathbf{u}(t),t)\mathbf{\Phi}_{\text{ target}}] \tag{13}\] \[=\frac{1}{n^{4}}+\sum_{k=2}^{n^{4}-1}q_{k}^{\text{target}}q_{k}.\]
Analogously to the closed case, the open control problem is to find an optimal control \(\mathbf{u}^{*}(t^{*})\) for an optimal final time \(t^{*}\leqslant T\) (with \(T\) being the fixed upper bound), such that
\[\mathbf{u}^{*}(t^{*})=\operatorname*{arg\,max}_{\mathbf{u}(t),\ t\leqslant T}F (\mathbf{\Phi}(\mathbf{u}(t),t),\mathbf{\Phi}_{\text{target}}). \tag{14}\]
### Discretization
The exact solution of the general time-dependent dynamics introduced above (Eqs. (2) and (9)) is given by the time-ordered operator
\[\mathbf{E}(t^{*},\mathbf{u}^{*}(t^{*}))=\mathcal{T}\exp\left(\int_{0}^{t^{*}} dt^{\prime}\,-\frac{i}{\hbar}\mathbf{G}(t^{\prime},\mathbf{u}^{*}(t^{\prime}))\right)\]
for a unitary/Lindbladian generator \(\mathbf{G}\). In practice, we solve for a piece-wise constant version of the dynamics, represented by \(N\) fixed steps of \(\Delta t=T/N\) of the final time \(T\). Thus, \(\mathbf{E}(\mathbf{u}(t),t)\) is discretised, which amounts to fixing \(\mathbf{u}(t)=\mathbf{u}_{m}\) to be constant within each timestep, such that \(\mathbf{u}_{m}\in\mathbb{C}^{m\times C}\) is a finite-dimensional array. Here \(C\) is the number of controls per timestep in the vector \(u_{l}\) parametrizing \(H_{c}(u_{l},t_{l})\), and \(m\) is the total number of timesteps in the pulse, with \(m\leqslant N\) for a maximum number of pulse segments \(N\). The propagator is
\[\mathbf{E}(t,\mathbf{u}(t)):=\mathbf{E}(\mathbf{u}_{m})=\prod_{l=1}^{m}\exp \!\left(-\frac{i}{\hbar}\Delta t\mathbf{G}(t_{l},\mathbf{u}(t_{l}))\right)\!. \tag{15}\]
The control problems in Eqs. (4) and (14) are equivalent to
\[\mathbf{u}_{m}^{*}=\operatorname*{arg\,max}_{\mathbf{u}_{m}=[u_{1},\ldots,u_{m }]\in\mathbb{X},m\leqslant N}\mathcal{F}(\mathbf{\Phi}(\mathbf{E}(\mathbf{u}_{m})), \mathbf{\Phi}(\mathbf{E}_{\text{target}})) \tag{16}\]
for a fidelity \(\mathcal{F}\) and final time \(t^{*}=m\Delta t\). Note that \(\mathbf{u}_{m}\) is constrained to some maximum and minimum values given by \(\mathbb{X}=\{\mathbf{u}_{m}:\forall c,l\;\;u_{\min}\leqslant u_{cl}\leqslant u_{\max}\in\mathbb{C}\}\). The constraints are applied separately to the real and imaginary parts of the components of \(\mathbf{u}_{m}\).
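To make the discretised problem concrete, the sketch below implements the piecewise-constant propagator of Eq. (15) for the closed-system case, with \(\hbar=1\) and placeholder drift/control matrices (our own illustration, not the paper's code):

```python
import numpy as np
from scipy.linalg import expm

def propagator(H0, Hc_list, u, dt):
    """Eq. (15), closed case: ordered product of step propagators with
    H(t_l) = H0 + sum_c u[l, c] * Hc_list[c], constant on each step."""
    E = np.eye(H0.shape[0], dtype=complex)
    for u_l in u:                             # timesteps l = 1, ..., m
        H = H0 + sum(c * Hc for c, Hc in zip(u_l, Hc_list))
        E = expm(-1j * dt * H) @ E            # later steps act on the left
    return E

# Example: one qubit with a sigma_z drift and sigma_x/sigma_y controls.
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
u = np.random.uniform(-1, 1, size=(20, 2))    # m = 20 steps, C = 2 controls
E = propagator(sz, [sx, sy], u, dt=0.05)
print(np.allclose(E.conj().T @ E, np.eye(2))) # unitarity check: True
```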
## III Model-based reinforcement learning control
We give a brief overview of RL, followed by explaining our model-based RL approach. An excellent introduction can be found in Ref. [21].
### Reinforcement Learning for Quantum Control
The RL problem is usually treated as a sequential Markov decision problem (MDP) on the space of states, actions, transition probabilities and rewards: \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R})\). This describes an environment for consecutive one-step transitions, indexed by \(k=1,2,\dots\), from current state \(\mathbf{s}_{k}\in\mathcal{S}\) to next state \(\mathbf{s}_{k+1}\in\mathcal{S}\) if an RL agent executes action \(\mathbf{a}_{k}\in\mathcal{A}\), yielding immediate scalar reward \(\mathrm{r}_{k}\in\mathcal{R}\). The environment is generally probabilistic, so \(\mathcal{P}(\mathbf{s}_{k+1}\,|\,\mathbf{s}_{k},\mathbf{a}_{k})\) is the probability that the agent is in state \(\mathbf{s}_{k+1}\) after executing \(\mathbf{a}_{k}\) in state \(\mathbf{s}_{k}\). An RL agent follows a policy function that is represented by a conditional probability distribution \(\pi(\mathbf{a}_{k}\,|\,\mathbf{s}_{k})\): the probability of taking action \(\mathbf{a}_{k}\) after observing the state \(\mathbf{s}_{k}\).
The quantum control problem can be represented as an RL problem by sequentially constructing the control amplitudes as actions, using the unitary propagator the control implements as state with the reward as fidelity:
\[\mathbf{a}_{k} =u_{k}, \tag{17a}\] \[\mathbf{s}_{k} =\prod_{l=1}^{k}\exp\biggl{(}-\frac{i}{\hbar}\Delta t\mathbf{G}( t_{l},u_{l})\biggr{)},\] (17b) \[\mathrm{r}_{k} =\mathcal{F}(\mathbf{\Phi}(\mathbf{E}(\mathbf{u}_{k})),\mathbf{ \Phi}(\mathbf{E}_{\mathrm{target}})). \tag{17c}\]
As this is deterministic the probabilities \(\mathcal{P}\) are trivial, and we have a simple environment function \(\mathcal{E}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\times\mathcal{R}\), mapping the current state and action \((s,a)\) to the next state and reward \((s^{\prime},r)\). In model-free RL (see Algorithm 1), a discounted sum of expected rewards, called the returns,
\[\eta(\pi):=\mathds{E}_{\mathbf{a}_{t}\sim\pi}\left[\sum_{k=0}^{\infty}\gamma ^{k}\,\mathrm{r}_{k}\right] \tag{18}\]
is maximised, where \(\mathds{E}_{x\sim P}[\cdot]=\int_{\mathcal{X}}dx\;P(x)[\cdot]\) is the expectation operator.
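Since the mapping in Eq. (17) is deterministic, the environment can be coded directly; the following minimal sketch (class interface and names are ours, with \(\hbar=1\)) also includes a finite-episode version of the return in Eq. (18):

```python
import numpy as np
from scipy.linalg import expm

class GateEnv:
    """Deterministic MDP of Eq. (17): state = running propagator, reward = fidelity."""
    def __init__(self, H0, Hc_list, U_target, dt, n_steps):
        self.H0, self.Hc, self.U_target = H0, Hc_list, U_target
        self.dt, self.n_steps = dt, n_steps

    def reset(self):
        self.k = 0
        self.s = np.eye(self.H0.shape[0], dtype=complex)
        return self.s

    def step(self, a):
        H = self.H0 + sum(c * Hc for c, Hc in zip(a, self.Hc))
        self.s = expm(-1j * self.dt * H) @ self.s
        self.k += 1
        n = self.s.shape[0]
        r = float(np.abs(np.trace(self.U_target.conj().T @ self.s))**2 / n**2)
        return self.s, r, self.k >= self.n_steps   # state, reward, done flag

def returns(rewards, gamma=0.99):
    """Discounted return of Eq. (18), truncated to one finite episode."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
env = GateEnv(H0=0.5 * sz, Hc_list=[sx], U_target=sx, dt=0.1, n_steps=20)
s, rewards, done = env.reset(), [], False
while not done:
    s, r, done = env.step(np.array([0.5]))   # a constant (non-optimised) action
    rewards.append(r)
print(returns(rewards))
```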
```
1: Initialize empty dataset \(\mathcal{D}\), parametrised random policy \(\pi_{\theta}\), \(k\gets 0\)
2: Observe initial state \(s_{0}\)
3: while \(k<T/\Delta t\) do
4:   Execute \(\mathbf{a}_{k}\leftarrow\pi_{\theta}\left(\cdot\,|\,\mathbf{s}_{k}\right)\)
5:   Observe \(\mathbf{s}_{k+1}\), \(\mathrm{r}_{k}\leftarrow\mathcal{E}(\mathbf{s}_{k},\mathbf{a}_{k})\)
6:   Store \(\mathcal{D}\leftarrow\mathcal{D}\cup\{(\mathbf{s}_{k},\mathbf{s}_{k+1},\mathbf{a}_{k},\mathrm{r}_{k})\}\)
7:   \(k\gets k+1\)
8:   // if an update is required: perform a model-free update of parameters (e.g. of the policy \(\pi_{\theta}\))
```
**Algorithm 1** Reinforcement learning loop
In this paper, we use the soft actor-critic (SAC) algorithm [44] as our base (model-free) RL algorithm. For brevity, we only highlight parts of SAC relevant to us. A detailed description can be found in the original paper [44]. We use a neural network policy function \(\pi_{\theta}(\mathbf{a}_{k}\,|\,\mathbf{s}_{k})\), with the optimizable parameters \(\theta\), as the actor and the state-action value function \(Q_{\phi}(\mathbf{s}_{k},\mathbf{a}_{k})=\mathds{E}_{(\mathbf{s}_{k},\mathbf{a}_{k})\sim\mathcal{E}_{\pi}}\left[\sum_{k=0}^{\infty}\gamma^{k}(\mathrm{r}(\mathbf{s}_{k},\mathbf{a}_{k})+\alpha J_{1}(\mathbf{s}_{k}))\right]\) as the neural network critic with parameters \(\phi\). Both \(\pi\) and \(Q\) are simple multilayer perceptrons. In essence, the critic is used to reduce the high variance in the reward function due to the non-stationary nature of the MDP. It is trained by matching its predictions to the estimated \(\tilde{Q}\) values computed from data \(\{\mathbf{s}_{k},\mathbf{s}_{k+1},\mathbf{a}_{k},\mathrm{r}_{k}\}_{k=1}^{b}\) obtained from a \(b\)-length rollout (number of interactions) with \(\mathcal{E}\). The actor is trained by minimizing the loss function
\[J^{\prime}(\pi_{\theta})=\mathds{E}_{(\mathbf{s}_{k},\mathbf{a}_{k})\sim \mathcal{E}_{\pi_{\theta}}}\left[\alpha\log\pi_{\theta}(\mathbf{a}_{k}\,|\, \mathbf{s}_{k})-Q_{\phi}(\mathbf{s}_{k},\mathbf{a}_{k})\right], \tag{22}\]
which is equivalent to maximizing \(J\) in Eq. (19). For SAC, this policy optimization is carried out heuristically, using neural networks to approximate the policy function \(\pi_{\theta}\). We define the number of agent-environment interactions needed to find an approximately optimal policy \(\pi^{*}\) as the _sample complexity_. Moreover, the policy outputs parametrize the mean and covariance \(\mathbf{\mu},\mathbf{\Sigma}\) of a multivariate Gaussian \(\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\) from which the control vector \(\mathbf{u}\) is drawn. For the quantum control problem in Eq. (16), we are usually just concerned with finding an optimal action sequence \(\mathbf{u}^{*}\) producing the maximum intermediate reward \(\mathrm{r}_{k}\), rather than the optimal policy function \(\pi^{*}\); such a sequence can also be produced by a sub-optimal policy.
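For concreteness, one minimal way to realize such a bounded Gaussian action head is sketched below; the tanh squashing is our assumption (a common choice in SAC implementations), and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_action(mu, log_std, u_min=-1.0, u_max=1.0):
    """Draw u ~ N(mu, diag(exp(log_std)**2)) and squash into [u_min, u_max]."""
    raw = mu + np.exp(log_std) * rng.standard_normal(mu.shape)  # reparametrised sample
    squashed = np.tanh(raw)                                     # maps to (-1, 1)
    return u_min + 0.5 * (squashed + 1.0) * (u_max - u_min)

print(sample_action(np.zeros(2), np.log(0.3) * np.ones(2)))
```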
### Model-Based Reinforcement Learning
SAC can be augmented to incorporate a model \(\mathbf{M_{\zeta}}(\mathbf{s}_{k},\mathbf{a}_{k})\) that approximates the dynamics of \(\mathcal{E}(\mathbf{s}_{k},\mathbf{a}_{k})\) using the policy's interaction data \(\mathcal{D}\)[24] where \(\zeta\) are the model's learnable parameters. The model acts as a proxy for the environment and allows the policy to do MDP rollouts/steps to augment the interaction data. For this to work, the dynamics obtained from interacting with \(\mathbf{M_{\zeta}}\) must be close enough to the true dynamics of \(\mathcal{E}\) to allow the policy to maximize \(J\). By improving the returns \(\tilde{\eta}(\pi)\) on the model \(\mathbf{M_{\zeta}}\) by at least a tolerance factor that depends on this dynamical modelling error, the policy's true returns \(\eta(\pi)\) on the environment are guaranteed to improve ([24], see App. C for a detailed mathematical discussion). See Fig. 1 for an illustration of model-based RL. A good choice of the model function class, therefore, can impose strong and beneficial constraints on the space of possible predicted dynamics and thus lead to a smaller modelling error and returns' tolerance factor or allow the model to reduce the tolerance factor greatly after consuming an appropriate amount of training data.
Our choice of the model's functional form is motivated by the two ideas presented in the introduction: (a) Incorporating correct partial knowledge about the physical system in the model ansatz parameters; (b) encoding the problem's symmetries and structure into model predictions as function space constraints. For the system in Eq. (1) we assume that the controls are partially characterised to address (a). Specifically, its time-dependent control structure \(H_{c}\) is known. We achieve (b) by parametrizing the system Hamiltonian \(H_{0}^{(L)}(\mathbf{\zeta})\) with learnable parameters \(\mathbf{\zeta}\), where \(L\) is the number of qubits. We make the model \(\mathbf{M_{\zeta}}\) a differentiable ODE whose generator is interpretable and has the form
\[H_{\mathbf{\zeta}}(\mathbf{u}(t),t) =H_{0}^{(L)}(\mathbf{\zeta})+H_{c}(\mathbf{u}(t),t)\] \[=\sum_{l=1}^{n^{2}}\zeta_{l}P_{l}+H_{c}(\mathbf{u}(t),t) \tag{23}\]
where \(\zeta_{l}=\operatorname{Tr}[P_{l}H_{0}(t)]\in[-1,1]\) are real. Generally, like the Choi state, \(H_{0}/\operatorname{Tr}[H_{0}]\) admits an arbitrary decomposition in terms of a basis \(\{\mathds{1}\}\cup\{P_{l}\}_{l=1}^{n^{2}-1}\) of the \(\mathrm{SU}(n)\) algebra. Analogously, for an open system, we parametrize the time-independent part of any dissipation dynamics in addition to the system Hamiltonian using an \(\mathrm{SU}(n^{2})\) algebra parametrization: \(\mathbf{G}_{0}^{(L)}(\boldsymbol{\zeta}^{\mathrm{diss}})=\sum_{l}\zeta_{l}^{\mathrm{diss}}P_{l}\) in the full generator \(\mathbf{G}_{\boldsymbol{\zeta}}\).

Figure 1: A schematic of model-based RL is given in (a). The arrow-head indicates the direction of influence along the edge between a source and a sink node. The agent or policy function \(\pi_{\theta}\) interacts with the RL environment modelled as an MDP to collect data \(\{\mathbf{s}_{k},\mathbf{s}_{k+1},\mathbf{a}_{k},\mathrm{r}_{k}\}\). This encompasses model-free RL. The data is then used to train the model \(\mathbf{M_{\zeta}}(\mathbf{s}_{k},\mathbf{a}_{k})\). The model is trained until some quality measure, like the validation prediction error on some untrained-upon data from the environment, plateaus, indicating that the training is complete. Then, it is used to generate synthetic data through a \(b\)-step rollout in which the policy interacts with the model \(b\) times. The policy parameters \(\theta\) (and the state-action value function parameters \(\phi\)) are optimised using the real and model-generated data. In (b), we visualize the policy inputs as the gate (unitary or Lindblad) characterizing observables of the Choi matrix \(\mathbf{\Phi}\) given by Eq. (12); the tunable outputs are the parameters of a multivariate Gaussian distribution, i.e., the mean \(\mathbf{\mu}\) and covariance \(\mathbf{\Sigma}\). The controls \(u_{i}\) are drawn from \(\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma})\).
The model is trained by minimizing the regression loss for single timestep predictions using data uniformly sampled \(D\sim\mathcal{D}\) where \(\mathcal{D}\) represents the entire dataset,
\[L_{\mathrm{model}}(D)=\sum_{D}\left(\mathbf{M}_{\boldsymbol{\zeta}}\left( \mathbf{s}_{k},\mathbf{a}_{k}\right)-\mathbf{s}_{k+1}\right)^{2}. \tag{24}\]
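To make this fitting step concrete, the following is a deliberately simplified single-qubit sketch in which a learnable drift is regressed from one-step transitions, mirroring the ansatz of Eq. (23) with the control part assumed known. It replaces the paper's auto-differentiable ODE solver with scipy's matrix exponential and a generic gradient-free optimiser; all names and values are ours:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
paulis, dt = [sx, sy, sz], 0.05

def model_step(zeta, s, a):
    """One-step prediction: known sigma_x/sigma_y controls, learnable drift."""
    H = sum(z * P for z, P in zip(zeta, paulis)) + a[0] * sx + a[1] * sy
    return expm(-1j * dt * H) @ s

def loss(zeta, data):
    """Eq. (24): squared prediction error over transitions (s_k, a_k, s_{k+1})."""
    return sum(np.sum(np.abs(model_step(zeta, s, a) - s_next)**2)
               for s, a, s_next in data)

# Synthetic transitions generated by a hidden drift zeta_true.
zeta_true = np.array([0.2, 0.0, 1.0])
rng = np.random.default_rng(2)
data, s = [], np.eye(2, dtype=complex)
for _ in range(30):
    a = rng.uniform(-1, 1, size=2)
    s_next = model_step(zeta_true, s, a)
    data.append((s, a, s_next))
    s = s_next

fit = minimize(loss, x0=np.zeros(3), args=(data,), method='Nelder-Mead',
               options={'maxiter': 5000, 'fatol': 1e-12})
print(fit.x)  # approaches zeta_true up to optimiser tolerance
```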
To understand why a differentiable ODE ansatz is a good choice for the model, we need to define an ODE path that is given by \(\phi_{t}:\mathbf{E}(0)\xrightarrow{H_{\boldsymbol{\zeta}}}\mathbf{E}(T)\) generated by \(H_{\boldsymbol{\zeta}}\) for some time \(t\in[0,T]\) and propagator \(\mathbf{E}\). The ansatz is a good choice because of the following two properties of ODE paths: (a) they do not intersect and (b) if paths \(\phi_{0}^{(A)}\), \(\phi_{0}^{(B)}\) start close compared to path \(\phi_{0}^{(C)}\), then paths \(\phi_{t}^{(A)}\), \(\phi_{t}^{(B)}\) remain close compared to path \(\phi_{t}^{(C)}\).
Both properties are well known [46; 47] for ODEs and become very useful when we try to predict the trajectories from noisy quantum data by imposing strong priors on the space of learnable Hamiltonians. Property (b) is a consequence of Grönwall's inequality [47] and essentially can be interpreted as: ODE flows that start off closer (w.r.t. the initial condition) stay closer (w.r.t. the final condition). Both (a) and (b) essentially imply a sort of intrinsic robustness of the ODE flow \(\phi_{t}(\mathbf{z}_{0})\) to perturbations on \(\mathbf{z}_{0}\) [29]. They constrain the trajectories predicted by the model \(\mathbf{M}_{\boldsymbol{\zeta}}\) to be intrinsically robust to small noise in the states \(\mathbf{s}_{k}\) and inaccuracies in the learned system Hamiltonian \(H_{0}^{(L)}(\boldsymbol{\zeta})\).
We call the SAC equipped with this differentiable ODE model the learnable Hamiltonian model-based SAC (LH-MBSAC), as listed in Algorithm 2. Crucially, we note that LH-MBSAC generalizes the SAC by allowing the policy to interact with both the ODE model and the physical system. LH-MBSAC gracefully falls back to the model-free SAC in the absence of a model with low prediction error, as measured by the performance of the model's predictions on an unseen validation set of interaction data. Note that the threshold or tolerance level for switching to the agent-model interaction part of the algorithm is likely problem-dependent and thus needs to be selected along with other hyperparameters in RL. However, this allows us to improve the sample complexity of model-free reinforcement learning when possible, by leveraging knowledge about the controllable quantum system, yet still be able to control the system in a model-free manner if this isn't possible.
## IV Experiments
We demonstrate the performance of LH-MBSAC on three quantum systems that are of current interest in open and closed settings with shot noise.
To warm up, the first system \(\tilde{H}_{\mathrm{NV}}^{(1)}\) is a single-qubit Hamiltonian model for an NV center with microwave pulse control [48],
\[\frac{H_{\mathrm{NV}}^{(1)}(t)}{\hbar}=2\pi\kappa\sigma_{z}+\underbrace{2\pi \Omega\left(\Delta_{1}(t)\sigma_{x}+\Delta_{2}(t)\sigma_{y}\right)}_{\tilde{H}_ {c}(t)}, \tag{25}\]
where \(\kappa=1\) MHz is the microwave frequency detuning, \(\Omega=1.4\) MHz is the Rabi frequency and the control field parameters are \(\mathbf{u}_{j}(t)=\Delta_{j}(t)\) in the range \(\mathbb{X}_{\mathrm{NV}}^{(1)}=\{-1\leqslant\Delta_{ij}\leqslant 1\}\). The final time is \(20\)\(\mu\)s.
The second system \(H_{\mathrm{NV}}\) is again NV center based, but for two qubits [31]. This system is realised in the system subspace using microwave pulses of approximately \(0.5\) MHz and is given by
\[\frac{H_{\mathrm{NV}}(t)}{\hbar} =|1\rangle\!\langle 1|\otimes\left(-\left(\nu_{z}+a_{zz} \right)\sigma_{z}-a_{zx}\sigma_{x}\right) \tag{26}\] \[+|0\rangle\!\langle 0|\otimes\nu_{z}\sigma_{z}+\underbrace{\sum_{l=x,y,j=1}^{2}\sigma_{j}^{(l)}\Delta_{lj}(t)}_{H_{c}(t)}\]
where \(\nu_{z}=0.158\) MHz, \(a_{zz}=-0.152\) MHz and \(a_{zx}=-0.11\) MHz, \(\sigma_{j}^{(l)}\) is the \(l\)th Pauli operator on qubit \(j\) and \(\Delta_{lj}(t)\) is a time-dependent control field. The range of control is \(\mathbb{X}_{\mathrm{NV}}^{(2)}=\{-1\;\mathrm{MHz}\leqslant\Delta_{lj}\leqslant 1 \;\mathrm{MHz}\}\) and the final time is \(T=20\)\(\mu\)s.
The third system \(\tilde{H}_{\mathrm{tra}}^{(L)}\) is the \(L\)-qubit effective Hamiltonian model for cavity quantum electrodynamics (cQED) [32] for two or more transmons/qubits as a proxy for the IBM quantum circuits [49],
\[\frac{H_{\mathrm{tra}}^{(L)}(t)}{\hbar} =\sum_{j=1}^{L}\omega_{j}\hat{b}_{j}^{\dagger}\hat{b}_{j}+\frac{ \kappa_{j}}{2}\hat{b}_{j}^{\dagger}\hat{b}_{j}(\hat{b}_{j}^{\dagger}\hat{b}_{ j}-\mathds{1}) \tag{27}\] \[+J\sum_{j=1}^{L}(\hat{b}_{j}^{\dagger}\hat{b}_{j+1}+\hat{b}_{j} \hat{b}_{j+1}^{\dagger})+\underbrace{\sum_{j=1}^{L}\Delta_{j}(t)(\hat{b}_{j} +\hat{b}_{j}^{\dagger})}_{H_{c}(t)}\]
This model is comprised of Duffing oscillators with frequency \(\omega_{j}=5\) GHz representing the qubits, with an anharmonicity \(\kappa_{j}=0.2\) GHz, qubit coupling \(J\) and a control field per qubit \(\Delta_{j}\). Note that this is a special case of the Bose-Hubbard model [50], with \(\hat{b}_{j}\) representing the boson annihilation operator on the \(j\)th qubit. The control field \(\mathbf{u}_{j}(t)=\Delta_{j}(t)\) is real by construction, in addition to the extra constraints imposed on the space of possible controls \(\mathbb{X}\). The range of control is given by \(\mathbb{X}_{\mathrm{tra}}^{(2)}=\{-0.2\;\mathrm{GHz}\leqslant\Delta_{ij}\leqslant 0.2\;\mathrm{GHz}\}\) and the final time is \(T=20\)\(\mu\)s.
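For orientation, the drift part of Eq. (27) for \(L=2\) can be assembled in a truncated local Fock space as in the minimal numpy sketch below (the truncation dimension and the coupling value \(J\) are illustrative assumptions):

```python
import numpy as np

d = 3                                        # local Fock-space truncation (illustrative)
b = np.diag(np.sqrt(np.arange(1, d)), k=1)   # bosonic annihilation operator
I = np.eye(d)
n_op = b.conj().T @ b
omega, kappa, J = 5.0, 0.2, 0.01             # GHz; J is an assumed coupling value

def duffing(n):
    """Single Duffing oscillator: omega*n + (kappa/2)*n*(n - 1)."""
    return omega * n + 0.5 * kappa * n @ (n - np.eye(d))

H0 = (np.kron(duffing(n_op), I) + np.kron(I, duffing(n_op))
      + J * (np.kron(b.conj().T, b) + np.kron(b, b.conj().T)))
print(H0.shape, np.allclose(H0, H0.conj().T))  # (9, 9) True: Hermitian drift
```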
We demonstrate results of LH-MBSAC, benchmarked against its model-free counterpart, SAC, for a one- and two-qubit NV center \(H_{\mathrm{NV}}^{(1)},H_{\mathrm{NV}}^{(2)}\) and the two-qubit transmon \(H_{\mathrm{tra}}^{(2)}\). For the two-qubit systems, the target gate is the CNOT, and for the one-qubit system, it is the Hadamard gate. Pulses are discretised in accordance with the scheme introduced in Sec. II.3 for a number of timesteps \(N=20\). We follow the parameter restrictions for all systems introduced in Refs. [31; 32; 48; 10]. Moreover, due to limited support in our autodifferentiation library [51], we simulate the complex dynamics by mapping the complex ODE to two real coupled ODEs [52] (see App. A for more details on our ODE solver).
The remainder of this section is organised as follows. In Sec. IV.1, we demonstrate a sample complexity improvement for the different control problems discussed above in a noisy closed setting. For the subsequent sections, we focus on the two-qubit transmon control problem, since the results were similar for the other systems that we studied. In Sec. IV.2, we study the effect of the Hamiltonian error from truth on the sample complexity. Sec. IV.3 shows how the learnt Hamiltonian in LH-MBSAC can be further utilised to do model-based control using gradient-based methods like GRAPE. Sec. IV.4 extends results from the closed setting to the noisy open system setting. Finally, in Sec. IV.5, we highlight some limitations and silver linings of the LH-MBSAC and the RL for control approach and provide promising ideas to circumvent some of the issues.
### Sample Efficiency for Closed System Control
Here we consider only closed system control, with and without single-shot measurements (shots), cf. Eqs. (4) and (14). Unitary control (with closed system dynamics) is implemented for shots as a special case of open system control where the dissipation operators \(\mathfrak{L}\) are \(0\).
The Choi operator \(\mathbf{\Phi}\) corresponding to the gate realised by the controls is obtained from sampling from the binomial distribution in Eq. (12) with \(M=10^{6}\) shots per measurement operator. By Hoeffding's inequality, we know that with probability \(1-0.01\) the error in the estimator of \(q_{l}\) is of order \(10^{-3}\). More generally, with probability \(1-\delta\), for \(\epsilon\) error, we require \(O(\log\frac{1}{\delta}/\epsilon^{2})\) measurements. The AAPT (see Sec. II.2) uses \(M\times 3^{L}\) shots in total for \(3^{L}\) possible measurement operators, which is quite expensive. Further restrictions on the structure of \(\mathbf{\Phi}\) imposed by a \(k\)-local Hamiltonian, where qubit interactions up to only the nearest \(k\leqslant L\) qubits are assumed, allow the shot cost to go down to \(O(4^{k}(\log M)/\epsilon^{2})\), which is asymptotically optimal [53].
The results for LH-MBSAC and model-free SAC for the one- and two-qubit control problems are shown in Fig. 2. We consider LH-MBSAC's performance in the shot-noise setting by estimating the gate using its corresponding estimated Choi state \(\mathbf{\Phi}\), as well as using the AAPT scheme with \(10^{6}\) shots per observable. The sample complexity of LH-MBSAC to achieve a maximum fidelity significantly improves, by at least an order of magnitude, upon the model-free baseline in both cases, although the improvement is more significant for the two-qubit transmon. We randomly initialize the learnable system Hamiltonian using the Pauli basis parametrization in Eq. (23) with coefficients \(\zeta_{i}\sim\text{Uniform}(-1,1)\). The environment data buffer \(D_{\mathcal{E}}\), i.e., the size of the initial exploration data set using random control actions, comprises 1, 20, or 100 pulse sequences for the NV center and transmon systems. A more detailed discussion of the amount of training data needed for Hamiltonian learning is presented in App. D.
It is used to learn the system Hamiltonians \(H_{0_{\text{trn}}}^{(2)}\), \(H_{0_{\text{NV}}}\) via supervised learning of \(\mathbf{M_{\zeta}}\) until a validation loss of around \(10^{-3}\times 2^{2q}\times\texttt{train batch size}\) is reached, after which we switch to the model \(\mathbf{M_{\zeta}}\) to generate synthetic samples to train the policy \(\pi\). Note that here \(q\) is the number of qubits: \(q=2\) for the theoretical unitary and \(q=4\) for the Choi state (due to the Choi-Jamiolkowski isomorphism in AAPT).
### Sample Complexity as a Function of Hamiltonian Error
Here, we study the tradeoff between sample complexity and error in the model's ansatz Hamiltonian \(H_{0}(\mathbf{\zeta})^{(L)}\) compared to the true system Hamiltonian \(H_{0}\). We find it to be highly non-linear. This Hamiltonian error \(\delta\) is defined by
\[\delta=\left\|H_{0}(\mathbf{\zeta})^{(L)}-H_{0}\right\| \tag{28}\]
where \(\|\cdot\|\) is the spectral norm (the largest singular value) of \(H_{0}(\mathbf{\zeta})^{(L)}-H_{0}\). The non-linear dependence of the sample complexity of LH-MBSAC on \(\delta\) for the two-qubit transmon control problem, in the case where we do not learn \(H_{0_{\text{trn}}}^{(2)}\), is shown in Fig. 3(a)-(e), where we consider \(\delta=0.01,0.02,0.05,0.1,0.2\). This effectively corresponds to keeping everything unchanged in Algorithm 2, except that we do not minimize the loss \(L_{\text{model}}(D_{\text{train}})\) of Eq. (24) to update our model; instead, the model is set to have a fixed Hamiltonian error \(\delta\) compared to the true Hamiltonian.
We note that the \(\delta=0.02,0.05,0.1\) results show worse performance compared to \(\delta=0.2\) for the theoretical unitary control problem (without measurement noise). This indicates that some model system Hamiltonians \(H_{0}(\mathbf{\zeta})\) with a larger \(\delta\) predict dynamics that are more consistent with the true system Hamiltonian \(H_{0}\) dynamics than \(H_{0}(\mathbf{\zeta})\) with a smaller \(\delta\). However, learning \(H_{0_{\text{trn}}}^{(2)}\) restores improved performance in all the shown cases, for both the theoretical unitary and the shots' control problem.
To make our empirical results more intuitive, we make use of the following bound obtained from Ref. [54] for the unitary prediction error of the ODE model w.r.t. the environment for our idealised control problem Eq. (4),
**Proposition 1**.: _(informal) The following is true for the difference between the unitary model's predicted state \(U_{\mathbf{\zeta}}\) and the environment's unitary state \(U_{\mathcal{E}}\),_
\[\left\|U_{\mathcal{E}}-U_{\mathbf{M_{\zeta}}}\right\|_{\infty,t} \\ \leqslant t^{2}\delta\left(\frac{1}{t}+\frac{2}{t}\|H_{e}\|_{1,t} +\|H_{\mathbf{\zeta}}\|+\|H_{\mathcal{E}}\|\right) \tag{29}\]
Figure 2: Fidelity \(\mathcal{F}\) of Hadamard gate controls for (a) \(H_{\text{NV}}^{(1)}\); and of CNOT gate controls for (b) \(H_{\text{NV}}\) and (c) \(H_{\text{tra}}^{(2)}\) as a function of the number of environment \(\mathcal{E}\) calls. The mean fidelity over 100 controllers is plotted as a solid line with the shading indicating two standard deviations and the maximum fidelity is indicated by the dashed line. LH-MBSAC or model-free SAC with just the unitary tag indicates the closed system control problem in Eq. (4) and shots are indicated likewise. For transmons we terminate the algorithm early after a maximum fidelity \(\mathcal{F}>0.98\) is reached for LH-MBSAC with and without shots. Sample complexity of LH-MBSAC is significantly improved for the two-qubit transmon and the NV center over model-free SAC for the closed system control problem and for shots. We average these results over three seeds of each algorithm run. Seed refers to running the algorithm from scratch with a fresh set of randomly initialised parameters.
_where \(\|\cdot\|\) is the spectral norm and for some linear operator \(A\), we have \(\|A\|_{\infty,t}=\sup_{s\in[0,t]}\|A(s)\|\) and \(\|A\|_{1,t}=\int_{0}^{t}ds\|A(s)\|\)._
Proof.: See App. B
From this, we infer that the unitary model prediction error or the supervised learning regression loss \(L_{\text{model}}(D)\) in Eq. (24) being small does not imply closeness between the learned and true system Hamiltonian, i.e., \(\delta\to 0\). This is illustrated for the two-qubit transmon control problem in Fig. 4(a). Note that there is also a lot of variation in the unitary model prediction error, even for the same value of \(\delta\). But we can see that with decreasing \(\delta\) the variation decreases, which is also explained by the above bound. Moreover, we confirm that the unitary model prediction error grows as a function of time, which makes intuitive sense, since predictions far into the future must necessarily accumulate more error than their time-wise preceding counterparts.
### Leveraging the Learned Hamiltonian with GRAPE
Prop. 1 paves the way to learning system Hamiltonians that are locally consistent with the unitary trajectories they generate. By local we mean that the learnt Hamiltonian is consistent with the true Hamiltonian on a subset of all possible generatable trajectories. During the model's \(\mathbf{M_{\zeta}}\) training phase, \(H_{0}(\boldsymbol{\zeta})\) is made consistent with random trajectories drawn from the data buffer \(D_{\mathcal{E}}\) by minimizing the regression loss \(L_{\text{model}}(D_{\mathcal{E}})\). This allows us to learn a model of the environment that can predict locally consistent unitary trajectories (i.e., at the scale of the control problem). In other words, the learned system Hamiltonian \(H_{0}(\boldsymbol{\zeta})\) does not have to coincide with the true system Hamiltonian \(H_{0}\) for it to be useful for the optimal control task. Indeed, we take the Hamiltonian learned for the two-qubit transmon in Fig. 2(c) and find that it has \(\delta=0.91509\). Despite this large discrepancy between the true and learned system Hamiltonians, we find mostly good local agreement between the two trajectories they induce, thanks to the supervised training phase of the model. Furthermore, we use the learned 'local' \(H_{0}(\mathbf{\zeta})\) and the controllers found by LH-MBSAC for the model-based GRAPE control algorithm [7, 9] to optimize the fidelities for those controllers further. Note that the fidelities after applying GRAPE are evaluated w.r.t. the true system Hamiltonian \(H_{0}\). Usually LH-MBSAC/SAC controllers have fidelities \(0.995>\mathcal{F}>0.98\), which are improved to \(\mathcal{F}>0.99\). We show in Fig. 4(b) the local and global trajectories corresponding to \(H_{0}(\mathbf{\zeta})\) and \(H_{0}\), which shows that the two unitary trajectories w.r.t. the CNOT fidelity do not always coincide. In (c), we show the RL controllers being optimised further using the learned \(H_{0}(\mathbf{\zeta})\) with GRAPE.

Figure 3: Sample complexity or \(\mathcal{E}\) calls of LH-MBSAC for the two-qubit transmon control problem as a function of spectral norm error \(\delta\), quantifying closeness of the learned system Hamiltonian \(H_{0}(\boldsymbol{\zeta})\) and the true system Hamiltonian \(H_{0}\). The cases for \(\delta=0.01,0.02,0.05,0.1,0.2\) are plotted in (a)–(e). The mean fidelity over 100 controllers is plotted as a solid line with the shading indicating two standard deviations and the maximum fidelity is indicated by the dashed line. \(M=\infty\) denotes the setting without measurement noise, where the exact unitary is seen by the algorithm; otherwise, the unitary is estimated using AAPT with \(M=10^{6}\) shots per observable characterizing the Choi state. The blue line indicates a baseline setting where no learning of \(H_{0}(\boldsymbol{\zeta})\) occurs, whilst the orange is the setting with learning. We confirm a non-linear dependence of \(\mathcal{E}\) calls on \(\delta\) and that a small \(\delta\) does not necessarily correspond to better trajectories and a smaller unitary trajectory prediction error. Learning \(H_{0}(\boldsymbol{\zeta})\) restores performance in the theoretical unitary (no measurement noise) and the shots' setting (unitary with measurement noise).
### Open System Control with Single Shot Measurements
Due to the interpretable nature of our ODE model's ansatz in Eq. (23), it is pertinent to ask if two competing but linear terms in the model \(\mathbf{M_{\zeta}}\) can be learned simultaneously. In the previous sections, we only learn one term represented by \(H_{0}(\mathbf{\zeta})\). Utilizing the open system formulation of the control problem in Sec. II.2, we consider Lindblad dissipation along with shot noise for the two-qubit transmon control problem in Eq. (14). Specifically, we consider the decay operator \(\mathfrak{L}^{(l)}_{\text{diss}}=\sqrt{\frac{2}{R_{l}^{*}}}b_{l}b_{l}^{\dagger}\), acting on the \(l\)th qubit, and the decoherence operator \(\mathfrak{L}^{(l)}_{\text{deco}}=\sqrt{\frac{2}{R_{l}}}b_{l}\) for \(l=1,2\). \(R_{l}^{*}\) and \(R_{l}\) are the decay and decoherence rates. Both operators are time-independent. The Lindblad dissipation term \(\mathbf{L}_{1}\) comprising these operators, that is learned, is, thus, also time-independent.
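For reference, the time-independent superoperator pieces entering Eq. (9) can be assembled by vectorisation (column stacking, so that \(\mathrm{vec}(AXB)=(B^{T}\otimes A)\,\mathrm{vec}(X)\)). The following minimal numpy sketch, with \(\hbar=1\), an illustrative rate, and the standard Lindblad form (with \(l^{\dagger}l\) in the anticommutator), is our own construction:

```python
import numpy as np

def liouvillian(H, lindblad_ops):
    """Superoperator G with d vec(rho)/dt = G vec(rho), column-stacking convention."""
    n = H.shape[0]
    I = np.eye(n)
    G = -1j * (np.kron(I, H) - np.kron(H.T, I))    # commutator part -i[H, rho]
    for l in lindblad_ops:
        ll = l.conj().T @ l
        G += (np.kron(l.conj(), l)                 # l rho l^dagger
              - 0.5 * np.kron(I, ll)               # -(1/2) l^dagger l rho
              - 0.5 * np.kron(ll.T, I))            # -(1/2) rho l^dagger l
    return G

b = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)  # qubit lowering operator
R = 4.0                                                # decoherence time (mu s), illustrative
G = liouvillian(H=np.diag([0.0, 1.0]).astype(complex),
                lindblad_ops=[np.sqrt(2.0 / R) * b])
print(G.shape)                                         # (4, 4)
```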
We perform experiments for high and low dissipation, corresponding to the rates \(R_{l}^{*}=R_{l}=4\)\(\mu\)s (high) and \(R_{l}^{*}=R_{l}=20\)\(\mu\)s (low). The results are shown in Fig. 5, where the 'learn' label signifies that \(\mathbf{L}_{1}\) is being learned in addition to the system Hamiltonian \(H_{0}(\mathbf{\zeta})\).

Figure 4: (a) An illustration of the non-linear relationship between the unitary model prediction error \(\big{\|}U_{\mathcal{E}}-U_{\mathbf{M_{\zeta}}}\big{\|}\) and the Hamiltonian spectral norm error \(\delta\) for the two-qubit transmon control problem. For the same 50 random control pulses, we evaluate the average unitary prediction error of \(\mathbf{M_{\zeta}}\) with increasing \(\delta\). This is repeated three times for three different \(H_{0}(\mathbf{\zeta})\) to illustrate the variation in the response of the unitary error. (b) Local and global unitary trajectories: \(\mathcal{F}\) as a function of random controller actions and time with either the learned system Hamiltonian \(H_{0}(\mathbf{\zeta})\) or the true system Hamiltonian \(H_{0}\). The learned \(H_{0}(\mathbf{\zeta})\) trajectories do not coincide with the global trajectory with \(\delta=0.91509\). Note that both trajectories are, however, extremely close. (c) The learned \(H_{0}(\mathbf{\zeta})\) can be leveraged using GRAPE to further optimize the fidelities of LH-MBSAC's controllers. We plot a histogram of 100 LH-MBSAC controller infidelities \(1-\mathcal{F}\) before and after applying GRAPE on these controllers using the learned Hamiltonian and a random Hamiltonian. The LH-MBSAC fidelities are significantly improved after applying GRAPE. Note that the appropriate baseline, which is a random \(H_{0}(\mathbf{\zeta})\) plugged into GRAPE, with the rest of the procedure carried out as before, yields extremely low fidelities near 0. The fidelities \(\mathcal{F}\) are evaluated w.r.t. the true system Hamiltonian.

Figure 5: Diamond norm fidelity \(\mathcal{F}_{\circ}\) for the two-qubit transmon control problem in low and high Lindblad dissipation regimes for LH-MBSAC. The results are averaged over two seeds with the mean \(\mathcal{F}_{\circ}\) over 100 controllers shown in solid and the maximum \(\mathcal{F}_{\circ}\) in dashed lines. Shading denotes two standard deviations from the mean.
The diamond norm fidelity [55]\(\mathcal{F}_{\circ}\),
\[\mathcal{F}_{\circ}(\mathbf{\Phi}(\mathbf{u}(t),t),\mathbf{\Phi}_{\text{target}})=1- \|\mathbf{\Phi}(\mathbf{u}(t),t)-\mathbf{\Phi}_{\text{target}}\|_{\circ}, \tag{30}\]
is used instead of the generalised state fidelity, since the latter lacks the sensitivity to detect the low dissipation regime (see App. E). We find that attempting to learn \(\mathbf{L}_{1}\) while learning \(H_{0}(\mathbf{\zeta})\) confers little to no advantage in both the high and low dissipation regimes for this control task. Further investigation shows that the estimate of the system Hamiltonian \(H_{0}(\mathbf{\zeta})\) compensates for the discrepancy in the observed dynamics due to dissipation, as much as is unitarily possible. Moreover, the learning signals for \(\mathbf{L}_{1}\) and \(H_{0}(\mathbf{\zeta})\) become mixed, so learning multiple independent terms in \(\mathbf{M}_{\mathbf{\zeta}}\) might not be suitable for LH-MBSAC.
### Limitations and Silver Linings
We note that there are two major limitations of LH-MBSAC. The first is that only the system, or time-independent, part of the Hamiltonian can be learned using the algorithm; learning the time-dependent part of the Hamiltonian, being more difficult [56], is left as future work.
Moreover, we found that LH-MBSAC does not scale to three-qubit problems (e.g., a Toffoli gate for an extension of the transmon system) due to inherent limitations of SAC, which cause it to get stuck in local optima. We note that the LH-MBSAC strategy is not limited to SAC and can augment a different RL algorithm for which the three-qubit problem is tractable. Moreover, a reformulation of the RL control problem could also alleviate this issue by reducing the probability of SAC getting stuck. Fig. 6 shows the infidelity \(1-\mathcal{F}\) as a function of time for 100 pulses found by LH-MBSAC and GRAPE for the two-qubit transmon control problem. Compared to GRAPE, LH-MBSAC pulses are much more consistent and periodic in terms of the intermediate fidelity values. This highlights that the RL approach is biased towards optimizing intermediate fidelities along with the final target fidelity (since the objective function in Eq. (19) is the expected cumulative fidelity). This is quite different from the approach taken by the gradient-based GRAPE algorithm. Despite being interesting from a controller robustness point of view [12], this bias can prevent solutions that do not admit high intermediate fidelities from being found, as RL can get stuck in a loop mining medium-level fidelity values. Stepping away from this particular sequential decision-making MDP formulation might be one solution to consider in future work.
However, there are silver linings of the aforementioned MDP formulation. RL pulses are fidelity-wise better on average across the duration of the pulse. Leveraging the learned system Hamiltonian, we can further improve the performance of the RL pulses by using GRAPE with the RL pulse parameters as initialization. As seen in Fig. 6, these pulses are still better than the ones found by GRAPE using the learned system Hamiltonian but with completely random pulse initializations, i.e., without SAC controllers as seeds.
Furthermore, this RL bias to value intermediate fidelities allows us to identify optimal pulses that can be extracted in short times, which is a difficult problem for GRAPE, even if the final time is explicitly added to the control objective [9].
Truncating the control sequence for pulses at time \(t\) if the infidelity is below \(5\times 10^{-2}\), we again leverage GRAPE to maximize the final fidelities at these shorter times. These are shown as stars in Fig. 6 with the fidelities at \(t=6\)\(\mu\)s being approximately Pareto optimal i.e. the best fidelity for that time. The Pareto optimal efficient frontier is constructed by sampling 100 GRAPE pulses with random initializations at different final times.
## V Conclusion
We have presented the learnable Hamiltonian soft actor-critic (LH-MBSAC) algorithm for time-dependent noisy quantum gate control. LH-MBSAC augments model-free SAC by allowing the RL policy to query a learnable model of the environment or the controllable system. It thereby reduces the total number of queries (sample complexity) required to solve the RL task. The model is a differentiable ODE that is equipped with a partially characterised Hamiltonian, where only the parametrised time-independent system Hamiltonian is required to be learned. We show why this is a good inductive bias for the quantum control task, as ODE trajectories do not intersect, thereby sensibly constraining the space of models to be learned. Using exploration data acquired from the policy during the RL loop, we train the model by reducing a model prediction error over the data. We show that LH-MBSAC is able to reduce the sample complexity for gate control of one- and two-qubit NV centers and transmon systems in unitary and single-shot measurement settings.

Figure 6: The infidelities over time for 100 different control pulses found by LH-MBSAC and by GRAPE using the learned system Hamiltonian \(H_{0}(\mathbf{\zeta})\) for the two-qubit transmon control problem with final time \(T\leqslant 20\)\(\mu\)s. RL pulses are further optimised using GRAPE. GRAPE is also used to obtain pulses without the RL controls as initial values for a fixed final time \(T=20\)\(\mu\)s. Short optimal controls found by RL are identified by truncating RL pulse parameters at times \(t\geqslant\{6,9\}\)\(\mu\)s, whose final infidelities are shown as stars, with \(t=6\)\(\mu\)s being Pareto optimal w.r.t. the efficient frontier (the surface indicating the best fidelity for that time).
Moreover, we highlight that despite the non-linear relationship between the error in the learned Hamiltonian and the model prediction error, LH-MBSAC's performance is robust to this variation. Furthermore, even if the learned Hamiltonian that minimizes the model prediction error is not the same as the true system Hamiltonian, the learned Hamiltonian can be leveraged using gradient-based methods that require full knowledge of the controllable system, like GRAPE, to further optimize the controllers found by LH-MBSAC. Applying LH-MBSAC in high and low Lindblad dissipation regimes with shot noise, we found that its performance in both was not improved if the Lindblad dissipation terms are also learned in addition to the system Hamiltonian as it is likely that the latter part compensates for the extra dissipation effects.
Despite LH-MBSAC's limitations, namely requiring knowledge of the time-dependent Hamiltonian and limited scalability beyond two qubits (four with shots due to AAPT), the algorithm can be used to augment many existing model-free RL approaches for quantum control. Moreover, despite the scalability problems due to the potentially hindering bias of the RL strategy towards maximizing intermediate fidelities, it can be useful in particular for identifying short-time optimal pulses. Learning the time-dependent part of the Hamiltonian is harder and might require a stronger learning protocol, e.g., using the ZOH method with the learning protocol presented in this paper, Bayesian Hamiltonian learning [56], or more informative learning processes or Hamiltonian learning methods [57; 58], which would be exciting to pursue in the future. The study of the abilities and limitations of our Hamiltonian learning protocol using ZOH will be left to future work. Our code is available at [59].
|
2309.00918 | Electron beams traversing spherical nanoparticles: analytic and
numerical treatment | We present an analytic, Mie theory-based solution for the energy-loss and the
photon-emission probabilities in the interaction of spherical nanoparticles
with electrons passing nearby and through them, in both cathodoluminescence and
electron energy-loss spectroscopies. In particular, we focus on the case of
penetrating electron trajectories, for which the complete fully electrodynamic
and relativistic formalism has not been reported as yet. We exhibit the
efficiency of this method in describing collective excitations in matter
through calculations for a dispersive and lossy system, namely a sphere
described by a Drude permittivity. Subsequently, we use the analytic solution
to corroborate the implementation of electron-beam sources in a
state-of-the-art numerical method for problems in electrodynamics, the
discontinuous Galerkin time-domain (DGTD) method. We show that the two
approaches produce spectra in good mutual agreement, and demonstrate the
versatility of DGTD via simulations of spherical nanoparticles characterized by
surface roughness. The possibility of simultaneously employing both kinds of
calculations (analytic and numerical) facilitates a better understanding of the
rich optical response of nanophotonic architectures excited by fast electron
beams. | P. Elli Stamatopoulou, Wenhua Zhao, Álvaro Rodríguez Echarri, N. Asger Mortensen, Kurt Busch, Christos Tserkezis, Christian Wolff | 2023-09-02T11:55:19Z | http://arxiv.org/abs/2309.00918v3 | # Electron beams traversing spherical nanoparticles: analytic and numerical treatment
###### Abstract
We present an analytic, Mie-theory based solution for the energy-loss and the photon-emission probabilities in the interaction of spherical nanoparticles with electrons passing nearby and through them, in both cathodoluminescence (CL) and electron energy-loss spectroscopies (EELS). In particular, we focus on the case of penetrating electron trajectories, for which the complete fully electrodynamic and relativistic formalism has not been reported as yet. We exhibit the efficiency of this method in describing collective excitations in matter through calculations for a dispersive and lossy system, namely a sphere described by a Drude permittivity, and discuss possible complications when computing contributions from higher-order modes. Subsequently, we use the analytic solution to corroborate the implementation of electron-beam sources in a state-of-the-art numerical method for problems in electrodynamics, the discontinuous Galerkin time-domain (DGTD) method. We show that the two approaches produce spectra in good mutual agreement, and demonstrate the versatility of DGTD via simulations of spherical nanoparticles characterized by surface roughness. The possibility of simultaneously employing both kinds of calculations (analytic and numerical) facilitates a better understanding of the rich optical response of nanophotonic architectures excited by fast electron beams.
## I Introduction
In recent decades, electron-beam spectroscopy has emerged as a revolutionary tool for the optical characterization of materials. Swift electrons passing in close proximity or through a specimen undergo energy loss owing to energy transfer to the optical modes sustained in the material [1]. From localized and propagating surface plasmons in metallic structures [2; 3; 4; 5], to Mie resonances in dielectric resonators [6; 7; 8] and phonon polaritons in polar crystals [9; 10; 11], electron-beam spectroscopy has proven quintessential for mapping collective excitations in a broad spectral range that spans from ultraviolet to far-infrared frequencies.
With the diffraction limit ultimately being controlled by the de Broglie wavelength, highly energetic electrons are excellent probes to study the optical properties of truly nanoscale structures, with atomic spatial resolution and sub-meV energy resolution [12; 13]. In electron energy-loss spectroscopy (EELS), the sample is excited by a high-energy (\(30-300\,\mathrm{keV}\)) electron beam and the energy lost to the interaction is measured in a transmission electron microscope (TEM) setup [14; 15]. EELS allows, thereby, the detection of both radiative and dark modes, including longitudinal bulk plasmons (BPs) [16], breathing modes [17], or antibonding modes in nanoparticle (NP) dimers [18]. Optical excitations in thick samples can be imaged in cathodoluminescence (CL) spectroscopy, performed in scanning electron microscopes (SEMs) at intermediate beam energies (\(1-50\,\mathrm{keV}\)) [1; 12]. In CL measurements, the signal collected is the result of far-field photon emission from the sample, originating from the radiative decay of the excited modes. Recent advances in instrumentation have even added temporal resolution in EEL and CL spectra, introducing the field of ultrafast electron microscopy (UEM) [19; 20].
Considering the recent progress in electron spectroscopy techniques, robust analytic and computational tools are evidently required to interpret the plethora of experimental data. While first theoretical efforts were performed within the non-retarded approximation for the description of plasmons in thin films [3], the theory was gradually generalized to account for collective excitations in diverse media and geometries [21; 22], also considering retardation effects [23; 24]. Quantum approaches [25; 26] and analytic solutions including relativistic effects for simple geometries were later developed [27; 28], allowing the combination of high-velocity electron beams with both common and less conventional materials, including dielectric media, polar crystals, graphene and other two-dimensional (2D) materials [8; 11; 29; 30]. Most theoretical relativistic descriptions have focused on aloof electron trajectories, that do not penetrate the specimen, in contrast to experimental practices, where the electron beam is typically scanned over the entire sample area [14]. However, aloof electron trajectories oftentimes fail to capture intriguing phenomena associated with bulk properties, such as BPs and bulk phonons [9; 18], and other sources of electron-induced photon emission, like
Cherenkov or transition radiation [31, 32, 33, 27].
Despite the undeniable advantage of analytic solutions in data analysis, their applicability is limited to a handful of highly symmetric geometries. Over the years, different numerical schemes have been consolidated to complement analytic approaches in simulating the electromagnetic properties of nanophotonic systems of diverse shapes and forms [34], such as the boundary element method (BEM) [35, 36], the finite element method (FEM) [37, 38], or the finite-difference frequency-domain (FDFD) and finite-difference time-domain (FDTD) methods [39, 40]. An alternative route is offered by the discontinuous Galerkin time-domain (DGTD) method, which employs the Galerkin scheme to solve Maxwell's equations in the time domain [41, 42, 43, 44, 45]. This method combines the flexible space discretization of finite elements, with the memory efficiency and the ability to include nonlinearities, characteristic of time-domain methods. As a consequence, DGTD offers great versatility in simulating objects of complex geometry and nonlinear response.
In this work, we present and compare an analytic approach and the DGTD method for the study of spherical nanostructures excited by aloof and penetrating electron beams. Following the work of Garcia de Abajo for aloof electron beams [28], we derive analytic formulas for the energy loss and photon emission probability, generalized here to account for penetrating trajectories. We then validate the implementation of electron-beam excitation of nanostructures in DGTD [42] by comparing the EEL and CL spectra produced by the two methods for a perfectly spherical plasmonic NP featuring localized surface plasmons (LSPs) and BPs. Finally, we apply the numerical method to study the optical response of a NP with surface roughness, showcasing the ability of the DGTD method to emulate scenarios aligned with realistic experimental conditions that involve imperfect structures [46, 47].
## II Methods
### Analytic approach
As a first step to examine the agreement and complementarity of analytic and numerical tools, we outline the modeling of the physical system and the assumptions made in each method. As a testbed, we consider a perfectly spherical metal NP of radius \(R\) suspended in air. The NP is characterized by a unity relative permeability (\(\mu=1\)), while its relative permittivity depends on the angular frequency \(\omega\) as described by the Drude model
\[\varepsilon(\omega)=1-\frac{\omega_{\mathrm{p}}^{2}}{\omega(\omega+i\tau^{-1} )}, \tag{1}\]
with plasma frequency \(\omega_{\mathrm{p}}\) and damping rate \(\tau^{-1}\).
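For readers who wish to reproduce the material response numerically, the following minimal sketch evaluates Eq. (1) in SI units; the function name and the unit-conversion constant are our own choices, and the parameter values anticipate those used in Sec. III (\(\hbar\omega_{\mathrm{p}}=5\,\mathrm{eV}\), \(\hbar\tau^{-1}=50\,\mathrm{meV}\)).

```python
import numpy as np

HBAR_EVS = 6.582119569e-16  # reduced Planck constant (eV s)

def drude_permittivity(omega, omega_p, gamma):
    """Relative permittivity of Eq. (1); all arguments in rad/s,
    with gamma = 1/tau the damping rate."""
    return 1.0 - omega_p**2 / (omega * (omega + 1j * gamma))

omega_p = 5.0 / HBAR_EVS    # hbar*omega_p = 5 eV
gamma = 0.05 / HBAR_EVS     # hbar/tau = 50 meV
print(drude_permittivity(2.0 / HBAR_EVS, omega_p, gamma))
# -> approximately (-5.25+0.16j) at a 2 eV probe energy
```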
In the analytic calculation, the electron beam is modeled as a single point particle that carries the elementary charge \(-e\). We assume that it travels with constant velocity \(\mathbf{v}\), say along the \(z\)-axis, following thus a trajectory \(\mathbf{r}_{e}=\mathbf{r}_{0}+\mathbf{v}t\), where \(\mathbf{r}_{0}=(b,\phi_{0},z=-\infty)\) is its initial position in cylindrical coordinates. The impact parameter \(b\) indicates the distance between the electron trajectory and the center of the sphere, as shown in the schematics of Fig. 1. In passing, we mention that the angular coordinate \(\phi_{0}\) need not be specified, as it does not enter the calculation, due to the symmetry of the problem.
The fast electron drives plasmon oscillations in the metal, generating an induced electric and magnetic field, \(\mathbf{E}_{\mathrm{ind}}\) and \(\mathbf{H}_{\mathrm{ind}}\) respectively, eventually resulting in the emission of radiation to the environment (labeled as region II in Fig. 1). Following the basic steps of Mie theory [48], one can decompose the induced electromagnetic field into transverse electric (TE) and transverse magnetic (TM) components. In region II the induced fields take the general form of outgoing spherical waves [49]
\[\mathbf{E}_{\mathrm{ind}}^{\mathrm{II}}(\mathbf{r})=\sum_{\ell=1 }^{\infty}\sum_{m=-\ell}^{+\ell}\Big{\{}b_{\ell m}^{\mathrm{II}}h_{\ell}^{+}( k_{0}r)\mathbf{X}_{\ell m}(\mathbf{\hat{r}})\\ +\frac{i}{k_{0}}a_{\ell m}^{\mathrm{II}}\mathbf{\nabla}\times h_{ \ell}^{+}(k_{0}r)\mathbf{X}_{\ell m}(\mathbf{\hat{r}})\Big{\}} \tag{2a}\] \[\mathbf{H}_{\mathrm{ind}}^{\mathrm{II}}(\mathbf{r})=\frac{1}{Z_{0 }}\sum_{\ell=1}^{\infty}\sum_{m=-\ell}^{+\ell}\Big{\{}a_{\ell m}^{\mathrm{II}} h_{\ell}^{+}(k_{0}r)\mathbf{X}_{\ell m}(\mathbf{\hat{r}})\\ -\frac{i}{k_{0}}b_{\ell m}^{\mathrm{II}}\mathbf{\nabla}\times h_{ \ell}^{+}(k_{0}r)\mathbf{X}_{\ell m}(\mathbf{\hat{r}})\Big{\}}, \tag{2b}\]
where \(\ell\), \(m\) are the angular momentum quantum numbers, and \(a/b_{\ell m}^{\mathrm{II}}\) denote the expansion coefficients of the TE/TM components corresponding to modes of electric/magnetic multipole character (see the appendix for the analytic expressions). Furthermore, in Eqs. (2) \(h_{\ell}^{+}\) is the spherical Hankel function of the first kind, \(\mathbf{X}_{\ell m}\) are the vector spherical harmonics, \(k_{0}=\omega/c\), is the wave
Figure 1: Schematic illustration of the geometry under study; a metallic NP of radius \(R=75\,\mathrm{nm}\), and permittivity described by the Drude model of Eq. (1), is excited by an electron beam passing with velocity \(\mathbf{v}\) at impact parameter \(b\) with respect to its center. The color dots correspond to the selection of impact parameters examined in our study. Labels I and II indicate the region inside the NP and the surrounding medium (air), respectively.
number in free space, \(c=1/\sqrt{\varepsilon_{0}\mu_{0}}\) the speed of light in vacuum, where \(\varepsilon_{0}\) and \(\mu_{0}\) denote the vacuum permittivity and permeability, respectively, and \(Z_{0}=\sqrt{\mu_{0}/\varepsilon_{0}}\) is the impedance in free space. Details about the field expansions can be found in section S.I of the Supporting Information (SI).
The energy radiated in the far field can be found by integrating the Poynting flux at a spherical surface of radius \(r\to\infty\), in the normal direction \(\mathbf{\hat{r}}\). Then the (CL) probability of collecting a photon of energy \(\hbar\omega\) is given by
\[\Gamma_{\mathrm{CL}}(\omega)=\frac{r^{2}}{\pi\hbar\omega}\int d\Omega\, \mathrm{Re}\big{\{}\mathbf{E}_{\mathrm{ind}}^{\mathrm{II}}(\mathbf{r},\omega) \times\mathbf{H}_{\mathrm{ind}}^{\mathrm{II}^{\ast}}(\mathbf{r},\omega)\big{\}} \cdot\mathbf{\hat{r}}, \tag{3}\]
where \(d\Omega\) denotes the infinitesimal solid angle. By inserting Eqs. (2) into Eq. (3), and evaluating the result in the far field (\(k_{0}r\to\infty\)), we find
\[\Gamma_{\mathrm{CL}}(\omega)=\frac{1}{\pi\hbar\omega Z_{0}k_{0}^{2}}\sum_{ \ell=1}^{\infty}\sum_{m=-\ell}^{+\ell}\Big{\{}\big{|}b_{\ell m}^{\mathrm{II}} \big{|}^{2}+\big{|}a_{\ell m}^{\mathrm{II}}\big{|}^{2}\Big{\}}. \tag{4}\]
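Once the expansion coefficients of the appendix are available, Eq. (4) reduces to a plain sum over multipoles. A minimal sketch in SI units (the helper name is ours; the region-II coefficients, truncated at some \(\ell_{\mathrm{max}}\), are assumed precomputed and flattened over \((\ell,m)\)):

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
Z0 = 376.730313668       # free-space impedance (Ohm)
C = 299792458.0          # speed of light (m/s)

def gamma_cl(a_lm, b_lm, omega):
    """CL probability of Eq. (4) from flattened arrays of the
    region-II coefficients a^II_{lm} and b^II_{lm} at this omega."""
    k0 = omega / C
    total = np.sum(np.abs(b_lm)**2 + np.abs(a_lm)**2)
    return total / (np.pi * HBAR * omega * Z0 * k0**2)
```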
Apart from the emission of radiation, part of the energy transferred to the optical modes of the NP dissipates non-radiatively, owing to the intrinsic losses within the material. The total energy lost can be calculated by the work done by the electron against the induced field along the entire electron trajectory. Then the (EEL) probability of the electron losing energy \(\hbar\omega\) is given by
\[\Gamma_{\mathrm{EELS}}(\omega)=\frac{e}{\pi\hbar\omega}\int dt\,\mathrm{Re} \big{\{}\exp\left(-i\omega t\right)\mathbf{v}\cdot\mathbf{E}_{\mathrm{ind}}( \mathbf{r}_{e},\omega)\big{\}}. \tag{5}\]
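If the induced field along the trajectory is available, Eq. (5) can be evaluated by direct quadrature. The sketch below assumes SI units and a precomputed complex field component \(\mathbf{\hat{v}}\cdot\mathbf{E}_{\mathrm{ind}}\) sampled at times \(t\); all names are illustrative:

```python
import numpy as np
from scipy.integrate import trapezoid

E_CHARGE = 1.602176634e-19  # C
HBAR = 1.054571817e-34      # J s

def gamma_eels(omega, t, E_parallel, v):
    """EEL probability of Eq. (5) by quadrature along the trajectory.

    t:          sample times (s)
    E_parallel: complex v_hat . E_ind(r_e(t), omega) in V/m
    v:          electron speed (m/s)
    """
    integrand = np.real(np.exp(-1j * omega * t) * v * E_parallel)
    return E_CHARGE / (np.pi * HBAR * omega) * trapezoid(integrand, t)
```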
The integral in Eq. (5) can be decomposed into three terms (see details in section S.I of the SI)
\[\Gamma_{\mathrm{EELS}}(\omega)=\Gamma_{\mathrm{bulk}}(\omega)+\Gamma_{\mathrm{ surf}}(\omega)+\Gamma_{\mathrm{Begr}}(\omega). \tag{6}\]
Here, \(\Gamma_{\mathrm{bulk}}\) is related to the bulk modes of the unbound medium, reduced by the Begrenzung term \(\Gamma_{\mathrm{Begr}}\) that accounts for the presence of a boundary [50]. The \(\Gamma_{\mathrm{surf}}\) term contains the contribution from modes excited by the part of the electron trajectory lying externally to the NP, and is, thus, associated with the excitation of LSPs. The terms entering Eq. (6) are given by the following formulas
\[\Gamma_{\mathrm{bulk}}(\omega)=\frac{e^{2}z_{e}}{2\pi^{2}\varepsilon_{0}\hbar v ^{2}}\mathrm{Im}\Bigg{\{}\frac{1}{\gamma_{0}^{2}}\mathrm{ln}\left(\Big{[} \frac{q_{\mathrm{c}}\gamma v}{\omega}\Big{]}^{2}+1\right)-\frac{1}{\gamma^{2} \varepsilon}\mathrm{ln}\left(\Big{[}\frac{q_{\mathrm{c}}\gamma v}{\omega} \Big{]}^{2}+1\right)\Bigg{\}}, \tag{7a}\] \[\Gamma_{\mathrm{surf}}(\omega)=\frac{e}{\pi\hbar\omega}\mathrm{Re}\sum_{\ell=1 }^{\infty}\sum_{m=-\ell}^{+\ell}\Bigg{\{}\frac{K_{m}\left(\omega b/[v\gamma_{ 0}]\right)}{ik_{0}\sqrt{\ell(\ell+1)}}\left[mb_{\ell m}^{\mathrm{II}}\mathcal{ M}_{\ell m}^{\ast}-a_{\ell m}^{\mathrm{II}}\frac{\mathcal{N}_{\ell m}^{\ast}}{ \beta\gamma_{0}}\right]\] \[-\int_{-z_{e}}^{z_{e}}dz\,\frac{\exp\left(-i\omega z/v\right)}{ \sqrt{\ell(\ell+1)}}\left[mb_{\ell m}^{\mathrm{II}}h_{\ell}^{+}(k_{0}r)Y_{ \ell}^{m}\left(\theta,0\right)-\frac{a_{\ell m}^{\mathrm{II}}}{k_{0}b}\big{\{} \mathcal{H}_{\ell m}^{+}(k_{0}z)+\mathcal{H}_{\ell m}^{-}(k_{0}z)\big{\}} \right]\Bigg{\}}, \tag{7b}\]
and
\[\Gamma_{\mathrm{Begr}}(\omega)=\frac{e}{\pi\hbar\omega}\mathrm{Re}\sum_{\ell= 1}^{\infty}\sum_{m=-\ell}^{+\ell}\int_{-z_{e}}^{z_{e}}dz\frac{\exp\left(-i \omega z/v\right)}{\sqrt{\ell(\ell+1)}}\left[mb_{\ell m}^{\mathrm{I}}j_{\ell}( kr)Y_{\ell}^{m}\big{(}\theta,0\big{)}-\frac{a_{\ell m}^{\mathrm{I}}}{kb}\big{\{} \mathcal{J}_{\ell m}^{+}(kz)+\mathcal{J}_{\ell m}^{-}(kz)\big{\}}\right]. \tag{7c}\]
Here, \(2z_{e}=2\sqrt{R^{2}-b^{2}}\) is the length of the electron path inside the NP, and \(\gamma=1/[1-\varepsilon\beta^{2}]^{1/2}\) with \(\beta=v/c\) are the Lorentz kinematic factors (\(\gamma_{0}\) is evaluated in free space). In Eqs. (7b) and (7c) \(K_{m}\) is the modified Bessel function of the second kind and \(Y_{\ell}^{m}\) are the spherical harmonics. In addition, we have set \(r=\sqrt{b^{2}+z^{2}}\), and \(\theta=\arccos(z/r)\), while analytic expressions for coefficients \(a/b_{\ell m}^{\mathrm{I}}\), \(\mathcal{M}_{\ell m}\), \(\mathcal{N}_{\ell m}\), \(\mathcal{H}_{\ell m}^{\pm}\), and \(\mathcal{J}_{\ell m}^{\pm}\) can be found in the appendix.
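The geometric and kinematic quantities entering Eqs. (7) are straightforward to compute; the toy helpers below (names are ours) reproduce, for instance, the chord length for the \(b=35\,\mathrm{nm}\) trajectory of Fig. 2 and the free-space Lorentz factor at \(v=0.33c\). For a dispersive medium, \(\varepsilon\) should be passed as a complex number so that the complex \(\gamma\) is obtained:

```python
import numpy as np

C = 299792458.0  # m/s

def chord_length(R, b):
    """Path length 2*z_e = 2*sqrt(R^2 - b^2) inside the sphere."""
    return 2.0 * np.sqrt(R**2 - b**2) if b < R else 0.0

def lorentz_gamma(v, eps=1.0):
    """gamma = 1/sqrt(1 - eps*beta^2); eps=1 returns gamma_0.
    Pass a complex eps for a dispersive, lossy medium."""
    return 1.0 / np.sqrt(1.0 - eps * (v / C)**2)

print(chord_length(75e-9, 35e-9))   # ~1.33e-7 m (Fig. 2 trajectory)
print(lorentz_gamma(0.33 * C))      # gamma_0 ~ 1.06
```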
It is important to note here that the aforementioned decomposition of the EEL spectra introduces a free parameter; assuming that upon losing energy \(\hbar\omega\) the electron transfers a transverse (with respect to the electron trajectory) momentum \(q\) to excite an optical mode, \(q_{\mathrm{c}}\) is the maximum transverse momentum collected. In an experiment, the momentum cutoff is determined by the half-aperture collection angle \(\varphi\) of the microscope spectrometer, as
\[\hbar q_{\mathrm{c}}\approx\sqrt{(m_{e}v\varphi)^{2}+(\hbar\omega/v)^{2}}, \tag{8}\]
where \(m_{e}\) is the electron mass. We may freely choose the value for this momentum cutoff, making sure that it aligns with the typical values for the collection angle in scanning TEM (STEM) setups, which are in the order of a few mrad. Naturally, this introduces a level of arbitrariness in the EEL spectra, as various values of \(q_{\mathrm{c}}\) lead to different peak intensities at the BP energy.
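As an illustration of Eq. (8), the sketch below converts a collection half-angle into a momentum cutoff exactly as the equation is printed; the 1 mrad aperture is our own example value and yields \(q_{\mathrm{c}}\approx 0.9\,\mathrm{nm}^{-1}\), comparable to the cutoff used later in Fig. 5:

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # electron mass (kg)
EV = 1.602176634e-19     # J
C = 299792458.0          # m/s

def q_cutoff(v, phi, loss_eV):
    """Maximum collected transverse momentum of Eq. (8), in 1/m.

    phi: half-aperture collection angle of the spectrometer (rad).
    """
    omega = loss_eV * EV / HBAR
    return np.hypot(M_E * v * phi, HBAR * omega / v) / HBAR

# a 1 mrad half-aperture at v = 0.33c and a 5 eV loss
print(q_cutoff(0.33 * C, 1e-3, 5.0))   # ~8.6e8 1/m, i.e. ~0.9 nm^-1
```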
### DGTD simulation
We complement our analytical work with numerical simulations of the electromagnetic problem of a NP excited by a moving Gaussian charge distribution. To this end, we employ the DGTD method (see details in section S.II of the SI), which combines a piecewise polynomial spatial interpolation on an unstructured tetrahedral mesh with a Runge-Kutta time integrator to obtain a high-order accurate explicit solver for Maxwell's equations in time domain
\[\partial_{t}\mathbf{H}(\mathbf{r},t)= -\mu_{0}^{-1}\mu^{-1}(\mathbf{r})\,\boldsymbol{\nabla}\times \mathbf{E}(\mathbf{r},t), \tag{9a}\] \[\partial_{t}\mathbf{E}(\mathbf{r},t)= \,\varepsilon_{0}^{-1}\varepsilon^{-1}(\mathbf{r})\,\left[ \boldsymbol{\nabla}\times\mathbf{H}(\mathbf{r},t)-\mathbf{j}(\mathbf{r},t) \right]. \tag{9b}\]
Here, \(\mathbf{j}\) is the total current density that encompasses both any current associated with the excitation source, as well as dispersive polarization currents. The resulting method is memory-efficient compared to traditional finite elements and especially well-suited for the calculation of wide-band spectra.
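To make the time-stepping idea concrete: the production DGTD code employs a low-storage Runge-Kutta integrator together with discontinuous Galerkin flux terms, but the structure of a single step can be sketched with a textbook fourth-order Runge-Kutta update acting on abstract field vectors. In the sketch below all names are ours, the assembled DG spatial operator is abstracted into a generic `curl` callable, and the beam current is assumed available at the intermediate stage times:

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
MU0 = 1.25663706212e-06   # vacuum permeability (H/m)

def maxwell_rhs(t, E, H, curl, j_src, eps, mu):
    """Right-hand side of the semi-discrete Maxwell system, Eqs. (9).
    `curl` stands in for the assembled DG spatial operator."""
    dH = -curl(E) / (MU0 * mu)
    dE = (curl(H) - j_src(t)) / (EPS0 * eps)
    return dE, dH

def rk4_step(t, E, H, dt, rhs):
    """One classical fourth-order Runge-Kutta step for the (E, H) pair."""
    k1E, k1H = rhs(t, E, H)
    k2E, k2H = rhs(t + dt / 2, E + dt / 2 * k1E, H + dt / 2 * k1H)
    k3E, k3H = rhs(t + dt / 2, E + dt / 2 * k2E, H + dt / 2 * k2H)
    k4E, k4H = rhs(t + dt, E + dt * k3E, H + dt * k3H)
    E_new = E + dt / 6 * (k1E + 2 * k2E + 2 * k3E + k4E)
    H_new = H + dt / 6 * (k1H + 2 * k2H + 2 * k3H + k4H)
    return E_new, H_new
```

In practice one would close over the material and source data, e.g. `rhs = lambda t, E, H: maxwell_rhs(t, E, H, curl, j_src, eps, mu)`, and advance with `E, H = rk4_step(t, E, H, dt, rhs)`.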
One key difference between the Mie-based theory and the DGTD simulations, which can potentially lead to deviations between the two approaches, is the implementation of the excitation source. In the numerical treatment, we model the electron beam with a Gaussian charge distribution of the form
\[\rho(\mathbf{r})=-\frac{e}{\sigma_{e}^{3}\sqrt{\pi^{3}}}\exp(-r^{2}/\sigma_{ e}^{2}), \tag{10}\]
with width \(\sigma_{e}=5\,\mathrm{nm}\). This choice essentially prevents numerical artifacts arising when implementing a point-charge particle moving inside the simulation domain, while also being compatible with the typical spot size in CL experiments [8]. Thereby, we introduce a new spatial scale, which needs to be taken into account when it is comparable to the parameters related to the discretization of the computational domain, such as the mesh element size, as well as the characteristic lengths of the physical system, e.g. the radius of the NP and the impact parameter. For very large distances between the electron and the NP (i.e., \(b-R\gg\sigma_{e}\)), the source resembles a point charge and we, therefore, expect an excellent agreement with analytic results. However, in the opposite scenario, the finite width of the electron beam becomes important, and the corresponding fields do not accurately match those of a point charge.
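A sketch of the moving source of Eq. (10) and of the associated current density (the rigid-motion relation \(\mathbf{j}=\rho\mathbf{v}\) and the choice of placing the impact-parameter offset along \(x\) are our simplifications; the actual DGTD source implementation is described in the SI):

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # C

def beam_charge_density(r, t, v, b, sigma_e=5e-9):
    """Gaussian charge density of Eq. (10), rigidly moving along z.

    r: positions of shape (..., 3); the beam axis is offset by the
    impact parameter b along x."""
    center = np.array([b, 0.0, v * t])
    d2 = np.sum((r - center)**2, axis=-1)
    return -E_CHARGE / (sigma_e**3 * np.pi**1.5) * np.exp(-d2 / sigma_e**2)

def beam_current_density(r, t, v, b, sigma_e=5e-9):
    """j = rho * v for the rigidly moving charge (source of Eq. (9b))."""
    rho = beam_charge_density(r, t, v, b, sigma_e)
    return rho[..., np.newaxis] * np.array([0.0, 0.0, v])
```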
## III Results and discussion
### EEL and CL spectroscopy of perfectly spherical NPs
In what follows, we analyze the response of a metallic NP excited by a fast electron beam, as predicted by both the analytic and the numerical approach. We consider a smooth sphere of radius \(R=75\,\mathrm{nm}\) with plasma energy \(\hbar\omega_{\mathrm{p}}=5\,\mathrm{eV}\) and damping rate \(\tau^{-1}\) corresponding to an energy of \(\hbar\tau^{-1}=50\,\mathrm{meV}\). The parameters mimic typical plasmonic metals and are chosen for the purpose of illustration, while the particular values have no consequences for our general conclusions. Fig. 2 shows the scattering spectrum (panel a) under plane-wave illumination, and the CL and EELS probability (panels b and c, respectively) calculated for a low electron velocity \(v=0.33c\) (kinetic energy \(\approx 30\,\mathrm{keV}\)) intersecting the NP at \(b=35\,\mathrm{nm}\) from its center. As suggested by Eq. (4), the total photon emission probability can be decomposed into contributions of pairwise orthogonal electric and magnetic multipoles of order \(\ell\). Fig. 2b reveals that the CL spectrum is composed of the contributions of the first four (\(\ell=1,2,3,4\)) electric-type modes, appearing at approximately 2, 2.8, 3.1, and \(3.2\,\mathrm{eV}\), associated with the excitation of LSP resonances, while higher-order (\(\ell>4\)) multipoles contribute negligibly to the spectrum. In comparison with the scattering cross section of the NP shown in Fig. 2a (calculated using Mie theory [51]), the CL spectrum provides very similar information. This is somewhat expected, since both calculations are based on the collection of far-field radiation. We observe, nonetheless, two notable distinct features in the CL spectrum of Fig. 2b. Firstly, the electron source excites more efficiently the \(\ell=2\) and 3 modes, whereas in the scattering spectrum of Fig. 2a the dipolar mode peak features the highest intensity. The relative peak intensities in the CL and EEL spectra depend strongly on the impact parameter, which determines the arrangement of the polarization charges in the material [52]. Secondly, we observe a small redshift of the dipolar (\(\ell=1\)) mode in CL due to retardation, stemming from the fact that the speed of the electron is only a fraction of the speed of light.
In Fig. 2c we present the EEL spectrum, decomposed as described in Eq. (6). In the zoom-in area of the figure we observe the excitation of numerous higher-order multipoles, appearing as sharp peaks at energies up to \(3.5\,\mathrm{eV}\). With increasing multipole order, the wavelength of the corresponding mode reduces, so higher-order modes experience the curved surface of the NP as increasingly more flat. As a result, they accumulate at the energy corresponding to that of a surface plasmon polariton (SPP) at a planar interface, at \(\hbar\omega_{\mathrm{spp}}=\hbar\omega_{\mathrm{p}}/\sqrt{2}\approx$3.5\, \mathrm{eV}$\). Above the SPP energy, the spectrum exhibits a pronounced peak at the BP energy \(\hbar\omega_{\mathrm{bp}}=\hbar\omega_{\mathrm{p}}=$5\,\mathrm{eV}$\), pertaining to the excitation of BPs in the volume of the NP. Since BPs are longitudinal modes, they do not couple to far-field radiation and, therefore, they can be detected only in EELS. At the same energy, we observe the expected negative peak related to the Begrenzung term, reducing the BP peak in the total EEL probability [50].
Having a clear picture of the origin of all spectral features, in Figs. 3a and b we compare the CL and EEL probability, respectively, of the same metal NP, as calculated employing the analytic (Mie) and the DGTD method. We test the agreement between the two calculations probing various impact parameters, that range from
\(b=125\,\mathrm{nm}\) to \(10\,\mathrm{nm}\), corresponding to aloof electron trajectories (violet and blue spectra), grazing (green), and penetrating (yellow and red). Overall, the DGTD method reproduces the positions of the LSP modes and the corresponding CL probabilities of the Mie calculations. In Fig. 3a we show an excellent agreement in the CL spectra, with a relative error of around \(1\%\) for aloof and penetrating electron trajectories (see Table 1). The highest error occurs when the electron grazes the surface of the NP, passing exactly at \(b=R\) (middle panel in Fig. 3a). This point reflects an important limit in the capabilities of the DGTD method; in the grazing trajectory, due to the finite width of the electron beam, half of the Gaussian charge density distribution lies inside the NP, while the other half lies outside, leading to numerical inconsistencies. One may avoid this point, which is inevitably difficult to resolve, by slightly adjusting the impact parameter by half the Gaussian width.
Regarding the compatibility of the EEL spectra, the two rightmost panels in Fig. 3b reveal an excellent agreement for aloof electron trajectories. We consistently find a higher error for grazing trajectories (middle panel in Fig. 3b); here, the EELS spectrum calculated with DGTD exhibits a numerical artifact at the BP energy (gray dashed line), resulting once again from the fact that only a fraction of the electron charge density distribution penetrates the NP, exciting only partially the bulk mode.
A substantial deviation between the two methods is found in EEL spectra for both grazing and penetrating electron trajectories (three leftmost panels in Fig. 3b) between the SPP and the BP energy, denoted by the gray dotted and dashed lines, respectively. The disagreement is, naturally, reflected in the large relative errors presented in Table 1. At these impact parameters, the condition \(b-R\gg\sigma_{e}\) is not fulfilled, hence the field of the electron deviates considerably from that of a point charge. Moreover, the beam width \(\sigma_{e}\) becomes important compared to the mesh element size, since the surface of the NP is more finely discretized than the surrounding medium (see Fig. S4 in the SI). Finally, there exists an additional source of error in the evaluation of the BP contribution, which stems from the rather arbitrary choice of the transverse momentum cutoff \(q_{\text{c}}\) in the analytic approach. In contrast, in the DGTD implementation there is a respective internal limit, associated with \(\sigma_{e}\). As a result, we consistently find higher relative errors in EELS in comparison with CL, and for grazing and penetrating electron trajectories as compared to aloof, probing both low-energy electron beams as in Fig. 3, as well as higher energies that are more realistic for EEL measurements (see Fig. S.8 in the SI for CL and EEL spectra at energy \(200\,\mathrm{keV}\)).
Admittedly, the accumulation point of high-order multipoles at \(\hbar\omega_{\text{spp}}\) is hard to resolve in both methods. On the one hand, the analytic Mie calculation assumes a moving point charge, which can, in principle, excite an infinite number of multipoles, resulting in a sharp high-intensity peak at energy \(\hbar\omega_{\text{spp}}\). On the other hand, the finite mesh size and beam width implemented in DGTD impose a limitation on the number of multipoles that can be resolved for a given discretization, since high-order multipoles associated with field variations shorter than the mesh element size at the surface cannot be captured without the use of very high-order polynomials. In EELS and CL experiments, there exists an analogous limitation, associated with the finite width of the electron beam employed, as well as the geometric imperfections of the NP. The versatility of DGTD allows us not only to adjust the electron beam width according to the experimental setup, but also to mimic NPs with surface roughness, as we discuss in section III.2. In the analytic approach, the smearing of higher-order modes and the overall quenching of the sharp peak at the accumulation point can also be reproduced, once we consider the nonlocal response of the material; this can be done particularly easily within Mie theory [53, 54, 55]. Nonlocal effects manifest as increased damping and uneven energy shifts of high-order modes, and, therefore, lead to the suppression of the individual modes, as well as the reduction of their overlap at \(\hbar\omega_{\text{spp}}\) [56].
The difficulty in resolving the high-order multipoles even in the analytic calculation is clear in the convergence study presented in Figs. 4 and 5. Due to multipole orthogonality, increasing \(\ell_{\text{max}}\) only ever adds signal. This suggests monitoring the total EEL spectrum integrated over energy (area under the curve) as a proxy for convergence. For aloof trajectories, the asymptotics of the Hankel functions suggest exponential convergence, which is in agreement with Figs. 4a and b; the EEL probability converges at \(\ell_{\text{max}}=10\). In contrast, Fig. 4c shows that in the case of the grazing trajectory the convergence order breaks down to a square root law. By extrapolation (magenta line) we can conclude that even for \(\ell_{\text{max}}=63\) the analytic result is still converged only up to
Figure 4: (a, c) Area under the curve of the EEL spectra of Fig. 3b, (a) for the aloof electron trajectory at \(b=125\,\mathrm{nm}\), (c) for the grazing trajectory at \(b=75\,\mathrm{nm}\), showcasing the convergence of the EEL probability for increasing values of the multipole cutoff \(\ell_{\text{max}}\), plotted versus \(1/\sqrt{\ell_{\text{max}}}\). In both panels, the magenta line is fitted to the data points marked as black bullets, while the red crosses represent data points excluded from the fitting. The vertical gray lines serve as guides to the eye for the position of \(\ell_{\text{max}}=1,2\ldots 5\). (b, d) EEL probability calculated for impact parameters (b) \(b=125\,\mathrm{nm}\), and (d) \(b=75\,\mathrm{nm}\), and for selected values of \(\ell_{\text{max}}\), as denoted in the labels.
\begin{table}
\begin{tabular}{c c c} \(b\) (nm) & \(\Gamma_{\text{CL}}\) (\%) & \(\Gamma_{\text{EELS}}\) (\%) \\ \hline
10 & 1.26 & 9.75 \\
35 & 1.06 & 6.17 \\
75 & 8.23 & 28.87 \\
100 & 1.06 & 1.02 \\
125 & 1.07 & 1.93 \\ \end{tabular}
\end{table}
Table 1: Relative error [see Eq. (S.38) in the SI] between the analytic and the DGTD calculations of \(\Gamma_{\text{CL}}\) and \(\Gamma_{\text{EELS}}\) for varying \(b\).
about 15%. Fig. 4d corroborates that the area missing from the converged value corresponds to the higher-order modes piling up at the SPP energy. Finally, we note that Fig. 4a exhibits the same square root convergence before the curve flattens off. The exact value of \(\ell_{\mathrm{max}}\) where this transition happens increases as the impact parameter approaches \(R\).
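The bookkeeping behind this convergence study is straightforward to emulate: integrate each spectrum over energy and, for the aloof and grazing cases, fit the areas against \(1/\sqrt{\ell_{\mathrm{max}}}\) to extrapolate the converged value, mimicking the magenta lines of Figs. 4a and c. A minimal sketch (function names are ours):

```python
import numpy as np
from scipy.integrate import trapezoid

def spectrum_area(energy_eV, gamma):
    """Area under a spectrum: the convergence proxy of Fig. 4."""
    return trapezoid(gamma, energy_eV)

def extrapolate_sqrt(l_max_values, areas):
    """Fit area = a + c/sqrt(l_max) and return the l_max -> infinity
    limit a, i.e. the intercept at 1/sqrt(l_max) -> 0."""
    x = 1.0 / np.sqrt(np.asarray(l_max_values, dtype=float))
    slope, intercept = np.polyfit(x, areas, 1)
    return intercept
```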
For the penetrating trajectories shown in Fig. 5 we find signs of the same slow convergence around the SPP accumulation point. This is masked in our convergence plots by the fact that the BP peak diverges. The source of the divergence lies in the decomposition of the EEL probability into two competing contributions stemming from the bulk and the Begrenzung terms. As illustrated in Figs. 5a and c, the two terms produce divergences of opposite sign; the negative Begrenzung term diverges linearly for increasing multipole order \(\ell_{\mathrm{max}}\), whereas the positive bulk term diverges logarithmically for increasing momentum cutoff \(q_{\mathrm{c}}\) [see Eqs. (7a) and (7c)]. As our computational resources do not allow driving \(\ell\) and \(q_{\mathrm{c}}\) to infinity, once one of the two parameters is truncated to a certain cutoff value, the other has to be adjusted accordingly. Figs. 5b and d show that the different values of \(\ell\) and \(q_{\mathrm{c}}\), respectively, affect the spectra in the energy window between the SPP and BP modes.
### EELS of NPs with surface roughness
The synthesis of spherical metallic NPs, such as the ones studied in the present work, is routinely done with colloidal chemistry, for a large variety of materials and NP shapes [57, 58]. However, despite being able to accurately control the NP size, assuring a smooth surface is rather challenging. Typically, the structures exhibit protuberances on the surface, which can be responsible for symmetry breaking [59, 60], hot spots in dimers [61, 62] and picocavities [63, 64], or energy shifts of the LSPs [65]. Within the DGTD method, surface texture can be easily implemented on top of the perfect spherical mesh and incorporated in the numerical calculations. Here, we follow the prescription presented in Ref. [66] to implement the desired roughness as a radius variation derived from Gaussian white noise, with a correlation length of \(10\,\mathrm{nm}\) and two different values for the root-mean square (rms) amplitude (see details in section S.II.F of the SI).
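A plausible sketch of such a roughness prescription (our own simplified reading of the recipe; the precise construction is given in Ref. [66] and the SI): draw white noise on the mesh vertices, smooth it with a Gaussian kernel of the given correlation length over the great-circle distance, rescale to the target rms, and displace the vertices radially. The all-pairs distance matrix makes this \(O(N^{2})\), which is acceptable for a sketch:

```python
import numpy as np

def roughen_sphere(vertices, R=75e-9, corr_len=10e-9, rms=2e-9, seed=0):
    """Radial roughness from correlated Gaussian noise.

    vertices: (N, 3) unit vectors of the surface mesh nodes.
    Returns the displaced node positions, shape (N, 3)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(vertices))
    # great-circle distance between all node pairs on the sphere
    cosang = np.clip(vertices @ vertices.T, -1.0, 1.0)
    dist = R * np.arccos(cosang)
    # Gaussian smoothing imprints the correlation length
    dr = np.exp(-0.5 * (dist / corr_len)**2) @ noise
    dr *= rms / dr.std()               # rescale to the target rms
    return (R + dr)[:, np.newaxis] * vertices
```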
In Fig. 6 we explore the effect of surface roughness on a spherical NP, described by the same Drude permittivity as in the previous section, now excited by an electron beam traveling with velocity \(v=0.7c\) (kinetic energy \(\approx 200\,\mathrm{keV}\)) at distance \(b=100\,\mathrm{nm}\) from its center. We probe meshes of two degrees of surface roughness on top of the NP of nominal radius \(R=75\,\mathrm{nm}\), namely \(\mathrm{rms}=2\,\mathrm{nm}\) and \(4\,\mathrm{nm}\). Since the breaking of the spherical symmetry introduces a dependence on the electron propagation direction, and on the mesh morphology, in Fig. 6 we plot the _average_ EEL probability, corresponding to the average values obtained for \(6\) different meshes. The resulting spectra for \(\mathrm{rms}=2\,\mathrm{nm}\) and \(4\,\mathrm{nm}\) (dark red and blue curves, respectively) deviate notably from that of a smooth sphere (gray shaded area). Firstly, we observe an increasing redshift of the spectra with increasing degree of roughness, in agreement with experimental observations of corrugated plasmonic NPs [65]. The energy shift is most evident for the broad dipolar mode at around \(\sim 2\,\mathrm{eV}\), and is foremost the result of the area increase of the rough surface as compared to the smooth NP. Evaluation of this area from the mesh parameters yields an effective radius of \(R_{\mathrm{eff}}=76.2\,\mathrm{nm}\) for \(\mathrm{rms}=2\,\mathrm{nm}\), and \(R_{\mathrm{eff}}=79.6\,\mathrm{nm}\) for \(\mathrm{rms}=4\,\mathrm{nm}\). Indeed the EEL spectra of smooth spheres of said effective radii reproduce accurately the position of the dipolar mode (see Fig. S.9 in the SI).
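The effective-radius estimate quoted above follows directly from the mesh: sum the triangle areas and invert \(A=4\pi R_{\mathrm{eff}}^{2}\). A sketch (names are ours):

```python
import numpy as np

def mesh_area(vertices, faces):
    """Total area of a triangulated surface; faces holds (M, 3) vertex indices."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()

def effective_radius(area):
    """Radius of the smooth sphere with the same surface area."""
    return np.sqrt(area / (4.0 * np.pi))
```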
In addition to the redshift, the spectra of the corrugated NPs feature a large number of low-intensity peaks. These new spectral features are the result of two factors. Due to the breaking of the spherical symmetry, the prior degenerate modes associated with the same angular momentum \(\ell\) but different \(m\) number, now exhibit a small energy difference. As a result of the lift of the degeneracy, the sharp peaks observed in the spectrum of the
Figure 5: (a, c) Area under the curve of the EEL spectrum of the leftmost panel in Fig. 3b (\(b=10\,\mathrm{nm}\)), showcasing the divergence of the EEL probability for increasing values of (a) the multipole cutoff \(\ell_{\mathrm{max}}\), and (c) the transverse momentum cutoff \(q_{\mathrm{c}}\) plotted versus \(\ln(q_{\mathrm{c}})\). The data follow a linear divergence trend with respect to their corresponding horizontal axes. In both panels, the magenta line is fitted to the data points marked as black bullets, while the red crosses represent data points excluded from the fitting. (b, d) EEL probability calculated at \(b=10\,\mathrm{nm}\) for selected values of (b) \(\ell_{\mathrm{max}}\), and (d) \(q_{\mathrm{c}}\). In panels (a, b) we scan over \(\ell_{\mathrm{max}}\), while keeping a fixed value \(q_{\mathrm{c}}=0.71\,\mathrm{nm}^{-1}\), whereas in (c, d) we scan over \(q_{\mathrm{c}}\) for a fixed value \(\ell_{\mathrm{max}}=63\), as denoted in the labels. In panel (c) the vertical lines correspond to the selected \(q_{\mathrm{c}}\) values presented in (d), following the same color coding.
smooth NP are suppressed and, depending on the size of this energy difference with respect to the linewidth of the degenerate mode, they are either split or broadened. Moreover, additional spectral features may arise from hot-spots, namely protuberances of large curvature that strongly enhance and confine the incident field [64]. It is important to note here that, as the features become increasingly smaller, a rigorous description of the system requires the implementation of nonlocal effects in the method [67; 68], which effectively introduces a cutoff in the scattering from large wavevector components [69; 70].
## IV Conclusion
We have presented an analytic and a numerical method for the study of spherical structures excited by fast electron beams. Based on Mie theory, we have derived formulas for the calculation of the EEL and CL probability that are valid for both aloof and penetrating electron beams, as is typically the practice in EEL and CL measurements. Focusing on the plasmon oscillations of a metallic NP as a testbed, we compared the analytic theory with numerical simulations performed using the DGTD method, and found excellent agreement. We discussed the applicability and limitations of each method, particularly for grazing trajectories and at energies near the surface- and bulk-plasmon resonances, and showcased the flexibility of the DGTD method by studying a NP with different degrees of surface corrugation, which can lead to resonance shifts and splittings due to the lifting of mode degeneracy. We thus believe that both methods are essential and complementary for exploring collective optical excitations in matter and for interpreting experimental observations.
###### Acknowledgements.
We thank F. Intravaia, C. Maciel-Escudero, and B. Beverungen for stimulating discussions. K. B. acknowledges funding by the German Research Foundation (DFG) in the framework of the Collaborative Research Center 1375 "Nonlinear Optics down to Atomic Scales (NOA)" (Project No. 398816777). N. A. M. is a VILLUM Investigator supported by VILLUM Fonden (grant No. 16498). The Center for Polariton-driven Light-Matter Interactions (POLIMA) is funded by the Danish National Research Foundation (Project No. DNRF165).
## Author contribution
P. E. S. and W. Z. contributed equally to this work. P. E. S. performed the analytic study and W. Z. the DGTD simulations. All authors participated in the discussion of the results and the writing of the manuscript.
## Appendix
The field expansion coefficients entering Eqs. (4) and (7) are given by
\[b^{\mathrm{I}}_{\ell m} =T^{11}_{M_{\ell}}b^{0,\mathrm{I}}_{\ell m}+T^{21}_{M_{\ell}}b^{0,\mathrm{II}}_{\ell m}-b^{0,\mathrm{II}}_{\ell m}, \tag{11a}\] \[b^{\mathrm{II}}_{\ell m} =T^{12}_{M_{\ell}}b^{0,\mathrm{I}}_{\ell m}+T^{22}_{M_{\ell}}b^{0,\mathrm{II}}_{\ell m}-b^{0,\mathrm{I}}_{\ell m}\big|_{\mathrm{air}}, \tag{11b}\] \[a^{\mathrm{I}}_{\ell m} =T^{11}_{E_{\ell}}a^{0,\mathrm{I}}_{\ell m}+T^{21}_{E_{\ell}}a^{0,\mathrm{II}}_{\ell m}-a^{0,\mathrm{II}}_{\ell m}, \tag{11c}\] \[a^{\mathrm{II}}_{\ell m} =T^{12}_{E_{\ell}}a^{0,\mathrm{I}}_{\ell m}+T^{22}_{E_{\ell}}a^{0,\mathrm{II}}_{\ell m}-a^{0,\mathrm{I}}_{\ell m}\big|_{\mathrm{air}}, \tag{11d}\]
where \(a/b^{0,\mathrm{I}}_{\ell m}\) and \(a/b^{0,\mathrm{II}}_{\ell m}\) correspond to the fields generated by the part of the electron trajectory lying in region I (inside the NP) and region II (outside the NP), respectively, as illustrated in Fig. 1. The coefficients are found by the following expressions
\[b^{0,\mathrm{II}}_{\ell m}=-\frac{ik_{0}^{2}e}{\varepsilon_{0}\omega}\frac{m} {\sqrt{\ell(\ell+1)}}\left[\mathcal{M}_{\ell m}K_{m}\left(\frac{\omega b}{v \gamma_{0}}\right)-ik_{0}\int_{-z_{\mathrm{e}}}^{z_{\mathrm{e}}}dz\,\exp{(i \omega z/v)}h^{+}_{\ell}(k_{0}r)Y^{m}_{\ell}\left(\theta,0\right)\right], \tag{12a}\]
Figure 6: Average EEL probability in the interaction between a spherical NP featuring surface roughness and an electron beam passing with velocity \(v=0.7c\) (kinetic energy \(\approx 200\,\mathrm{keV}\)) at distance \(b=100\,\mathrm{nm}\). The solid lines correspond to NPs, whose shapes deviate from that of a perfectly smooth sphere of radius \(R=75\,\mathrm{nm}\) (gray shaded area) by root-mean square roughness values \(\mathrm{rms}=2\,\mathrm{nm}\) (dark red curve) and \(\mathrm{rms}=4\,\mathrm{nm}\) (dark blue curve). The average EEL probability corresponds to the average values obtained with DGTD for 6 different rough meshes characterized by the same rms.
\[a^{0,\text{II}}_{\ell m}=\frac{ik_{0}^{2}e}{\varepsilon_{0}\omega}\frac{1}{\sqrt{\ell(\ell+1)}}\bigg{[}\frac{\mathcal{N}_{\ell m}}{\beta\gamma_{0}}K_{m}\left(\frac{\omega b}{v\gamma_{0}}\right)-\frac{i}{b}\int_{-z_{\mathrm{e}}}^{z_{\mathrm{e}}}dz\,\exp{(i\omega z/v)}\Big{\{}\mathcal{H}^{+}_{\ell m}(k_{0}z)+\mathcal{H}^{-}_{\ell m}(k_{0}z)\Big{\}}\bigg{]}, \tag{12b}\]
\[b^{0,\text{I}}_{\ell m}=-\frac{ik_{0}^{2}e}{\varepsilon_{0}\omega}\frac{m}{\sqrt{\ell(\ell+1)}}ik\int_{-z_{\mathrm{e}}}^{z_{\mathrm{e}}}dz\,\exp{(i\omega z/v)}j_{\ell}(kr)Y^{m}_{\ell}\left(\theta,0\right), \tag{12c}\]
and
\[a^{0,\text{I}}_{\ell m}=\frac{ik_{0}^{2}e}{\varepsilon_{0}\omega}\frac{1}{\sqrt{\ell(\ell+1)}}\frac{i}{b}\int_{-z_{\mathrm{e}}}^{z_{\mathrm{e}}}dz\,\exp{(i\omega z/v)}\Big{\{}\mathcal{J}^{-}_{\ell m}(kz)+\mathcal{J}^{+}_{\ell m}(kz)\Big{\}}. \tag{12d}\]
In Eqs. (11) the notation \(a/b^{0,\text{I}}_{\ell m}|_{\text{air}}\) indicates evaluation of the terms in Eqs. (12c) and (12d) in air (\(\varepsilon=1\), \(k=k_{0}\)). Moreover, in Eqs. (7) and (12) we have set
\[\mathcal{M}_{\ell m}=i^{\ell+m}\sqrt{\frac{2\ell+1}{\pi}\frac{(\ell-m)!}{( \ell+m)!}}\frac{(2m-1)!!}{(\beta\gamma_{0})^{m}}G^{m+1/2}_{\ell-m}\left(\frac {1}{\beta}\right), \tag{13}\]
and
\[\mathcal{N}_{\ell m}=c^{m}_{\ell}\mathcal{M}_{\ell\,m+1}-c^{-m}_{\ell} \mathcal{M}_{\ell\,m-1}, \tag{14}\]
with
\[c^{m}_{\ell}=\frac{1}{2}\sqrt{(\ell-m)(\ell+m+1)}, \tag{15}\]
where \(G^{m+1/2}_{\ell-m}(x)\) is the Gegenbauer polynomial. Eq. (13) holds for \(m\geq 0\), while \(\mathcal{M}_{\ell-m}=(-1)^{m}\mathcal{M}_{\ell m}\). Additionally, in Eqs. (7) and (12), we have set
\[\mathcal{F}^{\pm}_{\ell m}(k_{n}z)=\mp c^{\pm m}_{\ell}\bigg{\{} \frac{k_{n}b^{2}}{r}f^{\prime}_{\ell}(k_{n}r)Y^{m\pm 1}_{\ell}(\theta,0)\] \[\pm\frac{zb}{r^{2}}f_{\ell}(k_{n}r)\left[c^{\pm m+1}_{\ell}Y^{m\pm 2 }_{\ell}(\theta,0)-c^{\pm m}_{\ell}Y^{m}_{\ell}(\theta,0)\right]\] \[+(1\pm m)f_{\ell}(k_{n}r)Y^{m\pm 1}_{\ell}(\theta,0)\bigg{\}}. \tag{16}\]
Eq. (16) holds for any type of spherical Bessel function \(f_{\ell}(k_{n}r)\) evaluated in any medium \(n\) (the prime here and on any other Bessel function denotes the derivative of the function with respect to the argument). In particular, in Eq. (12d) we use expression (16) for \(\mathcal{F}^{\pm}_{\ell m}(k_{n}z)=\mathcal{J}^{\pm}_{\ell m}(kz)\) and \(f_{\ell}=j_{\ell}\), whereas in Eq. (12b) we use \(\mathcal{F}^{\pm}_{\ell m}(k_{n}z)=\mathcal{H}^{\pm}_{\ell m}(k_{0}z)\) and \(f_{\ell}=h^{+}_{\ell}\).
Finally in Eqs. (11) we have introduced the Mie coefficients
\[T^{22}_{E_{\ell}} =\frac{\varepsilon j_{\ell}(kR)\Psi^{\prime}_{\ell}(k_{0}R)-\Psi^{ \prime}_{\ell}(kR)j_{\ell}(k_{0}R)}{h^{+}_{\ell}(k_{0}R)\Psi^{\prime}_{\ell}(kR )-\varepsilon\xi^{\prime}_{\ell}(k_{0}R)j_{\ell}(kR)}, \tag{17a}\] \[T^{22}_{M_{\ell}} =\frac{j_{\ell}(kR)\Psi^{\prime}_{\ell}(k_{0}R)-\Psi^{\prime}_{ \ell}(kR)j_{\ell}(k_{0}R)}{h^{+}_{\ell}(k_{0}R)\Psi^{\prime}_{\ell}(kR)-\xi^{ \prime}_{\ell}(k_{0}R)j_{\ell}(kR)},\] (17b) \[T^{21}_{E_{\ell}} =-\frac{i\sqrt{\varepsilon}/(k_{0}R)}{h^{+}_{\ell}(k_{0}R)\Psi^{ \prime}_{\ell}(kR)-\xi^{\prime}_{\ell}(k_{0}R)j_{\ell}(kR)},\] (17c) \[T^{21}_{M_{\ell}} =-\frac{i/(k_{0}R)}{h^{+}_{\ell}(k_{0}R)\Psi^{\prime}_{\ell}(kR)- \xi^{\prime}_{\ell}(k_{0}R)j_{\ell}(kR)},\] (17d) \[T^{11}_{E_{\ell}} =\frac{\varepsilon\xi^{\prime}_{\ell}(k_{0}R)h^{+}_{\ell}(kR)-h^{+} _{\ell}(k_{0}R)\xi^{\prime}_{\ell}(kR)}{h^{+}_{\ell}(k_{0}R)\Psi^{\prime}_{\ell} (kR)-\varepsilon\xi^{\prime}_{\ell}(k_{0}R)j_{\ell}(kR)},\] (17e) \[T^{11}_{M_{\ell}} =\frac{\xi^{\prime}_{\ell}(k_{0}R)h^{+}_{\ell}(kR)-h^{+}_{\ell}(k_{ 0}R)\xi^{\prime}_{\ell}(kR)}{h^{+}_{\ell}(k_{0}R)\Psi^{\prime}_{\ell}(kR)-\xi^{ \prime}_{\ell}(k_{0}R)j_{\ell}(kR)},\] (17f) \[T^{12}_{E_{\ell}} =-\frac{i/(k_{0}R)}{h^{+}_{\ell}(k_{0}R)\Psi^{\prime}_{\ell}(kR)- \varepsilon\xi^{\prime}_{\ell}(k_{0}R)j_{\ell}(kR)},\] (17g) \[T^{12}_{M_{\ell}} =-\frac{i/(\sqrt{\varepsilon}k_{0}R)}{h^{+}_{\ell}(k_{0}R)\Psi^{ \prime}_{\ell}(kR)-\xi^{\prime}_{\ell}(k_{0}R)j_{\ell}(kR)}, \tag{17h}\]
where we have adopted the notation of the Riccati-Bessel functions \(\Psi_{\ell}(x)=xj_{\ell}(x)\) and \(\xi_{\ell}(x)=xh^{+}_{\ell}(x)\).
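For completeness, a sketch of how, e.g., Eq. (17b) can be evaluated numerically, assuming SciPy's spherical Bessel routines (which accept complex arguments in recent releases); \(\varepsilon\) should be passed as a complex number, and all helper names are ours:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def h_plus(l, x, derivative=False):
    """Spherical Hankel function of the first kind (and its derivative)."""
    return spherical_jn(l, x, derivative) + 1j * spherical_yn(l, x, derivative)

def riccati_psi_prime(l, x):
    """Psi_l'(x) with Psi_l(x) = x j_l(x)."""
    return spherical_jn(l, x) + x * spherical_jn(l, x, derivative=True)

def riccati_xi_prime(l, x):
    """xi_l'(x) with xi_l(x) = x h_l^+(x)."""
    return h_plus(l, x) + x * h_plus(l, x, derivative=True)

def T22_M(l, eps, k0, R):
    """Magnetic Mie coefficient of Eq. (17b); k = sqrt(eps) k0."""
    k = np.sqrt(eps) * k0
    num = (spherical_jn(l, k * R) * riccati_psi_prime(l, k0 * R)
           - riccati_psi_prime(l, k * R) * spherical_jn(l, k0 * R))
    den = (h_plus(l, k0 * R) * riccati_psi_prime(l, k * R)
           - riccati_xi_prime(l, k0 * R) * spherical_jn(l, k * R))
    return num / den
```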
|
2305.17169 | Fitting a Deep Generative Hadronization Model | Hadronization is a critical step in the simulation of high-energy particle
and nuclear physics experiments. As there is no first principles understanding
of this process, physically-inspired hadronization models have a large number
of parameters that are fit to data. Deep generative models are a natural
replacement for classical techniques, since they are more flexible and may be
able to improve the overall precision. Proof of principle studies have shown
how to use neural networks to emulate specific hadronization when trained using
the inputs and outputs of classical methods. However, these approaches will not
work with data, where we do not have a matching between observed hadrons and
partons. In this paper, we develop a protocol for fitting a deep generative
hadronization model in a realistic setting, where we only have access to a set
of hadrons in data. Our approach uses a variation of a Generative Adversarial
Network with a permutation invariant discriminator. We find that this setup is
able to match the hadronization model in Herwig with multiple sets of
parameters. This work represents a significant step forward in a longer term
program to develop, train, and integrate machine learning-based hadronization
models into parton shower Monte Carlo programs. | Jay Chan, Xiangyang Ju, Adam Kania, Benjamin Nachman, Vishnu Sangli, Andrzej Siodmok | 2023-05-26T18:00:06Z | http://arxiv.org/abs/2305.17169v2 | # Fitting a Deep Generative Hadronization Model
###### Abstract
Hadronization is a critical step in the simulation of high-energy particle and nuclear physics experiments. As there is no first principles understanding of this process, physically-inspired hadronization models have a large number of parameters that are fit to data. Deep generative models are a natural replacement for classical techniques, since they are more flexible and may be able to improve the overall precision. Proof of principle studies have shown how to use neural networks to emulate specific hadronization when trained using the inputs and outputs of classical methods. However, these approaches will not work with data, where we do not have a matching between observed hadrons and partons. In this paper, we develop a protocol for fitting a deep generative hadronization model in a realistic setting, where we only have access to a set of hadrons in data. Our approach uses a variation of a Generative Adversarial Network with a permutation invariant discriminator. We find that this setup is able to match the hadronization model in Herwig with multiple sets of parameters. This work represents a significant step forward in a longer term program to develop, train, and integrate machine learning-based hadronization models into parton shower Monte Carlo programs.
## 1 Introduction
Hadronization connects theory and experiment by transforming the fundamental degrees of freedom - quarks and gluons - into observable degrees of freedom - hadrons. However, we do not have a first-principles understanding of hadronization, and so existing approaches use physically-inspired, highly flexible models fit to data. Our vision is to replace these hand-crafted models with deep learning, where the additional expressivity would have the potential to enhance precision, the models would be readily differentiable, and they would be naturally compatible with Graphics Processing Units (GPUs).
There are currently two hadronization models in wide use: the cluster model [1] and the string model [2; 3]. The former is employed by default in the Herwig [4; 5; 6; 7] and Sherpa [8; 9] Parton Shower Monte Carlo (PSMC) programs and the latter is used by default in the Pythia [10; 11] PSMC. Previously, Refs. [12] and [13] showed that deep generative models could emulate the string and cluster models, respectively, in a simple setting where the neural network has access to parton-hadron pairs and only pions are produced *. Furthermore, these models were integrated into the Pythia and Herwig PSMC programs. These papers marked an important milestone, but represent only the first steps along a multiyear program to achieve a complete, integrated, and tuned machine learning (ML)-based hadronization model.
Footnote *: Hadron type was taken from Herwig.
While previous work has shown that neural networks can emulate the existing hadronization models, we want to eventually fit the models to data. A fundamental challenge with using data directly is that hadronization acts locally on partons while only non-local information about hadrons is observable. In other words, events are measured as a permutation-invariant set of hadrons that have no inherent order or grouping to know which hadrons 'came from' the same partons. This means that we need a model that can learn to generate
hadrons from partons based on information from a loss function that acts on the set of observable hadrons.
The two-level challenge of fitting to data rules out most standard implementations of deep generative models. Variational Autoencoders (VAE) [14; 15], Normalizing flows (NF) [16; 17], and diffusion models [18; 19; 20] do not directly apply because we need to know the probability density of the partons and we need a permutation invariant reconstruction loss (VAE), probability density (NF), or score function (diffusion). While there has been some progress on these fronts [21; 22; 23; 24; 25; 26; 27; 28; 29; 30], Generative Adversarial Networks (GANs) [31; 32] can be naturally applied to this setting. For GANs, the latent space does not require a tractable probability density, the discriminator can be applied on a different level (hadrons) as the generator (partons), and permutation invariance can be enforced by using a set-based classifier for the discriminator. GANs were the first deep generative model applied to particle physics data [33; 34; 35] and have since been extensively studied (see e.g. Ref. [36; 37; 38]). GAN-like setups have also been used for two-level fitting in the context of parameter estimation [39] and unfolding [40]. We propose to use GANs for fitting hadronization models to data.
We embed the GAN-based hadronization model HadML introduced in Ref. [13] in a full event-level fitting framework. A fully connected neural network takes as input individual clusters and outputs pairs of hadrons. This network acts in the cluster rest frame. The resulting hadrons are then boosted to the lab frame and the GAN discriminator is based on Deep Sets [41], which is a permutation invariant neural network architecture. We restrict ourselves to the cluster model inputs (clusters created from pre-confined partons) and pion outputs in order to focus on the two-level fitting challenge. These simplifications will be relaxed in future work.
This paper is organized as follows. Section 2 introduces the conceptual and technical details behind our fitting framework. Numerical examples are presented in Sec. 3, including two variations on the cluster model. The paper ends with conclusions and outlook in Sec. 4.
## 2 Methods
### Statistical Approach
Our goal is to learn a conditional generator function \(G\left(z,\lambda;\omega_{G}\right)\) which maps cluster kinematic properties onto the kinematic properties of the two hadrons from each cluster decay \(\{h_{1},h_{2}\}\in\mathbb{R}^{2N_{h}}\) with the parameters \(\omega_{G}\). Here, \(z\in\mathbb{R}^{N_{z}}\) is the input noise variable sampled from the prior \(p\left(z\right)\), and \(\lambda\in\mathbb{R}^{N_{\lambda}}\) is the conditional variable, namely the cluster kinematic properties. Since the two hadrons from a cluster decay must be back-to-back in the rest frame of the cluster, the generator \(G\) can instead output the polar angles \(\theta\) and \(\phi\) of the "first hadron" in the cluster rest frame. Note that here \(\phi\) is defined in the range of \(\left(-\pi/2,\ \pi/2\right)\), and the hadron with \(\phi\) in this range is defined to be the first hadron. In the original setup [13], a discriminator function \(D\left(\theta,\phi;\omega_{D}\right)\), parametrized with \(\omega_{D}\), is learned to represent the
probability that \(\left\{\theta,\phi\right\}\) came from cluster fragmentation rather than the generator \(G\). \(G\) and \(D\) are then trained alternately to maximize and minimize the loss function, respectively:
\[L=-\sum_{\lambda\sim\text{Herwig, }z\sim p\left(z\right)}\left(\log\left(D\left( \tau\left(\lambda\right)\right)\right)+\log\left(1-D\left(G\left(z,\lambda \right)\right)\right)\right)\,, \tag{1}\]
where \(\tau\) is the cluster fragmentation.
In the setup above, all hadrons are paired and matched to a cluster. In the actual data, however, the only observables are the kinematic properties of each individual hadron. In order to be able to fit the model to actual data, where the hadron matching and cluster information is not accessible, the discriminator function is modified to be \(D_{E}\left(x\right)\), where \(D_{E}\) takes a set of hadron kinematic properties \(x\equiv\left\{h_{1},h_{2},...,h_{n}\right\}\) in the same event as inputs. Furthermore, we parameterize \(D_{E}\) as a Deep Sets model [41]:
\[D_{E}\left(x\right)=F\left(\frac{1}{n}\sum_{i=1}^{n}\Phi\left(h_{i},\omega_{D_ {\Phi}}\right),\omega_{F}\right)\,, \tag{2}\]
where \(\Phi\) embeds a set of hadrons into a fixed-length latent space and \(F\) acts on the average of the latent space. Due to the average, \(D_{E}\) can take any length of hadron set and is invariant under permutations of hadrons. The loss function thus becomes:
\[L=-\sum_{x\sim\text{data}}\log\left(D_{E}\left(x\right)\right)-\sum_{\left\{G \right\}\sim\text{Herwig, }z\sim p\left(z\right)}\log\left(1-D_{E}\left(\left\{G \left(z,\lambda\right)\right\}\right)\right)\,, \tag{3}\]
where \(\left\{G\left(z,\lambda\right)\right\}\) is generated by a set of clusters that came from the same event. The generator acts in the cluster rest frame and then the resulting hadrons are boosted into the lab frame before being passed to the discriminator. A summary of the setup and how it differs from Ref.[13] is presented in Fig. 1.
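For concreteness, a minimal PyTorch sketch of the permutation-invariant discriminator of Eq. (2) follows; the class name, layer sizes, and the padding mask for variable-length events are illustrative choices rather than the exact architecture used here.

```python
import torch
import torch.nn as nn

class DeepSetsDiscriminator(nn.Module):
    """Permutation-invariant discriminator D_E of Eq. (2): F(mean_i Phi(h_i)).
    Layer sizes and the padding mask are illustrative choices."""
    def __init__(self, hadron_dim=4, latent_dim=256):
        super().__init__()
        self.phi = nn.Sequential(                      # per-hadron embedding Phi
            nn.Linear(hadron_dim, latent_dim), nn.LeakyReLU(),
            nn.Linear(latent_dim, latent_dim))
        self.F = nn.Sequential(                        # acts on the averaged latent
            nn.Linear(latent_dim, latent_dim), nn.LeakyReLU(),
            nn.Linear(latent_dim, 1), nn.Sigmoid())

    def forward(self, hadrons, mask):
        # hadrons: (batch, n_max, hadron_dim); mask: (batch, n_max), 1 for real hadrons
        z = self.phi(hadrons) * mask.unsqueeze(-1)        # zero out padded entries
        z = z.sum(dim=1) / mask.sum(dim=1, keepdim=True)  # average over the set
        return self.F(z)                                  # probability of "data"
```

The average over the embedded hadrons makes the output invariant to their ordering and lets the same network handle events of any length.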
In our implementation, \(G\) is a neural network. However, this approach could also be used to fit data (without binning) to a parametric physics model. For that case, \(G\) would be e.g. the cluster model and the parameters would not be weights and biases of a neural network, but instead the parameters of the cluster model. This would require making the cluster model differentiable so that gradients could be passed through the model. We leave explorations of this hybrid setup to future work.
### Machine Learning Implementation
Both the generator and discriminator functions are parametrized as neural networks and implemented using PyTorch [42]. The generator is a fully connected network which consists of two hidden layers with 256 nodes per layer. The noise dimension is set to 10. The discriminator comprises two networks, \(\Phi\) and \(F\). Both \(\Phi\) and \(F\) are fully connected networks with two hidden layers of 256 nodes each. Each intermediate layer in these networks uses batch normalization and a LeakyReLU [43] activation function. The last layer of the generator uses a tanh activation function to restrict the outputs to be in the range of \((-1,\ 1)\). The outputs are then scaled and transformed linearly to match the actual range
\((-\pi/2,\ \pi/2)\) for \(\phi\) and \((0,\ \pi)\) for \(\theta\). The last layer of \(F\) uses a sigmoid activation function and no activation is used for the last layer of \(\Phi\).
All neural network inputs are normalized to the range of \((-1,\ 1)\), whereas the noise prior \(p\) is a Gaussian distribution with a mean of 0 and width of 1. The generator and discriminator are optimized alternately (1 discriminator step and 5 generator steps) with Adam [44] with a learning rate of \(5\times 10^{-7}\) and \(10^{-4}\) for the generator and discriminator, respectively. The training uses a batch size of 10,000 and is performed for 6,000 epochs. The hyperparameters were optimized with Weights and Biases [45].
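To make the alternating optimization explicit, the following is a hedged sketch of one training round with the 1:5 discriminator-to-generator step ratio described above; the `boost_to_lab` helper and the non-saturating generator objective are our assumptions and need not match the exact implementation.

```python
import torch
import torch.nn.functional as F

def training_round(G, D_E, clusters, data_hadrons, mask, opt_G, opt_D,
                   boost_to_lab, n_gen_steps=5, noise_dim=10):
    """One alternating round: 1 discriminator step, then n_gen_steps generator
    steps, in the spirit of Eq. (3). `boost_to_lab` is an assumed helper that
    maps rest-frame decay angles to lab-frame hadron four-vectors."""
    batch, n_clusters = clusters.shape[0], clusters.shape[1]
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: minimize -log D(data) - log(1 - D(generated)).
    z = torch.randn(batch, n_clusters, noise_dim)
    fake = boost_to_lab(G(z, clusters), clusters).detach()
    d_loss = F.binary_cross_entropy(D_E(data_hadrons, mask), ones) \
           + F.binary_cross_entropy(D_E(fake, mask), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator steps: the common non-saturating surrogate, minimize -log D(generated).
    for _ in range(n_gen_steps):
        z = torch.randn(batch, n_clusters, noise_dim)
        fake = boost_to_lab(G(z, clusters), clusters)
        g_loss = F.binary_cross_entropy(D_E(fake, mask), ones)
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```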
## 3 Results
### Datasets
Crucial data for fitting hadronisation models are LEP events collected in \(e^{+}e^{-}\) collisions at the center-of-mass energy \(\sqrt{s}=91.2\) GeV. Therefore, we used such events, generated with version 7.2.1 of the Herwig Monte Carlo generator, as the training dataset for our generative hadronization model. As mentioned earlier, the cluster model [1] is used for hadronisation in the Herwig generator. Based on colour preconfinement [46], the cluster model groups a partonic final state into a set of colour-singlet clusters (pre-hadrons) with an invariant mass distribution that is independent of the specific hard scattering process or its centre-of-mass energy and that peaks at low masses. Therefore, most clusters decay
Figure 1: An overview of the model presented in this paper and how it compares to HadML v1 from Ref. [13]. Since the clusters are not observable in data, the discriminator in v2 acts on sets of hadrons and does not have access to cluster-hadron-hadron labels. We first study the performance in the same Herwig setup as in Ref. [13] (‘Closure Test’) and then check that it is also able to fit another Herwig setup (Cluster Frag’) with variations in the cluster hadronization model (‘Stress Test’).
into two hadrons. However, a small fraction of clusters are too heavy for this approach to be justified. Therefore, these heavy clusters are first split into lighter clusters before decaying. The decay of such massive clusters is not discussed in this publication but will be considered in future work. Each entry in our training data set includes information about the four-momentum of all the light clusters in an event and the four-momenta of their parents (partons) and children (hadrons), along with their flavours. An example of an entry from our data sets is available on Zenodo at Ref. [47]. To simplify the training data further, only decays into \(\pi\) mesons were considered‡. To check whether the model can adapt to different variants of the kinematics of hadron decays, we also prepared two datasets with different, minimal (0) and maximal (2) settings of the **ClSmr** parameter. The **ClSmr** parameter is the main parameter governing the kinematics of cluster hadron decay. Hadrons that contain a parton produced in the perturbative stage of the event retain the direction of the parton in the cluster rest frame with possible Gaussian smearing of the direction. The smearing is controlled by the **ClSmr** parameter through an angle \(\theta_{\rm smear}\) where
Footnote ‡: In Herwig, this is achieved by adding the following line: set HadronSelector:Trial 1 into the default LEP:in input card. The only other modification to the default hadronisation settings was the change that the hadrons produced from cluster decays were on the mass shell. This can be achieved by adding the command: set ClusterDecayer:OnShell Yes in the input file.
\[\cos\theta_{\rm smear}=1+{\bf ClSmr}\log{\cal R}. \tag{1}\]
where \({\cal R}\) is a uniform random number chosen from \([0,1]\). For more details about the parameters of the cluster model implemented in Herwig, see Chapter 7 of the generator's manual [5].
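As a minimal sketch, the smearing angle can be sampled as follows; redrawing when \(\cos\theta_{\rm smear}\) falls below \(-1\) is our assumption and may differ from Herwig's internal handling.

```python
import numpy as np

def sample_smearing_angle(cl_smr, rng=np.random.default_rng()):
    """Draw theta_smear from cos(theta_smear) = 1 + ClSmr * log(R), R ~ U(0, 1].
    Redrawing unphysical values below -1 is our assumption, not Herwig's exact logic."""
    while True:
        cos_theta = 1.0 + cl_smr * np.log(rng.uniform(1e-12, 1.0))
        if cos_theta >= -1.0:
            return np.arccos(cos_theta)

# ClSmr = 0 gives no smearing (theta = 0); ClSmr = 2 broadens the distribution.
angles = [sample_smearing_angle(2.0) for _ in range(5)]
```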
In Sec. 3.2 we use the minimal **ClSmr** as our alternative sample and refer to this setup as Herwig Cluster \(kin^{min}\). As would be the case with actual data, we use clusters from the nominal setting when fitting the alternative sample, although changing **ClSmr** does not change the cluster kinematic properties and thus the inputs to the GAN model are statistically correct. When we fit the nominal sample, the cluster inputs to the fit are distinct but statistically identical to those in the dataset we are fitting.
### Fitted Models
The training history of the fit is presented in Fig. 2. As expected, the discriminator loss increases and the generator loss decreases, with a final value near \(\log(2)\) (classifier outputs 0.5 for all examples). As an independent evaluation of the model performance, we also compute the Wasserstein distance between the true and generated four-momenta in the lab frame that are used by the discriminator to update the generator. The Wasserstein distance is computed as the average over the first Wasserstein distance for each four-vector component with Scipy [48]. Interestingly, the best Wasserstein distance decreases for the first 1000 epochs, then plateaus for the next 3000 epochs, before dropping to the final value around 5500 epochs. There are many possible variations on the GAN training setup that are possible to further improve the performance and we plan to explore these in the future.
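The monitoring metric can be sketched as below, assuming lab-frame hadron four-vectors stored as arrays of shape (n, 4); it mirrors the description above of averaging the 1D Wasserstein distances over the four components with Scipy.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def four_vector_wasserstein(true_hadrons, gen_hadrons):
    """Average of the 1D (first) Wasserstein distances over the four
    components (E, px, py, pz); inputs are arrays of shape (n, 4) of
    lab-frame hadron four-vectors."""
    return np.mean([wasserstein_distance(true_hadrons[:, i], gen_hadrons[:, i])
                    for i in range(4)])
```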
The direct inputs and outputs of the model are shown in Fig. 3. The generator produces two outputs per cluster, corresponding to the angle of one of the pions in the cluster rest
frame in spherical coordinates. Histograms corresponding to this model are shown in the top row of Fig. 3. The marginal distributions look similar to isotropic decays. For illustration, we also show what an initialized, untrained GAN looks like in both coordinates. The fact that the initial GAN is so far from the final GAN is a non-trivial demonstration of the learning. Both GAN models match their respective truth Herwig spectra well. The marginal \(\phi\) distribution is uniform, which is difficult for generative models to reproduce exactly. In the future, it may be possible to make this more precise by constructing the model to give a uniform marginal.
After the clusters are decayed, the resulting hadron kinematic properties are Lorentz boosted to the lab frame and then aggregated over all clusters in the event. The second row of Fig. 3 shows histograms of the resulting hadron four-vectors, which are the inputs to the discriminator. We only show the energy \(E\) and the \(x\) momentum \(p_{x}\), but similar trends hold for \(p_{y}\) and \(p_{z}\). Since hadronization is a small correction for such inclusive observables, the kinematic properties are mostly set by the Herwig parton shower, which is the same for the Herwig and GAN lines in the plots (since the GAN takes the clusters from the parton shower as input). This is the reason why the initial GAN starts so close to Herwig truth. However, the alternative Herwig sample differs significantly from the nominal Herwig sample, in particular in how hadrons split energy, which is most clearly seen in the tails of the energy and momentum distributions. The GAN model is an excellent match to the Herwig events across the full spectra.
Figure 2: Generator loss, discriminator loss and running best Wasserstein distance as a function of the training epoch. The running best Wasserstein distance is quantified by the \(y\) axis on the right side of the plot.
Figure 4 goes beyond the direct inputs and outputs by studying derived, but measurable, quantities. The first plot in Fig. 4 is the number of hadrons. Since we restrict our attention to \(1\to 2\) decays only, the number of hadrons is an even number, with a mode of 12. It is not possible to uniquely pair observed hadrons with their partner from the same cluster decay, but we can approximate the combination using nearest neighbor information. In particular, since the hadron masses are small compared to the typical cluster energy in the lab frame, the two hadrons tend to be close together in phase space. For all hadrons, we assign a hadron neighbor as the particle that minimizes \(\Delta R^{2}=\Delta\phi^{2}+\Delta\eta^{2}\). A histogram of the resulting \(\Delta R\) distribution is shown in the middle left plot of Fig. 4. The peak is at about 0.1, with most hadrons having a neighbor less than 0.1 away. While there is some difference between models in the \(\Delta R\) distribution, a more distinguishing observable is the energy sharing between hadrons in the reconstructed cluster (middle right of Fig. 4). The nominal Herwig has more equal sharing of energy, while the alternative Herwig sample is much more asymmetric. The GAN models are able to match these trends, which both differ significantly from the initialized and untrained GAN model. Future GAN models
Figure 3: Top: the generative model in the true cluster rest frame. Bottom: two of the four-vector components that are used by the discriminator to update the generator.
could be improved by adding in these features to the discriminator directly.
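A small sketch of the nearest-neighbor pairing used above, assuming per-event arrays of hadron \(\phi\) and \(\eta\); the periodic wrapping of \(\Delta\phi\) is our own convention.

```python
import numpy as np

def nearest_neighbor_dR(phi, eta):
    """Delta R = sqrt(dphi^2 + deta^2) from each hadron to its nearest
    neighbor in the same event; dphi is wrapped into (-pi, pi]."""
    dphi = phi[:, None] - phi[None, :]
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # periodic wrapping
    deta = eta[:, None] - eta[None, :]
    dR = np.sqrt(dphi**2 + deta**2)
    np.fill_diagonal(dR, np.inf)                  # exclude self-pairing
    return dR.min(axis=1)
```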
Additionally, we consider properties of the hadrons in the reconstructed cluster frame (bottom row of Fig. 4). Since the reconstructed clusters are not exactly the true clusters, the \(\phi\) and \(\theta\) distributions do not exactly match the top row of Fig. 3, although they are qualitatively similar. The distribution of \(\phi\) is more discriminating between models, where the GAN models perform well, except near the edge of phase space where both GAN models match the nominal Herwig events.
A key advantage of this fitting protocol over other methods is that it can accommodate unbinned and high-dimensional inputs. It would be possible to replace our neural network discriminator (and cross-entropy loss) with a \(\chi^{2}\) fit to binned histograms, like the ones in Fig. 3 (bottom) and 4, which are all observable in the lab frame. However, this would be a highly non-trivial modification to our setup and would necessarily be less effective. Comparing with standard tools that process low-dimensional and binned inputs would likely be inconclusive because we will not know if the difference in performance is from the tool or from the less information contained in the data.
As a compromise in order to quantify the information gained from using our discriminator setup, we use a set of auxiliary classifiers. Our nominal setup is represented by our discriminator trained on the same inputs as our GAN model and to distinguish the two Herwig cluster model variations. The information content is represented by the area under the Receiver Operating Characteristic (ROC) curve or AUC, which is a standard metric for information content. An AUC of 0.5 means there is no useful information and an AUC of 1 means that the models can be exactly distinguished. For comparison, we compute the AUC also of the single observables in Fig. 3 (bottom) and Fig. 4. We do not bin these observables to avoid arbitrary binning choices and assume (which is conservative) that the bins of any actual measurement would be chosen to be maximally effective for this task. Technically, the AUC for single observables is computed by scanning over the observable to determine the true positive rate versus the false positive rate.
Since a threshold cut may not be optimal for all observables, we have also checked how the results change if we train a simple Boosted Decision Tree (BDT) using sklearn [49]. We find that the BDT-based AUCs (including for the neural network as an observable) are consistent with the non-BDT ones. Numerically, the AUCs are as follows: neural network: 0.77, energy ratio (Fig. 4 middle right): 0.55, \(\Delta R\) (Fig. 4 middle left): 0.53, rest frame \(\theta\) (Fig. 4 lower right): 0.51, rest frame \(\phi\) (Fig. 4 lower left): 0.51, \(p_{x}\) (Fig. 3 lower right): 0.54, \(E\) (Fig. 3 lower left): 0.57. The information content accessible to the neural network far exceeds the information in any of the individual observables.
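For reference, a hedged sketch of the single-observable AUC computation described above; `roc_auc_score` from sklearn internally performs the threshold scan, and symmetrizing over the cut direction is our convention.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def single_observable_auc(obs_nominal, obs_alternative):
    """AUC for separating the two Herwig variations with one observable.
    roc_auc_score scans a threshold over the observable internally."""
    scores = np.concatenate([obs_nominal, obs_alternative])
    labels = np.concatenate([np.zeros(len(obs_nominal)),
                             np.ones(len(obs_alternative))])
    auc = roc_auc_score(labels, scores)
    return max(auc, 1.0 - auc)   # 0.5 means no discriminating power
```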
Figure 4: Top: A histogram of the number of hadrons. Since each cluster decays into two pions, the number of hadrons is an even integer. Middle: \(\Delta R\) between a given hadron and its nearest neighbor in \(\phi-\eta\) in the lab frame (left) and the ratio of energies between a given hadron and its neighbor (right). Bottom: The \(\phi\) (left) and \(\theta\) (right) of the hadrons in the reconstructed cluster frame.
## 4 Conclusions and Outlook
We have presented a setup for fitting deep generative hadronization models to data. The main challenge we have addressed is the lack of truth labels connecting partons and hadrons, which were used by previous deep generative hadronization models [12; 13]. In order to address this challenge, we used a two-level Generative Adversarial Network (GAN) setup, where the generator acts at parton level and the discriminator acts at hadron level. Since there is no natural order to the hadrons, the discriminator is a classifier based on the Deep Sets architecture that can process variable-length and permutation-invariant inputs. We have shown that we can fit this model to two variations of the Herwig cluster hadronization model. The GAN is able to reproduce Herwig well, with additional refinement and optimization required in the future to improve the precision further.
While this represents a significant step towards realizing a deep generative hadronization model, there are still other aspects to address. We have restricted our attention to pions, but a complete model will need to generate the full spectrum of hadrons in addition to kinematic information. Additionally, we have started from clusters decaying to two hadrons, while in reality, more complex arrangements are possible. In fact, we ran a test to fit the string model in Pythia using our setup¶, but the cluster model is not flexible enough. Modifications that allow for more general parton to hadron mappings, including variable-length generation [24; 25; 26; 27; 28; 29; 30; 50], will be required in the future. In particular, we would not take pre-confinement as a starting point and instead also model the combination of partons with a neural network (so partons to hadrons instead of clusters to hadrons). Such a model would have the capacity to mimic the cluster or string models as well as go beyond either model. Such an architecture could be swapped in for our generator and used with the same GAN setup to do the final fit.
Footnote ¶: For this, we used exactly the same partons as in the Herwig dataset and ran the string model in Pythia, modified to only produce pions.
Once we have a full model, there is a question of which data to use for the fit. Traditionally, hadronization models have been fit to histograms (binned differential cross section measurements) from \(e^{+}e^{-}\) data using tools like Professor [51] and other automated tuning protocols [52; 53; 54]. However, these approaches may need to be modified since the parameter space of the models is much bigger. One possibility is to use a variation of Unbinned Profiled Unfolding (UPU) [55], which uses histograms to steer neural networks with a two-level fit for unfolding. The reweighting function in UPU could be replaced with the hadronization model. Another possibility is to start with unbinned data, as is now possible with machine learning-based unfolding methods [56; 57; 58; 59; 60; 61; 62; 63; 64; 65]. There are also now first unbinned cross section measurements [66; 67; 68; 69; 70], although none are currently published without binning [56]. There are not yet any unbinned measurements from \(e^{+}e^{-}\), but results from deep inelastic scattering may be effective, since they share many of the features of \(e^{+}e^{-}\) that make them particularly clean with respect to hadron colliders.
While there are still multiple components needed to arrive at a complete ML-based hadronization model, the program ahead is well-motivated. Current models are excellent, but the additional flexibility of neural networks will allow us to improve the precision on
hadronization modeling for precise measurements that are affected by these uncertainties. With improvements in machine learning models, it may also be possible to use these tools to learn more about hadronization itself, which remains a key research topic in nuclear physics.
## Software and Datasets
The code for this paper can be found at [https://github.com/hep-lbdl/hadml/releases/tag/1.0.0](https://github.com/hep-lbdl/hadml/releases/tag/1.0.0) [71]. The data sets are hosted on Zenodo at Ref. [47].
## Acknowledgments
We thank Aishik Ghosh for many useful discussions. The work of AS is funded by grant no. 2019/34/E/ST2/00457 of the National Science Centre, Poland and the Priority Research Area Digiworld under the program Excellence Initiative - Research University at the Jagiellonian University in Cracow. JC, BN and XJ are supported by the U.S. Department of Energy (DOE), Office of Science under contract number DE-AC02-05CH11231. JC is supported by the DOE, Office of Science under contract DE-SC0017647.
|
2306.11048 | UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM | We present an uncertainty learning framework for dense neural simultaneous
localization and mapping (SLAM). Estimating pixel-wise uncertainties for the
depth input of dense SLAM methods allows re-weighing the tracking and mapping
losses towards image regions that contain more suitable information that is
more reliable for SLAM. To this end, we propose an online framework for sensor
uncertainty estimation that can be trained in a self-supervised manner from
only 2D input data. We further discuss the advantages of the uncertainty
learning for the case of multi-sensor input. Extensive analysis,
experimentation, and ablations show that our proposed modeling paradigm
improves both mapping and tracking accuracy and often performs better than
alternatives that require ground truth depth or 3D. Our experiments show that
we achieve a 38\% and 27\% lower absolute trajectory tracking error (ATE) on
the 7-Scenes and TUM-RGBD datasets respectively. On the popular Replica dataset
using two types of depth sensors, we report an 11\% F1-score improvement on
RGBD SLAM compared to the recent state-of-the-art neural implicit approaches.
Source code: https://github.com/kev-in-ta/UncLe-SLAM. | Erik Sandström, Kevin Ta, Luc Van Gool, Martin R. Oswald | 2023-06-19T16:26:25Z | http://arxiv.org/abs/2306.11048v2 | # UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM
###### Abstract
We present an uncertainty learning framework for dense neural simultaneous localization and mapping (SLAM). Estimating pixel-wise uncertainties for the depth input of dense SLAM methods allows re-weighing the tracking and mapping losses towards image regions that contain more suitable information that is more reliable for SLAM. To this end, we propose an online framework for sensor uncertainty estimation that can be trained in a self-supervised manner from only 2D input data. We further discuss the advantages of the uncertainty learning for the case of multi-sensor input. Extensive analysis, experimentation, and ablations show that our proposed modeling paradigm improves both mapping and tracking accuracy and often performs better than alternatives that require ground truth depth or 3D. Our experiments show that we achieve a 38% and 27% lower absolute trajectory tracking error (ATE) on the 7-Scenes and TUM-RGBD datasets respectively. On the popular Replica dataset using two types of depth sensors, we report an 11% F1-score improvement on RGBD SLAM compared to the recent state-of-the-art neural implicit approaches. Source code: [https://github.com/kev-in-ta/UncLe-SLAM](https://github.com/kev-in-ta/UncLe-SLAM).
## 1 Introduction
Neural scene representations have taken the 3D reconstruction field by storm [47, 41, 12, 42] and have recently also been built into SLAM systems [67, 81, 78] with excellent results for geometric reconstruction, hole filling, and novel view synthesis. However, their camera tracking performance is typically inferior to that of traditional sparse methods [9] that rely on feature point matching [81, 78]. A major difference to sparse methods, which focus on a small set of points, is that the rendering loss in most dense methods treats all pixels equally, although it is plausible that they differ in their amount of useful information for SLAM due to sensor noise. In the context of RGBD-cameras, it is well-known that several factors such as surface material type, texture _etc._, often affect the sensor's raw output, leading to noisy measurements [23, 4]. Introducing pixel-wise uncertainties into a dense SLAM approach allows us to model non-uniform weights to focus on tracking and mapping suitable scene parts in a continuous manner. This is akin to the discrete selection of feature points in traditional sparse approaches. Currently, the majority of dense neural SLAM approaches employ a uniform weighting for all pixels during mapping [81, 78, 37, 80] and tracking [81, 78, 67, 80]. Some efforts have been made to construct more informed pixel sampling strategies via active resampling or rejection based on the re-rendering loss for mapping [67] and tracking [37], but these approaches are ultimately limited by simple heuristics. In this paper, we therefore tackle the task of learning aleatoric depth sensor uncertainty on the fly to weigh scene parts in a non-uniform manner based on the estimated confidence. Furthermore, mobile devices are often equipped with more than one depth sensing modality and it is often observed that different modalities complement each other [58]. With these aspects in mind, we design our implicit SLAM system to perform dense SLAM with one or more depth sensors. Additionally, existing depth fusion methods that model single
Figure 1: **UnLe-SLAM benefit. Our proposed method learns depth uncertainty on the fly in a self-supervised way. We show that our approach yields more accurate 3D mapping and tracking than other dense neural implicit SLAM methods, like NICE-SLAM [81] which does not model depth uncertainty.**
sensor depth uncertainty [59, 56, 54, 71] or fuse multiple depth sensors [58] require access to ground-truth depth or 3D at train time. Hence, these methods may not be robust to domain shifts at test time. On the contrary, we learn sensor-agnostic uncertainty online in a self-supervised way without requiring ground truth depth or 3D. For that, we assume a Laplacian error distribution on the depth sensor and derive the corresponding loss function.
Our method, dubbed UncLe-SLAM, jointly learns the aleatoric depth uncertainty and the scene geometry by passing cheaply available 2D features from the depth sensor as input to a small uncertainty decoder, meaning that we stay within real-time runtime constraints. Our approach thus guides the mapping and tracking process with the implicitly learned uncertainty, see Fig. 1. Moreover, we showcase that our formulation generalizes well to the multi-sensor setting where two depth sensors with varying noise distributions are fused into the same 3D representation. Our contributions are:
* A robust approach for estimating aleatoric depth uncertainty for the single and multi-sensor case is proposed. The introduced framework is robust, accurate and can be directly integrated into a dense SLAM system without the need for ground truth depth or 3D.
* In the single depth sensor case, we show that our uncertainty-driven approach often improves on standard performance metrics regarding geometric reconstruction and tracking accuracy. In the multi-sensor case, we show for various sensor combinations that our method extracts results that are consistently better than those obtained from the individual sensors.
## 2 Related Work
The approach proposed in this paper covers a wide range of research topics such as SLAM, sensor fusion, sensor modeling, uncertainty modeling, etc. All of these topics are well-studied with an exhaustive list of literature. Therefore, we narrow our related work discussion to the relevant methods that better helps expose our contributions.
### Single-Sensor Depth Fusion and Dense SLAM
Curless and Levoy's seminal work [14] is the basis for many dense depth mapping approaches [43, 71]. Subsequent developments include scalable techniques with voxel hashing [45, 28, 46], octrees [63], and pose robustness [8]. Further advancements led to dense SLAM systems such as [44, 61, 67, 81], some of which, like BundleFusion [15], can also handle loop closures. To address the issue of noisy depth maps, RoutedFusion [71] learns a fusion network that outputs the TSDF update of the volumetric grid. Other works such as NeuralFusion [72] and DI-Fusion [27] extend this concept by learning the scene representation, resulting in better outlier handling. Lately, the work on continuous neural mapping [74] learns the scene representation using continual mapping from a sequence of depth maps. Yet, none of the above-mentioned approaches explicitly study multiple depth modalities or their uncertainty and their fusion in a neural SLAM framework. Further, their extensions to multiple sensor fusion are often not trivial. Nevertheless, by treating all sensors alike, they can be used as simple baselines.
### Multi-Sensor Depth Fusion
The fusion of at least two types of depth-sensing devices has been studied in the past. Notably, the fusion of raw depth maps from two different sensors, such as RGB stereo and time-of-flight (ToF) [70, 13, 2, 21, 16, 38, 3, 17], RGB stereo and Lidar [36], RGB and Lidar [55, 48, 50], RGB stereo and monocular depth [40] and the fusion of multiple RGB stereo algorithms [53] is well-studied and explored. Yet, these methods study specific sensors and are not inherently equipped with 3D reasoning. Few works consider 3D reconstruction with multiple sensors [57, 31, 7, 76, 77, 24], but these do not consider the online mapping setting. Conceptually, more closely related to our work is SenFuNet [58], which is an online mapping method for multi-sensor depth fusion. Still, contrary to our approach, [58] requires access to ground truth 3D data at train time. It does not predict explicit uncertainty per sensor but requires multi-sensor input to weigh the sensors against each other.
### Uncertainty Modeling for Depth
Uncertainty modeling for depth estimation has been studied extensively in the past, specifically for multiview stereo (MVS) [33, 73, 79, 66] and binocular stereo [54, 62, 69, 30]. In addition to the popular Gaussian distribution to model sensor noise [10], the Laplacian noise model has also been employed to analyse depth uncertainty. For instance, Klodt _et al._[32] assume, like our approach, a Laplacian noise model to explore the advantage of depth uncertainty modeling from short sequences of RGB images. Likewise, Yang _et al._[75] use a Laplacian model for monocular depth estimation. Furthermore, some works propose self-supervised frameworks for monocular depth estimation, such as [52, 75]. Aleatoric uncertainty estimation has also been applied for surface normal estimation from RGB [5]. This technique was recently used to refine depth estimated from a monocular RGB camera [6]. Closer to our setting, RoutedFusion [71] trains an encoder-decoder style network to refine depth maps and predict a measure of confidence. Nevertheless, unlike our approach, they require access to ground truth depth for training. Despite impressive progress in depth uncertainty modeling, there has been little focus on uncertainty estimation of the 3D surface. DI-fusion [27] proposed a technique to do this by imposing a
Gaussian assumption on the signed distance function. Yet, unlike our approach, it needs ground truth 3D for training.
Regarding uncertainty modeling, our method is related to the treatment of probabilistic depth fusion methods [19, 20, 34, 18, 10]. As studied and observed by several methods, explicit uncertainty modeling is helpful1. In the context of SLAM, Cao _et al._[10] introduced a probabilistic framework via a Gaussian mixture model for dense visual SLAM based on surfels to address uncertainties in the observed depth. However, it is well-known that Gaussian noise modeling has its practical limitations [49].
Footnote 1: For a review on uncertainty estimation in deep learning we refer to [1]
Overall, to the best of our knowledge, none of the state-of-the-art neural SLAM methods for dense online SLAM consider aleatoric uncertainty modeling along with multiple sensors. Moreover, none of the above works consider estimating uncertainty in an online self-supervised way with implicit neural SLAM.
## 3 Preliminaries
To perform online neural implicit SLAM from a sequence of RGBD images, it is necessary to have a 3D representation. Furthermore, due to the self-supervision from the incoming sensor frames, a rendering technique is needed that connects the 3D representation to the 2D observations. By using the 3D representation and 2D rendering technique, the mapping and tracking processes can be constructed. In this paper, we focus on solid (non-transparent) surface reconstruction. We first present background information on implicit surface and volumetric radiance representations, which is then used to develop our online uncertainty modeling approach.
### Scene Representation
Convolutional Occupancy Networks [51] proposes to learn the occupancy \(\mathrm{o}\in[0,1]\) using an encoded 3D grid of features that can be passed, after trilinear interpolation, through an MLP decoder to acquire the occupancy. NICE-SLAM [81] utilizes this idea and encodes the scene in hierarchical voxel grids of features. For any sampled 3D coordinate \(\mathbf{p}_{i}\in\mathbb{R}^{3}\), feature vectors can be extracted from these voxel grids. The features can then be fed, in a coarse-to-fine manner, through MLP decoders to extract the occupancy of the given point.
The geometry is encoded in two feature grids - middle and fine2. Each feature grid \(\phi_{\theta}^{l}\) has an associated pretrained decoder \(f^{l}\), where \(l\in\{1,2\}\) and \(\theta\) describes the optimizable features. We denote a trilinearly interpolated feature vector at point \(\mathbf{p}_{i}\) as \(\phi_{\theta}^{l}(\mathbf{p}_{i})\). Additionally, the color is encoded in a fourth feature grid \(\psi_{\omega}\) (parameters \(\omega\)) with decoder \(g_{\xi}\) (parameters \(\xi\)), and is used for further scene refinement after initial stages of geometric optimization. The observed scene geometry is reconstructed from the middle and fine resolution feature grids, with the fine feature grid output residually added to the middle grid occupancy. In summary, the occupancy \(\mathrm{o}_{i}\) and color \(\mathbf{c}_{i}\) are predicted as
Footnote 2: There is an additional coarse grid, but it is not used for mapping, and despite claims from the authors, an inspection of the source code shows that it is not used for tracking either. Thus, we do not consider it.
\[\mathrm{o}_{i} =f^{1}\big{(}\mathbf{p}_{i},\phi_{\theta}^{1}(\mathbf{p}_{i}) \big{)}+f^{2}\big{(}\mathbf{p}_{i},\phi_{\theta}^{2}(\mathbf{p}_{i}),\phi_{ \theta}^{1}(\mathbf{p}_{i})\big{)}\] \[\mathbf{c}_{i} =g_{\xi}\big{(}\mathbf{p}_{i},\psi_{\omega}(\mathbf{p}_{i})\big{)}. \tag{1}\]
### Depth and Image Rendering
To link the 3D representation with supervision using 2D RGBD observations, NICE-SLAM uses volume rendering of depth maps and RGB images. This process involves sampled points \(\mathbf{p}_{i}\in\mathbb{R}^{3}\) at depth \(d_{i}\in\mathbb{R}^{1}\) along a ray \(\mathbf{r}\in\mathbb{R}^{3}\) cast from origin \(\mathbf{O}\in\mathbb{R}^{3}\), as
\[\mathbf{p}_{i}=\mathbf{O}+d_{i}\mathbf{r},\quad i\in\{1,...,N\}. \tag{2}\]
The occupancies are evaluated along the ray according to Eq. (1) and volume rendering constructs a weighting function \(w_{i}\) using Eq. (3). This weight represents the discretized probability that the ray terminates at that particular point.
\[w_{i}=\mathrm{o}_{i}\prod_{j=1}^{i-1}(1-\mathrm{o}_{j}) \tag{3}\]
The rendered depth is computed as the weighted average of the depth values along each ray, and equivalently for the color following Eq. (4) as defined below.
\[\hat{D}=\sum_{i=1}^{N}w_{i}d_{i},\quad\hat{I}=\sum_{i=1}^{N}w_{i}\mathbf{c}_{i} \tag{4}\]
This volume rendering method also provides variance from the discretized selection of points. By taking the squared differences between the rendered depth and the sampled depths along the ray, weighted by the weighting function, a measure of variance can be extracted that is a composite of the model uncertainty and sampling uncertainty, as defined in Eq. (5).
\[\hat{S}_{D}=\sqrt{\sum_{i=1}^{N}w_{i}\big{(}\hat{D}-d_{i}\big{)}^{2}} \tag{5}\]
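For concreteness, a minimal PyTorch sketch of the rendering of Eqs. (3)-(5); the tensor shapes and numerical epsilon are illustrative choices.

```python
import torch

def render_ray(occupancy, depths):
    """Volume rendering of Eqs. (3)-(5). occupancy, depths: (n_rays, N)
    tensors with samples ordered by increasing depth along each ray."""
    # w_i = o_i * prod_{j<i} (1 - o_j), via a shifted cumulative product
    transmittance = torch.cumprod(
        torch.cat([torch.ones_like(occupancy[:, :1]),
                   1.0 - occupancy + 1e-10], dim=1), dim=1)[:, :-1]
    weights = occupancy * transmittance                       # Eq. (3)
    depth = (weights * depths).sum(dim=1)                     # Eq. (4)
    var = (weights * (depth.unsqueeze(1) - depths) ** 2).sum(dim=1)
    return depth, weights, torch.sqrt(var)                    # sqrt(var) is S_D of Eq. (5)
```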
## 4 Method
This section details how we introduce aleatoric uncertainty modeling based on the preliminaries covered in Section 3. The rest of our methodology section is arranged as follows: We first present our theoretical assumptions which form the basis for our loss function derivation. Then,
we explain how our framework elegantly supports multi-sensor fusion with additional depth sensors and RGBD fusion without relying on heuristic hyperparameters. Finally, we describe our architecture and implementation. For an overview, see Fig. 2.
### Theoretical Assumptions
We motivate our formulation of sensor noise under the assumption of a Laplacian noise distribution on a per-ray basis, which was found by [29] to perform better on vision tasks than a Gaussian assumption. Further, we assume that the noise is heteroscedastic, meaning that the noise variance is a variable for each pixel. That is, each pixel \(m\) in the captured depth map is treated independently. Consequently, the measured depth is sampled from the probability density function
\[P(D_{m})=\frac{1}{2\beta_{m}}\exp\left(-\frac{|D_{m}-\hat{D}_{m}|_{1}}{\beta_{ m}}\right)\ . \tag{6}\]
We take \(\hat{D}_{m}\) to be the true depth and \(\sqrt{2}\beta_{m}\) to be the standard deviation of the depth reading of a specific pixel, parameterised by some function with parameters \(\tau\). When we aggregate all depth sensor information, we get the joint density of the per-ray depth observations
\[P(D_{1},...,D_{M})=\prod_{m=1}^{M}\frac{1}{2\beta_{m}}\exp\left(-\frac{|D_{m}- \hat{D}_{m}|_{1}}{\beta_{m}}\right)\ \,\]
where M is the total number of pixel readings. The best estimate of the depth can thus be determined via maximum likelihood estimation
\[\arg\max_{\theta,\tau} P(D_{1},...,D_{M})=\arg\min_{\theta,\tau}-\log\left(P(D_{1},...,D_{M})\right)\] \[=\arg\min_{\theta,\tau}\sum_{m=1}^{M}\frac{|D_{m}-\hat{D}_{m}|_{1} }{\beta_{m}}+\log(\beta_{m}). \tag{7}\]
### Mapping
Mapping is performed equivalently to [81], but with the revised loss function
\[\mathcal{L}_{map}=\sum_{m=1}^{M}\frac{|D_{m}-\hat{D}_{m}(\theta)|_{1}}{\beta_{ m}(\tau)}+\log\left(\beta_{m}(\tau)\right) \tag{8}\]
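A minimal sketch of this per-pixel Laplacian negative log-likelihood follows; the multi-sensor extension of Eq. (10) is obtained by adding the same term once per sensor, each with its own \(\beta\), against the shared rendered depth.

```python
import torch

def laplacian_nll(depth_obs, depth_rendered, beta):
    """Per-pixel Laplacian negative log-likelihood of Eq. (8):
    |D - D_hat| / beta + log(beta), summed over the sampled pixels.
    For two sensors (Eq. (10)), sum this term over both sensors."""
    return (torch.abs(depth_obs - depth_rendered) / beta + torch.log(beta)).sum()
```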
A database of keyframes is utilized to regularize the mapping loss. Keyframes are added at a regular frame interval and sampled for each mapping phase to have a significant overlap with the viewing frustum of the current frame. Pixels are then sampled from the keyframes along with the current frame to optimize the map. In terms of optimization, a two-stage approach is taken. For each mapping phase, the middle grid is first optimized and then, once converged, the fine grid is included for further refinement. For more details, we refer to [81].
### Tracking
Tracking is performed equivalently to [81], but with the revised tracking loss function
\[\mathcal{L}_{\mathrm{track}}=\frac{1}{M_{t}}\sum_{m=1}^{M_{t}}\frac{|D_{m}-\hat {D}_{m}(\theta)|_{1}}{\hat{S}_{D}(\theta)+\beta_{m}(\tau)}, \tag{9}\]
which additionally takes the aleatoric sensor uncertainty into account. \(M_{t}\) is the number of pixels that are sampled during tracking. We optimize the camera extrinsics \(\{\mathbf{R},\mathbf{t}\}\).
### Multi-Sensor Depth Fusion and RGBD Fusion
The methods described so far have encompassed implicitly learning uncertainty given a single sensor. We extend this single-sensor approach to incorporate a second sensor. If we again assume that each depth observation is I.I.D., the joint likelihood we wish to maximize is the product of the probability distributions for each pixel in each sensor.
Given two synchronized and aligned sensors, we can sample a set of pixels \(m\in\{1,...,M\}\) from two depth sensors yielding the generalized loss function
\[\mathcal{L}=\sum_{m=1}^{M}\sum_{i=1}^{2}\frac{|D_{m,i}-\hat{D}_{m}|_{1}}{\beta _{m,i}}+\log(\beta_{m,i}). \tag{10}\]
One interpretation of this objective function is that the pipeline implicitly learns the weighting between the two sensor observations. The loss function penalizes large uncertainties via the log terms, and implicitly learns the uncertainty for both sets of observations as the model depth is optimized. In an analogous fashion, RGBD fusion can be achieved via the loss function
\[\mathcal{L}_{rgbd} =\mathcal{L}_{geo}+\mathcal{L}_{rgb} \tag{11}\] \[\mathcal{L}_{geo} =\sum_{m=1}^{M}\frac{|D_{m}-\hat{D}_{m}|_{1}}{\beta_{m,d}}+\log( \beta_{m,d})\] (12) \[\mathcal{L}_{rgb} =\sum_{m=1}^{M}\frac{|I_{m}-\hat{I}_{m}|_{1}}{\beta_{m,r}}+\log( \beta_{m,r}), \tag{13}\]
where \(\beta_{m,d}\) and \(\beta_{m,r}\) denote the per pixel sensor uncertainty for the depth and rgb sensor respectively. This modeling is different to NICE-SLAM where the color and geometry losses are weighted by a heuristic hyperparameter.
### Design Choices and Architecture Details
The per-pixel depth and variance are rendered according to Eqs. (4) and (5), respectively.
The variance from Eq. (5) could naively be applied to Eqs. (8) and (9) with the rendered variance \(\hat{S}_{D}\) representing \(2\beta^{2}\). Unfortunately, such an approach is poorly motivated as this calculated variance is related to the model confidence, as opposed to the sensor-specific noise. In practice,
the uncertainty we strive to model is aleatoric uncertainty and should be distinct from the model confidence. One interpretation of the variance from Eq. (5) is as the epistemic uncertainty. With an increasing number of observations, the epistemic uncertainty should shrink, driving the model towards sharp bounds. We instead seek a separate process to extract aleatoric uncertainty. We take the concept of implicitly learned aleatoric uncertainty from the work of Kendall and Gal [29] and design a patch-based MLP. Our approach takes in spatial information from the specific depth frame to generate uncertainty \(\beta\), distinct and decoupled from the rendered variance \(\hat{S}_{D}\).
An additional concern within the framework is the computational overhead. Volume rendering is one of the more intensive operations and an additional rendering for each sensor may be prohibitively expensive. Consequently, we propose a simpler approach to derive a ray-specific uncertainty through the use of 2D features that contain relevant information. We can leverage cheaply available metadata, as was done in _e.g._[60], to capture sensor noise. We investigate plausible per-pixel (per-ray) features and end up with the following inputs to estimate depth uncertainty: the measured depth \(D_{m}\in\mathbb{R}\) and the incident angle \(\theta\in\mathbb{R}\) between the local ray direction and the surface normal, computed as in [43] from the depth map through central difference after bilateral filtering [68]. For RGB uncertainty, we feed the color instead of the depth and incident angle. Instead of only feeding the features from a single pixel observation, we feed the features from a 5\(\times\)5 patch, effectively expanding the receptive field of the ray. This patch of pixels gives local context and local correlation of uncertainty for areas near edges or with high frequency content. We denote the concatenation of the features \(\zeta\).
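The incident-angle feature can be sketched as follows, assuming a pinhole intrinsics matrix \(K\) and OpenCV for the bilateral filter; the filter parameters are illustrative and not the values used in our system.

```python
import cv2
import numpy as np

def incident_angle(depth, K):
    """Angle between each camera ray and the surface normal, with normals
    from central differences on a bilaterally filtered depth map.
    Filter parameters (5, 0.1, 5.0) are illustrative assumptions."""
    d = cv2.bilateralFilter(depth.astype(np.float32), 5, 0.1, 5.0)
    h, w = d.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # back-project pixels to camera-frame 3D points
    x = (u - K[0, 2]) * d / K[0, 0]
    y = (v - K[1, 2]) * d / K[1, 1]
    pts = np.stack([x, y, d], axis=-1)
    # normals from central differences of the point map
    n = np.cross(np.gradient(pts, axis=1), np.gradient(pts, axis=0))
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8
    rays = pts / (np.linalg.norm(pts, axis=-1, keepdims=True) + 1e-8)
    cos_t = np.clip(np.abs((n * rays).sum(-1)), 0.0, 1.0)
    return np.arccos(cos_t)
```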
The MLP network, denoted \(h_{w}\), is similar in architecture to the MLPs \(f^{l}\) used for the occupancy decoders. We use a network with 5 intermediate layers with 32 nodes each, activated via ReLU, except for the last layer. Inspired by NeRF-W [39], we apply a softplus activation with a minimum uncertainty value \(\beta_{\min}\). The output \(\tilde{y}_{m}\in\mathbb{R}\) from the last layer is thus processed as
\[\beta_{m}=h_{w}(\zeta)=\beta_{\min}+\log\left(1+\exp\left(\tilde{y}_{m}\right)\right) \tag{14}\]
The addition of a minimum uncertainty changes the bound of the uncertainty to \((\beta_{\min},\infty)\), and mitigates numerical instability during optimization. Finally, we only update \(h_{w}\) during the fine stage of optimization _i.e._ in the middle stage, we use the same loss as [81].
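Putting the pieces together, a minimal PyTorch sketch of \(h_{w}\) following Eq. (14); the input dimensionality (a flattened 5\(\times\)5 patch of depth and incident angle, i.e., 50 values) and the value of \(\beta_{\min}\) are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyDecoder(nn.Module):
    """Per-ray uncertainty MLP h_w: 5 hidden layers of 32 units with ReLU,
    softplus output shifted by beta_min as in Eq. (14). Input size and
    beta_min are illustrative assumptions."""
    def __init__(self, in_dim=50, hidden=32, beta_min=0.01):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(5):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, 1))          # no activation on the last layer
        self.net = nn.Sequential(*layers)
        self.beta_min = beta_min

    def forward(self, zeta):
        # beta = beta_min + log(1 + exp(y)), bounded below by beta_min
        return self.beta_min + F.softplus(self.net(zeta))
```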
Figure 2: **UncLe-SLAM Architecture. Given an input depth map from an estimated camera pose, mapping and tracking is performed by minimizing a re-rendering loss, by optimizing either the grid features \(\theta\) and network parameters \(w\) or the camera extrinsics respectively. The depth is estimated using point samples \(\mathbf{p}_{i}\) along rays with a volumetric renderer which decodes geometric multi-scale features \(\phi_{\theta}^{1}(\mathbf{p}_{i})\) and \(\phi_{\theta}^{2}(\mathbf{p}_{i})\) into occupancies. The uncertainty is estimated by feeding informative features through an uncertainty decoder \(h_{w}\). The architecture can be extended to a multi-sensor setting or with RGB by adding additional uncertainty MLPs. We build the architecture on top of NICE-SLAM [81].**
## 5 Experiments
We first describe our experimental setup and then report results on single and multi-sensor experiments. We evaluate our method on the Replica dataset [64] as well as the real-world 7-Scenes [22] and TUM-RGBD [65] datasets. All reported results are averages over the respective test scenes and over ten runs, unless otherwise stated. Further experiments and details are in the supplementary material.
**Implementation Details.** We leave many of the hyperparameters from [81] as is _e.g._ we use 0.32 \(\mathrm{m}\) and 0.16 \(\mathrm{m}\) voxel size for the middle and fine resolution respectively. The ray sampling strategy remains the same, with 32 points uniformly sampled along the ray and 16 points sampled uniformly near the depth reading. The feature grids store 32-dimensional features and we use the same occupancy decoders and color decoders as [81]. We leave the learning rates for feature grid optimization under the same schedule--_i.e._ 0.1 for the middle stage and 0.005 for the fine stage. On Replica, we map every 5th frame and use 5K pixels uniformly sampled during mapping and tracking. We use 10 tracking iterations and 60 mapping iterations and include the fine grid optimization after 60 \(\%\) of the total mapping iterations. These parameters were not tuned and may be optimized to further improve performance. Specifically, the learning rates may be adjusted under the new loss formulation to improve stability.
**Evaluation Metrics.** The meshes, produced by marching cubes [35] from the occupancy grids, are evaluated using the F-score which is the harmonic mean of the Precision (P) and Recall (R). We further provide the mean precision and mean recall along with the depth L1 metric as in [81]. For tracking accuracy, we use ATE RMSE [65].
**Baseline Methods.** We compare our proposed method to existing state-of-the-art online dense neural SLAM methods. The most natural baseline is NICE-SLAM [81], which treats all depth observations equally, followed by SenFuNet [58], which performs multi-sensor depth fusion. SenFuNet does not explicitly model per-sensor uncertainty, but fuses two depth sensors with a learned weighting network. In the multi-sensor setting, we also compare to Vox-Fusion [78] by weighting all depth readings equally. Additionally, we pretrain a 2D confidence prediction network from the raw depth maps using a slightly modified version of the network proposed by Weder _et al._[71]. The per-pixel learned confidences are used at runtime in NICE-SLAM to scale the importance in the mapping and tracking loss function. We call this baseline "NICE-SLAM+Pre". Details are provided in the supplementary material.
**Datasets.** The Replica dataset [64] comprises high-quality 3D reconstructions of a variety of indoor scenes. We utilize the publicly available dataset collected by Sandström _et al._[58], which provides trajectories with depth from a simulated structured light (SL) sensor [25], depth from stereo with semi-global matching (SGM) [26] and from a learning-based approach called PSMNet [11], as well as color.
The 7-Scenes [22] and TUM-RGBD [65] datasets comprise a set of RGBD scenes captured with an active depth camera along with ground truth poses.
### Single Sensor Evaluation
**Replica.** We provide experimental evaluations on two depth sensors in three different settings: 1. Depth with ground truth poses _i.e._ pure mapping from noisy depth. 2. Depth with estimated camera poses (_i.e._ with tracking) and 3. RGBD with tracking. In Table 1 for the PSMNet [11] sensor, our model shows consistent improvements on all metrics in all three settings. For the SGM [26] sensor (in Table 2) we find consistent improvements in the settings where tracking is enabled. In the mapping only setting, the pre-trained confidence model performs marginally better for the SGM sensor. Fig. 7 shows the reconstruction results for two scenes from the Replica dataset with the two sensors. Compared to NICE-SLAM [81], we find that UncLe-SLAM on average reconstructs more accurate geometries.
**Uncertainty Visualization.** To gain insights about the estimated uncertainties that our model produces, we visualize the estimated uncertainties for our two depth sensors in Fig. 8. For reference, we also plot the absolute ground truth depth error. Compared to the uncertainties produced by the pretrained network, we find that our model produces sharper estimates, see _e.g._ the last row where our model can replicate the error pattern more accurately. This is likely a result of our restricted receptive field while the pretrained model employs a fully convolutional network model with a larger receptive field. Moreover, our model seems to be able to replicate some errors better than the pretrained model, see _e.g._ the red patch for the PSMNet sensor where our model
\begin{table}
\begin{tabular}{l|c c c c c c c} Model \(\downarrow\) [Metric \(\rightarrow\)] & Depth L1\(\downarrow\) [cm] & mP\(\downarrow\) [cm] & mR\(\downarrow\) [cm] & P\(\uparrow\) [\%] & R\(\uparrow\) [\%] & F\(\uparrow\) [\%] & ATE\(\downarrow\) [cm] \\ \hline \multicolumn{8}{c}{_Depth + Ground Truth Poses_} \\ \hline NICE-SLAM [81] & 2.64 & 2.65 & 2.35 & 88.75 & 88.20 & 88.45 & - \\ NICE-SLAM+Pre & 2.67 & 2.65 & 2.31 & 89.00 & 88.62 & 88.78 & - \\ Ours & **2.42** & **2.58** & **2.29** & **89.14** & **88.70** & **88.89** & - \\ \hline \multicolumn{8}{c}{_Depth + Tracking_} \\ \hline NICE-SLAM [81] & 10.65 & 10.04 & 7.17 & 48.46 & 51.43 & 49.80 & 27.90 \\ NICE-SLAM+Pre & 9.90 & 13.99 & 6.84 & 52.43 & 57.72 & 54.54 & 36.95 \\ Ours & **7.39** & **6.56** & **6.20** & **57.30** & **57.57** & **57.41** & **19.36** \\ \hline \multicolumn{8}{c}{_RGB-D + Tracking_} \\ \hline NICE-SLAM [81] & 8.11 & 7.81 & 6.77 & 51.81 & 53.56 & 52.63 & 20.25 \\ Ours & **6.49** & **6.43** & **5.93** & **58.89** & **59.39** & **59.09** & **18.92** \\ \hline \end{tabular}
\end{table}
Table 1: **Reconstruction Performance on Replica [64]: PSMNet [11].** Our model outperforms the baseline methods in the mapping only setting as well as with tracking enabled and when color is available. Best results are highlighted in **bold**.
can capture the error while the pretrained model struggles. We believe this is due to the ability of our model to adapt to test-time constraints through runtime optimization. Moreover, our network \(h_{w}\) contains only 5409 parameters while the pretrained network contains 360,241.
**7-Scenes.** In Table 3, we evaluate our framework on the 7-Scenes dataset [22]. We use sequence 1 for all scenes. We find that NICE-SLAM [81] consistently yields worse tracking results, suggesting the effectiveness of our depth uncertainty when it comes to maintaining robust camera pose tracking. On average, our method yields a 38 \(\%\) gain in terms of the mean ATE.
**TUM-RGBD.** In Table 4, we evaluate our framework on the real-world TUM-RGBD dataset [65]. Our conclusions on this dataset are similar to those on 7-Scenes. On average, camera pose tracking benefits greatly from our uncertainty aware strategy.
### Multi-Sensor Evaluation
We conduct experiments in the multi-sensor setting. We compare to Vox-Fusion [78], a dense neural SLAM system, and SenFuNet [58], which is a mapping-only framework. To learn sensor-specific uncertainties, we use one uncertainty decoder \(h_{w}\) per sensor. In Table 5, we show for SGM+PSMNet fusion that we are able to consistently improve over the single-sensor reconstructions in isolation as well as over SenFuNet [58] and Vox-Fusion [78]. When ground truth poses are provided, we find that the original NICE-SLAM performs very similarly to our proposed uncertainty aware model. On a closer look, the PSMNet and SGM sensors are
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & fr1/desk & fr1/desk2 & fr1/xyz & Avg. \\ \hline NICE-SLAM [81] & 40.40 & 47.81 & 5.11 & 31.11 \\ Ours & **29.04** & **36.57** & **2.71** & **22.77** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Tracking Evaluation on TUM-RGBD.** We report the average ATE RMSE [cm] by mapping every 2nd frame.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Model \(\downarrow\) [Metric \(\rightarrow\)] & Depth L1\(\downarrow\) [cm] & mP\(\downarrow\) [cm] & mR\(\downarrow\) [cm] & P\(\uparrow\) [\%] & R\(\uparrow\) [\%] & F\(\uparrow\) [\%] & ATE\(\downarrow\) [cm] \\ \hline \multicolumn{8}{c}{_Depth + Ground Truth Poses_} \\ \hline NICE-SLAM [81] & 2.35 & 2.55 & 2.12 & 89.54 & 91.07 & 90.29 & - \\ NICE-SLAM+Pre & **2.25** & **2.49** & **2.08** & **89.86** & **91.42** & **90.62** & - \\ Ours & 2.27 & 2.56 & 2.10 & 89.59 & 91.24 & 90.40 & - \\ \hline \multicolumn{8}{c}{_Depth + Tracking_} \\ \hline NICE-SLAM [81] & 12.03 & 10.21 & 7.75 & 46.60 & 50.58 & 48.10 & 30.73 \\ NICE-SLAM+Pre & 18.96 & 16.35 & 6.90 & 48.92 & 57.60 & 52.54 & 39.14 \\ Ours & **10.60** & **9.38** & **6.58** & **52.62** & **57.21** & **54.72** & **29.11** \\ \hline \multicolumn{8}{c}{_RGB-D + Tracking_} \\ \hline NICE-SLAM [81] & 9.91 & **10.37** & 6.82 & 50.12 & 54.51 & 52.00 & **26.56** \\ Ours & **7.79** & 11.01 & **5.80** & **56.10** & **61.16** & **58.19** & 27.41 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Reconstruction Performance on Replica [64]: SGM [26].** Our model outperforms the baseline methods in most settings while being marginally worse than the model using pretrained uncertainties in the mapping only setting.
Figure 4: **Single Sensor Reconstruction on Replica [64].** We show that our uncertainty modeling on average helps to achieve more accurate reconstructions when noisy depth sensors are provided as input. The office 0 scene uses only depth as input while the room 2 scene is provided RGBD input. Tracking is enabled for all experiments. The colorbar displays the deviation from the ground truth mesh.
Figure 3: **Uncertainty Visualization. Each row shows a depth map from a specific sensor with the associated uncertainty estimation from the pretrained network model and ours. As reference, the ground truth absolute depth error is shown in the last column. We find our model reproduces the error map with less smoothing than the pretrained model while capturing more details, _e.g._ the red patch from the PSMNet sensor. Blue: low uncertainty, red: high uncertainty.**
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Method & Chess & Fire & Heads & Office & Pump. & Kitch. & Stairs & Avg. \\ \hline NICE-SLAM [81] & 40.30 & 47.67 & 20.55 & 8.49 & 33.11 & 24.39 & 9.18 & 24.24 \\ Ours & **14.85** & **25.47** & **13.12** & **7.83** & **29.32** & **6.21** & **8.53** & **15.05** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Tracking Evaluation on 7-Scenes.** We report the average ATE RMSE [cm] over 5 runs for each scene. With our depth uncertainty modeling, we achieve significantly better tracking compared to NICE-SLAM. On average, our method yields a 38 \(\%\) gain in terms of the mean ATE.
quite similar, and we believe that when both sensors yield similar depth characteristics, simple averaging works well, _i.e._ putting equal weight on both sensors as done by NICE-SLAM. We find, however, that uncertainty modeling is very important to obtain robust tracking, which greatly improves the reconstruction accuracy. Finally, Fig. 5 shows visualizations of the reconstruction accuracy comparing the single sensor reconstructions to the geometry attained by UncLe-SLAM. We find that the most accurate sensor is on average favored. For more results, see the supplementary material.
### Memory and Runtime
Due to the low number of parameters in our uncertainty MLP \(h_{w}\) (5409), we add 43 kB to the already allocated 421 kB for the decoders in NICE-SLAM. This is negligible in comparison to the 95.86 MB allocated for the dense grids on the office 0 scene. We report a 15 \(\%\) increase in runtime over NICE-SLAM, which can be compared to the average gains of 38 \(\%\) and 27 \(\%\) in ATE RMSE on the 7-Scenes and TUM-RGBD datasets respectively, and of 11 \(\%\) and 32 \(\%\) in F-score on single-sensor RGBD SLAM and multi-sensor depth SLAM.
### Limitations
Our framework uses patch-based modeling of uncertainty, which may not hold in the general case, along with the cheaply available features we feed as input to the uncertainty decoder. Simply using a more expressive model with learned features is not straightforward though, as shown by our results with the pretrained model, and we leave this as future work. Finally, we believe that the relatively large voxel size we use can prevent efficient uncertainty learning from fine geometric details due to the high degree of averaging. We believe that our method can benefit from a scene representation that allows for resolving finer details.
## 6 Conclusion
The paper presents a way to learn per-pixel depth uncertainties for dense neural SLAM. This allows the mapping and tracking re-rendering losses to be re-weighted such that trustworthy sensor readings are used to track the camera and to update the map. We believe this is a useful instrument in closing the gap in tracking accuracy to traditional sparse SLAM methods. We show that modeling depth uncertainty generally results in improvements both in terms of mapping and tracking accuracy, and often performs better than alternatives that require ground truth depth or 3D data. The paper also provides one of the first solutions that utilize more than one depth sensing modality for dense neural SLAM.
**Acknowledgements.** This work was supported by a VIVO collaboration project on real-time scene reconstruction, as well as by a research grant from FIFA. We thank Suryansh Kumar for fruitful discussions.
Figure 5: **Multi-Sensor Reconstruction on Replica [64]. The two middle columns show single sensor reconstructions while the rightmost column shows the result when both sensors are jointly fused into the same geometry using our proposed UncLe-SLAM. Our uncertainty modeling helps on average to achieve more accurate reconstructions in the multi-sensor setting compared to the single sensor reconstructions. The colorbar displays the deviation from the ground truth mesh.**
\begin{table}
\begin{tabular}{l|c c c c c c c} Model \(\downarrow\) [Metric \(\rightarrow\)] & Depth L1\(\downarrow\) [cm] & mP\(\downarrow\) [cm] & mR\(\downarrow\) [cm] & P\(\uparrow\) [\%] & R\(\uparrow\) [\%] & F\(\uparrow\) [\%] & ATE\(\downarrow\) [cm] \\ \hline \multicolumn{8}{c}{_Single Sensor Ours: Depth + Ground Truth Poses_} \\ \hline PSMNet [11] & 2.42 & 2.58 & 2.29 & 89.14 & 88.70 & 88.89 & - \\ SGM [26] & 2.27 & 2.56 & 2.10 & 89.59 & **91.24** & 90.40 & - \\ \hline \multicolumn{8}{c}{_Multi-Sensor: Depth + Ground Truth Poses_} \\ \hline NICE-SLAM [81] & 2.03 & **2.34** & **1.99** & **90.57** & **90.86** & **90.69** & - \\ SenFuNet [58] & 4.19 & 15.62 & 12.66 & 32.74 & 28.32 & 30.22 & - \\ Vox-Fusion [78] & 6.52 & 48.76 & 30.72 & 28.01 & 49.36 & 35.65 & - \\ NICE-SLAM+Pre & 2.19 & 2.44 & 2.01 & 89.93 & 90.76 & 90.31 & - \\ Ours & **1.97** & 2.36 & 2.01 & 90.15 & 90.76 & 90.42 & - \\ \hline \multicolumn{8}{c}{_Single Sensor Ours: Depth + Tracking_} \\ \hline PSMNet [11] & 7.39 & 6.56 & 6.20 & 57.30 & 57.57 & 57.41 & **19.36** \\ SGM [26] & 10.60 & 9.38 & 6.58 & 52.62 & 57.21 & 54.72 & 29.11 \\ \hline \multicolumn{8}{c}{_Multi-Sensor: Depth + Tracking_} \\ \hline NICE-SLAM [81] & 13.58 & 16.76 & 7.84 & 51.19 & 55.45 & 52.81 & 40.37 \\ NICE-SLAM+Pre & 11.29 & 13.59 & 61.26 & 62.02 & 65.95 & 63.30 & 35.55 \\ Ours & **4.13** & **4.60** & **4.35** & **70.30** & **69.30** & **69.76** & 19.88 \\ \end{tabular}
\end{table}
Table 5: **Reconstruction Performance on Replica [64]: SGM [26]+PSMNet [11].** Our multi-sensor reconstruction performance improves over the single sensor results in isolation and we outperform most of the baseline methods. The experiments were conducted in the depth only setting.
# UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM
-- Supplementary Material --
Erik Sandstrom\({}^{1}\)

\({}^{1}\)ETH Zurich, Switzerland \({}^{2}\)KU Leuven, Belgium \({}^{3}\)University of Amsterdam, Netherlands

Equal contribution.
###### Abstract
This supplementary material accompanies the main paper by providing further information for better reproducibility as well as additional evaluations and qualitative results.
## Appendix A Video
We provide a video that shows the predicted depth uncertainty along the trajectory of the Office 1 scene from the Replica dataset. For reference, the video also contains the absolute depth error. Video link: [https://youtu.be/jsbZx3A7Y74](https://youtu.be/jsbZx3A7Y74)
## Appendix B Method
In the following, we provide more details about our proposed method, specifically the decoder network architecture and details regarding the multi-sensor experiments.
**Decoder Network Architecture.** Each feature grid \(\phi_{\theta}^{l}\) has an associated decoder \(f^{l}\), where \(l\in\{1,2\}\). Additionally, the color is encoded in a third feature grid \(\psi_{\omega}\) with decoder \(g_{w}\), used for further scene refinement after initial stages of geometric optimization. The observed scene geometry is reconstructed from the middle and fine resolution feature grids, with the fine feature grid output being added to the middle grid occupancy in a residual manner. We use the same geometric decoder architecture as proposed by NICE-SLAM [81] detailed in Fig. 6. The color decoder \(g_{w}\) follows the same general architecture as \(f^{1}\), but outputs RGB instead of occupancy.
**Multi-Sensor Modifications.** We detail the main considerations that are required to handle two-sensor input. When provided with more than one input sensor, the feature set for optimization extends to the furthest depth reading of the sensors. This ensures that the optimizable parameters include the set of feature points in the grid that would theoretically be observed by any sensor. We modify the keyframe selection strategy to accept an _averaged_ depth map between the two input sensors for determining the relevant keyframes used during the mapping process. A future extension could include using the uncertainty maps to form a weighted depth map for this purpose. NICE-SLAM samples 16 points near the depth measurements as well as 32 points along the ray. With the addition of another sensor, we sample 16 points around each sensor measurement on top of the 32 points along the ray, for a total of 64 points. The 32 points sampled equally throughout the ray are determined by the maximum depth of both sensors.
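A minimal sketch of this two-sensor sampling (the relative band width `band` is our own illustrative assumption):

```python
import torch

def sample_depths_two_sensors(d1, d2, n_strat=32, n_near=16, band=0.1):
    """Sketch: 32 uniform samples bounded by the farther of the two depth
    readings, plus 16 samples around each reading, i.e., 64 points per ray."""
    far = torch.maximum(d1, d2).unsqueeze(1)                   # per-ray far bound
    z_uniform = torch.linspace(0.0, 1.0, n_strat).unsqueeze(0) * far

    def around(d):  # uniform samples in a relative band around a reading
        lo = (d * (1.0 - band)).unsqueeze(1)
        hi = (d * (1.0 + band)).unsqueeze(1)
        return lo + (hi - lo) * torch.rand(d.shape[0], n_near)

    z = torch.cat([z_uniform, around(d1), around(d2)], dim=1)  # (rays, 64)
    return torch.sort(z, dim=1).values
```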
## Appendix C Implementation Details
We use PyTorch 1.11.0 and Python 3.7.11 to implement the pipeline. Training is done with the Adam optimizer using various Nvidia GPUs with a maximum of 12 GB of memory. For the uncertainty decoder \(h_{w}\), we use a learning rate of 3e-4. For all optimizers, we use the default Adam hyperparameters \(\textit{betas}=(0.9,0.999)\), _eps_ = \(1e\)-\(08\) and \(\textit{weight\_decay}\) = 0. The tracking learning rate for the camera pose is set to 0.001.
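For reference, a minimal sketch of this optimizer setup; the module shapes are placeholders, and only the learning rates and Adam hyperparameters follow the description above.

```python
import torch

# Stand-in modules; the real pipeline optimizes the uncertainty decoder h_w
# and the pose parameters of the currently tracked frame.
uncertainty_decoder = torch.nn.Sequential(
    torch.nn.Linear(50, 32), torch.nn.GELU(), torch.nn.Linear(32, 1))
camera_pose = torch.nn.Parameter(torch.zeros(7))  # e.g., quaternion + translation

decoder_opt = torch.optim.Adam(
    uncertainty_decoder.parameters(),
    lr=3e-4, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
pose_opt = torch.optim.Adam([camera_pose], lr=0.001)
```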
## Appendix D Evaluation Metrics
**Mapping.** We use the following five metrics to quantify the reconstruction performance. We run marching cubes [35] on the predicted occupancy grid \(V\) and compare to the ground truth mesh. The F-score is defined as the harmonic mean between Recall (R) and Precision (P), \(F=2\frac{PR}{P+R}\). Precision is defined as the percentage of points on the predicted mesh which lie within some distance \(\tau\) from a point on the ground truth mesh. Vice versa, Recall is defined as the percentage of points on the ground truth mesh which lie within the same distance \(\tau\) from a point on the predicted mesh. In all our experiments, we use a distance threshold \(\tau=0.05\) m. In addition to the F-score, Recall and Precision, we report the mean Precision (mP) and mean Recall (mR), which we define as the mean of the corresponding point-to-mesh distances (from predicted points to the ground truth mesh for mP, and vice versa for mR). We use the evaluation script provided by the authors of [58]1.
Footnote 1: [https://github.com/eriksandstroem/evaluate_3d_reconstruction_lib](https://github.com/eriksandstroem/evaluate_3d_reconstruction_lib)
Finally, we report the depth L1 metric, which renders depth maps from randomly sampled view points of the reconstructed and ground truth meshes. The depth maps are then compared and the L1 error is reported, averaged over 1000 sampled novel view points.
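A minimal sketch of these mapping metrics, assuming both meshes have been sampled into \(N\times 3\) point arrays `pred_pts` and `gt_pts`:

```python
import numpy as np
from scipy.spatial import cKDTree

def mesh_metrics(pred_pts, gt_pts, tau=0.05):
    """P/R/F between point samples of the predicted and ground truth meshes
    (tau in meters), plus the mean distances reported as mP and mR."""
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]  # predicted -> ground truth
    d_gt = cKDTree(pred_pts).query(gt_pts)[0]    # ground truth -> predicted
    precision = 100.0 * (d_pred < tau).mean()
    recall = 100.0 * (d_gt < tau).mean()
    f_score = 2.0 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f_score, d_pred.mean(), d_gt.mean()
```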
**Tracking.** We use the absolute trajectory error (ATE) RMSE [65] to compare tracking error across methods. This metric normally computes the translation difference of the trajectories after alignment. We disable the alignment on Replica to better analyze camera pose drift, as the initial pose is fixed at the ground-truth pose. For the real-world experiments we keep the alignment enabled to be comparable to other methods.
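A minimal sketch of the unaligned variant used on Replica, with `est_t` and `gt_t` the estimated and ground truth camera translations as \(N\times 3\) arrays:

```python
import numpy as np

def ate_rmse(est_t, gt_t):
    """ATE RMSE over camera translations, without trajectory alignment."""
    err = np.linalg.norm(est_t - gt_t, axis=1)  # per-frame translation error
    return float(np.sqrt(np.mean(err ** 2)))
```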
## Appendix E Baselines
**SenFuNet.** To make the comparison to our method fair, we increase the voxel size from \(0.01\) m to \(0.16\) m and train SenFuNet on the following scenes of the Replica dataset: {apartment 1, frl apartment 0, office 3, room 0, office 4, hotel 0} and validate using the scene {frl apartment 1}.
**Pretrained Confidence Network.** We use the identical network as described by Weder _et al._[71] (called routing network), but make the following modification. We remove the refinement decoder of the network and only keep the confidence decoder of the network. This means that the network predicts the confidence of the input depth rather than the refined depth. We train on the following scenes of the Replica dataset: {apartment 1, frl apartment 0, office 3, room 0, office 4, hotel 0} and validate using the scene {frl apartment 1}.
## Appendix F More Experiments
**Single Sensor Evaluation**
**Replica.** We provide experimental evaluations on the SL [25] depth sensor in three different settings: 1. Depth only with ground truth poses _i.e._ pure mapping from noisy data. 2. Depth with estimated camera poses (_i.e._ with tracking) and 3. RGBD with tracking. In Table 6 we find consistent improvements in the settings where tracking is enabled. In the mapping only setting, our model performs best in terms of precision for the SL sensor. When tracking is turned on, we perform better than NICE-SLAM in both the depth only setting and in the RGBD setting. Fig. 7 shows a visualization from the office 1 scene from the Replica dataset depicting lower surface reconstruction errors compared to NICE-SLAM [81]. For a visualization of the predicted uncertainty from the SL sensor compared to the pretrained model from NICE-SLAM+Pre, we refer to Fig. 8.
**TUM-RGBD.** In Table 7, we provide additional evaluation metrics on the real-world TUM-RGBD dataset [65] beyond those in the main paper. Specifically, we provide the median and minimum ATE RMSE over 10 runs. On average, camera pose tracking benefits greatly from our uncertainty aware strategy. Due to the randomness of the pipeline, we find that the best performance (minimum ATE) is similar to NICE-SLAM.
Figure 6: **Decoder Architecture. The geometric middle and fine MLP architecture. The middle decoder \(f^{1}\) takes only the middle feature encoding as input, while the fine decoder \(f^{2}\) takes the concatenation of the middle and fine geometric features as input.**
**7-Scenes.** In Table 8, we provide additional per-scene evaluation metrics on the 7-Scenes dataset [22] beyond those in the main paper. We use sequence 1 for all scenes. We find that NICE-SLAM [81] consistently yields worse tracking results, suggesting the effectiveness of our depth uncertainty when it comes to maintaining robust camera pose tracking. On average, our method yields a 38 \(\%\) gain in terms of the mean ATE. Due to the randomness of the pipeline, we find that the best performance (minimum ATE) is similar to NICE-SLAM.
### Multi-Sensor Evaluation
Fig. 9 shows multi-sensor reconstruction results with ground truth poses. Also in this case, a visual improvement can on average be observed over the single sensor reconstructions.
### Architecture Ablation
We provide architecture ablations on the Replica dataset under the SL noise model and ground truth poses. For the ablations, we use the same evaluation protocol and metrics
\begin{table}
\begin{tabular}{l|c c c c c c c} Model \(\downarrow\) [Metric \(\rightarrow\)] & Depth L1\(\downarrow\) [cm] & mP\(\downarrow\) [cm] & mR\(\downarrow\) [cm] & P\(\uparrow\) [\%] & R\(\uparrow\) [\%] & F\(\uparrow\) [\%] & ATE\(\downarrow\) [cm] \\ \hline \multicolumn{8}{c}{_Depth + Ground Truth Poses_} \\ \hline NICE-SLAM [81] & **1.79** & 2.23 & **1.69** & 91.06 & **93.97** & **92.47** & - \\ NICE-SLAM+Pre & 1.88 & 2.25 & 1.77 & 90.68 & 93.35 & 91.98 & - \\ Ours & 1.85 & **2.19** & 1.75 & **91.12** & 93.50 & 92.28 & - \\ \hline \multicolumn{8}{c}{_Depth + Tracking_} \\ \hline NICE-SLAM [81] & 16.47 & 13.36 & 10.31 & 42.56 & 46.34 & 44.29 & 41.51 \\ NICE-SLAM+Pre & **8.62** & **7.98** & **7.13** & **53.89** & **56.88** & **55.32** & **26.11** \\ \hline \multicolumn{8}{c}{_RGB-D + Tracking_} \\ \hline NICE-SLAM [81] & 14.39 & 11.80 & 9.51 & 44.31 & 47.81 & 45.96 & 36.03 \\ Ours & **9.78** & **8.74** & **8.46** & **49.78** & **51.78** & **50.73** & **25.64** \\ \hline \end{tabular}
\end{table}
Table 6: **Reconstruction Performance on Replica [64]: SL [25].** Our model is able to outperform NICE-SLAM when tracking is enabled. In the mapping only setting, our model favors precision over recall compared to NICE-SLAM [81]. Best results are highlighted as **first**, **second**, and **third**.
\begin{table}
\begin{tabular}{l l|c c c} \hline \hline Scene & Model & Mean ATE [cm]\(\downarrow\) & Median ATE [cm]\(\downarrow\) & Min. ATE [cm]\(\downarrow\) \\ \hline \multirow{2}{*}{Chess} & NICE-SLAM [81] & 40.30 & 15.77 & **10.38** \\ & Ours & **14.85** & **12.49** & 10.88 \\ \hline \multirow{2}{*}{Fire} & NICE-SLAM [81] & 47.67 & 41.24 & 7.89 \\ & Ours & **25.47** & **11.88** & **7.00** \\ \hline \multirow{2}{*}{Heads} & NICE-SLAM [81] & 20.55 & **12.95** & 8.46 \\ & Ours & **13.12** & 14.53 & **8.31** \\ \hline \multirow{2}{*}{Office} & NICE-SLAM [81] & 8.49 & 8.40 & 6.99 \\ & Ours & **7.83** & **8.05** & **6.87** \\ \hline \multirow{2}{*}{Pumpkin} & NICE-SLAM [81] & 33.11 & **27.63** & **25.92** \\ & Ours & **29.32** & 28.47 & 27.59 \\ \hline \multirow{2}{*}{Red Kitchen} & NICE-SLAM [81] & 24.39 & 7.56 & 6.61 \\ & Ours & **6.21** & **6.07** & **5.42** \\ \hline \multirow{2}{*}{Stairs} & NICE-SLAM [81] & 9.18 & 8.81 & 7.80 \\ & Ours & **8.53** & **8.05** & **6.24** \\ \hline \multirow{2}{*}{Average} & NICE-SLAM [81] & 24.24 & 17.48 & 10.58 \\ & Ours & **15.05** & **12.79** & **10.29** \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Tracking Evaluation on 7-Scenes.** We report the average, median, and minimum ATE RMSE over 5 runs for each scene. With our depth uncertainty modeling, we achieve significantly better tracking compared to NICE-SLAM. On average, our method yields a 38 \(\%\) gain in terms of the mean ATE. Best results are highlighted as **first** and **second**.
Figure 8: **Uncertainty Visualization. From left to right, the depth map from the SL sensor with the associated uncertainty estimation from the pretrained network model and ours. As reference, the ground truth absolute depth error is shown in the last column. We find that our model reproduces the error map with less smoothing than the pretrained model. Blue: low uncertainty, red: high uncertainty.**
\begin{table}
\begin{tabular}{l l|c c c} \hline \hline Scene & Model & Mean ATE [cm]\(\downarrow\) & Median ATE [cm]\(\downarrow\) & Min. ATE [cm]\(\downarrow\) \\ \hline \multirow{2}{*}{fr1/desk} & NICE-SLAM & 40.40 & 31.74 & **6.27** \\ & Ours & **29.04** & **6.50** & 6.54 \\ \hline \multirow{2}{*}{fr1/desk2} & NICE-SLAM & 47.81 & 48.07 & **16.73** \\ & Ours & **36.57** & **27.00** & 18.42 \\ \hline \multirow{2}{*}{fr1/xyz} & NICE-SLAM & 5.11 & 2.78 & 2.65 \\ & Ours & **2.71** & **2.71** & **2.61** \\ \hline \multirow{2}{*}{Average} & NICE-SLAM & 31.11 & 27.53 & **8.55** \\ & Ours & **22.77** & **12.07** & 8.86 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Tracking Evaluation on TUM-RGBD.** We report the average, median, and minimum ATE RMSE over 10 runs for each scene by mapping every 2nd frame. Best results are highlighted as **first** and **second**.
Figure 7: **Single Sensor Reconstruction on Replica [64]. We show that our uncertainty modeling on average helps to achieve more accurate reconstructions when the SL [25] sensor is provided as input. This experiment uses RGBD input with tracking enabled. The colorbar displays the deviation from the ground truth mesh.**
as reported by NICE-SLAM [81], _i.e._ the mean Precision (mP), mean Recall (mR), Recall (R) and depth L1.
We select four variables to vary in these experiments to understand which architecture provides the most promising results. We vary the minimum uncertainty value \(\beta_{\text{min}}\): 1e-1 m or 1e-3 m. We vary the kernel size or patch size: 1\(\times\)1 or 5\(\times\)5. Lastly, we select two options for the informative features. The first option is to use the depth and the incident angle, yielding a pixel feature dimension of two. The second option is to use the depth, the normal direction, the image gradients, and the incident angle, yielding a pixel feature dimension of seven. In total, we perform 8 ablations2, whose details are provided in Table 10.
Footnote 2: We provide trial names based on the ablation parameters. These involve the patch size, the number of features, and the use of a "small" or "large" regularizer: [1K/5K][2F/7F][S/L]
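A minimal sketch of how the per-pixel feature vectors described above could be assembled (tensor names and shapes are our own assumptions):

```python
import torch

def pixel_features(depth, normals, grads, angle, use_seven=True):
    """Assemble the uncertainty decoder inputs.
    7F: depth (1) + normal (3) + image gradients dx, dy (2) + angle (1).
    2F: depth (1) + incident angle (1)."""
    if use_seven:
        feats = [depth.unsqueeze(-1), normals, grads, angle.unsqueeze(-1)]
    else:
        feats = [depth.unsqueeze(-1), angle.unsqueeze(-1)]
    return torch.cat(feats, dim=-1)  # (H, W, 7) or (H, W, 2)
```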
We first present results using a 1\(\times\)1 patch. Within this subset, we have four ablation results between the choice of regularizer and the number of input features. These results are summarized in Table 11.
Using the 1\(\times\)1 patch-based approach, we find improvement over some parameters and degradation in others. We find that the simplest architecture "1K2FL", employing two input features and a larger \(\beta_{\text{min}}\), has one of the better performances within this subset of ablations.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Name & Patch-size & \(D_{m}\) & \(\mathbf{N}_{m}\) & \(\mathrm{dx},\mathrm{dy}\) & \(\theta\) & \(\beta_{min}\) \\ \hline
1K7FS & 1 & ✓ & ✓ & ✓ & ✓ & 1e-3 \\
1K2FS & 1 & ✓ & - & - & ✓ & 1e-3 \\
1K7FL & 1 & ✓ & ✓ & ✓ & ✓ & 1e-1 \\
1K2FL & 1 & ✓ & - & - & ✓ & 1e-1 \\
5K7FS & 5 & ✓ & ✓ & ✓ & ✓ & 1e-3 \\
5K2FS & 5 & ✓ & - & - & ✓ & 1e-3 \\
5K7FL & 5 & ✓ & ✓ & ✓ & ✓ & 1e-1 \\
5K2FL & 5 & ✓ & - & - & ✓ & 1e-1 \\ \hline \hline \end{tabular}
\end{table}
Table 10: **Naming Table.** Description of different ablations for understanding the effect of different architectural and loss methods.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Scene & Sensor & Map Loss & N & mP [cm] & mR [cm] & R [\%] & Depth L1 [cm] \\ \hline Office 0 & SL & 1K7FS & 10 & **2.97 (0.19)** & **2.06 (0.02)** & **94.7 (0.3)** & **1.79 (0.03)** \\ Office 1 & SL & 1K7FS & 10 & 2.75 (0.19) & **1.64 (0.04)** & **96.3 (0.3)** & **1.35 (0.05)** \\ Room 2 & SL & 1K7FS & 10 & 2.88 (0.12) & 2.41 (0.02) & 92.4 (0.2) & 2.14 (0.05) \\ \hline Office 0 & SL & 1K7FL & 10 & 3.24 (0.34) & **2.07 (0.03)** & **94.7 (0.3)** & 2.73 (1.44) \\ Office 1 & SL & 1K7FL & 10 & 2.80 (0.20) & **1.65 (0.03)** & 96.2 (0.3) & **1.36 (0.06)** \\ Room 2 & SL & 1K7FL & 9 & 2.81 (0.06) & 2.99 (0.3) & 92.5 (0.4) & 2.12 (0.03) \\ \hline Office 0 & SL & 1K2FS & 10 & **2.93 (0.13)** & 2.10 (0.02) & 94.4 (0.2) & 1.90 (0.03) \\ Office 1 & SL & 1K2FS & 10 & **2.63 (0.20)** & **1.62 (0.03)** & **96.4 (0.3)** & 1.39 (0.05) \\ Room 2 & SL & 1K2FS & 10 & 2.91 (0.14) & _2.49 (0.02)_ & 91.7 (0.4) & 2.22 (0.02) \\ \hline Office 0 & SL & 1K2FL & 10 & **2.85 (0.09)** & **2.65 (0.02)** & **94.8 (0.2)** & 1.80 (0.03) \\ Office 1 & SL & 1K2FL & 10 & 2.80 (0.15) & **1.61 (0.03)** & **96.5 (0.3)** & 1.40 (0.10) \\ Room 2 & SL & 1K2FL & 10 & **2.76 (0.11)** & **2.88 (0.03)** & 92.6 (0.3) & **2.09 (0.04)** \\ \hline \hline \end{tabular}
\end{table}
Table 11: **Patch Size 1\(\times\)1 Ablation.** Green denotes improvement over no uncertainty estimation _i.e._ defaulting to [81]. Orange shows degradation. No color denotes no change. “1K2FL” achieves the most consistent improvement across metrics. \(N\) denotes the number of runs. The number in parenthesis denotes the standard deviation across the N runs.
This method improves across eight metrics and observes degradation in two metrics. The remaining two metrics are within rounding error. With few input parameters and a larger regularizer, the chance of overfitting may be limited by this particular architecture, preventing the more widespread degradation we observe across other trials.
We next present results using a 5\(\times\)5 patch. Within this subset, we have four ablation results between the use of regularizer and the number of input features. These results are summarized in Table 12.
We find that two methods achieve positive improvement across a majority of metrics. Trials "5K7FL" and "5K2FS" both see improvements across eight metrics. "5K2FS" saw fewer metrics degrade in performance after discounting rounding errors. Overall, however, both these methods achieve marginal improvement over the baseline methods. The "5K7FS" trial experienced a strong outlier run where the tracking error was high, which skewed the results in the Room 2 scene. The other three trials cluster around similar performance. We see again that the use of a larger regularizer \(\beta_{\min}\) may be beneficial within the single sensor framework in improving metrics. When using a smaller regularizer, the inclusion of fewer features may improve results.
However, these results are inconclusive and the evidence for the above claims is limited. We see that "5K2FL" appears to perform worse than "5K2FS", which contradicts our belief that a strong regularizer should be beneficial in 3D reconstruction. Among the architectures, we select "5K2FS".
### Ablation Statistical Significance
The default implementation of NICE-SLAM is non-deterministic due to the backward pass of the grid_sample() function in PyTorch. This function is used for interpolating on the voxel grids of features. We note that a deterministic implementation of the backward pass should be possible, but no plug-and-play implementation exists.
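For reference, a sketch of the determinism controls in PyTorch 1.11; this seeds the run and flags non-deterministic ops, but does not make grid_sample()'s backward deterministic:

```python
import torch

torch.manual_seed(0)
# grid_sample() has no deterministic CUDA backward in PyTorch 1.11, so
# requesting deterministic algorithms warns on (or errors at) this op
# instead of removing the run-to-run variance.
torch.use_deterministic_algorithms(True, warn_only=True)
```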
A first strategy to address the variance in output is to report simple aggregate statistics across a number of runs. A second strategy to determine the significance of our work is an unpaired t-test (see Appendix G) that reports the likelihood that both results exhibit the same mean. Such tests give insight into the effectiveness of various implementations.
We perform a cursory analysis on the total number of improved metrics in Tables 11 and 12, showing the general trends of improvement. We now present the detailed results of the significance analysis of the various ablations in Tables 11 and 12. The summary can be found in Table 13.
We can see that many of the results are not statistically significant. We present in Table 14 the number of significant improvements, the number of significant degradations, and the _net total_ of significant improvements.
The two best performing methods, "1K2FL" and "5K2FS", each have three significant improvements and no significant degradations.
## Appendix G t-test
The unpaired t-test is a two-sample location test that checks whether two sample populations have the same mean. This analysis is performed by determining the statistic \(t\) and the degrees of freedom \(\nu\). Given the sample means \(\overline{X}_{\{1,2\}}\) and the standard errors \(s_{\overline{X}_{\{1,2\}}}\), \(t\) and \(\nu\) can be calculated using Eqs. (15) and (16).
\[t=\frac{\Delta\overline{X}}{s_{\Delta\overline{X}}}=\frac{\overline{X}_{1}-\overline{X}_{2}}{\sqrt{s_{\overline{X}_{1}}^{2}+s_{\overline{X}_{2}}^{2}}}\tag{15}\]
\[\nu\approx\frac{\left(\frac{s_{1}^{2}}{N_{1}}+\frac{s_{2}^{2}}{N_{2}}\right)^{2}}{\frac{s_{1}^{4}}{N_{1}^{2}\nu_{1}}+\frac{s_{2}^{4}}{N_{2}^{2}\nu_{2}}}\tag{16}\]
These values can then be used to identify the probability given by the Student's t-distribution that the two sample means are equal. If we assume significance at \(P<0.05\), we can determine whether an improvement, i.e., an increase in the mean performance, should be considered significant.
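This test is available off the shelf, e.g., via SciPy; the means and standard deviations below are illustrative placeholders, not reported numbers:

```python
from scipy import stats

# Welch's unpaired t-test from aggregate statistics, matching Eqs. (15)-(16).
t, p = stats.ttest_ind_from_stats(mean1=2.42, std1=0.03, nobs1=10,
                                  mean2=2.64, std2=0.05, nobs2=10,
                                  equal_var=False)  # unequal variances (Welch)
print(f"t = {t:.3f}, p = {p:.4f}, significant at P < 0.05: {p < 0.05}")
```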
## Appendix H Experiments per Scene
In the main paper, we show the average over the test scenes Office 0, Office 1 and Room 2. In the following, we show the per-scene results for the same experiments conducted on the Replica dataset.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Scene & Sensor & Map Loss & N & mP [cm] & mR [cm] & R [\%] & Depth L1 [cm] \\ \hline Office 0 & SL & 5K7FS & 10 & 3.06 (0.09) & 2.08 (0.03) & 94.5 (0.4) & 1.86 (0.02) \\ Office 1 & SL & 5K7FS & 10 & 2.27 (0.27) & **1.62 (0.03)** & **96.4 (0.3)** & 1.39 (0.06) \\ Room 2 & SL & 5K7FS & 10 & 2.87 (0.17) & 2.45 (0.01) & 91.9 (0.2) & 2.56 (1.25) \\ \hline Office 0 & SL & 5K7FL & 10 & 3.08 (0.18) & **2.06 (0.03)** & **94.7 (0.3)** & 1.82 (0.04) \\ Office 1 & SL & 5K7FL & 10 & **2.68 (0.20)** & **1.61 (0.02)** & **96.6 (0.2)** & **1.36 (0.03)** \\ Room 2 & SL & 5K7FL & 7 & 2.85 (0.06) & **2.37 (0.04)** & **92.8 (0.4)** & 2.11 (0.01) \\ \hline Office 0 & SL & 5K2FS & 10 & **2.87 (0.09)** & **2.06 (0.04)** & **94.7 (0.3)** & **1.79 (0.02)** \\ Office 1 & SL & 5K2FS & 10 & 2.75 (0.21) & **1.61 (0.03)** & **96.5 (0.3)** & **1.36 (0.04)** \\ Room 2 & SL & 5K2FS & 10 & **2.79 (0.15)** & 2.39 (0.02) & 92.6 (0.3) & 2.11 (0.05) \\ \hline Office 0 & SL & 5K2FL & 10 & 3.03 (0.19) & **2.07 (0.02)** & **94.6 (0.2)** & 1.80 (0.03) \\ Office 1 & SL & 5K2FL & 10 & 2.84 (0.19) & **1.60 (0.03)** & **96.7 (0.3)** & 1.39 (0.04) \\ Room 2 & SL & 5K2FL & 10 & 2.81 (0.14) & **2.40 (0.03)** & 92.4 (0.4) & 2.11 (0.02) \\ \hline \hline \end{tabular}
\end{table}
Table 12: **Patch Size 5\(\times\)5 Ablation.** Green denotes improvement over no uncertainty estimation _i.e._ defaulting to [81]. Orange shows degradation. No color denotes no change. "5K2FS" achieves the most consistent improvement across metrics. \(N\) denotes the number of runs. The number in parenthesis denotes the standard deviation across the N runs.
### Replica Single-Sensor Tracking
Table 18, Table 19 and Table 20 show the per scene results when camera pose estimation is enabled. We provide results with depth as the only input and with RGBD.
\begin{table}
\begin{tabular}{l|c c c c c c c} Model \(\downarrow\) [Metric \(\rightarrow\)] & Depth L1\(\downarrow\) [cm] & mP\(\downarrow\) [cm] & mR\(\downarrow\) [cm] & P\(\uparrow\) [\%] & R\(\uparrow\) [\%] & F\(\uparrow\) [\%] & ATE\(\downarrow\) [cm] \\ \hline \multicolumn{8}{c}{_SL + PSMNet Office 0_} \\ \hline NICE-SLAM [81] & **3.72** & **3.97** & **3.43** & **75.47** & **77.23** & **76.34** & **14.00** \\ NICE-SLAM+Pre & 4.63 & 5.32 & 4.89 & 66.05 & 68.27 & 67.14 & 20.48 \\ Ours & 5.58 & 5.38 & 4.71 & 64.04 & 66.12 & 65.06 & 16.32 \\ \hline \multicolumn{8}{c}{_SL + PSMNet Office 1_} \\ \hline NICE-SLAM [81] & 4.77 & 9.16 & 8.01 & 49.46 & 50.36 & 49.88 & 28.58 \\ NICE-SLAM+Pre & **3.41** & **4.94** & **4.13** & **70.72** & **70.78** & **70.74** & **23.89** \\ Ours & 3.95 & 8.69 & 7.36 & 44.44 & 46.74 & 45.54 & 26.09 \\ \hline \multicolumn{8}{c}{_SL + PSMNet Room 2_} \\ \hline NICE-SLAM [81] & 5.61 & 6.03 & 5.78 & 65.90 & 66.17 & 66.01 & 23.90 \\ NICE-SLAM+Pre & 2.66 & 3.62 & 3.88 & 74.83 & 73.64 & 74.23 & 16.93 \\ Ours & **2.09** & **2.36** & **2.64** & **90.14** & **86.91** & **88.49** & **8.18** \\ \hline \multicolumn{8}{c}{_SL + PSMNet Overall_} \\ \hline NICE-SLAM [81] & 4.70 & 6.39 & 5.74 & 63.61 & 64.58 & 64.08 & 22.16 \\ NICE-SLAM+Pre & **3.57** & **4.63** & **4.30** & **70.53** & **70.90** & **70.70** & 20.43 \\ Ours & 3.87 & 5.48 & 4.90 & 66.20 & 66.59 & 66.36 & **16.86** \\ \hline \end{tabular}
\end{table}
Table 24: **Depth + Tracking: SL+PSMNet Sensor Fusion.** Average of 5 runs.
\begin{table}
\begin{tabular}{l|c c c c c c} Model \(\downarrow\) [Metric \(\rightarrow\)] & Depth L1\(\downarrow\) [cm] & mP\(\downarrow\) [cm] & mR\(\downarrow\) [cm] & P\(\uparrow\) [\%] & R\(\uparrow\) [\%] & F\(\uparrow\) [\%] \\ \hline \multicolumn{7}{c}{_SGM + PSMNet Office 0_} \\ \hline NICE-SLAM [81] & **1.74** & 2.29 & 1.88 & **91.0** & **91.8** & **91.4** \\ NICE-SLAM+Pre & 1.78 & 2.37 & 1.92 & 90.5 & 91.5 & 91.0 \\ Ours & 1.76 & **2.26** & 1.93 & **91.0** & 91.4 & 91.2 \\ \hline \multicolumn{7}{c}{_SGM + PSMNet Office 1_} \\ \hline NICE-SLAM [81] & 2.34 & **2.78** & **1.84** & **87.7** & **91.1** & **89.4** \\ NICE-SLAM+Pre & 2.80 & 3.02 & 1.87 & 86.3 & 90.8 & 88.5 \\ Ours & **2.15** & 2.85 & 1.85 & 86.6 & **91.1** & 88.8 \\ \hline \multicolumn{7}{c}{_SGM + PSMNet Room 2_} \\ \hline NICE-SLAM [81] & 1.99 & 1.95 & 2.26 & 92.9 & 89.7 & 91.3 \\ NICE-SLAM+Pre & **1.97** & **1.93** & **2.24** & **93.0** & **90.0** & **91.4** \\ Ours & 2.00 & 1.96 & 2.25 & 92.8 & 89.8 & 91.3 \\ \hline \end{tabular}
\end{table}
Table 22: **Depth + Ground Truth Poses: SGM+PSMNet Sensor Fusion.**
|
2306.09607 | Listener Model for the PhotoBook Referential Game with CLIPScores as
Implicit Reference Chain | PhotoBook is a collaborative dialogue game where two players receive private,
partially-overlapping sets of images and resolve which images they have in
common. It presents machines with a great challenge to learn how people build
common ground around multimodal context to communicate effectively. Methods
developed in the literature, however, cannot be deployed to real gameplay since
they only tackle some subtasks of the game, and they require additional
reference chains inputs, whose extraction process is imperfect. Therefore, we
propose a reference chain-free listener model that directly addresses the
game's predictive task, i.e., deciding whether an image is shared with partner.
Our DeBERTa-based listener model reads the full dialogue, and utilizes
CLIPScore features to assess utterance-image relevance. We achieve >77%
accuracy on unseen sets of images/game themes, outperforming baseline by >17
points. | Shih-Lun Wu, Yi-Hui Chou, Liangze Li | 2023-06-16T03:41:14Z | http://arxiv.org/abs/2306.09607v1 | # Listener Model for the _PhotoBook_ Referential Game
###### Abstract
PhotoBook is a collaborative dialogue game where two players receive private, partially-overlapping sets of images and resolve which images they have in common. It presents machines with a great challenge to learn how people build common ground around multimodal context to communicate effectively. Methods developed in the literature, however, cannot be deployed to real gameplay since they only tackle some subtasks of the game, and they require additional reference chains inputs, whose extraction process is imperfect. Therefore, we propose a reference chain-free listener model that directly addresses the game's predictive task, i.e., deciding whether an image is shared with partner. Our DeBERTa-based listener model reads the full dialogue, and utilizes CLIPScore features to assess utterance-image relevance. We achieve \(>\)77% accuracy on unseen sets of images/game themes, outperforming baseline by \(>\)17 points.
## 1 Introduction
PhotoBook Haber et al. (2019) is a collaborative dialogue game of two players. In a game round, each player receives 6 images of an identical theme--the two largest objects in all images share the same categories, e.g., _dog_, _car_, etc. The players have some of their images in common. Their goal is to communicate through text dialogue, and individually mark 3 privately highlighted images as either _common_ (i.e., shared with partner) or _different_. A full game lasts 5 rounds. After each round, some of each player's images are replaced with different ones under the same theme. Images may reappear in later rounds after being swapped out. This game setup encourages building and leveraging common ground with multimodal contexts, which humans are known to do to facilitate conversation Clark and Wilkes-Gibbs (1986); Brennan and Clark (1996). Fig. 1 displays an example of a PhotoBook game.1
Footnote 1: In this case, the game theme is _person & bench_.
Models proposed in past works on the dataset Haber et al. (2019); Takmaz et al. (2020) are unable to realistically play the game due to several reasons: (i) they only address subtasks in the game whose time span is _one utterance_, rendering it unnecessary for the models to keep track of the entire game's, or round's, progress; (ii) the models operate on additional input of _reference chains_, i.e., past utterances referring to each image, whose (rule-based) extraction process is imperfect and hence complicates learning and evaluation; and, (iii) utterances outside of reference chains, e.g., '_I don't have that one_', may also be important pieces of information.
To address the drawbacks above, we propose a full (i.e., able to play real games), reference chain-free listener model, which accepts all dialogue utterances of a round2 and the 6 context images, and predicts whether the 3 target (highlighted) images are _common/different_. Our listener model is based on a pretrained DeBERTa Transformer He et al. (2021). To incorporate visual context, CLIPscores Hessel et al. (2021) between each utterance and the 6 given images are infused with DeBERTa hidden states. We employ CLIPscore as it offers strong prior knowledge about the relevance of an utterance to each of the 6 images, which may serve as a soft, implicit version of reference chain used in previous studies. Also, we chose DeBERTa since it is one of the top performers in the SuperGLUE benchmark Sarlin et al. (2020) which provides a reasonably-sized (\(\sim\)100M parameters) version to suit our purpose and computation resources. We further devise a label construction scheme to create dense learning signals. Our model scores a \(>\)77% accuracy on the novel listener task and improves by \(>\)17% (absolute) over the baseline adapted from Takmaz et al. (
2020). Our code is available at github.com/slSeanWU/photobook-full-listener.
## 2 Related Work
In typical collaborative dialogue tasks, two agents (i.e., players) hold incomplete or partially overlapping information and communicate through text to reach a predefined goal. The task-oriented setup enables simple evaluation for dialogue systems via task success rate, instead of resorting to costly human evaluation. Tasks and datasets proposed in the literature focus either on set logic (He et al., 2017), image understanding (De Vries et al., 2017; Haber et al., 2019), or spatial reasoning (Udagawa and Aizawa, 2019). They challenge dialogue systems to process multiple modalities, discard irrelevant information, and build common ground. Researchers have utilized graph neural networks (He et al., 2017), vision-and-language Transformers (Lu et al., 2019; Tu et al., 2021), and pragmatic utterance generation (Frank and Goodman, 2012; Fried et al., 2021) to tackle the tasks.3
Footnote 3: Table 2 (in appendix) summarizes these tasks & methods.
To our knowledge, there has not been a system that fully addresses the PhotoBook task. It may be particularly challenging due to the setup with multiple highly similar images and an unbounded set of information (e.g., scene, actions) the images may contain. Previous PhotoBook works targeted two subtasks: _reference resolution_(Haber et al., 2019; Takmaz et al., 2020) and _referring utterance generation_(Takmaz et al., 2020). The former resolves which of the 6 context images an utterance is referring to, while the latter generates an informative utterance for a pre-selected image. Proposed models take in extracted reference chains--whose rule-based extraction processes4 try to identify which utterances speak about each of the images. To obtain such chains, Haber et al. (2019) broke the dialogue into segments using a set of heuristics based on player marking actions. Takmaz et al. (2020), on the other hand, computed each utterance's BERTScore (Zhang et al., 2019) and METEOR (Banerjee and Lavie, 2005) respectively against ground-truth MSCOCO captions (Lin et al., 2014), and VisualGenome attributes (Krishna et al., 2017) of each image to match (at most) one utterance per round to an image.
Footnote 4: Algorithmic details in Appendix F.
As for the reference resolution task, Haber et al. (2019) employed LSTM encoders. One (query) encoder takes a current dialogue segment, while the other (i.e., context encoder) receives the 6 images' ResNet features, and the associated reference chain segments.5 Dot products between query encoder output and 6 context encoder outputs are taken to predict the image the current segment refers to. Takmaz et al. (2020) largely kept the setup, but they used BERT (Devlin et al., 2019) embeddings and contextualized utterances via weighted averaging instead of LSTMs.
Footnote 5: The 6 ‘images’ + ref. chains’ are processed separately.
Takmaz et al. (2020) claimed an 85% reference resolution accuracy, but they also reported an 86% precision6 on reference chain extraction, making it difficult to conclude whether prediction errors are due to model incompetence, or incorrect input data/labels. (We find that some parts of extracted reference chains either point to the wrong image or
Figure 1: A round of PhotoBook game with dialogue, player marking actions, corresponding images, and CLIPScore (i.e., CS) difference between top and 2nd-top scoring images w.r.t. the utterance. A player needs to figure out whether their partner has each of the 3 target (i.e., highlighted) images through text dialogue.
provide no information at all.7) Yet, we do agree that keeping track of which images have been referred to is vital for the game. Therefore, we aim to build a full listener model that does not depend on explicit reference chains, but gathers such information from implicit hints given by an image-text matching model, i.e., CLIP Radford et al. (2021).
Footnote 7: We rerun Takmaz et al. (2020)’s experiment and show some of the problematic examples in Appendix F & Table 5.
## 3 Method
### Functionality of CLIPScore
Based on CLIP vision-and-language Transformer Radford et al. (2021), CLIPScore Hessel et al. (2021) is a reference-free8 metric to measure semantic image-text similarity. On image captioning, Hessel et al. (2021) showed that CLIPScore correlates better with human judgment than reference-dependent metrics like BERTScore Zhang et al. (2019) and SPICE Anderson et al. (2016).
Footnote 8: i.e., does not take ground-truth text as input
In our pilot study, we find that the CLIPScore of an utterance-image pair is particularly high when the utterance describes the image (see Fig. 1 for example). These score peaks thus form an _implicit reference chain_ for the dialogue, giving strong hints on whether the mentioned images are common/different when seen with subsequent partner feedback (e.g., '_I have that one_'). Also, the reference chain extraction method in Takmaz et al. (2020) achieves higher precision (86%\(\rightarrow\)93%) and recall (60%\(\rightarrow\)66%) when we simply replace its core scoring metrics9 with CLIPScore. The findings above show that CLIPScore captures well the utterance-image relationships in PhotoBook, and hence should be helpful to our listener model.
Footnote 9: i.e., BERTScore & METEOR. Details in Appendix F.
Computation-wise, reference chain extraction algorithms in the literature either rely on complex turn-level heuristics Haber et al. (2019), or compute multiple external metrics (i.e., BERTScore and METEOR) Takmaz et al. (2020). More importantly, they have to wait until completion of a round to compute the chains. Our utterance-level CLIPScores can be computed on the fly as utterances arrive, and are relatively time-efficient as they involve only one model (i.e., CLIP) and that batch computation may be used to increase throughput.
Modeling-wise, reference chain extraction explicitly selects which utterances the listener model should see, so when it is wrong, the model either sees something irrelevant, or misses important utterances. On the other hand, utterance-level CLIPScores resemble using a highlighter to mark crucial dialogue parts for the model. Even when CLIPScores are sometimes inaccurate, the model could still access the full dialogue to help its decisions.
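For illustration, utterance-level CLIPScores can be computed on the fly with an off-the-shelf CLIP model; the checkpoint below is our own assumption, while the scaling by 2.5 and clipping at zero follow Hessel et al. (2021):

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def utterance_clipscores(utterance, images):  # images: list of 6 PIL images
    """CLIPScore of one utterance vs. each context image:
    2.5 * max(cosine similarity, 0)."""
    inputs = processor(text=[utterance], images=images,
                       return_tensors="pt", padding=True, truncation=True)
    out = model(**inputs)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    return 2.5 * torch.clamp(img @ txt.squeeze(0), min=0.0)  # c_k in R^6
```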
### The Full Listener Model
#### 3.2.1 Inputs
An overview of our listener model is depicted in Fig. 2. Our model operates on three types of input features, which collectively represent a game round from one of the players' perspective:
\[\text{Dialogue tokens:}\ \mathcal{X} =\{\mathbf{x}_{k}\in\mathcal{W}^{|\mathcal{T}_{k}|}\}_{k=1}^{K} \tag{1}\] \[\text{CLIPScores:}\ \mathcal{C} =\{\mathbf{c}_{k}\in\mathbb{R}^{6}\}_{k=1}^{K}\] (2) \[\text{Image features:}\ \mathcal{V} =\{\mathbf{v}_{j}\in\mathbb{R}^{512}\}_{j=1}^{6} \tag{3}\]
We use \(k\), \(j\) to index utterances and images respectively. \(\mathcal{W}\) is the text token vocabulary, and \(\mathcal{T}_{k}=\{t_{k,\text{start}},\ldots,t_{k,\text{end}}\}\) is the corresponding token timesteps for the \(k^{\text{th}}\) utterance. To the start of each utterance, we prepend either a [CLS] or [SEP] token to distinguish whether it comes from the player itself or the partner. All utterances are concatenated to form one text input sequence to our model.10 CLIPScore vectors (\(\mathbf{c}_{k}\)'s) are computed in a per-utterance manner, i.e., between one
Figure 2: Overview of our listener model. A DeBERTa Transformer He et al. (2021) encodes all utterances of a game round. Utterance-level CLIPScores Hessel et al. (2021) w.r.t. each image (i.e., an \(\mathbb{R}^{6}\) vector) get projected and summed with hidden states of all timesteps corresponding to that utterance. Then, a 2-layer MLP takes in pooled SegFormer Xie et al. (2021) features of the target image (\(\in\mathbb{R}^{512}\)) and DeBERTa output to predict whether the image is _common_, _different_, or _undecided_ at every token timestep.
utterance and each of the 6 images. Images are represented by the pooled11 features from SegFormer Xie et al. (2021). It is trained on semantic image segmentation Zhou et al. (2017), and hence should encode crucial visual information for the game, i.e., objects in the scene and their spatial relationships.
Footnote 11: Pooling of the 16\(\times\)16 SegFormer patch features per image into one involves 2d-conv. downsampling other than taking the mean, as we also attempt fusing visual context by cross-attending to patch features. More details in Appendix B.
#### 3.2.2 Labels and Output
Rather than training the model to predict just once after seeing the entire dialogue, we construct labels for _all_ timesteps, forming a label sequence \(\mathbf{y}_{j}\in\mathcal{L}^{T}\), where \(T=\sum_{k}|\mathcal{T}_{k}|\), for each target image, where \(\mathcal{L}\) is the label set. As there are only 3 target images out of the 6, we also only have 3 such label sequences (\(\mathbf{y}_{j}\)'s) for a training instance. At each timestep \(t\), the label of a target image, \(y_{j,t}\in\mathcal{L}\), is one of {_undecided_, _common_, _different_}. It always starts as _undecided_, changes to _common_ or _different_ at the moment of player marking action, and remains there for the rest of the dialogue. Our model's output for a (target) image \(j\) at timestep \(t\) is hence a distribution \(\mathbf{\hat{y}}_{j,t}\in\mathbb{R}^{3}\), which is a temporary belief about that image. Also, we apply causal masking on DeBERTa self-attention. Such a labeling and masking scheme creates dense learning signals--our model must judge an image at every timestep based on growing dialogue context.
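A minimal sketch of this label construction for one target image (the integer encoding of the label set is our own convention):

```python
def label_sequence(total_steps, mark_step, marked_common):
    """Dense labels y_j: 'undecided' (0) until the player's marking action,
    then 'common' (1) or 'different' (2) for the rest of the round."""
    decided = 1 if marked_common else 2
    return [0] * mark_step + [decided] * (total_steps - mark_step)

# e.g., a 12-token round where the image is marked 'common' at timestep 5:
# label_sequence(12, 5, True) == [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
```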
#### 3.2.3 Model Components
The backbone of our model is a pretrained base DeBERTa He et al. (2021), which takes in concatenated utterances \(\mathcal{X}=\{\mathbf{x}_{k}\in\mathcal{W}^{|\mathcal{T}_{k}|}\}_{k=1}^{K}=\{x_{t }\in\mathcal{W}\}_{t=1}^{T}\), and contextualizes them into hidden states:
\[\mathcal{H}^{(l)}=\{\mathbf{h}_{t}^{(l)}\in\mathbb{R}^{d}\}_{t=1}^{T},\ \ l\in\{1,\dots,L\}\,, \tag{4}\]
where \(d\) (=768) is DeBERTa's hidden size, and \(l\) is layer index (# layers \(L=12\)). We do not adopt vision-and-language Transformers Lu et al. (2019); Wang et al. (2022) for they are pretrained on'single image-short text' pairs, which mismatches our scenario. Following Wu and Yang (2022)'s recommendation on feeding time-varying conditions to Transformers, utterance-level CLIPScores (i.e., \(\mathcal{C}\)) are projected and summed with DeBERTa hidden states at _all_ layers:12
Footnote 12: Additional experiments in Appendix D shows that feeding CLIPScore to fewer layers harms the performance.
\[\mathcal{H}^{(l)}\leftarrow\,\{\mathcal{H}^{(l)}_{\mathcal{T}_{k}}=\mathbf{h}^{(l )}_{t\in\mathcal{T}_{k}}+\mathbf{W}_{\text{proj}}\,\mathbf{c}_{k}\,\}_{k=1}^{K}\,, \tag{5}\]
where \(\mathbf{W}_{\text{proj}}\in\mathbb{R}^{d\times 6}\) is a learnable matrix.
To make predictions, we place a 2-layer MLP (with GELU activation) on top of DeBERTa. It takes in the concatenation of the pooled target image features and the last-layer DeBERTa hidden state, and produces a distribution over the label set \(\mathcal{L}=\{\textit{undecided},\textit{common},\textit{different}\}\):
\[\mathbf{\hat{y}}_{j,t}=\operatorname{MLP}_{\mathbb{R}^{512+d}\to\mathbb{R}^{3}}([ \mathbf{v}_{j};\mathbf{h}^{(L)}_{t}])\,. \tag{6}\]
We add learnable positional embeddings to \(\mathbf{v}_{j}\)'s to make our model aware of the target image's index.
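A simplified PyTorch sketch of the CLIPScore infusion (Eq. 5) and prediction head (Eq. 6); the infusion is shown for a single layer for brevity, whereas the actual model applies it at all 12 layers:

```python
import torch
import torch.nn as nn

class CLIPScoreInfusionHead(nn.Module):
    def __init__(self, d=768, img_dim=512, n_labels=3):
        super().__init__()
        self.w_proj = nn.Linear(6, d, bias=False)  # W_proj in Eq. (5)
        self.mlp = nn.Sequential(nn.Linear(img_dim + d, d),
                                 nn.GELU(), nn.Linear(d, n_labels))

    def forward(self, hidden, clip_scores, utt_spans, img_feat):
        # hidden: (T, d); clip_scores: (K, 6); img_feat: (img_dim,)
        hidden = hidden.clone()
        for k, (start, end) in enumerate(utt_spans):
            hidden[start:end] = hidden[start:end] + self.w_proj(clip_scores[k])
        v = img_feat.expand(hidden.shape[0], -1)         # repeat v_j over T steps
        return self.mlp(torch.cat([v, hidden], dim=-1))  # (T, 3) label logits
```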
## 4 Experiments and Results
Our listener model is trained with the maximum likelihood estimation (MLE) loss function:
\[\mathbb{E}_{(\mathcal{X},\mathcal{C},\mathcal{V},\mathcal{Y})\in\mathcal{D}_{ \text{train}}}\sum_{j,t}-\log p_{\mathbf{\hat{y}}_{j,t}}(y_{j,t}\,|\,\mathcal{X}, \mathcal{C},\mathbf{v}_{j}), \tag{7}\]
where \(\mathcal{D}_{\text{train}}\) is the training split, and \(\mathcal{Y}\) is the set of label sequences associated with a data instance. The same images/themes are guaranteed not to appear in multiple dataset splits. We refer readers to Appendix A for more implementation and training details. The evaluation metric adopted here is accuracy measured at the end of the dialogue, i.e., at evaluation we ignore the temporary beliefs formed during the chat. To set a baseline, we modify the reference resolution model in Takmaz et al. (2020) to suit our listener task.13
Footnote 13: Modification details are in Appendix C.
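A minimal sketch of the per-instance loss in Eq. (7), with placeholder shapes:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(3, 240, 3)          # (target images, timesteps, labels)
labels = torch.randint(0, 3, (3, 240))   # dense per-timestep labels
# Cross-entropy summed over the 3 target images and all T timesteps.
loss = F.cross_entropy(logits.reshape(-1, 3), labels.reshape(-1),
                       reduction="sum")
```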
Table 1 lists the evaluation results. Our method outperforms baseline by 17\(\sim\)20 percentage points, closing the gap to human performance by more than half. Examining the ablations, we can observe
\begin{table}
\begin{tabular}{l|c c} \hline & valid & test \\ \hline _Random guess_ & 50.0 & 50.0 \\ \hline Modified Takmaz et al. (2020) & \(64.2\pm 1.7\) & \(59.0\pm 0.7\) \\ w/ CLIPScore ref chains & \(65.0\pm 1.4\) & \(59.7\pm 0.8\) \\ \hline
**Ours** & \(\mathbf{84.8}\pm 1.3\) & \(\mathbf{77.3}\pm 0.3\) \\ a. \(+\)**VisAttn** & \(75.0\pm 0.6\) & \(69.8\pm 3.3\) \\ b. \(-\)**CLIPScore** & \(70.7\pm 1.1\) & \(64.8\pm 1.5\) \\ c. \(-\)**CLIPScore** \(+\)**VisAttn** & \(69.8\pm 1.1\) & \(64.9\pm 0.4\) \\ d. \(-\)**Dense learning signals** & \(59.4\pm 1.8\) & \(55.9\pm 0.9\) \\ \hline _Human_ & 95.0 & 94.5 \\ \hline \end{tabular}
\end{table}
Table 1: Listener model accuracy (%) of baselines and our model (full & ablated versions). StDev of 3 runs with fixed seeds shown after \(\pm\). Pairwise bootstrap tests corroborate (\(p<\).001) that our full model outperforms all baselines and ablated versions. _Human_ is the accuracy annotators achieved during dataset creation. (**VisAttn**: cross-attention to patch features of 6 context images.)
that removing either the CLIPScore inputs or the dense learning signals (i.e., having labels at all timesteps, see Sec. 3.2.2) causes serious accuracy degradation, indicating that both are essential to our model, and that a pretrained Transformer does not trivially beat a fully MLP-based baseline. Moreover, although adding cross-attention to image features14 (i.e., ablations a. & c.) seems a more intuitive way to involve visual context, it leads to more severe overfitting15 and hence does not help in our case. We provide more detailed observations on our best-performing model's behavior and outputs in Appendix G.
Footnote 14: Cross-attention mechanism explained in Appendix B.
Footnote 15: Likely due to limited dataset size and configuration. More analysis and exploration can be found in Appendix E.
## 5 Conclusions and Future Work
In this paper, we first discussed why it is difficult to deploy existing reference chain-dependent PhotoBook models to real gameplay, and demonstrated that CLIPScore's image-text matching capability may provide implicit reference chains for the task. We then developed a novel listener model that is reference chain-free and able to realistically play the game given the text dialogue and the set of context images, i.e., exactly what human players see. The model is built on a DeBERTa Transformer backbone, and brings in visual context by infusing utterance-level CLIPScores into its hidden states. On the newly proposed full listener task, i.e., predicting whether an image is shared with the partner, our model achieves 77\(\sim\)84% accuracy on unseen sets of images, surpassing the baseline Takmaz et al. (2020) by over 17 points. Ablation studies also showed that feeding CLIPScores and imposing dense learning signals are both indispensable to our model's success.
Future studies may leverage parameter-efficient transfer learning He et al. (2022); Houlsby et al. (2019); Hu et al. (2022); Perez et al. (2018) to cope with image data scarcity of PhotoBook (and potentially other datasets and tasks). It is also interesting to develop a speaker model that uses temporary beliefs from our listener model and takes pragmatics Frank and Goodman (2012); Fried et al. (2021) into account to generate informative responses. Pairing such a model with our listener model may complete the collaborative dialogue task end-to-end.
## 6 Limitations
The PhotoBook dataset has a very limited number of images (i.e., 360) and image combinations (i.e., 5 per game theme), which may lead to undesirable overfitting behavior, as we discuss in Appendix E. Also, since our model depends heavily on CLIP Radford et al. (2021), it is likely to inherit CLIP's biases and weaknesses. For example, Radford et al. (2021) mentioned that CLIP fails to perform well on abstract or more complex tasks, such as counting or understanding spatial relationships between objects. Finally, whether our listener model can be easily applied/adapted to practical real-world tasks (e.g., automated customer service with image inputs) requires further exploration.
## Acknowledgements
We would like to express our utmost thanks to Dr. Daniel Fried, Emmy Liu and Dr. Graham Neubig for their guidance and insightful suggestions. We also appreciate the valuable feedback from the reviewers and the area chair.
|
2310.05903 | Graphs with no even holes and no sector wheels are the union of two
chordal graphs | Sivaraman conjectured that if $G$ is a graph with no induced even cycle then
there exist sets $X_1, X_2 \subseteq V(G)$ satisfying $V(G) = X_1 \cup X_2$
such that the induced graphs $G[X_1]$ and $G[X_2]$ are both chordal. We prove
this conjecture in the special case where $G$ contains no sector wheel, namely,
a pair $(H, w)$ where $H$ is an induced cycle of $G$ and $w$ is a vertex in
$V(G) \setminus V(H)$ such that $N(w) \cap H$ is either $V(H)$ or a path with
at least three vertices. | Tara Abrishami, Eli Berger, Maria Chudnovsky, Shira Zerbib | 2023-10-09T17:45:01Z | http://arxiv.org/abs/2310.05903v1 | # Graphs with no even holes and no sector wheels are the union of two chordal graphs
###### Abstract.
Sivaraman [5] conjectured that if \(G\) is a graph with no induced even cycle then there exist sets \(X_{1},X_{2}\subseteq V(G)\) satisfying \(V(G)=X_{1}\cup X_{2}\) such that the induced graphs \(G[X_{1}]\) and \(G[X_{2}]\) are both chordal. We prove this conjecture in the special case where \(G\) contains no sector wheel, namely, a pair \((H,w)\) where \(H\) is an induced cycle of \(G\) and \(w\) is a vertex in \(V(G)\setminus V(H)\) such that \(N(w)\cap H\) is either \(V(H)\) or a path with at least three vertices.
T. Abrishami: Department of Mathematics, University of Hamburg, Germany.
[email protected]. (This work was performed while the author was at Princeton University.) Supported by NSF-EPSRC Grant DMS-2120644 and by AFOSR grant FA9550-22-1-0083.
E. Berger: Department of Mathematics, University of Haifa, [email protected].
M. Chudnovsky: Department of Mathematics, Princeton University, USA.
[email protected]. Supported by NSF-EPSRC Grant DMS-2120644 and by AFOSR grant FA9550-22-1-0083.
S. Zerbib: Department of Mathematics, Iowa State University, [email protected]. Supported by NSF Grant DMS-1953929.
E. Berger, M. Chudnovsky, and S. Zerbib were also supported by BSF grant 2016077.
**Theorem 1.1**.: _Every even-hole-free graph with no sector wheel admits a chordal cover._
### Proof outline
Our proof depends on a decomposition theorem for even-hole-free graphs proved in [4]. We first need some definitions. Let \(T\) be a tree and write \(V_{1},V_{2}\) for its two sides when viewed as a bipartite graph. Let \(L\) denote the set of leaves of \(T\) and write \(L_{1}=L\cap V_{1}\), \(L_{2}=L\cap V_{2}\). For each \(v\in L\) we write \(e(v)\) for the unique edge of \(T\) incident with \(v\). We construct a graph \(B(T)\) as follows: the set of vertices of \(B(T)\) is \(E(T)\cup\{x_{1},x_{2}\}\), where \(x_{1},x_{2}\) are two additional vertices. Two vertices of \(B(T)\) are adjacent if one of the following holds:
* they represent two edges of \(T\) with a common vertex, or
* one of them is \(x_{i}\) and the other is \(e(v)\) and \(v\in L_{i}\) for some \(i\in\{1,2\}\), or
* they are \(x_{1}\) and \(x_{2}\).
Note that the vertex set of every induced cycle in \(B(T)\) with at least 4 vertices consists of the edge set of some path in \(T\) between two leaves together with either \(x_{1}\) or \(x_{2}\) or both. A graph \(G\) is an _extended nontrivial basic graph_ if \(G=B(T)\) for some tree \(T\) with at least three leaves and at least two non-leaves. (Note that if \(T\) is a path graph then \(B(T)\) is a cycle, and if \(T\) is a star then \(B(T)\) is a clique. Hence it makes sense to exclude these cases and deal with them separately.)
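To make the construction concrete, the following Python/networkx sketch builds \(B(T)\); the function name and the use of frozensets as edge-vertices are our own choices, and the bipartition computed by networkx plays the roles of \(V_{1},V_{2}\) (up to swapping).

```python
# Sketch of the construction of B(T) for a tree T given as a networkx graph.
import itertools
import networkx as nx
from networkx.algorithms import bipartite

def build_B(T):
    V1, _ = bipartite.sets(T)              # the two sides of T
    G = nx.Graph()
    G.add_nodes_from(frozenset(e) for e in T.edges)
    for v in T:                            # edges of T sharing a vertex
        inc = [frozenset(e) for e in T.edges(v)]
        G.add_edges_from(itertools.combinations(inc, 2))
    G.add_edge("x1", "x2")                 # the two additional vertices
    for v in T:
        if T.degree(v) == 1:               # leaf v: join e(v) to x_1 or x_2
            e_v = frozenset(next(iter(T.edges(v))))
            G.add_edge("x1" if v in V1 else "x2", e_v)
    return G
```

For instance, `build_B(nx.path_graph(6))` should return a 7-cycle, matching the remark that \(B(T)\) of a path is a cycle.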
A _2-join_ of a graph \(G\) is a partition \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\) of \(V(G)\) such that the following hold:
* \(A_{1}\) is complete to \(A_{2}\), \(B_{1}\) is complete to \(B_{2}\), and there are no other edges of \(E(G)\) with one end in \(Z_{1}:=A_{1}\cup C_{1}\cup B_{1}\) and one end in \(Z_{2}:=A_{2}\cup C_{2}\cup B_{2}\), and
* for \(i=1,2\), \(Z_{i}\) contains an induced path \(M_{i}=(a,m_{1},\ldots,m_{k},b)\) with one end \(a\in A_{i}\), one end \(b\in B_{i}\), and \(\{m_{1},\ldots,m_{k}\}\subseteq C_{i}\) (where \(k\) may be 0), which we call the _marker path_ for \(Z_{i}\), and \(Z_{i}\) is not just this path.
A _pyramid_ is a graph consisting of a vertex \(a\) called the _apex_, a triangle \(b_{1}b_{2}b_{3}\) called the _base_, and three paths \(P_{i}\) from \(a\) to \(b_{i}\), each of which has length at least one, at most one of which has length exactly one, such that the only edge from \(P_{i}\setminus\{a\}\) to \(P_{j}\setminus\{a\}\) is \(b_{i}b_{j}\) for all \(\{i,j\}\subseteq\{1,2,3\}\). A graph \(G\) has a _star cutset_ if \(G\) is connected and if there is a vertex \(v\in V(G)\) and a set \(C\subseteq N[v]\) with \(v\in C\) such that \(G\setminus C\) is not connected. The set \(C\) is called a _star cutset_ of \(G\). A _clique cutset_ of a graph \(G\) is a set \(C\subseteq V(G)\) such that \(C\) is a clique and \(G\setminus C\) is not connected. A star cutset is _proper_ if it is not a clique cutset.
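These cutset notions translate directly into brute-force membership and connectivity checks; the sketch below (with hypothetical helper names) is only meant to make the definitions concrete.

```python
# Direct checks of the star-cutset and clique-cutset definitions above.
import itertools
import networkx as nx

def is_star_cutset(G, v, C):
    """C is a star cutset centered at v: v in C, C within N[v], and G - C
    is disconnected."""
    closed = set(G[v]) | {v}
    rest = G.subgraph(set(G) - set(C))
    return (v in C and set(C) <= closed
            and rest.number_of_nodes() > 0 and not nx.is_connected(rest))

def is_clique_cutset(G, C):
    clique = all(G.has_edge(a, b) for a, b in itertools.combinations(C, 2))
    rest = G.subgraph(set(G) - set(C))
    return (clique and rest.number_of_nodes() > 0
            and not nx.is_connected(rest))

def is_proper_star_cutset(G, v, C):
    return is_star_cutset(G, v, C) and not is_clique_cutset(G, C)
```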
We can now state the decomposition theorem we use:
**Theorem 1.2** ([4]).: _Let \(G\) be an even-hole-free graph. Then one of the following holds:_
* \(G\) _is a clique;_
* \(G\) _is a hole;_
* \(G\) _is a pyramid;_
* \(G\) _is an extended nontrivial basic graph;_
* \(G\) _has a 2-join; or_
* \(G\) _has a star cutset._
Let \(G\) be an even-hole-free graph with no sector wheel. The main idea of our proof is to start with a "precover" of \(G\), i.e. two sets \(W_{1},W_{2}\subseteq V(G)\) such that \(G[W_{1}]\) and \(G[W_{2}]\) are chordal, and extend the precover to a chordal cover of \(G\) by
finding sets \(X_{1},X_{2}\subseteq V(G)\) such that \(W_{1}\subseteq X_{1}\), \(W_{2}\subseteq X_{2}\), \(X_{1}\cup X_{2}=V(G)\), and \(G[X_{1}]\) and \(G[X_{2}]\) are chordal. We define the "precover" using flat paths in \(G\).
For a path \(P=(v_{1},\ldots,v_{k})\), we write \(N[P]\) for the set of vertices either in the path or with at least one neighbor in the path. We define the _interior_ of \(P\) to be \(int(P)=\{v_{2},\ldots,v_{k-1}\}\). For \(k\in\{1,2\}\) we set \(int(P)=\emptyset\). We say that an induced path \(P\) is _flat_ if all the vertices in its interior have degree \(2\) in \(G\). Note that every path with either \(1\) or \(2\) vertices is flat.
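The flat-path condition is likewise easy to test (a sketch; \(P\) is passed as an ordered vertex list):

```python
# A path P is flat if it is induced and every interior vertex has degree 2
# in G; paths with 1 or 2 vertices are flat by convention.
import networkx as nx

def is_flat_path(G, P):
    induced_path = nx.is_isomorphic(G.subgraph(P), nx.path_graph(len(P)))
    return induced_path and all(G.degree(v) == 2 for v in P[1:-1])
```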
We say that a graph \(G\) is _flat path extendable_ (FPE) if for every induced flat path \(P\) and every two sets \(W_{1},W_{2}\) such that \(G[W_{1}],G[W_{2}]\) are chordal, \(W_{1}\cap W_{2}=V(P)\), and \(W_{1}\cup W_{2}=N[P]\), there exist sets \(X_{1}\supseteq W_{1}\) and \(X_{2}\supseteq W_{2}\) such that \(G[X_{1}],G[X_{2}]\) are chordal, \(X_{1}\cap X_{2}=V(P)\), and \(X_{1}\cup X_{2}=V\). Under these conditions, \((P,W_{1},W_{2})\) is called a _precover_, and \((X_{1},X_{2})\) is a chordal cover of \(G\) that _extends_\((P,W_{1},W_{2})\).
If \(G\) is not FPE but every proper induced subgraph of \(G\) is FPE, then we say that \(G\) is _minimal non flat path extendable_ (MNFPE), and for a path \(P\) not satisfying the above property (i.e., there exist two sets \(W_{1},W_{2}\), such that \(G[W_{1}],G[W_{2}]\) are chordal and \(W_{1}\cap W_{2}=V(P)\) and \(W_{1}\cup W_{2}=N[P]\), but there do not exist sets \(X_{1}\supseteq W_{1}\) and \(X_{2}\supseteq W_{2}\), such that \(G[X_{1}],G[X_{2}]\) are chordal and \(X_{1}\cap X_{2}=V(P)\) and \(X_{1}\cup X_{2}=V\)) we say that \(P\) is a _witness path_ for \(G\) and that \(W_{1},W_{2}\) are the corresponding _witness sets_.
We prove the following theorem:
**Theorem 1.3**.: _Every graph with no even hole, no sector wheel, and no star cutset is FPE._
To deal with the case when \(G\) contains a star cutset, we define a closely related concept called weakly flat path extendable. A graph \(G\) is _weakly flat path extendable_ (weakly FPE) if for every path \(P\) of length zero or one, and every two sets \(W_{1},W_{2}\) such that \(G[W_{1}],G[W_{2}]\) are chordal, \(W_{1}\cap W_{2}=V(P)\), and \(W_{1}\cup W_{2}=N[P]\), there exist sets \(X_{1}\supseteq W_{1}\) and \(X_{2}\supseteq W_{2}\) such that \(G[X_{1}],G[X_{2}]\) are chordal, \(X_{1}\cap X_{2}=V(P)\), and \(X_{1}\cup X_{2}=V(G)\). Under these conditions, \((P,W_{1},W_{2})\) is a _precover_ and \((X_{1},X_{2})\) is a chordal cover of \(G\) that _extends_\((P,W_{1},W_{2})\). (The only difference between weakly flat path extendable and flat path extendable is that weakly flat path extendable only considers paths of length at most one). We note the following relationships between FPE and weakly FPE:
* If \(G\) is FPE, then \(G\) is weakly FPE.
* If \(G\) is minimal non-weakly FPE, then \(G\) is not FPE, but \(G\) is also not necessarily MNFPE.
We prove:
**Theorem 1.4**.: _Every graph with no even hole and no sector wheel is weakly FPE._
Theorem 1.3 (and the stronger definition of FPE) is needed to prove Theorem 1.4 in the case when \(G\) does not contain a proper star cutset.
Theorem 1.4 implies Theorem 1.1:
Proof of Theorem 1.1.: Let \(G\) be an even-hole-free graph with no sector wheel. Let \(v\in V(G)\). Since \(G\) has no sector wheel, it follows that \(N[v]\) is chordal. Now, \((\{v\},N[v],\{v\})\) is a precover of \(G\). By Theorem 1.4, \(G\) is weakly FPE, so \(G\) admits a chordal cover. This completes the proof.
Theorem 1.4 is not true without the assumption that the graph has no sector wheel. Indeed, consider the graph \(G\) depicted in Figure 1. Let \(P=\{x\}\) and let \(W_{1}=\{x,y_{1},y_{3},y_{5}\}\) and \(W_{2}=\{x,y_{2},y_{4},y_{6}\}\). Then there are no \(X_{1},X_{2}\) such that \(X_{1}\cup X_{2}=V(G)\), \(W_{1}\subseteq X_{1}\), \(W_{2}\subseteq X_{2}\), and \(G[X_{1}],G[X_{2}]\) are chordal. Therefore the method in this paper cannot be extended to the case of graphs containing sector wheels without some new ideas.
### Organization of the paper
In Section 2, we prove that if \(G\) is an extended nontrivial basic graph, then \(G\) is FPE. In Section 3, we prove that if \(G\) has no clique cutset and no star cutset, then \(G\) is FPE. In Section 4, we prove that no minimal non-weakly FPE graph admits a clique cutset. In Section 5, we prove that no minimal non-weakly FPE graph admits a proper star cutset. Finally, in Section 6, we prove Theorem 1.4.
## 2. Basic graphs
In this section, we prove that extended nontrivial basic graphs with no star cutsets are FPE.
A vertex \(v\) in a graph \(G\) is _nearly simplicial_ if \(N(v)\) is the union of a clique and a singleton. First we show that every nearly simplicial vertex in an MNFPE graph \(G\) is contained in the neighborhood of every witness path for \(G\).
**Lemma 2.1**.: _Let \(G\) be MNFPE and let \(P\) be a witness path for \(G\). Then all nearly simplicial vertices of \(G\) are in \(N[P]\)._
Proof.: Let \(W_{1}\) and \(W_{2}\) be the witness sets for \(P\). Suppose there exists a nearly simplicial vertex \(u\in V(G)\setminus N[P]\). Let \(G^{\prime}=G\setminus\{u\}\). Since \(G\) is MNFPE, it follows that \(G^{\prime}\) is FPE and \(N[P]\subseteq G^{\prime}\). Let \((X_{1},X_{2})\) be a chordal cover of \(G^{\prime}\) that extends \((P,W_{1},W_{2})\). Assume \(N(u)=C\cup\{u^{\prime}\}\) where \(C\) is a clique. Since \(u\not\in N[P]\), we have \(u^{\prime}\not\in V(P)=X_{1}\cap X_{2}\), so \(u^{\prime}\) lies in exactly one of \(X_{1},X_{2}\); let \(i\in\{1,2\}\) be such that \(u^{\prime}\in X_{i}\). Let \(X_{i}^{\prime}=X_{i}\) and \(X_{3-i}^{\prime}=X_{3-i}\cup\{u\}\); the neighborhood of \(u\) in \(X_{3-i}^{\prime}\) is contained in the clique \(C\), so \(G[X_{3-i}^{\prime}]\) is chordal. Now, \((X_{1}^{\prime},X_{2}^{\prime})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), contradicting that \(G\) is MNFPE.
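As an aside, the nearly-simplicial condition is straightforward to test directly: \(v\) is nearly simplicial exactly when deleting some single neighbor leaves a clique. A brute-force sketch (function names are our own):

```python
# Check whether N(v) is the union of a clique and a singleton.
import itertools

def is_nearly_simplicial(G, v):
    nbrs = set(G[v])
    def is_clique(S):
        return all(G.has_edge(a, b) for a, b in itertools.combinations(S, 2))
    return len(nbrs) <= 1 or any(is_clique(nbrs - {u}) for u in nbrs)
```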
**Lemma 2.2**.: _Let \(G=B(T)\) be an extended nontrivial basic graph for some tree \(T\). If there are two leaves of \(T\) with a common neighbor, then \(B(T)\) has a star cutset._
Proof.: Let \(t_{1}\) and \(t_{2}\) be two leaves of \(T\) with a common neighbor \(u\), and let \(\ell_{1}\) and \(\ell_{2}\) be the vertices of \(B(T)\) corresponding to the edges \(\{u,t_{1}\}\) and \(\{u,t_{2}\}\), respectively.
Figure 1. A graph with no even hole which is not FPE.
Up to symmetry between \(x_{1}\) and \(x_{2}\), assume that \(x_{1}\) is adjacent to \(\ell_{1}\). Since \(t_{1}\) and \(t_{2}\) have distance \(2\), which is even, they lie on the same side of the bipartition of \(T\), so \(x_{1}\) is also adjacent to \(\ell_{2}\), and \(\{\ell_{1},\ell_{2}\}\) is anticomplete to \(x_{2}\). Then, for \(i=1,2\), the neighborhood of \(\ell_{i}\) consists of \(x_{1}\) and the vertices of \(B(T)\) corresponding exactly to the edges incident with \(u\), the common neighbor of \(t_{1}\) and \(t_{2}\). In particular, \(N_{B(T)}[\ell_{1}]=N_{B(T)}[\ell_{2}]\).
Now, \(N_{B(T)}[\ell_{1}]\setminus\{\ell_{2}\}\) is a star cutset that separates \(\ell_{2}\) from \(B(T)\setminus N_{B(T)}[\ell_{1}]\).
Since we deal with graphs with clique cutsets and proper star cutsets separately in Sections 4 and 5, respectively, we may assume here that there are no two leaves in \(T\) with a common neighbor.
**Lemma 2.3**.: _Let \(T\) be a tree, let \(G=B(T)\) be an extended nontrivial basic graph, and assume that \(B(T)\) has no star cutset. Then \(B(T)\) is not MNFPE._
Proof.: We assume for contradiction that there exists a witness path \(P\) for \(B(T)\). If \(T\) is a path, then \(B(T)\) is a cycle and therefore clearly FPE, so \(T\) is not a path. It follows that at least one of \(x_{1},x_{2}\) has degree at least \(3\), so not both \(x_{1}\) and \(x_{2}\) are internal in \(P\).
Let \((t_{1},t_{2},\ldots,t_{k})\) be the longest path in \(T\). Since \(T\) is not a path and there are no two leaves of \(T\) with a common neighbor, we must have \(k\geq 5\) and \(d(t_{2})=d(t_{k-1})=2\). Note that if \(x_{i}\) is internal in \(P\) then \(x_{3-i}\) is in \(P\). Indeed, suppose \(x_{1}\) is internal in \(P\). Then, since a witness path is flat, \(x_{1}\) has degree \(2\) in \(B(T)\) and its two neighbors are in \(P\); as \(x_{2}\) is one of these neighbors, \(x_{2}\in P\).
Suppose \(k=5\). Then \(T\) is a subdivision of a star, with \(t_{3}\) being its center. In this case, all the edges of \(T\) are nearly simplicial in \(B(T)\), and therefore, by Lemma 2.1, all of them are in \(N[P]\), as vertices in \(B(T)\). Moreover, \(P\) must contain at least one of the vertices \(x_{1}\) and \(x_{2}\), for otherwise, there is some nearly simplicial vertex not in \(N[P]\), contradicting Lemma 2.1. Thus \(N[P]=V(B(T))\). This is impossible: since \(W_{1}\cup W_{2}=N[P]=V(B(T))\) and \(G[W_{1}],G[W_{2}]\) are chordal, \((W_{1},W_{2})\) would itself be a chordal cover of \(B(T)\) extending \((P,W_{1},W_{2})\), contradicting that \(P\) is a witness path. So from now on we assume \(k>5\).
Since no two leaves of \(T\) have a common neighbor, by Lemma 2.1, we have that \(\{t_{1},t_{2}\}\), \(\{t_{2},t_{3}\}\), \(\{t_{k-2},t_{k-1}\}\), and \(\{t_{k-1},t_{k}\}\) are all in \(N[P]\) since they are nearly simplicial. Since \(\{t_{2},t_{3}\}\in N[P]\), some edge of \(T\) that is incident with either \(t_{2}\) or \(t_{3}\) must be in \(V(P)\). Similarly, since \(\{t_{k-2},t_{k-1}\}\in N[P]\), some edge of \(T\) that is incident with either \(t_{k-2}\) or \(t_{k-1}\) must be in \(V(P)\). Since not both \(x_{1},x_{2}\) are internal in \(P\), this implies, by induction, that \(\{t_{3},t_{4}\},\ldots,\{t_{k-3},t_{k-2}\}\in V(P)\). Since \(\{t_{1},t_{2}\}\in N[P]\), we must have \(\{t_{2},t_{3}\}\in V(P)\) and similarly, since \(\{t_{k-1},t_{k}\}\in N[P]\), we must have \(\{t_{k-2},t_{k-1}\}\in V(P)\). This implies \(\{t_{3},t_{4}\},\ldots,\{t_{k-3},t_{k-2}\}\in int(P)\) and hence \(d(t_{3})=\ldots=d(t_{k-3})=2\). We conclude that \(T\) is a path, which yields a contradiction as discussed above.
## 3. Graphs with no star cutset
In this section, we prove that every (even hole, sector wheel)-free graph with no star cutset is FPE. We first focus on the case when \(G\) admits a \(2\)-join. If \(G\) admits a \(2\)-join \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\), we denote by \(B(Z_{i})\) the graph formed by adding to \(Z_{i}=A_{i}\cup C_{i}\cup B_{i}\) the marker path \(M_{3-i}\). We call \(B(Z_{1})\) and \(B(Z_{2})\) the _blocks of decomposition_ of the \(2\)-join \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\). We need the following theorem from [4]:
**Theorem 3.1** ([4], Theorem 2.10).: _If \(G\) is even-hole-free and has no star cutset, then \(B(Z_{1})\) and \(B(Z_{2})\) have no star cutset._
The next lemma states how paths and holes interact with the structure of a 2-join.
**Lemma 3.2**.: _Let \(G\) be a graph with no even hole and let \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\) be a 2-join of \(G\). Let \(Q\) be a path or a hole of \(G\). If \(Q\cap A_{1}\neq\emptyset\) and \(Q\cap A_{2}\neq\emptyset\), then \(|Q\cap(A_{1}\cup A_{2})|\leq 3\). Similarly, if \(Q\cap B_{1}\neq\emptyset\) and \(Q\cap B_{2}\neq\emptyset\), then \(|Q\cap(B_{1}\cup B_{2})|\leq 3\)._
Proof.: Suppose first that \(|Q\cap A_{1}|\geq 2\) and \(|Q\cap A_{2}|\geq 2\). Then, \(Q\cap(A_{1}\cup A_{2})\) contains \(C_{4}\) as a subgraph, so \(Q\) is not a path. Then, either \(Q\cap(A_{1}\cup A_{2})\) is an induced \(C_{4}\), contradicting the fact that \(G\) has no even holes, or \(Q\cap(A_{1}\cup A_{2})\) contains \(K_{4}\) minus an edge as a subgraph, contradicting the fact that \(Q\) is a path or a hole. Therefore, up to symmetry we may assume that \(|Q\cap A_{1}|=1\). Suppose \(|Q\cap A_{2}|\geq 3\). Then, \(Q\cap(A_{1}\cup A_{2})\) contains \(K_{1,3}\) as a subgraph, contradicting the fact that \(Q\) is a path or a hole.
Lemma 3.2 has the following useful corollary.
**Lemma 3.3**.: _Let \(G\) be a graph with no even hole and let \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\) be a 2-join of \(G\). Let \(P\) be a flat path of \(G\). If \(P\cap A_{1}\neq\emptyset\) and \(P\cap A_{2}\neq\emptyset\), then \(P\cap(A_{1}\cup A_{2})\) is an edge of \(P\). Similarly, if \(P\cap B_{1}\neq\emptyset\) and \(P\cap B_{2}\neq\emptyset\), then \(P\cap(B_{1}\cup B_{2})\) is an edge of \(P\)._
Proof.: Assume that \(P\cap A_{1}\neq\emptyset\) and \(P\cap A_{2}\neq\emptyset\). By Lemma 3.2, \(|P\cap(A_{1}\cup A_{2})|\leq 3\). Suppose that \(|P\cap A_{1}|=2\). Since \(P\) is a flat path and \(G\) is \(C_{4}\)-free, it follows that \(|A_{2}|=1\). Let \(\{a_{2}\}=A_{2}\), and note that \(a_{2}\in P\). Since \(a_{2}\) has two neighbors in \(P\), namely \(P\cap A_{1}\), it follows that \(a_{2}\) is an interior vertex of \(P\), so by the definition of flat path, \(a_{2}\) has degree two in \(G\). But by the definition of 2-join, there is a path with ends in \(A_{2}\) and \(B_{2}\) and interior in \(C_{2}\), so \(a_{2}\) has a neighbor in \(B_{2}\cup C_{2}\), a contradiction. This completes the proof.
Next, we prove that if a graph \(G\) admits a 2-join, then chordal covers of the blocks of decompositions of the 2-join can be combined into a chordal cover of \(G\).
**Lemma 3.4**.: _Let \(G\) be a graph with no even hole. Assume that \(G\) admits a 2-join \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\), where \(Z_{i}=A_{i}\cup C_{i}\cup B_{i}\) for \(i=1,2\). Let \(G_{1}=B(Z_{1})\) and \(G_{2}=B(Z_{2})\), and let \((X^{\prime}_{1},X^{\prime}_{2})\) and \((X^{\prime\prime}_{1},X^{\prime\prime}_{2})\) be chordal covers of \(G_{1}\) and \(G_{2}\), respectively. Further assume that:_
* \(M_{2}\subseteq X^{\prime}_{1}\cap X^{\prime}_{2}\)_, and_
* \(\{a_{1},b_{1}\}\cap X^{\prime}_{1}\subseteq X^{\prime\prime}_{1}\) _and_ \(\{a_{1},b_{1}\}\cap X^{\prime}_{2}\subseteq X^{\prime\prime}_{2}\)_, where_ \(a_{1}\) _and_ \(b_{1}\) _are the ends of_ \(M_{1}\) _in_ \(A_{1}\) _and_ \(B_{1}\)_, respectively._
_Let \(X_{1}=(X^{\prime}_{1}\cap Z_{1})\cup(X^{\prime\prime}_{1}\cap Z_{2})\) and let \(X_{2}=(X^{\prime}_{2}\cap Z_{1})\cup(X^{\prime\prime}_{2}\cap Z_{2})\). Then, \((X_{1},X_{2})\) is a chordal cover of \(G\)._
Proof.: Since \(Z_{1}\subseteq G_{1}\), it holds that \(Z_{1}\subseteq X^{\prime}_{1}\cup X^{\prime}_{2}\). Similarly, \(Z_{2}\subseteq X^{\prime\prime}_{1}\cup X^{\prime\prime}_{2}\). Therefore, \(Z_{1}\cup Z_{2}\subseteq X_{1}\cup X_{2}\), and so \(X_{1}\cup X_{2}=V(G)\). We show that \(X_{i}\) is chordal.
Suppose \(H\) is a hole in \(X_{1}\). Since \(X^{\prime}_{1}\) and \(X^{\prime\prime}_{1}\) are chordal, it follows that \(H\not\subseteq Z_{1}\) and \(H\not\subseteq Z_{2}\); indeed, if say \(H\subseteq Z_{1}\), then \(H\subseteq X_{1}\cap Z_{1}=X^{\prime}_{1}\cap Z_{1}\subseteq X^{\prime}_{1}\), contradicting the chordality of \(X^{\prime}_{1}\). This implies further that \(H\cap Z_{1}\neq\emptyset\) and \(H\cap Z_{2}\neq\emptyset\). Therefore, \(H\) contains an edge with one end in \(Z_{1}\) and one end in \(Z_{2}\). By Lemma 3.2, one of the following holds:
1. \(H\cap Z_{1}\) is independent and consists of at most one vertex of \(A_{1}\) and at most one vertex of \(B_{1}\), or
2. \(H\cap Z_{2}\) is independent and consists of at most one vertex of \(A_{2}\) and at most one vertex of \(B_{2}\), or
3. \(H\cap Z_{1}\) is a path with ends in \(A_{1}\) and \(B_{1}\) and (possibly empty) interior in \(C_{1}\) and \(H\cap Z_{2}\) is a path with ends in \(A_{2}\) and \(B_{2}\) and (possibly empty) interior in \(C_{2}\).
First, suppose (1) holds. We claim that if \(H\cap A_{1}\neq\emptyset\), then \(|A_{1}|=1\). Suppose for a contradiction that \(H\cap A_{1}\neq\emptyset\) and \(|A_{1}|>1\). Note that \(|H\cap A_{2}|>1\) for otherwise \(H\) is not a hole. Let \(a^{\prime}\) be the vertex of \(H\cap A_{1}\), and let \(a^{\prime\prime}\in A_{1}\setminus\{a^{\prime}\}\). Since \(\{a^{\prime},a^{\prime\prime}\}\cup(N_{H}(a^{\prime}))\) is not a \(C_{4}\), it follows that \(a^{\prime}a^{\prime\prime}\in E(G)\). But now \((H,a^{\prime\prime})\) is a twin wheel of \(G\), contradicting that \(G\) has no sector wheel. This proves that \(|A_{1}|=1\), and so \(A_{1}=\{a_{1}\}\). Similarly, if \(H\cap B_{1}\neq\emptyset\), then \(B_{1}=\{b_{1}\}\). Therefore, \(H\subseteq G_{2}\). Since \(H\subseteq X_{1}\), it follows that \(H\cap Z_{1}\subseteq X_{1}^{\prime}\), and by the second assumption of the lemma, \(H\cap Z_{1}\subseteq X_{1}^{\prime\prime}\). It follows that \(H\subseteq X_{1}^{\prime\prime}\), contradicting the chordality of \(X_{1}^{\prime\prime}\). Therefore, (1) does not hold.
Next, suppose (2) holds. Let \(H^{\prime}\) be the hole of \(G_{1}\) formed by replacing the vertex of \(H\cap A_{2}\), if it exists, with \(a_{2}\), and replacing the vertex of \(H\cap B_{2}\), if it exists, with \(b_{2}\). Since \(H\subseteq X_{1}\), it follows that \(H\cap Z_{1}\subseteq X_{1}^{\prime}\), and by the first assumption of the lemma, \(H^{\prime}\subseteq X_{1}^{\prime}\). Now, \(H^{\prime}\) is a hole of \(X_{1}^{\prime}\), contradicting the chordality of \(X_{1}^{\prime}\). Therefore, (2) does not hold.
Since (1) and (2) do not hold, it follows that (3) holds. Let \(Q_{1}=H\cap Z_{1}\) and \(Q_{2}=H\cap Z_{2}\), so \(Q_{1}\subseteq X_{1}^{\prime}\). Now, \(Q_{1}\cup M_{2}\) is a hole of \(G_{1}\) and, by the first assumption of the lemma, \(Q_{1}\cup M_{2}\subseteq X_{1}^{\prime}\), contradicting the chordality of \(X_{1}^{\prime}\). This completes the proof.
By symmetry, the following lemma is also true:
**Lemma 3.5**.: _Let \(G\) be a graph with no even hole. Assume that \(G\) admits a 2-join \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\), where \(Z_{i}=A_{i}\cup C_{i}\cup B_{i}\) for \(i=1,2\). Let \(G_{1}=B(Z_{1})\) and \(G_{2}=B(Z_{2})\), and let \((X_{1}^{\prime},X_{2}^{\prime})\) and \((X_{1}^{\prime\prime},X_{2}^{\prime\prime})\) be chordal covers of \(G_{1}\) and \(G_{2}\), respectively. Further assume that:_
* \(M_{1}\subseteq X_{1}^{\prime\prime}\cap X_{2}^{\prime\prime}\) _and_
* \(\{a_{2},b_{2}\}\cap X_{1}^{\prime\prime}\subseteq X_{1}^{\prime}\) _and_ \(\{a_{2},b_{2}\}\cap X_{2}^{\prime\prime}\subseteq X_{2}^{\prime}\)_, where_ \(a_{2}\) _and_ \(b_{2}\) _are the ends of_ \(M_{2}\) _in_ \(A_{2}\) _and_ \(B_{2}\)_, respectively._
_Let \(X_{1}=(X_{1}^{\prime}\cap Z_{1})\cup(X_{1}^{\prime\prime}\cap Z_{2})\) and let \(X_{2}=(X_{2}^{\prime}\cap Z_{1})\cup(X_{2}^{\prime\prime}\cap Z_{2})\). Then, \((X_{1},X_{2})\) is a chordal cover of \(G\)._
Next, we prove that a partial precover can be extended to a full precover.
**Lemma 3.6**.: _Let \(G\) be a graph with no even hole, no star cutset, and no sector wheel. Let \(P=p_{1}\)-\(\ldots\)-\(p_{k}\) be a flat path of \(G\). Let \((W_{1},W_{2})\) be such that \(W_{1}\cap W_{2}=V(P)\), \(W_{1}\cup W_{2}\subseteq N[P]\), and \(G[W_{1}]\) and \(G[W_{2}]\) are chordal. Also assume that \(N(p_{1})\cap N(p_{k})\subseteq W_{1}\cup W_{2}\). Then, there exists \(W_{1}^{\prime}\), \(W_{2}^{\prime}\) with \(W_{1}\subseteq W_{1}^{\prime}\), \(W_{2}\subseteq W_{2}^{\prime}\), such that \(W_{1}^{\prime}\cap W_{2}^{\prime}=V(P)\), \(W_{1}^{\prime}\cup W_{2}^{\prime}=N[P]\), and \(G[W_{1}^{\prime}]\) and \(G[W_{2}^{\prime}]\) are chordal._
Proof.: We construct \(W_{1}^{\prime}\) and \(W_{2}^{\prime}\) as follows. We begin by adding every vertex of \(W_{i}\) to \(W_{i}^{\prime}\) for \(i=1,2\). Then, as long as \(N[P]\setminus(W_{1}^{\prime}\cup W_{2}^{\prime})\) is not empty, we choose \(v\in N[P]\setminus(W_{1}^{\prime}\cup W_{2}^{\prime})\). Note that since \(P\) is a flat path and by the assumptions of the lemma, \(v\in(N(p_{1})\cup N(p_{k}))\setminus(N(p_{1})\cap N(p_{k}))\).
First, suppose that \(P\) has length greater than one. Assume \(v\in N(p_{i})\setminus N(p_{k+1-i})\) for \(i\in\{1,k\}\). We claim that \(v\) has at most one neighbor in \(N(p_{k+1-i})\). Indeed, suppose \(v\) has two neighbors \(x_{1},x_{2}\in N(p_{k+1-i})\). Since \(\{p_{k+1-i},x_{1},x_{2},v\}\) does not induce a \(C_{4}\), it follows that \(x_{1}\) and \(x_{2}\) are adjacent. If \(p_{i}\) is adjacent to both \(x_{1}\) and \(x_{2}\), then \((P\cup\{x_{1}\},x_{2})\) is a twin wheel, contradicting that \(G\) has no sector wheel. Therefore, we may assume that \(p_{i}\) is non-adjacent to \(x_{1}\). But now \(P\cup\{x_{1},v\}\) is a hole and \(N(x_{2})\cap(P\cup\{x_{1},v\})\) is a path of length two, contradicting that \(G\) has no sector wheel.
We now follow the process below, which is well defined since \(v\) has at most one neighbor in \(N(p_{k+1-i})\):
* If \(v\) is anticomplete to \(N(p_{k+1-i})\), then add \(v\) to \(W^{\prime}_{1}\).
* If \(v\) has a neighbor in \(N(p_{k+1-i})\cap W^{\prime}_{1}\), then add \(v\) to \(W^{\prime}_{2}\).
* If \(v\) has a neighbor in \(N(p_{k+1-i})\cap W^{\prime}_{2}\), then add \(v\) to \(W^{\prime}_{1}\).
By the above, the sets \(W^{\prime}_{1}\), \(W^{\prime}_{2}\) formed in this way are unique: every vertex \(v\in N(p_{i})\setminus N(p_{k+1-i})\) is assigned to exactly one of \(W^{\prime}_{1},W^{\prime}_{2}\) as above.
Now, suppose that \(P\) has length one. Assume \(v\in N(p_{i})\setminus N(p_{k+1-i})\) for \(i\in\{1,k\}\). We claim that \(v\) has at most one neighbor in \(N(p_{k+1-i})\setminus N(p_{i})\). Indeed, suppose \(v\) has two neighbors \(x_{1},x_{2}\in N(p_{k+1-i})\setminus N(p_{i})\). Since \(\{p_{k+1-i},x_{1},x_{2},v\}\) does not induce a \(C_{4}\), it follows that \(x_{1}\) and \(x_{2}\) are adjacent. But now \(P\cup\{x_{1},v\}\) is a hole and \(N(x_{2})\cap(P\cup\{x_{1},v\})\) is a path of length two, contradicting that \(G\) has no sector wheel.
We now follow the process below:
* If \(v\) is anticomplete to \(N(p_{k+1-i})\setminus N(p_{i})\), then add \(v\) to \(W^{\prime}_{1}\).
* If \(v\) has a neighbor in \((N(p_{k+1-i})\setminus N(p_{i}))\cap W^{\prime}_{1}\), then add \(v\) to \(W^{\prime}_{2}\).
* If \(v\) has a neighbor in \((N(p_{k+1-i})\setminus N(p_{i}))\cap W^{\prime}_{2}\), then add \(v\) to \(W^{\prime}_{1}\).
Again, every vertex \(v\in N(p_{i})\setminus N(p_{k+1-i})\) is assigned to exactly one of \(W^{\prime}_{1}\), \(W^{\prime}_{2}\) by the argument above.
Next, we prove that \(W^{\prime}_{1}\) and \(W^{\prime}_{2}\) satisfy the conditions of the lemma. By the construction of \(W^{\prime}_{1}\) and \(W^{\prime}_{2}\), we have that \(W^{\prime}_{1}\cap W^{\prime}_{2}=V(P)\) and that \(W^{\prime}_{1}\cup W^{\prime}_{2}=N[P]\). It remains to show that \(G[W^{\prime}_{1}]\) and \(G[W^{\prime}_{2}]\) are chordal. Suppose that \(G[W^{\prime}_{1}]\) contains a hole \(H\). Suppose first that \(P\subseteq H\); so \(H\setminus P\) is either an edge with one end in \(N(p_{1})\) and one end in \(N(p_{k})\) or a vertex in \(N(p_{1})\cap N(p_{k})\). Since, by the construction of \(W^{\prime}_{1}\) and \(W^{\prime}_{2}\), no edge with one end in \(N(p_{1})\setminus N(p_{k})\) and one end in \(N(p_{k})\setminus N(p_{1})\) has both ends in \(W^{\prime}_{1}\), it follows that \(H\setminus P\) is a vertex in \(N(p_{1})\cap N(p_{k})\). But then \(H\subseteq W_{1}\), contradicting that \(G[W_{1}]\) is chordal. Therefore, \(P\not\subseteq H\), and thus \(H\subseteq N[p_{i}]\) for some \(i\in\{1,k\}\). Since every vertex of a hole has exactly two neighbors in it, \(p_{i}\not\in H\), so \(H\subseteq N(p_{i})\). But now \((H,p_{i})\) is a universal wheel, a contradiction. This proves that \(G[W^{\prime}_{1}]\) is chordal. The proof that \(G[W^{\prime}_{2}]\) is chordal follows similarly.
Next, we prove:
**Lemma 3.7**.: _Let \(G\) be a graph with no even hole, no sector wheel, and no star cutset. Suppose that \(G\) is non-FPE and that every proper induced subgraph of \(G\) with no star cutset is FPE. Then, \(G\) does not admit a 2-join._
Proof.: Suppose for a contradiction that \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\) is a 2-join of \(G\). Let \(P=p_{1}\)-\(\ldots\)-\(p_{k}\) be a witness path for \(G\) with witness sets \(W_{1}\), \(W_{2}\). Assume up to symmetry that \(p_{1}\in Z_{1}=A_{1}\cup C_{1}\cup B_{1}\). Let \(G_{1}=B(Z_{1})\) and let \(G_{2}=B(Z_{2})\). By Theorem 3.1, \(G_{1}\) and \(G_{2}\) have no star cutset. Since every proper induced subgraph
of \(G\) with no star cutset is FPE, and \(G_{1}\) and \(G_{2}\) are proper induced subgraphs of \(G\) with no star cutsets, it follows that \(G_{1}\) and \(G_{2}\) are FPE. Our strategy to complete the proof is to find appropriate chordal covers of \(G_{1}\) and \(G_{2}\), and use Lemmas 3.4 and 3.5 to obtain a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), reaching a contradiction.
First we show:
(1) \(p_{k}\not\in Z_{2}\).
Suppose that \(p_{k}\in Z_{2}=A_{2}\cup C_{2}\cup B_{2}\). By Lemma 3.3, \(P\) contains exactly one edge with one end in \(Z_{1}\) and one end in \(Z_{2}\). Assume up to symmetry between \(A_{1}\) and \(B_{1}\) that \(1\leq i\leq k\) is such that \(\{p_{1},\ldots,p_{i}\}\subseteq Z_{1}\), \(\{p_{i+1},\ldots,p_{k}\}\subseteq Z_{2}\), \(p_{i}\in A_{1}\), and \(p_{i+1}\in A_{2}\). Since \(B_{1}\) is complete to \(B_{2}\), not both \(P\cap B_{1}\neq\emptyset\) and \(P\cap B_{2}\neq\emptyset\), and since \(p_{1}\in Z_{1}\) and \(p_{k}\in Z_{2}\), we may assume up to symmetry between \(B_{1}\) and \(B_{2}\) that \(P\cap B_{1}=\emptyset\).
Let \(P_{1}=(P\cap Z_{1})\cup M_{2}\). Let \(W^{\prime\prime}_{1}=N(p_{1})\cap W_{1}\cap G_{1}\) and \(W^{\prime\prime}_{2}=N(p_{1})\cap W_{2}\cap G_{1}\). By Lemma 3.6, there exist sets \(W^{\prime}_{1}\), \(W^{\prime}_{2}\) such that \(W^{\prime}_{1}\cap W^{\prime}_{2}=V(P_{1})\), \(W^{\prime}_{1}\cup W^{\prime}_{2}=N[P_{1}]\), and \(G_{1}[W^{\prime}_{1}]\) and \(G_{1}[W^{\prime}_{2}]\) are chordal. Let \((X^{\prime}_{1},X^{\prime}_{2})\) be a chordal cover of \(G_{1}\) that extends \((P_{1},W^{\prime}_{1},W^{\prime}_{2})\). Next, we find a chordal cover of \(G_{2}\). First, assume that \(P\cap(B_{1}\cup B_{2})=\emptyset\). Let \(P_{2}=(P\cap Z_{2})\cup M_{1}\). Let \(Y^{\prime\prime}_{1}=N(p_{k})\cap W_{1}\cap G_{2}\) and \(Y^{\prime\prime}_{2}=N(p_{k})\cap W_{2}\cap G_{2}\). By Lemma 3.6, there exist sets \(Y^{\prime}_{1},Y^{\prime}_{2}\) such that \(Y^{\prime}_{1}\cap Y^{\prime}_{2}=V(P_{2})\), \(Y^{\prime}_{1}\cup Y^{\prime}_{2}=N[P_{2}]\), and \(G_{2}[Y^{\prime}_{1}]\) and \(G_{2}[Y^{\prime}_{2}]\) are chordal. Let \((X^{\prime\prime}_{1},X^{\prime\prime}_{2})\) be a chordal cover of \(G_{2}\) that extends \((P_{2},Y^{\prime}_{1},Y^{\prime}_{2})\). Let \(X_{1}=(Z_{1}\cap X^{\prime}_{1})\cup(Z_{2}\cap X^{\prime\prime}_{1})\) and let \(X_{2}=(Z_{1}\cap X^{\prime}_{2})\cup(Z_{2}\cap X^{\prime\prime}_{2})\). Since \(M_{2}\subseteq P_{1}\) and \(M_{1}\subseteq P_{2}\), it follows that the conditions of Lemma 3.4 are satisfied. By Lemma 3.4, \((X_{1},X_{2})\) is a chordal cover of \(G\). By the construction of \(X_{1}\) and \(X_{2}\), it follows that \(X_{1}\cap X_{2}=V(P)\), \(W_{1}\subseteq X_{1}\), and \(W_{2}\subseteq X_{2}\). Now, \((X_{1},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), contradicting that \(G\) is non-FPE.
Therefore, \(P\cap B_{2}\neq\emptyset\). Let \(a_{1}\) and \(b_{1}\) be the ends of \(M_{1}\). Let \(P^{\prime}_{2}=(P\cap Z_{2})\). Let \(U^{\prime\prime}_{1}=(N(p_{k})\cap W_{1}\cap G_{2})\cup(\{a_{1},b_{1}\}\cap X ^{\prime}_{1})\) and let \(U^{\prime\prime}_{2}=(N(p_{k})\cap W_{2}\cap G_{2})\cup(\{a_{1},b_{1}\}\cap X ^{\prime}_{2})\). By Lemma 3.6, there exist sets \(U^{\prime}_{1},U^{\prime}_{2}\) such that \(U^{\prime}_{1}\cap U^{\prime}_{2}=V(P^{\prime}_{2})\), \(U^{\prime}_{1}\cup U^{\prime}_{2}=N[P^{\prime}_{2}]\), and \(G_{2}[U^{\prime}_{1}]\) and \(G_{2}[U^{\prime}_{2}]\) are chordal. Let \((X^{\prime\prime}_{1},X^{\prime\prime}_{2})\) be a chordal cover of \(G_{2}\) that extends \((P^{\prime}_{2},U^{\prime}_{1},U^{\prime}_{2})\). Let \(X_{1}=(X^{\prime}_{1}\cap Z_{1})\cup(X^{\prime\prime}_{1}\cap Z_{2})\) and let \(X_{2}=(X^{\prime}_{2}\cap Z_{1})\cup(X^{\prime\prime}_{2}\cap Z_{2})\). Since \(M_{2}\subseteq P_{1}\), and by construction of \(U^{\prime\prime}_{1}\) and \(U^{\prime\prime}_{2}\), it follows that the conditions of Lemma 3.4 are satisfied. Now, by Lemma 3.4, \((X_{1},X_{2})\) is a chordal cover of \(G\). By the construction of \(X_{1}\) and \(X_{2}\), it follows that \(X_{1}\cap X_{2}=V(P),W_{1}\subseteq X_{1}\), and \(W_{2}\subseteq X_{2}\). Now, \((X_{1},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), contradicting that \(G\) is non-FPE. This proves (1).
Next we show:
(2) \(P\subseteq Z_{1}\).
By (1), \(\{p_{1},p_{k}\}\subseteq Z_{1}\). Suppose \(P\not\subseteq Z_{1}\). By Lemma 3.3, it follows that \(P\cap Z_{2}\) is a path with ends in \(A_{2}\) and \(B_{2}\) and interior in \(C_{2}\). Let \(P_{1}=(P\cap Z_{1})\cup M_{2}\) and let \(P_{2}=M_{1}\). Let \(W^{\prime\prime}_{1}=N(P_{1})\cap W_{1}\cap G_{1}\) and \(W^{\prime\prime}_{2}=N(P_{1})\cap W_{2}\cap G_{1}\). By Lemma 3.6, there exist \(W^{\prime}_{1}\), \(W^{\prime}_{2}\) such that \(W^{\prime}_{1}\cap W^{\prime}_{2}=V(P_{1})\), \(W^{\prime}_{1}\cup W^{\prime}_{2}=N[P_{1}]\), and \(G_{1}[W^{\prime}_{1}]\) and \(G_{1}[W^{\prime}_{2}]\) are chordal. Let \((X^{\prime}_{1},X^{\prime}_{2})\) be a chordal cover of \(G_{1}\) that extends \((P_{1},W^{\prime}_{1},W^{\prime}_{2})\). Next, let \(U^{\prime\prime}_{1}=N(P_{2})\cap W_{1}\cap G_{2}\) and \(U^{\prime\prime}_{2}=N(P_{2})\cap W_{2}\cap G_{2}\).
By Lemma 3.6, there exist \(U_{1}^{\prime}\), \(U_{2}^{\prime}\) such that \(U_{1}^{\prime}\cap U_{2}^{\prime}=V(P_{2})\), \(U_{1}^{\prime}\cup U_{2}^{\prime}=N[P_{2}]\), and \(G_{2}[U_{1}^{\prime}]\) and \(G_{2}[U_{2}^{\prime}]\) are chordal. Let \((X_{1}^{\prime\prime},X_{2}^{\prime\prime})\) be a chordal cover of \(G_{2}\) that extends \((P_{2},U_{1}^{\prime},U_{2}^{\prime})\).
Now, let \(X_{1}=(Z_{1}\cap X_{1}^{\prime})\cup(Z_{2}\cap X_{1}^{\prime\prime})\) and \(X_{2}=(Z_{1}\cap X_{2}^{\prime})\cup(Z_{2}\cap X_{2}^{\prime\prime})\). Since \(M_{2}\subseteq P_{1}\) and \(M_{1}\subseteq P_{2}\), it follows that the conditions of Lemma 3.4 are satisfied. By Lemma 3.4, \((X_{1},X_{2})\) is a chordal cover of \(G\). By the construction of \(X_{1}\) and \(X_{2}\), it follows that \(X_{1}\cap X_{2}=V(P)\), \(W_{1}\subseteq X_{1}\), and \(W_{2}\subseteq X_{2}\). Therefore, \((X_{1},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), contradicting that \(G\) is non-FPE. This proves (2).
Next we show:
(3) \(P\subseteq C_{1}\).
By (2), \(P\subseteq Z_{1}=A_{1}\cup C_{1}\cup B_{1}\). Let \(P_{2}=M_{1}\), let \(W_{1}^{\prime\prime}=N(P_{2})\cap W_{1}\cap G_{2}\), and let \(W_{2}^{\prime\prime}=N(P_{2})\cap W_{2}\cap G_{2}\). By Lemma 3.6, there exist \(W_{1}^{\prime},W_{2}^{\prime}\) such that \(W_{1}^{\prime}\cap W_{2}^{\prime}=V(P_{2})\), \(W_{1}^{\prime}\cup W_{2}^{\prime}=N[P_{2}]\), and \(G_{2}[W_{1}^{\prime}]\) and \(G_{2}[W_{2}^{\prime}]\) are chordal. Let \((X_{1}^{\prime\prime},X_{2}^{\prime\prime})\) be a chordal cover of \(G_{2}\) that extends \((P_{2},W_{1}^{\prime},W_{2}^{\prime})\).
Suppose for a contradiction that \(P\not\subseteq C_{1}\). Then either \(P\cap A_{1}\neq\emptyset\) or \(P\cap B_{1}\neq\emptyset\); by symmetry, assume that \(P\cap A_{1}\neq\emptyset\). If \(P\cap B_{1}=\emptyset\), let \(P_{1}=P\cup M_{2}\). If \(P\cap B_{1}\neq\emptyset\), let \(P_{1}=P\). Let \(U_{1}^{\prime\prime}=N(P_{1})\cap W_{1}\cap G_{1}\) and \(U_{2}^{\prime\prime}=N(P_{1})\cap W_{2}\cap G_{1}\). Note that in both cases, the second condition of Lemma 3.5 holds (in the first case, because \(M_{2}\subseteq P_{1}\) and so \(\{a_{2},b_{2}\}\subseteq X_{1}^{\prime\prime}\cap X_{2}^{\prime\prime}\), and in the second case, because \(\{a_{2},b_{2}\}\subseteq N(P_{1})\subseteq W_{1}\cup W_{2}\)). By Lemma 3.6, there exist \(U_{1}^{\prime},U_{2}^{\prime}\) such that \(U_{1}^{\prime}\cap U_{2}^{\prime}=V(P_{1})\), \(U_{1}^{\prime}\cup U_{2}^{\prime}=N[P_{1}]\), and \(G_{1}[U_{1}^{\prime}]\) and \(G_{1}[U_{2}^{\prime}]\) are chordal. Let \((X_{1}^{\prime},X_{2}^{\prime})\) be a chordal cover of \(G_{1}\) that extends \((P_{1},U_{1}^{\prime},U_{2}^{\prime})\).
Let \(X_{1}=(Z_{1}\cap X_{1}^{\prime})\cup(Z_{2}\cap X_{1}^{\prime\prime})\) and \(X_{2}=(Z_{1}\cap X_{2}^{\prime})\cup(Z_{2}\cap X_{2}^{\prime\prime})\). By Lemma 3.5, \((X_{1},X_{2})\) is a chordal cover of \(G\). By the construction of \(X_{1}\) and \(X_{2}\), it follows that \(X_{1}\cap X_{2}=V(P)\), \(W_{1}\subseteq X_{1}\), and \(W_{2}\subseteq X_{2}\). Now, \((X_{1},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), contradicting that \(G\) is non-FPE. This proves (3).
By (3), \(P\subseteq C_{1}\). Therefore, \(W_{1},W_{2}\subseteq Z_{1}\). Let \((X_{1}^{\prime},X_{2}^{\prime})\) be a chordal cover of \(G_{1}\) that extends \((P,W_{1},W_{2})\). Let \(P_{2}=M_{1}\), let \(W_{1}^{\prime\prime}=(\{a_{2},b_{2}\}\cap X_{1}^{\prime})\cup M_{1}\), and let \(W_{2}^{\prime\prime}=(\{a_{2},b_{2}\}\cap X_{2}^{\prime})\cup M_{1}\). Note that since \(X_{1}^{\prime}\) and \(X_{2}^{\prime}\) are chordal, it follows that \(G_{2}[W_{1}^{\prime\prime}]\) and \(G_{2}[W_{2}^{\prime\prime}]\) are chordal. By Lemma 3.6, there exist sets \(W_{1}^{\prime},W_{2}^{\prime}\) such that \(W_{1}^{\prime}\cap W_{2}^{\prime}=V(P_{2})\), \(W_{1}^{\prime}\cup W_{2}^{\prime}=N[P_{2}]\), and \(G_{2}[W_{1}^{\prime}]\) and \(G_{2}[W_{2}^{\prime}]\) are chordal. Let \((X_{1}^{\prime\prime},X_{2}^{\prime\prime})\) be a chordal cover of \(G_{2}\) that extends \((P_{2},W_{1}^{\prime},W_{2}^{\prime})\). Let \(X_{1}=(Z_{1}\cap X_{1}^{\prime})\cup(Z_{2}\cap X_{1}^{\prime\prime})\) and \(X_{2}=(Z_{1}\cap X_{2}^{\prime})\cup(Z_{2}\cap X_{2}^{\prime\prime})\). The conditions of Lemma 3.5 are satisfied by the construction of \(P_{2}\), \(W_{1}^{\prime\prime}\), and \(W_{2}^{\prime\prime}\), so by Lemma 3.5, \((X_{1},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), contradicting that \(G\) is non-FPE. This completes the proof.
Finally, we prove the main result of this section:
**Theorem 3.8**.: _Let \(G\) be an even-hole-free graph with no sector wheel and no star cutset. If every proper induced subgraph of \(G\) with no star cutset is FPE, then \(G\) is FPE._
Proof.: We apply Theorem 1.2 to \(G\). If \(G\) is a clique, then \(G\) is chordal, so every precover of \(G\) can be arbitrarily extended to a chordal cover of \(G\) and thus \(G\) is FPE. If \(G\) is a hole, then every precover of \(G\) can be extended to a chordal cover
\((X_{1},X_{2})\) of \(G\) by ensuring that both \(X_{1}\setminus X_{2}\) and \(X_{2}\setminus X_{1}\) are non-empty, so \(G\) is FPE. Suppose \(G\) is a pyramid with base \(b_{1}b_{2}b_{3}\), apex \(a\), and paths \(P_{1},P_{2},P_{3}\). Let \(P\) be a witness path of \(G\). Up to symmetry, we may assume that \(P\) is contained in \(P_{1}\). Then, every precover of \(G\) with witness path \(P\) can be extended to a chordal cover \((X_{1},X_{2})\) by ensuring that both \(P_{2}\) and \(P_{3}\) meet both \(X_{2}\setminus X_{1}\) and \(X_{1}\setminus X_{2}\). It follows that \(G\) is FPE.
Therefore, we may assume that either \(G\) is an extended nontrivial basic graph or \(G\) admits a \(2\)-join. By Lemma 3.7, \(G\) does not admit a \(2\)-join. If \(G\) is an extended nontrivial basic graph, then \(G\) is FPE by Lemma 2.3. This completes the proof.
## 4. Graphs with a clique cutset
In this section, we prove that even-hole-free graphs with no sector wheels that have a clique cutset are not minimal non-weakly FPE.
**Lemma 4.1**.: _Let \(G\) be a graph. Suppose \(G\) has a clique cutset \(Q\), and let \(C\) be a component of \(G-Q\). Let \(G^{\prime}=G[C\cup Q]\) and \(G^{\prime\prime}=G-C\). If \(P\) is a flat path in \(G\), then \(P\cap G^{\prime}\) and \(P\cap G^{\prime\prime}\) are paths or empty._
Proof.: If not, then \(P\) contains two vertices of \(Q\) that are non-consecutive on \(P\); since \(P\) is induced, these two vertices are non-adjacent, contradicting that \(Q\) is a clique.
**Lemma 4.2**.: _Let \(G\) be a minimal non-weakly FPE graph. Then, \(G\) does not have a clique cutset._
Proof.: Suppose for a contradiction that \(G\) has a clique cutset \(Q\). Let \(P\) be a witness path for \(G\), and let \(W_{1},W_{2}\) be the corresponding witness sets. First, we prove:
(4) _Let \(Q\) be a clique cutset, let \(C\) be a component of \(G-Q\), let \(G^{\prime}=G[C\cup Q]\), and let \(G^{\prime\prime}=G-C\). Suppose \(X\subseteq G^{\prime}\) is chordal and \(Y\subseteq G^{\prime\prime}\) is chordal. Then, \(G[X\cup Y]\) is chordal._
Suppose there is an induced cycle \(T\) in \(G[X\cup Y]\). Then, \(|T\cap Q|\geq 2\), otherwise \(T\subseteq X\) or \(T\subseteq Y\). Since \(Q\) is a clique, it follows that \(|T\cap Q|=2\), and since \(T\) is a cycle, it follows that \(T\setminus Q\subseteq X\) or \(T\setminus Q\subseteq Y\). But \(T\cap Q\subseteq X\cap Y\), so \(T\subseteq X\) or \(T\subseteq Y\), contradicting that \(X\) and \(Y\) are chordal. This proves (4).
First, suppose there exists a component \(C\) of \(G-Q\) such that \((W_{1}\cup W_{2})\cap C=\emptyset\). Let \(G^{\prime}=G[C\cup Q]\) and \(G^{\prime\prime}=G-C\). Then, \(W_{1}\cup W_{2}\subseteq V(G^{\prime\prime})\), and \(P\cap G^{\prime\prime}\) is a flat path. Since \(G^{\prime\prime}\) is a proper induced subgraph of \(G\) and \(G\) is minimal non-weakly FPE, \(G^{\prime\prime}\) is weakly FPE, so the precover \((P,W_{1},W_{2})\) can be extended to a chordal cover \(X_{1}\cup X_{2}\) of \(G^{\prime\prime}\). Choose a vertex \(v\in Q\) and think of \(v\) as a flat path \(P^{\prime}\) and of \(Q-v\) as a subset of \(N[P^{\prime}]\) in \(G^{\prime}\). Let \(W^{\prime}_{i}=(X_{i}\cap Q)\cup\{v\}\), for \(i=1,2\). Since \(G^{\prime}\) is weakly FPE, \(W^{\prime}_{1}\cup W^{\prime}_{2}\) is extendable to a chordal cover \(Y_{1}\cup Y_{2}\) of \(G^{\prime}\). Remove \(v\) from \(Y_{i}\) if it is not in \(X_{i}\), for \(i=1,2\). By (4), \(G[X_{i}\cup Y_{i}]\) is chordal for \(i=1,2\). Now, \((X_{1}\cup Y_{1},X_{2}\cup Y_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), a contradiction.
Therefore, we may assume that \(W_{1}\cup W_{2}\) intersects every component of \(G-Q\). Let \(C\) be a component of \(G-Q\), let \(G^{\prime}=G[C\cup Q]\), and let \(G^{\prime\prime}=G-C\). By Lemma 4.1, \(P^{\prime}=P\cap G^{\prime}\) and \(P^{\prime\prime}=P\cap G^{\prime\prime}\) are flat paths. For \(i=1,2\), let \(W^{\prime}_{i}=W_{i}\cap V(G^{\prime})\), \(W^{\prime\prime}_{i}=W_{i}\cap V(G^{\prime\prime})\). Since \(G^{\prime}\) and \(G^{\prime\prime}\) are proper induced subgraphs of \(G\), both are weakly FPE, so \(W^{\prime}_{1}\cup W^{\prime}_{2}\) and \(W^{\prime\prime}_{1}\cup W^{\prime\prime}_{2}\) can be extended to chordal covers \(X_{1}\cup X_{2}\) and \(Y_{1}\cup Y_{2}\) of \(G^{\prime}\) and \(G^{\prime\prime}\), respectively. The graphs \(G[X_{i}\cup Y_{i}]\) are chordal for \(i=1,2\) by (4). It follows that \((X_{1}\cup Y_{1},X_{2}\cup Y_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), a contradiction. This completes the proof.
## 5. Graphs with a proper star cutset
In this section we prove that even-hole-free graphs with no sector wheels that have proper star cutsets are not minimal non-weakly FPE. We begin with a few useful lemmas.
**Lemma 5.1**.: _Let \(G\) be minimal non-weakly FPE and let \(v\in V(G)\). Then, \(v\) is not complete to \(G\setminus\{v\}\)._
Proof.: Let \(P\) be the witness path and \((W_{1},W_{2})\) be the witness sets for \(G\). Suppose for the sake of contradiction that \(v\) is complete to \(G^{\prime}=G\setminus\{v\}\). Since \(G\) is minimal non-weakly FPE, \(G=N[v]\), and \(W_{1}\cup W_{2}=N[P]\), it follows that \(v\not\in P\). Since \(G\) is minimal non-weakly FPE and \(G^{\prime}\) is a proper induced subgraph of \(G\), it follows that \(G^{\prime}\) is weakly FPE. Since \(P\subseteq V(G^{\prime})\), there exists a chordal cover \((X_{1},X_{2})\) of \(G^{\prime}\) that extends \((P,W_{1}\setminus\{v\},W_{2}\setminus\{v\})\). Now, since \(v\) is complete to \(G^{\prime}\) we have \(v\in N[P]\), and we may assume up to symmetry that \(v\in W_{1}\). Since \(v\) is complete to \(X_{1}\) and \(X_{1}\) is chordal, it follows that \(X_{1}\cup\{v\}\) is chordal, so \((X_{1}\cup\{v\},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), a contradiction.
**Lemma 5.2**.: _Let \(G\) be a graph and let \(X\subseteq V(G)\) be a subset of its vertex set such that there exists a vertex \(u\in V(G)\) anticomplete to \(X\). Suppose there exists a cutset \(Y\) of \(G\) such that \(X\subseteq Y\subseteq N[X]\). Then, there exists a cutset \(Y^{\prime}\) of \(G\) such that \(X\subseteq Y^{\prime}\subseteq N[X]\) and at least one component of \(G\setminus Y^{\prime}\) is anticomplete to \(X\)._
Proof.: Let \(C_{1},\ldots,C_{m}\) be the components of \(G\setminus Y\). Since \(u\) is anticomplete to \(X\), it follows that \(u\not\in Y\), and so we may assume up to symmetry that \(u\in C_{1}\). Let \(Y^{\prime}=Y\cup\left(\bigcup_{v\in X}(N(v)\cap C_{1})\right)\). Let \(C_{u}\) be the component of \(G\setminus Y^{\prime}\) containing \(u\). Now, \(C_{u}\) is anticomplete to \(X\). This completes the proof.
A _twin wheel_ consists of a hole \(H\) and a vertex \(v\) such that \(H\cap N(v)\) is a three-vertex path. A _short pyramid_ consists of a hole \(H\) and a vertex \(v\) such that \(H\cap N(v)\) is an edge plus an isolated vertex. For a path \(P=p_{1}\)-\(\ldots\)-\(p_{k}\), let \(P^{*}\) denote the _interior_ of \(P\); that is, \(P^{*}=P\setminus\{p_{1},p_{k}\}\). A wheel is _proper_ if it is not a twin wheel or a short pyramid. A wheel \((H,v)\) is _universal_ if \(v\) is complete to \(H\). A _sector_ of a wheel \((H,v)\) is a path \(P\subseteq H\) such that \(v\) is complete to the ends of \(P\) and anticomplete to the interior of \(P\). A sector is _long_ if it has length greater than one. A wheel \((H,v)\) is called an _even wheel_ if \(|N(v)\cap H|\) is even. If \(H\) is a graph, then we say that \(G\)_contains_\(H\) if \(G\) has an induced subgraph isomorphic to \(H\).
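The wheel taxonomy above can be summarized by a small classifier (a sketch; \(H\) is a hole given as a vertex list and \(v\) a vertex with at least one neighbor in \(H\)):

```python
# Classify a wheel (H, v) per the definitions above. Since H is a hole,
# the only edges among its vertices are the cycle edges, so the subgraph
# induced on the neighbors of v in H records exactly which spokes are
# consecutive on H.
import networkx as nx

def classify_wheel(G, H, v):
    nbrs = [u for u in H if G.has_edge(u, v)]
    sub = G.subgraph(nbrs)
    even = len(nbrs) % 2 == 0          # even wheel iff |N(v) ∩ H| is even
    if len(nbrs) == len(H):
        return "universal wheel", even
    if len(nbrs) == 3 and nx.is_isomorphic(sub, nx.path_graph(3)):
        return "twin wheel", even
    if len(nbrs) == 3 and sub.number_of_edges() == 1:
        return "short pyramid", even
    return "proper wheel", even
```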
The following is well-known; we include a proof for completeness.
**Lemma 5.3**.: _Let \(G\) be a graph with no even hole. Then, \(G\) does not contain an even wheel._
Proof.: Suppose \(G\) contains an even wheel \((H,v)\), and suppose \(S\) is a long sector of \((H,v)\). Then, \(S\cup\{v\}\) is a hole of \(G\) whose length is the same parity as the length of \(S\). It follows that every long sector is of odd length. Since sectors that are not long are of length one, it follows that every sector of \((H,v)\) is of odd length. Since \((H,v)\) is an even wheel, \((H,v)\) has an even number of sectors. But now \(H\) is even, a contradiction.
The following lemma describes star cutsets that come from proper wheel centers.
**Lemma 5.4** ([1, 4]).: _Let \(G\) be a graph with no even hole that contains a proper wheel \((H,x)\) that is not a universal wheel. Let \(x_{1}\) and \(x_{2}\) be the endpoints of a long
sector \(Q\) of \((H,x)\). Let \(W\) be the set of all vertices \(h\in H\cap N(x)\) such that the subpath of \(H\setminus\{x_{1}\}\) from \(x_{2}\) to \(h\) contains an even number of neighbors of \(x\), and let \(Z=H\setminus(Q\cup N(x))\). Let \(N^{\prime}=N(x)\setminus W\). Then, \(N^{\prime}\cup\{x\}\) is a cutset of \(G\) that separates \(Q^{*}\) from \(W\cup Z\)._
We will also use the following corollary of Lemma 5.4:
**Lemma 5.5**.: _Let \(G\) be a graph with no even hole and no twin wheel, and let \((H,x)\) be a wheel of \(G\). Suppose \(x\) is not the center of a star cutset in \(G\). Then, \((H,x)\) is a short pyramid._
Proof.: Suppose \((H,x)\) is not a short pyramid. Since \(G\) has no twin wheel, it follows that \((H,x)\) is a proper wheel. By Lemma 5.4, it follows that \(x\) is the center of a star cutset in \(G\), a contradiction.
Next, we prove a helpful lemma about cutsets contained in the neighborhood of witness paths.
**Lemma 5.6**.: _Let \(G\) be minimal non-weakly FPE, let \(P\) be a witness path for \(G\) with witness sets \(W_{1}\) and \(W_{2}\), and let \(X\) be a cutset of \(G\) such that \(X\cap P\) is connected and \(X\subseteq N[X\cap P]\). Then, no component of \(G\setminus X\) is anticomplete to \(X\cap P\)._
Proof.: Let \(C_{1},\ldots,C_{m}\) be the components of \(G\setminus X\), and suppose for the sake of contradiction that \(C_{1}\) is anticomplete to \(X\cap P\). Let \(G^{\prime}=X\cup C_{1}\) and \(G^{\prime\prime}=G\setminus C_{1}\). Note that \(X\cap P\) is a flat path in \(G^{\prime}\), \(X\cap P\subseteq(W_{1}\cap G^{\prime})\cap(W_{2}\cap G^{\prime})\), and \((W_{1}\cap G^{\prime})\cup(W_{2}\cap G^{\prime})=N_{G^{\prime}}[X\cap P]\). Similarly, \(X\cap P\) is a flat path in \(G^{\prime\prime}\), \(X\cap P\subseteq(W_{1}\cap G^{\prime\prime})\cap(W_{2}\cap G^{\prime\prime})\), and \((W_{1}\cap G^{\prime\prime})\cup(W_{2}\cap G^{\prime\prime})=N_{G^{\prime\prime }}[X\cap P]\). Since \(G\) is minimal non-weakly FPE and \(G^{\prime}\) and \(G^{\prime\prime}\) are proper induced subgraph of \(G\), it follows that there exists a chordal cover \((X^{\prime}_{1},X^{\prime}_{2})\) of \(G^{\prime}\) that extends \((X\cap P,W_{1}\cap G^{\prime},W_{2}\cap G^{\prime})\) and a chordal cover \((X^{\prime\prime}_{1},X^{\prime\prime}_{2})\) of \(G^{\prime\prime}\) that extends \((X\cap P,W_{1}\cap G^{\prime\prime},W_{2}\cap G^{\prime\prime})\). Let \(X_{1}=X^{\prime}_{1}\cup X^{\prime\prime}_{1}\) and let \(X_{2}=X^{\prime}_{2}\cup X^{\prime\prime}_{2}\). We claim that \(X_{1}\) and \(X_{2}\) are chordal.
Suppose that there is a hole \(H\subseteq X_{1}\). Since \(X^{\prime\prime}_{1}\) is chordal, it follows that \(H\not\subseteq X^{\prime\prime}_{1}\), and so \(H\cap C_{1}\neq\emptyset\). Let \(H^{\prime}=H\cap N[C_{1}]\). Since \(X^{\prime}_{1}\) is chordal, it follows that \(H\not\subseteq X^{\prime}_{1}\). Since \(N[C_{1}]\subseteq X^{\prime}_{1}\), \(H\not\subseteq X^{\prime}_{1}\), and \(H\cap C_{1}\neq\emptyset\), it follows that \(H^{\prime}\) contains a path \(Q=q_{1}\)-\(\ldots\)-\(q_{k}\) with interior \(Q^{*}\) in \(C_{1}\) and ends \(q_{1},q_{k}\in N(C_{1})\subseteq X\subseteq N[X\cap P]\). Now, since \(P\) is anticomplete to \(Q^{*}\), \(Q\cup P\) contains a hole \(\tilde{H}\) and \(\tilde{H}\subseteq X^{\prime}_{1}\), contradicting that \((X^{\prime}_{1},X^{\prime}_{2})\) is a chordal cover of \(G^{\prime}\). Therefore, \(X_{1}\) is chordal, and by symmetry, \(X_{2}\) is chordal. Note that \(W_{1}\subseteq X_{1}\) and \(W_{2}\subseteq X_{2}\). Thus \((X_{1},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), a contradiction.
A set \(X\subseteq V(G)\) is a _full star cutset_ if \(X\) is a star cutset and \(X=N[v]\) for some \(v\in V(G)\). If \(X=N[v]\) is a full star cutset, the vertex \(v\) is called the _center_ of the full star cutset. A set \(X\subseteq V(G)\) is a _double star cutset_ if there exist \(u,v\in V(G)\) such that \(uv\in E(G)\) and \(\{u,v\}\subseteq X\subseteq N[\{u,v\}]\).
The next lemma is the main result of this section.
**Lemma 5.7**.: _Let \(G\) be a graph with no even hole and no twin wheel. Suppose \(G\) is minimal non-weakly FPE, and let \(P=v_{0}w_{0}\) be a witness path of length one with witness sets \(W_{1}\) and \(W_{2}\) such that \(W_{1}\cup W_{2}=N[P]\). Then \(G\) does not admit a full star cutset._
Proof.: We start by proving a few claims.
(5) \(v_{0}\) _and \(w_{0}\) are not centers of star cutsets of \(G\)._
Suppose \(v_{0}\) is the center of a star cutset \(Y\subseteq N[v_{0}]\) and let \(C_{1},\ldots,C_{m}\) be the components of \(G\setminus Y\). By Lemma 5.1, \(v_{0}\) is not complete to \(G\setminus\{v_{0}\}\). Thus, applying Lemma 5.2 with \(X=\{v_{0}\}\), we may assume that \(C_{1}\) is anticomplete to \(v_{0}\). However, by Lemma 5.6, no component of \(G\setminus Y\) is anticomplete to \(\{v_{0}\}\), a contradiction. The argument for \(w_{0}\) is symmetric. This proves (5).
(6) \(v_{0}w_{0}\) _is not the center of a double star cutset of \(G\)._
Suppose there exists a cutset \(X\subseteq N[\{v_{0},w_{0}\}]\) of \(G\) with \(\{v_{0},w_{0}\}\subseteq X\) and let \(C_{1},\ldots,C_{m}\) be the components of \(G\setminus X\). If \(G\subseteq N[\{v_{0},w_{0}\}]\), then \((W_{1},W_{2})\) is a chordal cover of \(G\), a contradiction, so \(G\not\subseteq N[\{v_{0},w_{0}\}]\). By Lemma 5.2, we may assume that \(C_{1}\) is anticomplete to \(\{v_{0},w_{0}\}\). However, by Lemma 5.6, no component of \(G\setminus X\) is anticomplete to \(\{v_{0},w_{0}\}\), a contradiction. This proves (6).
Suppose for the sake of contradiction that \(v\in V(G)\) is the center of a full star cutset \(N[v]\) in \(G\). By (5), \(v\not\in\{v_{0},w_{0}\}\). Let \(C_{1},\ldots,C_{m}\) be the connected components of \(G\setminus N[v]\).
(7) \(v_{0}\) _has a neighbor in \(C_{i}\) for \(1\leq i\leq m\). Similarly, \(w_{0}\) has a neighbor in \(C_{i}\) for \(1\leq i\leq m\)._
First, suppose that \(\{v_{0},w_{0}\}\cap N(v)=\emptyset\). We may assume that \(\{v_{0},w_{0}\}\subseteq C_{1}\). Let \(G^{\prime}=C_{1}\cup N[v]\) and note that \(N[\{v_{0},w_{0}\}]\subseteq G^{\prime}\). Since \(G^{\prime}\) is a proper induced subgraph of \(G\) and \(G\) is minimal non-weakly FPE, it follows that \(G^{\prime}\) is weakly FPE. Note that \(W_{1}\cup W_{2}\subseteq G^{\prime}\). Let \((X^{\prime}_{1},X^{\prime}_{2})\) be a chordal cover of \(G^{\prime}\) that extends \((P,W_{1},W_{2})\).
Next, let \(G^{\prime\prime}=G\setminus C_{1}\). Let \(W^{\prime\prime}_{1}=(X^{\prime}_{1}\cap N[v])\cup\{v\}\) and \(W^{\prime\prime}_{2}=(X^{\prime}_{2}\cap N[v])\cup\{v\}\). We think of \(v\) as a flat path in \(G^{\prime\prime}\), and note that \(W^{\prime\prime}_{1}\cup W^{\prime\prime}_{2}=N[v]\). Since \(G^{\prime\prime}\) is a proper induced subgraph of \(G\) and \(G\) is minimal non-weakly FPE, it follows that \(G^{\prime\prime}\) is weakly FPE. Let \((X^{\prime\prime}_{1},X^{\prime\prime}_{2})\) be a chordal cover of \(G^{\prime\prime}\) that extends \((v,W^{\prime\prime}_{1},W^{\prime\prime}_{2})\). Let \(X_{1}=X^{\prime}_{1}\cup(X^{\prime\prime}_{1}\setminus\{v\})\) and let \(X_{2}=X^{\prime}_{2}\cup(X^{\prime\prime}_{2}\setminus\{v\})\). We claim that \((X_{1},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\). Suppose for contradiction that there is a hole \(H\subseteq X_{1}\). Since \(X^{\prime}_{1}\) and \(X^{\prime\prime}_{1}\) are chordal, it follows that \(H\cap(X^{\prime\prime}_{1}\setminus X^{\prime}_{1})\neq\emptyset\) and \(H\cap(X^{\prime}_{1}\setminus X^{\prime\prime}_{1})\neq\emptyset\). So there exists a path \(Q\subseteq X^{\prime\prime}_{1}\setminus X^{\prime}_{1}\) in \(H\) with interior in \(C_{i}\) and ends in \(N(v)\) for some \(1<i\leq m\). But now \(Q\cup\{v\}\) is a hole in \(X^{\prime\prime}_{1}\), a contradiction. By the same argument, there is no hole \(H\subseteq X_{2}\). This is a contradiction to the fact that \(G\) is minimal non-weakly FPE. Therefore, \(\{v_{0},w_{0}\}\cap N(v)\neq\emptyset\), and we may assume that \(w_{0}\in N(v)\).
Suppose \(v_{0}\) is anticomplete to \(C_{i}\) for some \(1\leq i\leq m\). Let \(G^{\prime}=G\setminus C_{i}\). Now, \(P\subseteq G^{\prime}\) and \(G^{\prime}\) is a proper induced subgraph of \(G\). Since \(G\) is minimal non-weakly FPE, it follows that \(G^{\prime}\) is weakly FPE, so there exists a chordal cover \((X^{\prime}_{1},X^{\prime}_{2})\) of \(G^{\prime}\) that extends \((P,W_{1}\cap G^{\prime},W_{2}\cap G^{\prime})\). Next, let \(G^{\prime\prime}=C_{i}\cup N[v]\) and let \(P^{\prime\prime}=vw_{0}\). Let \(W^{\prime\prime}_{1}=(N[v]\cap X^{\prime}_{1})\cup(W_{1}\cap N[w_{0}]\cap G^{ \prime\prime})\cup\{v,w_{0}\}\) and let \(W^{\prime\prime}_{2}=(N[v]\cap X^{\prime}_{2})\cup(W_{2}\cap N[w_{0}]\cap G^{ \prime\prime})\cup\{v,w_{0}\}\). Note that by definition, \(W^{\prime\prime}_{1}\cup W^{\prime\prime}_{2}=N[\{v,w_{0}\}]\cap G^{\prime \prime}\) and \(\{v,w_{0}\}\subseteq W^{\prime\prime}_{1}\cap W^{\prime\prime}_{2}\). Since \(G^{\prime\prime}\) is a proper induced
subgraph of \(G\), it follows that \(G^{\prime\prime}\) is weakly FPE. Let \((X_{1}^{\prime\prime},X_{2}^{\prime\prime})\) be a chordal cover of \(G^{\prime\prime}\) that extends \((P^{\prime\prime},W_{1}^{\prime\prime},W_{2}^{\prime\prime})\).
Let \(X_{1}=X_{1}^{\prime}\cup(X_{1}^{\prime\prime}\setminus\{v\})\) and let \(X_{2}=X_{2}^{\prime}\cup(X_{2}^{\prime\prime}\setminus\{v\})\). We claim that \((X_{1},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\). Suppose for a contradiction that there is a hole \(H\subseteq X_{1}\). Since \(X_{1}^{\prime}\) is chordal, it follows that \(H\cap(X_{1}^{\prime\prime}\setminus X_{1}^{\prime})\neq\emptyset\), so \(H\) contains a path \(Q\) with ends in \(N[v]\) and interior in \(G\setminus G^{\prime}\). But now \(Q\cup\{v\}\) is a hole and \(Q\cup\{v\}\subseteq X_{1}^{\prime\prime}\), a contradiction. It follows that \(X_{1}\) is chordal, and by symmetry, \(X_{2}\) is chordal. Now, \((X_{1},X_{2})\) is a chordal cover of \(G\) that extends \((P,W_{1},W_{2})\), a contradiction. Therefore, \(v_{0}\) has a neighbor in \(C_{i}\) for \(1\leq i\leq m\), and so in particular, \(v_{0}\in N(v)\). Now the same proof using \(P^{\prime}=vv_{0}\) shows that \(w_{0}\) has a neighbor in \(C_{i}\) for \(1\leq i\leq m\). This proves (7).
By (7), \(\{v_{0},w_{0}\}\subseteq N(v)\) and by (6), \(\{v_{0},w_{0}\}\) is not the center of a double star cutset of \(G\), so for all \(1\leq i<j\leq m\), there exists a path \(Q=q_{1}\)-\(\ldots\)-\(q_{k}\) from \(C_{i}\) to \(C_{j}\) that is anticomplete to \(\{v_{0},w_{0}\}\) such that \(q_{1}\in C_{i}\), \(q_{k}\in C_{j}\), \(Q^{*}\subseteq N(v)\). Let \(Q=q_{1}\)-\(\ldots\)-\(q_{k}\) be the shortest such path. We may assume up to symmetry that \(i=1\) and \(j=2\). Let \(R\subseteq C_{1}\) be the shortest path with one end \(q_{1}\) such that \(R\) contains neighbors of both \(v_{0}\) and \(w_{0}\). Similarly, let \(S\subseteq C_{2}\) be the shortest path with one end \(q_{k}\) such that \(S\) contains neighbors of both \(v_{0}\) and \(w_{0}\). (Note that both \(R\) and \(S\) exist by (7).) Let \(R=q_{1}\)-\(r_{1}\)-\(\ldots\)-\(r_{\ell}\) and let \(S=q_{k}\)-\(s_{1}\)-\(\ldots\)-\(s_{t}\). Since \(R\) is the shortest path containing neighbors of both \(v_{0}\) and \(w_{0}\), it follows that \(R\setminus\{r_{\ell}\}\) contains neighbors of at most one of \(v_{0}\) and \(w_{0}\). Similarly, \(S\setminus\{s_{t}\}\) contains neighbors of at most one of \(v_{0}\) and \(w_{0}\). We may assume that \(r_{\ell}\) is the unique neighbor of \(v_{0}\) in \(R\).
(8) \(w_{0}\) _has exactly one neighbor \(r_{w}\) in \(R\) and \(r_{w}\neq r_{\ell}\)._
Let \(H_{1}\) be the hole given by \(H_{1}=v_{0}\)-\(v\)-\(q_{2}\)-\(q_{1}\)-\(R\)-\(r_{\ell}\)-\(v_{0}\). Since \(R\) contains neighbors of \(w_{0}\), it follows that \(w_{0}\) has at least three neighbors in \(H_{1}\): \(v_{0}\), \(v\), and a neighbor in \(R\). By (5), \(w_{0}\) is not the center of a star cutset of \(G\). By Lemma 5.5, \((H_{1},w_{0})\) is a short pyramid. It follows that \(w_{0}\) has exactly one neighbor \(r_{w}\) in \(R\) and \(r_{w}\neq r_{\ell}\). This proves (8).
(9) _Let \(\{a,b\}=\{v_{0},w_{0}\}\) such that \(s_{t}\) is the unique neighbor of \(a\) in \(S\). Then, \(b\) has exactly one neighbor in \(S\) and \(b\) is non-adjacent to \(s_{t}\)._
Let \(H_{2}\) be the hole given by \(H_{2}=\)_a-v-\(q_{k-1}\)-\(q_{k}\)-\(S\)-\(s_{t}\)-a_. Since \(S\) contains neighbors of \(b\), it follows that \(b\) has at least three neighbors in \(H_{2}\): \(a\), \(v\), and a neighbor in \(S\). By (5), \(b\) is not the center of a star cutset of \(G\), and so by Lemma 5.5, \((H_{2},b)\) is a short pyramid. It follows that \(b\) has exactly one neighbor in \(S\), and this neighbor is distinct from \(s_{t}\). This proves (9).
Suppose first that \(s_{t}\) is the unique neighbor of \(v_{0}\) in \(S\). By (9), it follows that \(w_{0}\) has a unique neighbor \(s_{w}\) in \(S\) and \(s_{w}\neq s_{t}\). Let \(H_{3}\) be the hole given by \(H_{3}=v_{0}\)-\(r_{\ell}\)-\(R\)-\(q_{1}\)-\(Q\)-\(q_{k}\)-\(S\)-\(s_{t}\)-\(v_{0}\). It holds that \(w_{0}\) has three pairwise non-adjacent neighbors \(v_{0},s_{w},r_{w}\) in \(H_{3}\), so \((H_{3},w_{0})\) is a proper wheel. But now by Lemma 5.4, \(w_{0}\) is the center of a star cutset in \(G\), contradicting (5).
Therefore, \(s_{t}\) is the unique neighbor of \(w_{0}\) in \(S\). By (9), it follows that \(v_{0}\) has a unique neighbor \(s_{v}\) in \(S\) and \(s_{v}\neq s_{t}\). Let \(H_{4}\) be the hole given by
\(v_{0}\)-\(s_{v}\)-\(S\)-\(q_{k}\)-\(Q\)-\(q_{1}\)-\(R\)-\(r_{w}\)-\(w_{0}\)-\(v_{0}\). It follows that \((H_{4},v)\) is a wheel and \(v\) has \(k\) neighbors in \(H_{4}\). Next, let \(H_{5}\) be the hole given by \(H_{5}=v_{0}\)-\(s_{v}\)-\(S\)-\(q_{k}\)-\(Q\)-\(q_{1}\)-\(R\)-\(r_{\ell}\)-\(v_{0}\). It follows that \((H_{5},v)\) is a wheel and \(v\) has \(k-1\) neighbors in \(H_{5}\). Since \(k\) and \(k-1\) have different parities, it follows that one of \((H_{4},v)\) and \((H_{5},v)\) is an even wheel, contradicting Lemma 5.3. This completes the proof of the lemma.
Finally, we apply the previous lemma to the class of graphs with no even hole and no sector wheel. Recall that a _sector wheel_ is a wheel \((H,w)\) such that \(N(w)\cap H\) is a path.
**Theorem 5.8**.: _Let \(G\) be minimal non-weakly FPE with no even hole and no sector wheel. Then, \(G\) has no star cutset._
Proof.: Assume for contradiction that \(G\) has a star cutset. Let \(v\in V(G)\) be such that there exists a cutset \(X\subseteq N[v]\) of \(G\) with \(v\in X\). By Lemma 5.7, \(v\) is not the center of a full star cutset of \(G\). This fact, together with Lemma 5.1, implies that there is exactly one component \(C\) of \(G\setminus N[v]\). Let \(A\) be a connected component of \(G\setminus X\) such that \(A\) is anticomplete to \(C\). Then \(A\subseteq N(v)\). Since \(v\) is the center of a star cutset, it follows that \(A\neq\emptyset\). Let \(B=N(C)\cap N(A)\). Since \(C\) is a connected component of \(G\setminus N[v]\), it follows that \(N(C)\subseteq N(v)\), and so \(B\subseteq N(v)\). Also, note that \(B\) can be empty. Suppose there exist \(b_{1},b_{2}\in B\) such that \(b_{1}\) is non-adjacent to \(b_{2}\). Let \(P_{1}\) be a path from \(b_{1}\) to \(b_{2}\) with \(P_{1}^{*}\subseteq C\) and let \(P_{2}\) be a path from \(b_{1}\) to \(b_{2}\) with \(P_{2}^{*}\subseteq A\). Now, \(P_{1}\cup P_{2}\) is a hole and \(v\) is complete to \(P_{2}\) and anticomplete to \(P_{1}^{*}\), so \((P_{1}\cup P_{2},v)\) is a sector wheel, a contradiction. Therefore, \(B\) is a clique. Since \(B=N(C)\cap N(A)\), it follows that \(\{v\}\cup B\) separates \(A\) from \(C\), so \(\{v\}\cup B\) is a clique cutset of \(G\). But by Lemma 4.2, \(G\) has no clique cutset, a contradiction. This completes the proof of the theorem.
## 6. Putting it all together
In this section, we prove Theorem 1.4.
Proof of Theorem 1.4.: Let \(G\) be an even-hole-free graph with no sector wheel, and suppose for a contradiction that \(G\) is minimal non-weakly FPE. By Lemma 4.2, \(G\) has no clique cutset, and by Theorem 5.8, \(G\) has no star cutset. Note that \(G\) is non-FPE and has no star cutset. Let \(H\) be an induced subgraph of \(G\) that is minimal with these properties, so in particular, \(H\) has no star cutset, \(H\) is non-FPE, and every induced subgraph of \(H\) with no star cutset is FPE. By Theorem 3.8, \(H\) is FPE, a contradiction. This completes the proof.
|
2305.10606 | Versatile optimization-based speed-up method for autofocusing in digital
holographic microscopy | We propose a speed-up method for the in-focus plane detection in digital
holographic microscopy that can be applied to a broad class of autofocusing
algorithms that involve repetitive propagation of an object wave to various
axial locations to decide the in-focus position. The classical autofocusing
algorithms apply a uniform search strategy, i.e., they probe multiple,
uniformly distributed axial locations, which leads to heavy computational
overhead. Our method substantially reduces the computational load, without
sacrificing the accuracy, by skillfully selecting the next location to
investigate, which results in a decreased total number of probed propagation
distances. This is achieved by applying the golden section search with
parabolic interpolation, which is the gold standard for tackling
single-variable optimization problems. The proposed approach is successfully
applied to three diverse autofocusing cases, providing up to 136-fold speed-up. | Julianna Winnik, Damian Suski, Piotr Zdańkowski, Luiza Stanaszek, Vicente Micó, Maciej Trusiak | 2023-05-17T23:23:04Z | http://arxiv.org/abs/2305.10606v1 | # Versatile optimization-based speed-up method for autofocusing in digital holographic microscopy
###### Abstract
We propose a speed-up method for the in-focus plane detection in digital holographic microscopy that can be applied to a broad class of autofocusing algorithms that involve repetitive propagation of an object wave to various axial locations to decide the in-focus position. The classical autofocusing algorithms apply a uniform search strategy, i.e., they probe multiple, uniformly distributed axial locations, which leads to heavy computational overhead. Our method substantially reduces the computational load, without sacrificing the accuracy, by skillfully selecting the next location to investigate, which results in a decreased total number of probed propagation distances. This is achieved by applying the golden section search with parabolic interpolation, which is the gold standard for tackling single-variable optimization problems. The proposed approach is successfully applied to three diverse autofocusing cases, providing up to 136-fold speed-up.
## 1 Introduction
Digital holographic microscopy (DHM) [1-3] enables registration of an optical field that has been disturbed, i.e., refracted or reflected, by a microscale sample and thus enables gaining valuable information about its features such as the geometry, refractive index and absorptive properties. The true power of DHM comes from numerical refocusing tools [4-7] that enable algorithmic simulation of propagation of the light wave in space. Most importantly, numerical refocusing enables enhancing the image sharpness by propagating the captured optical field to the in-focus location. However, to fulfill this task one needs to know the required propagation distance, which in the general case is not known a priori or is known with the limited, unsatisfactory accuracy.
The problem can be addressed with powerful, holographic autofocusing [8-27]. It is worth noticing that the application of autofocusing is not limited to the standard focus enhancement in the conventional DHM. It can be applied to a wide range of tasks, e.g., particle localization [28] also in lensless DHM [29-32], time-lapse study of dynamic objects [33], tilted image plane detection [34], digital holography of macroscale samples [35], rotation errors correction in holographic tomography [36-40] and accurate shape recovery [41].
The in-focus plane detection can be performed using various autofocusing approaches. The most popular class of the autofocusing algorithms utilizes repetitive propagation of the object wave to multiple positions along the optical axis [8-27]. In each location, the defocus of the object wave is quantified using one of the available focus metrics. The plane with the minimum defocus value is assumed to be the in-focus one. Recently, also another class of autofocusing
algorithms has emerged that utilizes deep learning [42-45]. The drawback of these methods is a requirement for a large learning set. Here we focus on the former, most popular autofocusing approach that applies repetitive propagation. The major disadvantage of this solution is the high computational cost. The issue has been addressed with the autofocusing acceleration via hologram downsampling [46,47], efficient implementation of the autofocusing procedure on graphics processors [48] and a two-step approach with preliminary and precise autofocusing stages [8,49].
In this paper we claim that the high computational cost of autofocusing comes mostly from the uniform search strategy, i.e., the autofocusing algorithms investigate numerous, equidistant planes, which is inefficient and leads to heavy computational overhead. We address this issue by proposing a versatile speed-up method for autofocusing in DHM that can work with any focus metric and any DHM configuration. First, we formulate the autofocusing task as an optimization problem. Then, we replace the uniform search strategy with a suitable algorithmic tool for single-variable optimization problems, that is, the golden section search with parabolic interpolation (GSS-PI) [50]. GSS-PI, at each iteration, skillfully decides the next axial position to investigate, which results in a reduced total number of probed axial locations. Importantly, the acceleration does not affect the autofocusing accuracy.
The proposed speed-up approach is successfully verified in application to three diverse autofocusing cases: 1) focus enhancement of a live mesenchymal stem cell in a grating-assisted, common-path DHM system; 2) detection of the microbead location using lensless in-line DHM and the dark focus metric; 3) in-focus plane detection of the USAF resolution test target using an autofocusing algorithm based on two-directional illumination and DHM in Mach-Zehnder configuration. In all investigated cases, the proposed approach enabled a substantial speed-up of the autofocusing procedure.
## 2 Conventional autofocusing algorithms
DHM allows acquiring complete information about the scalar object wave, i.e., its amplitude and phase. However, in many holographic systems, the hologram acquisition plane does not coincide with the image plane, which results in a blurry, often useless reconstruction. The defocused registration conditions may be inherent to the working principle of a given holographic setup, e.g., [51-53]. Nonetheless, even in the image plane holographic configuration, the defocus may arise from the dynamic character of the sample, the setup misalignments, or the need to image a thick sample (thicker than the depth of field of the imaging system) at its best in-focus plane. In either of the cases, the defocusing problem can be addressed with numerical propagation that enables computational refocusing of the object wave to the in-focus plane. This, however, requires precise knowledge about the distance between the hologram acquisition plane and the in-focus plane.
The described problem is schematically presented in Fig. 1. In order to find the in-focus distance \(z_{t}\), the object wave \(u_{z=0}\), registered at plane \(z=0\), is repetitively propagated over various distances \(z\), which can be done using, e.g., the angular spectrum method [4]:
\[\tilde{u}_{z=0}\left(f_{x},f_{y}\right)=\iint u_{z=0}(x,y)\exp\left[-i2\pi\left(f_{x}x+f_{y}y\right)\right]\mathrm{d}x\,\mathrm{d}y, \tag{1}\]
\[\tilde{u}_{z}\left(f_{x},f_{y}\right)=\tilde{u}_{z=0}\left(f_{x},f_{y}\right)\exp\left[i2\pi z\sqrt{\lambda^{-2}-f_{x}^{2}-f_{y}^{2}}\right], \tag{2}\]
\[u_{z}(x,y)=\iint\tilde{u}_{z}\left(f_{x},f_{y}\right)\exp\left[i2\pi\left(f_{x}x+f_{y}y\right)\right]\mathrm{d}f_{x}\,\mathrm{d}f_{y}, \tag{3}\]
where \(f_{x},f_{y}\) are the spatial frequencies and \(\lambda\) is the wavelength.
Crucially, in the autofocusing algorithms the investigated locations are uniformly distributed with a step \(\Delta z\). At each plane, a focus metric of choice is applied to evaluate the focusing conditions. The plane with the minimum defocus value is assumed to be the in-focus one.
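For concreteness, a minimal numpy sketch of the angular spectrum propagation of Eqs. (1)-(3) and of the uniform search strategy is given below; the function names and the suppression of evanescent frequencies are our choices, not prescriptions from the cited works.

```python
import numpy as np

def angular_spectrum_propagate(u0, z, wavelength, dx):
    # Eqs. (1)-(3): FFT, multiplication by the transfer function, inverse FFT.
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    # Evanescent components (arg < 0) are suppressed in this sketch.
    H = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

def autofocus_uniform(u0, z_grid, wavelength, dx, defocus_metric):
    # Conventional strategy: probe every plane in z_grid, keep the sharpest one.
    scores = [defocus_metric(angular_spectrum_propagate(u0, z, wavelength, dx))
              for z in z_grid]
    return z_grid[int(np.argmin(scores))]
```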
## 3 Autofocusing utilizing golden section search with parabolic interpolation
### Formulation of the optimization task
From the computational point of view, we aim at solving the following optimization problem:
\[\begin{split}&\min_{z}\ f(z)\\ &\text{s.t.}\ z\in\left[z_{\min},z_{\max}\right],\end{split} \tag{4}\]
where \(f(z)\) is the defocus metric calculated for each \(z\) based on the numerically refocused object wave \(u_{z}\) and \([z_{\min},\,z_{\max}]\) is an arbitrarily chosen interval in which the in-focus distance \(z_{t}\) is searched. We assume that \(f(z)\) is a continuous function of \(z\). In our convention, \(f(z)\) is a degree of defocus; therefore, if a given focus metric is maximized at the in-focus plane, e.g., [10, 29], we take the negative of that metric.
It should be noted that numerical algorithms for solving the considered optimization problem can only guarantee to find the global optimum if the function \(f(z)\) is unimodal, i.e., it possesses only one local minimum on the search interval \([z_{\min},\,z_{\max}]\)[50]. If the function \(f(z)\) is multimodal the numerical algorithms may converge to the local optimum that is not actually the global minimum. This is a general problem in numerical optimization. To increase the chance of finding the global optimum, some heuristics can be applied, e.g., splitting the original interval of search into several subintervals and searching the minimum in each of them separately.
### Golden section search with parabolic interpolation
The state-of-the-art algorithm for solving the considered optimization problem [Eq. (4)] comes from the combination of two algorithms, the golden section search (GSS) and parabolic interpolation (PI), proposed by Brent [50]. The GSS-PI results in an efficient and robust method for solving the considered single-variable optimization problem. For the completeness of the proposed paper, the advantages and drawbacks of GSS and PI algorithms will be now briefly discussed together with a short description of each of them.
The idea of GSS is to shrink the search interval at each iteration based on values of \(f(z)\) at four points [50], see Fig. 2. The points \(z_{\min}^{i}\) and \(z_{\max}^{i}\) denote the left and right end of the search interval at \(i\)-th iteration. At the first iteration we take \(z_{\min}^{i}=z_{\min}\) and \(z_{\max}^{i}=z_{\max}\). Next, we take
Figure 1: Working principle of the conventional autofocusing algorithm; \(u\) – optical field, \(f\) – defocus function, \(z\) – propagation distance; \(\Delta z\) – search step, \(z_{t}\) – optimal propagation distance.
two points inside the search interval, \(z_{l}^{i}\) and \(z_{r}^{i}\), which are placed symmetrically with respect to the interval ends:

\[z_{l}^{i}=z_{\min}^{i}+(1-\phi)\cdot\left(z_{\max}^{i}-z_{\min}^{i}\right), \tag{5}\]

\[z_{r}^{i}=z_{\min}^{i}+\phi\cdot\left(z_{\max}^{i}-z_{\min}^{i}\right), \tag{6}\]

where \(\phi=\left(\sqrt{5}-1\right)/2\approx 0.618033\). The inverse of \(\phi\) is called the golden ratio, from which the algorithm takes its name.
For a unimodal function, if the following condition holds:

\[f(z_{\min}^{i})>f(z_{l}^{i})>f(z_{r}^{i}), \tag{7}\]

then we know that on the interval \(\left[z_{\min}^{i},z_{l}^{i}\right]\) the function \(f(z)\) is decreasing, thus we can drop that interval at the next iteration, see Fig. 2. The new search interval will be defined by \(z_{\min}^{i+1}=z_{l}^{i}\) and \(z_{\max}^{i+1}=z_{\max}^{i}\). The advantage of taking \(\phi=\left(\sqrt{5}-1\right)/2\) instead of any other value is that \(z_{l}^{i+1}\) is exactly equal to \(z_{r}^{i}\), so we only need to calculate \(f(z)\) at one new point, \(z_{r}^{i+1}\), which saves the computational effort. Analogically, if
\[f(z_{l}^{i})<f(z_{r}^{i})<f(z_{\max}^{i}), \tag{8}\]

then we know that on the interval \(\left[z_{r}^{i},z_{\max}^{i}\right]\) the function \(f(z)\) is increasing and we can drop that interval at the next iteration. The new interval will be given by \(z_{\min}^{i+1}=z_{\min}^{i}\) and \(z_{\max}^{i+1}=z_{r}^{i}\). We also have \(z_{r}^{i+1}=z_{l}^{i}\), thus we only need to calculate \(f(z)\) at one new point, \(z_{l}^{i+1}\).
We repeat the above procedure at subsequent iterations. The length of the search interval decreases by a factor of \(\phi\) at each iteration and we stop the procedure when \(z_{\max}^{i}-z_{\min}^{i}\) is less than a given declared tolerance _tol_.
If \(f(z)\) is unimodal on the original search interval, then GSS is guaranteed to find the optimum value within the declared tolerance [50]. If \(f(z)\) is multimodal, we do not have such a guarantee. Instead, we shrink the search interval from iteration to iteration and at some iteration, the actual search interval contains only one local minimum. From that iteration, GSS will search for that local minimum and reach it, but we have no guarantee that the local minimum is the global
minimum of the original optimization task. From this feature, it follows that the method, in its original form, is not suitable for the autofocusing tasks with multiple focal locations [54].
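A minimal sketch of the GSS loop described above, written to mirror Eqs. (5)-(8); for a unimodal \(f\), comparing the two interior values is sufficient to decide which subinterval to drop (variable names are ours):

```python
import math

PHI = (math.sqrt(5) - 1) / 2  # ~0.618033

def golden_section_search(f, z_min, z_max, tol):
    # Two interior points placed symmetrically; one of them is reused at
    # the next iteration, so each iteration costs one new evaluation of f.
    z_l = z_min + (1 - PHI) * (z_max - z_min)
    z_r = z_min + PHI * (z_max - z_min)
    f_l, f_r = f(z_l), f(z_r)
    while z_max - z_min > tol:
        if f_l > f_r:                       # condition (7): drop [z_min, z_l]
            z_min, z_l, f_l = z_l, z_r, f_r
            z_r = z_min + PHI * (z_max - z_min)
            f_r = f(z_r)
        else:                               # condition (8): drop [z_r, z_max]
            z_max, z_r, f_r = z_r, z_l, f_l
            z_l = z_min + (1 - PHI) * (z_max - z_min)
            f_l = f(z_l)
    return 0.5 * (z_min + z_max)
```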
The idea behind PI is to successively build parabolic interpolations of the function _f(z)_ based on three points \(z_{1}^{i}\), \(z_{2}^{i}\) and \(z_{3}^{i}\)[50], see Fig. 3(a). Next, we find the minimum of the parabolic interpolation function at the point \(z_{PI}^{i}\). We then go to the next iteration taking the new values \(z_{1}^{i+1}=z_{2}^{i}\), \(z_{2}^{i+1}=z_{3}^{i}\) and \(z_{3}^{i+1}=z_{PI}^{i}\). The advantage of PI is its fast convergence, provided that we are close enough to the minimum of the function _f(z)_[50]. The parabolic interpolations give a better and better approximation of the _f(z)_ function with each iteration, see Fig. 3(b). The drawback of the PI algorithm is that if we are far from the optimal point, the parabolic interpolations may behave poorly [50].
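The abscissa of the parabola minimum used at each PI iteration has a closed form; a sketch in our notation:

```python
def parabola_minimum(z1, z2, z3, f1, f2, f3):
    # Vertex of the parabola through (z1, f1), (z2, f2), (z3, f3).
    num = (z2 - z1)**2 * (f2 - f3) - (z2 - z3)**2 * (f2 - f1)
    den = (z2 - z1) * (f2 - f3) - (z2 - z3) * (f2 - f1)
    return z2 - 0.5 * num / den  # undefined (den == 0) for collinear samples
```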
The idea behind GSS-PI is to combine the advantages of GSS (guaranteed convergence at least to some local minimum) and PI (fast convergence). Whenever possible PI is used, but when the performance of PI is poor, e.g., \(z_{{}_{PI}}^{i}\) lies outside the current search interval, the algorithm switches to pure GSS.
The GSS-PI algorithm has been implemented in the function _fminbound_ from the open-source Python _scipy_ library [55]. That implementation has been utilized in our work.
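With that function, the whole minimization reduces to a single library call; a sketch, reusing the propagation routine sketched in Section 2 (the bounds and tolerance are user choices):

```python
from scipy.optimize import fminbound

def autofocus_gss_pi(u0, z_min, z_max, tol, wavelength, dx, defocus_metric):
    # angular_spectrum_propagate is the numpy sketch from Section 2.
    defocus = lambda z: defocus_metric(
        angular_spectrum_propagate(u0, z, wavelength, dx))
    return fminbound(defocus, z_min, z_max, xtol=tol)
```

Here the _xtol_ argument plays the role of the tolerance _tol_ used throughout the paper.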
In comparison to the uniform search method, GSS-PI finds the minimum of the defocus function at a much lower computational cost. However, a drawback of GSS-PI is that the search interval must be chosen with care so that the algorithm does not miss the minimum. The reliability of GSS-PI can be improved with the two-step autofocusing procedure proposed in
Fig. 3: Illustration of the PI algorithm at (a) \(i\)-th and (b) (\(i\)+1)-th iteration (away and close to the minimum of the defocus function _f(z)_, for (a) and (b), respectively); \(z_{1}^{i}\), \(z_{2}^{i}\), \(z_{3}^{i}\) – the propagation distances used for parabolic interpolation at the current iteration; \(z_{{}_{PI}}\)– minimum of the fitted parabolic function.
[8]. In this method, first, the uniform search strategy with a large step \(\Delta z\) is used to obtain a rough profile of the defocus function on the original search interval. This enables defining a new, shrunk search interval, which is then investigated using a much smaller step \(\Delta z\) to provide high precision. This two-step strategy offers a trade-off between the computational time and the reliability. GSS-PI has the potential to decrease the computational effort of the second step of this strategy, without affecting the reliability.
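A sketch of this combination, under the same assumptions and reusing the routines sketched above:

```python
import numpy as np

def autofocus_two_step(u0, z_min, z_max, coarse_step, tol,
                       wavelength, dx, defocus_metric):
    # Step 1: coarse uniform scan to bracket the global minimum.
    z_grid = np.arange(z_min, z_max, coarse_step)
    z_coarse = autofocus_uniform(u0, z_grid, wavelength, dx, defocus_metric)
    # Step 2: GSS-PI restricted to the bracket around the coarse estimate.
    return autofocus_gss_pi(u0, z_coarse - coarse_step, z_coarse + coarse_step,
                            tol, wavelength, dx, defocus_metric)
```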
## 4 Experimental results
The proposed speed-up approach is tested with three diverse autofocusing cases: (1) focus enhancement of a stem cell in the recently proposed, grating-assisted DHM [56], (2) particle localization in lensless, in-line DHM, and (3) focus correction of the USAF resolution target in DHM in Mach-Zehnder configuration with two-directional illumination. It is worth noticing that our method can be applied to any complex amplitude image provided by any other DHM architecture.
### Autofocusing case 1: hologram of a stem cell captured with a grating-assisted DHM
The first autofocusing case is concerned with the recently proposed, common-path DHM system [56], where the object wave, after imaging with a microscope optical setup, is incident on a diffraction grating. The +1\({}^{\mathrm{st}}\) and -1\({}^{\mathrm{st}}\) diffraction orders interfere, producing an off-axis hologram. The system operates in the total-shear regime. The samples for our experiment are live mesenchymal stem cells. The hologram was registered using a laser diode with a light wavelength \(\lambda=635\) nm and an image sensor with a pixel pitch of 1.85 \(\upmu\)m. The parameters of the optical system are magnification \(M=\) -20 and numerical aperture \(NA=0.75\). The hologram was reconstructed using the Fourier transform method [57]. The reconstructed object wave amplitude is presented in Fig. 4, where the red line indicates the region of interest containing a single cell, which was used for the evaluation of the focusing conditions.
The considered system typically works in the image plane holography regime, which means that the hologram acquisition plane is optically conjugated with the sample, thus no defocus occurs. However, in practice, a small defocus may arise from, e.g., instabilities of the setup or, as in this case, movement of a living specimen. Therefore, for this test, the axial range of search was set symmetrically around the hologram acquisition plane, \(z\in\) [-100 \(\upmu\)m, 100 \(\upmu\)m].
Generally, the choice of the axial scanning step is dictated by parameters of the system such as \(NA\), wavelength, magnification (all influencing the depth of field) as well as the character of the sample and the required accuracy. In our case, the search step was set to \(\Delta z=1\)\(\upmu\)m. This value was also taken as tolerance for GSS-PI, which ensured equal accuracy of both
Figure 4: Full field of view of the object wave amplitude with the indicated region of interest used for evaluation of the focusing conditions; the sample is a mesenchymal stem cell; the object wave was captured with a grating-assisted DHM system [56].
autofocusing approaches and thus facilitated their comparison. The chosen value of \(tol=\Delta z\) is slightly smaller than the depth of field (DoF) of the considered DHM system (here \(\mathrm{DoF}\approx 1.5\)\(\upmu\)m [58]), which, in theory, ensures that the autofocusing will bring the data into focus.
The stem cell is treated here as a pure phase object, which is expected to be invisible when focused. Therefore, the defocus value can be evaluated with a variance of the amplitude of the object wave [34, 36, 38] that was numerically refocused on distance \(z\):
\[f(z)=\mathrm{var}\left(\left|u_{z}\right|\right). \tag{9}\]
The numerical refocusing is handled with the angular spectrum method using the whole field of view; however, the focus measure is evaluated only in the region of interest surrounding the sample (Fig. 4), which enhances the autofocusing accuracy. The variance _var_ is given by:
\[\mathrm{var}(u)=\frac{1}{P}\sum_{p=1}^{P}\left(u_{p}-\overline{u}\right)^{2}, \tag{10}\]
where \(p\) denotes the pixel index and \(\overline{u}\) is the average signal value.
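Eqs. (9)-(10) translate directly into a few lines; restricting the evaluation to a region of interest is shown with a simple slice (our illustration):

```python
import numpy as np

def variance_defocus(uz, roi=None):
    # Eq. (9): variance of the amplitude of the refocused field; a pure
    # phase object is least visible, i.e., lowest variance, in focus.
    amp = np.abs(uz if roi is None else uz[roi])
    return np.var(amp)  # Eq. (10)
```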
The results of the autofocusing, performed with both the uniform search strategy and GSS-PI, are shown in Fig. 5(a). Both autofocusing algorithms pointed to almost the same axial location (Tab. 1). The conventional approach investigated 200 axial locations, while GSS-PI looked up only 10 planes, providing a substantial 20-fold speed-up.
The zoomed amplitude and phase of the object wave in the hologram registration plane and in the determined in-focus plane (according to GSS-PI) are presented in Figs 5(b)-5(c) and Figs 5(d)-5(e), respectively. The almost uniform amplitude distribution in Fig. 5(d) indicates successful defocus correction. It can be observed that the autofocusing improved the visibility of the cell structure in the phase image, Fig. 5(e).
Figure 5: Autofocusing results for the mesenchymal stem cell: (a) defocus function evaluated with a variance of amplitude of the object wave; amplitude (b, d) and phase (c, e) of the object wave in the hologram acquisition plane (b, c) and in the found in-focus plane (d, e) (zoomed area).
### Autofocusing case 2: hologram of microbeads captured with lensless in-line DHM
The second analyzed autofocusing case concerns the particle localization using a lensless in-line DHM in Gabor configuration [29]. In our experiment, the samples are polystyrene beads with a diameter of 90 \(\upmu\)m immersed in water and inserted in a counting chamber of 100 \(\upmu\)m thickness (thus, there is essentially a single plane containing all the beads). The hologram was registered using an image sensor (2048 \(\times\) 2048 pixels, pixel pitch of 5.5 \(\upmu\)m) that was placed at a distance of 300 mm from the light source, here a fiber-coupled laser diode with \(\lambda=450\) nm. The registered hologram is presented in Fig. 6. The red rectangle indicates the region of interest with a single selected bead that was used for the autofocusing procedure.
The working principle of lensless DHM imposes defocused registration conditions, i.e., the sample is placed at some distance from the image sensor, which, when combined with diverging beam illumination, enables achieving optical magnification. Therefore, in this autofocusing case, the axial range of search was set to a large area in front of the image sensor: \(z\in\) [-300 mm, 0]. We set \(\Delta z=tol=0.2\) mm. The evaluation of the focus condition was performed with the dark focus metric [29], which is a robust indicator of the overall sharpness for objects with mixed amplitude-phase properties. Dark focus calculates the gradient variance of the numerically generated dark field \(u_{z}^{d}\):
\[f(z)=-\sqrt{\text{var}\left(\nabla\left|u_{z}^{d}\right|\right)}. \tag{11}\]
Note that in our convention the in-focus plane is always indicated with a minimum of \(f(z)\) thus Eq. (11) expresses the negative of the dark focus metric.
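A sketch of Eq. (11) follows; note that the construction of the dark field below, i.e., suppressing the mean (zero-frequency, undiffracted) component of the complex field, is our assumption, and [29] should be consulted for the exact definition:

```python
import numpy as np

def dark_focus_defocus(uz):
    # Dark field approximated by removing the mean component of the
    # complex field; this particular choice is an assumption, see [29].
    u_dark = uz - uz.mean()
    gy, gx = np.gradient(np.abs(u_dark))
    grad_mag = np.sqrt(gx**2 + gy**2)
    # Eq. (11): negative dark focus, so the in-focus plane is a minimum.
    return -np.sqrt(np.var(grad_mag))
```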
The results of the autofocusing, performed with the uniform search strategy and GSS-PI, are presented in Fig. 7(a). In the analyzed case GSS-PI provided a tremendous, 136-fold autofocusing speed-up, without sacrificing the accuracy (Tab. 2). This fact provides evidence that the broader the search range, the larger the computational speed-up ensured by GSS-PI in
\begin{table}
\begin{tabular}{l c c} \hline \hline & _Uniform search_ & _GSS-PI_ \\ \hline _Number of investigated planes_ & 200 & 10 \\ _Found in-focus location [\(\mu\)m]_ & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Results of autofocusing for the mesenchymal stem cell.**
Figure 6: Full field of view of the object wave amplitude with the indicated region of interest (red line) that was used for evaluation of the focusing conditions; the sample is a microbead with a diameter of 90 \(\upmu\)m; the object wave was captured with lensless in-line DHM.
comparison with the uniform search strategy. Thus, results presented in Fig. 7(a) promote GSS-PI for high-throughput large volume 3D holographic imaging.
The comparison of the object wave amplitudes and phases before and after autofocusing is shown in Figs 7(b)-7(e). The diffraction fringes in the amplitude image, Fig. 7(b), indicate a large defocus of the initial data. From Fig. 7(c) it can be noticed that the initial phase has been assumed to be uniform, which complies with the all-intensity hologram reconstruction method for lensless DHM [29]. It is worth noting that the applied reconstruction approach does not deal with the twin image problem. After the application of GSS-PI, the optical field was refocused to the found in-focus location at -100.74 mm, which resulted in the sharp reconstruction, Figs 7(d) and 7(e). It can be observed that the transparent bead is imaged in amplitude as a dark, sharp circle with a bright central spot, Fig. 7(d). This is related to the small NA of the given in-line lensless DHM system, which filters out the rays that are strongly refracted on the high slope areas of the investigated microsphere.
\begin{table}
\begin{tabular}{c c c} \hline \hline & _Uniform search_ & _GSS-PI_ \\ \hline _Number of investigated planes_ & 1500 & 11 \\ _Found in-focus location [mm]_ & -100.80 & -100.74 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Results of autofocusing for the microbead.**
Figure 7: Autofocusing results for the microbead: (a) defocus function evaluated with the dark focus metric; Amplitude (b, d) and phase (c, e) of the object wave in the hologram acquisition plane (b, c) and in the detected in-focus plane (d, e) (zoomed area).
### Autofocusing case 3: two holographic views of the USAF resolution test target captured with DHM in Mach-Zehnder configuration
The two previously discussed autofocusing cases were concerned with focus metrics that quantify the defocus based on a single object wave. In the third, last example we investigate a different autofocusing approach that looks for the in-focus plane by analyzing the interdependence of two object waves that correspond to different illumination directions [15]. The off-axis propagation directions of the waves induce a transverse, mutual displacement of the sample images in the defocused planes. Thus, the in-focus location can be found by looking for the minimum variance between the waves' amplitudes:
\[f(z)=\mathrm{var}\left(\left|u_{z}^{1}\right|-\left|u_{z}^{2}\right|\right). \tag{12}\]
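Given the two refocused waves, Eq. (12) is a one-line computation (sketch):

```python
import numpy as np

def two_beam_defocus(u1_z, u2_z):
    # Eq. (12): out of focus, the tilted illuminations shift the two images
    # laterally with respect to each other, inflating this variance.
    return np.var(np.abs(u1_z) - np.abs(u2_z))
```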
The discussed autofocusing method was applied to holographic tomography [40], where it addressed the key problem of rotation errors and the related data defocusing. Here, we demonstrate the possibility of accelerating this autofocusing method using GSS-PI. In our experiment, two holograms of the USAF resolution test target were taken with off-axis DHM in Mach-Zehnder configuration and two-directional tilted beam illumination (the illumination vectors lie in the horizontal plane and form a +/-13.5\({}^{\circ}\) angle with the optical axis) [40]. The system applied a He-Ne laser with \(\lambda=632.8\) nm, a microscope optical system with \(M=\) -19.5 and \(NA\) = 0.42, and an image sensor of 2456 \(\times\) 2058 pixels with a pixel pitch of 3.45 \(\upmu\)m. The holograms were reconstructed using the Fourier transform method [57]. The amplitudes of the reconstructed object waves are shown in Fig. 8.
The considered DHM system operates in the image plane holography regime, where a small defocus may occur due to the setup instability or inaccurate placing of the sample. Therefore, the axial search was defined as a small region around the hologram acquisition plane, i.e., \(z\in\) [-50 \(\upmu\)m, 50 \(\upmu\)m]. We set \(\Delta z=tol=2\)\(\upmu\)m (here the search step is larger than in Sec. 4.1 due to the smaller NA and thus larger DoF of the considered DHM system, here \(\mathrm{DoF}\approx 2.5\)\(\upmu\)m [58]). The obtained autofocus results are presented in Fig. 9.
In this case, GSS-PI provided a 7-fold acceleration of the autofocusing without decreasing its accuracy (Tab. 3). The smaller gain in computational complexity comes from a relatively sparse sampling of the autofocus curve due to large DoF of the employed optical system.
Figures 9(b)-9(e) compare the amplitudes of the pair of the object waves in the hologram registration plane, Figs 9(b) and (c), and in the in-focus plane, Figs 9(d) and (e). In this case, we did not include the phase images for the sake of a concise presentation. One can notice the transverse, mutual shift of the images in the original plane, Figs 9(b) and (c), which indicates the defocused registration conditions. The shift is removed after propagation to the in-focus plane at \(z=21.70\)\(\upmu\)m.
Figure 8: Full field of view of two object wave amplitudes corresponding to illumination at (a) +13.5\({}^{\circ}\) and (b) -13.5\({}^{\circ}\); red rectangles indicate a region of interest that was used for evaluation of the focusing conditions; the sample is USAF resolution test target; the object waves were captured with DHM in Mach-Zehnder off-axis configuration.
### Summary of the achieved speed-up
The achieved autofocusing speed-up is summarized in Tab. 4, in which we express the computational gain of the proposed algorithm as a ratio of the numbers of probed locations in the uniform search strategy and GSS-PI. Using this criterion, the largest, 136.4-fold computational gain was obtained for autofocusing case 2, i.e., particle localization in lensless DHM. This tremendous computational gain was caused by a very large axial search range, which resulted in numerous probed axial locations for the uniform search strategy. In the case of the image plane holographic configurations, the achieved speed-up was 20-fold and 7.1-fold for autofocusing cases 1 and 3, respectively. The smaller acceleration gain for case 3 comes from the relatively large axial search step \(\Delta z\). Generally, the autofocusing acceleration with GSS-PI is expected to be larger for more complex autofocusing problems (wider search range and/or higher required autofocusing accuracy).
Table 4 also includes computational times of the autofocusing procedures for all analyzed cases. Both autofocusing algorithms were implemented in Python and computed on Intel Core
\begin{table}
\begin{tabular}{l c c} \hline \hline & _Uniform search_ & _GSS-PI_ \\ \hline _Number of investigated planes_ & 50 & 7 \\ _Found in-focus location [\(\mu\)m]_ & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Results of autofocusing for the USAF resolution test target.**
Figure 9: Autofocusing results for USAF resolution test target: (a) defocus function evaluated with the autofocusing method proposed in [15]; zoomed areas of amplitudes of two object waves, corresponding to different illumination directions: (b, d) +13.5\({}^{\circ}\), (c, e) -13.5\({}^{\circ}\), in the hologram acquisition plane (b, c) and in the found in-focus plane (d, e).
i7-7700HQ 2.80 GHz equipped with 32 GB of RAM. The obtained accelerations comply with the theoretical gain related to the reduced number of probed locations.
Lastly, in all analyzed cases, the difference between the found in-focus locations for the uniform search strategy and GSS-PI was within the declared tolerance _tol_.
## 5 Conclusions
In this paper, we proposed a versatile acceleration method for autofocusing in DHM that can be applied to various focus metrics and DHM configurations. The method is suitable for all autofocusing approaches that investigate the focus conditions in multiple locations. Our method replaces the inefficient uniform search strategy of the conventional autofocusing algorithms with a suitable optimization tool, i.e., GSS-PI, to find the minimum of the defocus function, thus limiting the number of probed locations. The downside of GSS-PI is that to guarantee convergence to the focal location, the defocus function should be unimodal in the declared search range. This limitation can be potentially overcome by performing the search independently in several subintervals or by carefully selecting the region of interest (for the case of nonoverlapping samples). The computational gain of the proposed algorithm offers a promising perspective of realizing demanding autofocusing tasks (large search range, high required accuracy, numerous samples) in a reasonable time.
The proposed GSS-PI approach was applied to three diverse autofocusing cases, providing a computational gain in a range from 136-fold to 7-fold, depending on the complexity of the autofocusing task. Crucially, the achieved speed-up did not deteriorate the autofocusing accuracy. The three examples used for validation of the proposed approach are of special relevance for coherent imaging and metrology. The first one relates to a small uncontrolled defocus arising from unwanted sources (movement of the sample, system vibrations, etc.) when imaging a biological sample. This example demonstrates the possibility of achieving active focusing at a low time cost to always set the best in-focus image of the sample for every frame. The second case addresses lensless DHM, where imaging the sample to its best in-focus plane by numerical propagation is crucial to visualize and characterize the sample in a vast range of applications such as, for instance, tracking flowing particles, sperm cell sorting, and biological monitoring of living cells. Lastly, the third example deals with fine-tuning to get the best in-focus image for the same sample with different tilted beam illuminations. This is an example of applying autofocusing to system calibration in holographic tomography before assembling the final volumetric image.
**Funding.** This work has been partially funded by the National Science Center Poland (SONATA 2020/39/D/ST7/03236), the grant funded by the Scientific Council for the Discipline of Automatic Control, Electronics and Electrical Engineering (Warsaw University of Technology), BIOTECHMED-1 project granted by Warsaw University of Technology under the program Excellence Initiative: Research University (ID-UB) and Foundation for Polish Science FNP (START 2020). Also, part of this work has been supported by the Ministerio de Economia y Competitividad under the project FIS2017-89748-P and the project NanoTech4ALS no. 12/EuroNanoMed/2016, funded under the EU FP7 M-ERA.NET program.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{_Autofocusing case_} & \multicolumn{2}{c}{_Number of \(f(z)\) evaluations_} & \multirow{2}{*}{_Computational gain_} & \multicolumn{2}{c}{_Computational time [s]_} \\ \cline{2-3} \cline{5-6} & _uniform search_ & _GSS-PI_ & & _uniform search_ & _GSS-PI_ \\ \hline 1 & 200 & 10 & 20 & & \\ 2 & 1500 & 11 & 136.4 & & \\ 3 & 50 & 7 & 7.1 & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Summary of the computational gain of GSS-PI
**Acknowledgments.** We thank Barbara Lukomska, Katarzyna Drela and Hanna Trusiak from NeuroRepair Department, Mossakowski Medical Research Institute, Polish Academy of Sciences, Warsaw, Poland for preparing the cells studied in Fig. 4.
**Disclosures.** The authors declare no conflicts of interest.
**Data availability.** Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2301.13820 | Explaining Large Language Model-Based Neural Semantic Parsers (Student
Abstract) | While large language models (LLMs) have demonstrated strong capability in
structured prediction tasks such as semantic parsing, little research has
explored the underlying mechanisms of their success. Our work studies
different methods for explaining an LLM-based semantic parser and qualitatively
discusses the explained model behaviors, hoping to inspire future research
toward better understanding them. | Daking Rai, Yilun Zhou, Bailin Wang, Ziyu Yao | 2023-01-25T16:12:43Z | http://arxiv.org/abs/2301.13820v1 | # Explaining Large Language Model-Based Neural Semantic Parsers (Student Abstract)
Daking Rai,1 Yilun Zhou,2 Bailin Wang,2 Ziyu Yao1
1 George Mason University,
2 Massachusetts Institute of Technology
[email protected], [email protected], [email protected], [email protected]
###### Abstract
While large language models (LLMs) have demonstrated strong capability in structured prediction tasks such as semantic parsing, little research has explored the underlying mechanisms of their success. Our work studies different methods for explaining an LLM-based semantic parser and qualitatively discusses the explained model behaviors, hoping to inspire future research toward better understanding them.
## Introduction
Semantic parsing is the task of mapping natural language utterances to their logical forms, such as SQL queries or lambda expressions, for database or knowledge base querying. Despite its structured prediction nature, recent work has shown that a large language model (LLM) that generates output sequentially can achieve comparable or even better performance than the traditional structured decoders [2]. However, why these LLMs do well in semantic parsing is still unclear.
In this paper, we seek to provide one of the first studies toward explaining LLM-based neural semantic parsers. We use the text-to-SQL semantic parsing task [23] and the UnifiedSKG model [22] for a case study. We empirically explore a set of local explanation methods and discuss the explanation results both quantitatively and qualitatively.
## Method
(1) LIME [14] generates an explanation by training locally-faithful interpretable models with the dataset obtained by perturbing the prediction instance. (2) Shapley value measures the importance of a feature by its average marginal contribution to the prediction score. (3) Kernel SHAP [10] is another efficient way of estimating Shapley values by training a linear classifier. (4) LERG [24] is a set of two approaches, LERG_L and LERG_S, recently adapted from LIME and Shapley value to conditioned sequence generation tasks. When applying these methods to explain an LLM-based semantic parser, we consider each output token as one prediction and attribute it to the input features. (5) Attention: Prior work has revealed that attention may be interpreted as feature importance. Therefore, we also introduce an attention-based local explanation method, where the feature attribution is calculated by averaging the last layer of the multi-headed cross-attention weights.
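To make method (5) concrete, a minimal sketch using the Hugging Face transformers API follows; the "t5-base" checkpoint is a placeholder for the fine-tuned UnifiedSKG weights, and only the last cross-attention layer is averaged, as described above:

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("t5-base")  # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

def attention_attribution(source_text, target_sql):
    # Returns a (target_len, source_len) matrix: one attribution row per
    # generated SQL token, averaged over the heads of the last layer.
    enc = tok(source_text, return_tensors="pt")
    tgt = tok(target_sql, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**enc, labels=tgt, output_attentions=True)
    last_layer = out.cross_attentions[-1][0]   # (heads, tgt_len, src_len)
    return last_layer.mean(dim=0)
```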
## Experimental Setup
In our experiments, we consider the task of text-to-SQL semantic parsing where the goal is to generate a SQL query given a natural language question and the database schema (i.e., tables and columns included in the database) as input. We experiment with UnifiedSKG [22], one of the state-of-the-art models, which adopts a T5 encoder-decoder structure.1 Following Xie et al. [2], we train and evaluate the parser on the Spider dataset [23].
Footnote 1: We used the “T5_base_prefix_spider_with_cell_value” version from [https://github.com/HKUNLP/UnifiedSKG](https://github.com/HKUNLP/UnifiedSKG). We did not use the T5-3B version because of the large computational demand, which we will discuss in Section.
Through the experiments, we seek to answer two _Research Questions (RQs)_: (1) _Which local explanation method is the most faithful to explaining the LLM-based UnifiedSKG parser?_ (2) _How well does the explanation align with human intuitions?_ To answer RQ1, we follow Tuan et al. [2] and compare different explanation methods on two metrics: (a) Sufficiency measures the perplexity when keeping only the top-K% most important features by each explanation method; the lower the better/faithful. (b) Necessity measures the perplexity change when the top-K%
Figure 1: Necessity (left) and Sufficiency (right) scores when removing or keeping the top-K% important features.
most important features are removed; the higher the better/faithful. To answer RQ2, we qualitatively discuss the most faithful explanation results.
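A sketch of how the two metrics could be computed for a single instance; replacing dropped tokens with the pad token is our assumption, not necessarily the original protocol of [2]:

```python
import torch

def sequence_perplexity(model, input_ids, labels):
    # Perplexity of the gold SQL under the model for a (possibly perturbed) input.
    with torch.no_grad():
        loss = model(input_ids=input_ids, labels=labels).loss
    return torch.exp(loss).item()

def sufficiency_and_necessity(model, pad_id, input_ids, labels, importance, k=0.2):
    # importance: one saliency score per input token, aggregated over outputs.
    n_top = max(1, int(k * input_ids.shape[1]))
    top = importance.topk(n_top).indices
    keep_only_top = torch.full_like(input_ids, pad_id)
    keep_only_top[0, top] = input_ids[0, top]
    drop_top = input_ids.clone()
    drop_top[0, top] = pad_id
    return (sequence_perplexity(model, keep_only_top, labels),   # sufficiency
            sequence_perplexity(model, drop_top, labels))        # necessity
```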
### Experimental Results
**Faithfulness.** The results in Figure 1 show that LERG_S has the best performance on both the sufficiency and necessity metrics, with Kernel SHAP performing comparably. In general, we observe that Shapley value-based explanation methods yield more faithful explanations than the other methods. In addition, we also found that attention-based explanations are more faithful than LIME and LERG_L.
**Plausibility.** Using LERG_S as a lens, we qualitatively study how UnifiedSKG works. We define _plausible_ explanations as those which align well with human intuition. In our study, we classify each explanation as plausible or partially plausible. Interestingly, we did not find any explanation that is completely implausible. Under this setup, we investigate the four aspects listed below (Figure 2): **(1) Feature Attribution for (In)correct Predictions**: We randomly sample 20 examples where the model makes correct and incorrect predictions, respectively. We find that in most cases (85% for correct and 70% for incorrect), LERG_S generates a plausible explanation for both types. **(2) Different Hardness Levels**: We randomly sample 20 examples for each hardness level (easy, medium, hard, and extra hard), as defined by the Spider benchmark based on the SQL complexity. We observed that in most cases the model behaviors are in line with human intuitions even at the extra hard level (80%; \(>\)90% for other levels). **(3) Compositional Generalization**: We seek to understand whether the model attributes the output fragments to the correct features compositionally when it makes correct predictions. We conducted a similar manual examination as before and observed that in most (80%) cases our model shows compositionally generalizable feature attribution. **(4) In-domain vs. Out-of-domain**: As the Spider training and dev sets are split by databases (which could be seen as different domains), we also manually compare the model explanations in in-domain and out-of-domain cases. We observe that for both cases (75% and 80%, respectively), the generated explanations were mostly plausible.
## Discussion and Future Directions
Our study has revealed several challenges and opportunities in explaining an LLM-based semantic parser: **(1) Computational costs**: Most local explanation methods require model inference over a large set of input perturbations, which is computationally inefficient. Future work may look into improving the attention-based explanation method, which does not rely on perturbations and hence could save much computation. **(2) Feature interaction**: Traditional feature attribution does not provide information about how features (e.g., question tokens and contextual database schema items) interact with each other. Future work may uncover these interactions to gain deeper insights into how the model works. **(3) Explanation for user understanding**: Current saliency maps encompass a lot of information. Future work could examine how to present the information in a concise and friendly way such that users could easily grasp the intuition of the model prediction and verify its correctness. **(4) Explanation for debugging**: Future work should also investigate how the local explanation results could be used to probe and debug a semantic parser, such as to improve their capability in compositional generalization.
|
2305.04673 | PreCog: Exploring the Relation between Memorization and Performance in
Pre-trained Language Models | Pre-trained Language Models such as BERT are impressive machines with the
ability to memorize, possibly generalized learning examples. We present here a
small, focused contribution to the analysis of the interplay between
memorization and performance of BERT in downstream tasks. We propose PreCog, a
measure for evaluating memorization from pre-training, and we analyze its
correlation with the BERT's performance. Our experiments show that highly
memorized examples are better classified, suggesting memorization is an
essential key to success for BERT. | Leonardo Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto | 2023-05-08T12:51:00Z | http://arxiv.org/abs/2305.04673v2 | # PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models
###### Abstract
Pre-trained language models such as BERT are impressive machines with the ability to memorize, and possibly generalize, learning examples. We present here a small, focused contribution to the analysis of the interplay between memorization and performance of BERT in downstream tasks. We propose \(PreCog\), a measure for evaluating memorization from pre-training, and we analyze its correlation with BERT's performance. Our experiments show that highly memorized examples are better classified, suggesting memorization is an essential key to success for BERT.
## 1 Introduction
Pre-trained language models (PTLMs) Peters et al. (2018); Devlin et al. (2019); Liu et al. (2019) are intriguing machines dominating the arena of NLP tasks with their ability to memorize generalizations of texts in synthetic neurons. After long pre-training on large amounts of unlabeled data, PTLMs have been shown to learn downstream tasks effectively with limited labeled data Howard and Ruder (2018) and to generalize to out-of-distribution examples Hendrycks et al. (2020). Extensive studies have shown that these PTLMs tend to mimic traditional linguistic syntactic models Jawahar et al. (2019) and traditional NLP pipelines Tenney et al. (2019). Hence, a crucial issue is to clarify why PTLMs exploit pre-training better than traditional NLP modules exploit annotated corpora.
Understanding the learning process of PTLMs may help in understanding their results in downstream tasks and in improving their linguistic representations in scenarios where they fail Kumar et al. (2020). Indeed, unlike traditional general NLP modules in pipelines, PTLMs need to be fine-tuned for the specific tasks Devlin et al. (2019) and, eventually, domain-adapted on the specific language of the novel corpus Jin et al. (2022). Moreover, like many other machine learning models, fine-tuned PTLMs lose their ability to solve a task if subsequently fine-tuned on another task Xu et al. (2020), although they apparently do not change their language models Merchant et al. (2020). This phenomenon is known as _catastrophic forgetting_ Kirkpatrick et al. (2017) in machine learning. Hence, it is still unclear how these models exploit pre-training and training examples.
PTLMs, such as BERT Devlin et al. (2019), have been shown to have an impressive ability to memorize and possibly generalize learning examples. This ability has been largely investigated as it may be extremely harmful: PTLMs may reveal sensitive information acquired during pre-training. For example, the memories of Generative Pre-trained Transformers (GPTs) Radford and Narasimhan (2018) have been probed and shown to yield phone numbers and usernames Carlini et al. (2021); Thakkar et al. (2021). However, this simple ability to memorize may play a crucial role in the performance of PTLMs in downstream tasks.
This paper presents a small, focused contribution on the role of memorization in the performance of BERT in downstream tasks. We propose \(PreCog\), a very simple measure of coverage that evaluates how much pre-training covers the information needed to model a given example or, better, whether BERT has already partially seen the example - it _pre_-cognizes the example. The aim is to evaluate whether PreCog recognizes the examples on which BERT, adapted to a downstream task, performs better at inference. We have extensively experimented with PreCog by using BERT over the GLUE tasks Wang et al. (2018), and we observed the ability of PreCog to predict the examples on which a task-adapted BERT performs better. Besides being a predictive measure, PreCog showed that example memorization is a crucial part of the success of BERT.
## 2 Related Work
The ability of linguistic neural models to memorize facts is beyond doubt. This ability has been deeply explored as it raises privacy issues. Indeed, LSTM language models remember facts so well that individual facts can be retrieved during inference Carlini et al. (2019). These facts may reveal sensitive personal information such as names and addresses associated with people. Moreover, revitalizing the idea of sparse distributed memories Kanerva (1988), Petroni et al. (2019) hypothesized that large language models might be used as a clever and inexpensive way to build up knowledge bases effortlessly. Even in other areas like image classification, it appears that large neural networks may memorize entire datasets, as these networks achieve very low error rates over datasets with randomly generated target labels Zhang et al. (2017). Yet, it is still unclear to what extent this ability to memorize facts helps neural networks in downstream tasks.
A key research question is to understand how large pre-trained neural networks generalize over memorized examples. Pre-training seems to be a winning strategy to boost generalization. In fact, pre-trained models generalize better on out-of-distribution data and can detect such data better than non-pre-trained methods Hendrycks et al. (2020). However, these models need a significant number of training instances to exploit this generalization ability in downstream tasks Tanzer et al. (2022). Hence, since fine-tuning on specific datasets seems to be connected to _catastrophically forgetting_ examples Xu et al. (2020), generalization and memorization can be strictly correlated.
To explore the correlation between memorization and performance on downstream tasks, we propose a mechanism for analyzing sentence coverage. In particular, we investigate how much sentences have been seen in the pre-training phase of transformer-based PTLMs using perturbation masking methods. These methods allow us to observe the impact of pre-training on the performance of downstream tasks. This novel measure is needed as current measures for understanding coverage, such as "forgetting events" Toneva et al. (2019) and counterfactual memorization Zhang et al. (2021), mix performance and actual memorization.
## 3 Method and Data
This section introduces PreCog, our measure to evaluate how much pre-training covers the information needed to model a given example (Sec. 3.1), two comparative measures, \(Length\) and \(LexCov\) (Sec. 3.2), and the experimental setting (Sec. 3.3).
### PreCog: a measure to evaluate pre-training coverage
BERT Devlin et al. (2019) is pre-trained on billions of text tokens by using Masked Language Modeling (MLM) as one of its two main learning tasks. Indeed, during pre-training, MLM randomly selects and masks 15% of all tokens in any given sequence. These tokens are either (a) replaced with the special token [MASK], (b) replaced by a random token, or (c) kept unchanged, with respective probabilities of 80%, 10%, and 10%. Then, BERT learns to predict the masked tokens. This task is learned to near overfitting. Hence, one of the main abilities of BERT is unmasking masked tokens.
We aim to capture to what extent a sequence of tokens is covered by pre-training in transformers such as BERT. For this reason, we build on the core capacity of BERT, that is, unmasking masked tokens. Hence, if BERT can predict masked tokens of a given sequence of tokens, it possibly has the knowledge to better deal with that sequence. Our intuition is that a measure built on unmasking masked tokens describes the "prior" knowledge of BERT over sequences.
Given a sentence or text excerpt as a list of tokens \(x=[x_{1},...,x_{T}]\), our function \(PreCog(x)\) is defined as follows. Firstly, we mask each token in \(x\) one by one, obtaining T different sequences \(\hat{x}_{i}=[x_{1},...,x_{i-1},[MASK],x_{i+1},...,x_{T}]\). Then, the measure is straightforwardly defined as:
\[PreCog(x)=\frac{\sum_{i=1}^{T}\delta(x_{i}\in BERT_{MLM}(\hat{x}_{i}))}{T} \tag{1}\]
where \(BERT_{MLM}(\hat{x}_{i})\) is the set of the first \(100\) tokens predicted by BERT for the position \(i\) and \(\delta(x_{i}\in X)\) is 1 if \(x_{i}\in X\) and 0 otherwise.
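As a concrete illustration, here is a minimal sketch of this computation using the HuggingFace transformers library; the checkpoint name bert-base-uncased and the helper name precog are our own illustrative choices, while the top-100 cutoff follows Eq. (1).

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def precog(text: str, top_k: int = 100) -> float:
    """Fraction of tokens recovered by the MLM head (within the top
    `top_k` predictions) when each token is masked in turn."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    positions = range(1, len(ids) - 1)   # skip [CLS] and [SEP]
    hits = 0
    for i in positions:
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        top = torch.topk(logits, top_k).indices
        hits += int((top == ids[i]).any())
    return hits / len(positions)

print(precog("The cat sat on the mat."))
```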
PreCog is a very simple measure. Yet, it may reveal important facts about how BERT uses pre-training text in downstream tasks. A very important issue is to understand whether PreCog correlates with the performance of BERT on these tasks. A positive and steady correlation would be an important hint for understanding the role of pre-training.
### Alternative Coverage Measures
To comparatively evaluate \(PreCog\), we use two measures: Length and LexCov. Length aims to correlate the accuracy of BERT with the length of samples, and LexCov with the coverage of BERT's vocabulary. The measures are defined as follows (see the sketch after this list):
* \(Length(x)=\frac{T-min_{D}}{max_{D}-min_{D}}\) where T is the length of \(x\), \(min_{D}\) and \(max_{D}\) are the min and the max length of samples in a dataset \(D\);
* \(LexCov(x)=\frac{T-|OOV(x)|}{T}\) where \(OOV(x)\) is the set of the out-of-vocabulary words of the example \(x\) with respect to BERT's vocabulary.
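In code, the two measures reduce to a few lines; a minimal Python sketch, assuming whitespace-tokenised examples and with function names of our own choosing:

```python
def length_score(tokens, min_d, max_d):
    """Min-max normalised length of an example within dataset D."""
    return (len(tokens) - min_d) / (max_d - min_d)

def lexcov(tokens, vocab):
    """Share of tokens covered by the model's vocabulary."""
    oov = [t for t in tokens if t not in vocab]
    return (len(tokens) - len(oov)) / len(tokens)

# toy usage
vocab = {"the", "cat", "sat", "on", "mat"}
print(length_score("the cat sat".split(), min_d=1, max_d=10))  # 0.222...
print(lexcov("the cat snoozed".split(), vocab))                # 0.666...
```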
### Experimental set-up
To experiment with a variety of tasks, we use the GLUE benchmark Wang et al. (2018) containing tasks for: (1) natural language inference, that is, Multigenre NLI (MNLI) Williams et al. (2018), Question NLI (QNLI) Wang et al. (2018), Recognizing Textual Entailment (RTE) Bentivogli et al. (2009), and Winograd NLI (WNLI) Levesque et al. (2012); (2) semantic similarity, that is, the Microsoft Research Paraphrase Corpus (MRPC) Dolan and Brockett (2005), the Semantic Textual Similarity Benchmark (STS-B) Cer et al. (2017), and Quora Question Pairs (QQP) Sharma et al. (2019); (3) sentiment classification, that is, the Stanford Sentiment Treebank (SST-2) Socher et al. (2013); and (4) linguistic acceptability, that is, the Corpus of Linguistic Acceptability (CoLA) Warstadt et al. (2019). SST-2 and CoLA are single-sentence tasks.
We used two versions of BERT Devlin et al. (2019): \(BERT_{FT}\) with fine-tuning and \(BERT_{DA}\) with domain-adaptation. Both are based on the pre-trained version of BERTforSequenceClassification (see Wolf et al. (2020)). The fine-tuning procedure is that of traditional BERT. For each downstream task, we chose the Adam optimizer Kingma and Ba (2015) with a batch size of \(16\) and fine-tuned BERT for 4 epochs, following the original paper Devlin et al. (2019). For hyperparameter tuning, the best learning rate differs across tasks and is chosen between \(1\times 10^{-5}\) and \(5\times 10^{-5}\), as in the original paper.
We conduct our experiments on NVIDIA RTX A6000 GPUs with CUDA v11.3. We run the models from the Transformers library Wolf et al. (2020) using PyTorch v1.12.0.
To study the correlation between the performance of BERT on one side and each of the three measures - PreCog, Length, or LexCov - on the other, we divided the sequences \(x\) in the test sets into 5 bins according to the value of the measure, plotted histograms of the accuracies of BERT with respect to the three measures (Fig. 1), and computed the Pearson correlation of each measure with the accuracies (Tab. 2).
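The binning-and-correlation step can be sketched as follows; scipy's pearsonr is the real API, while precog_scores and is_correct are placeholder arrays standing in for the per-example measure values and prediction outcomes:

```python
import numpy as np
from scipy.stats import pearsonr

def binned_accuracy(scores, correct, n_bins=5):
    """Per-bin accuracy after splitting examples into equal-width score bins."""
    scores = 100.0 * np.asarray(scores)          # measures live in [0, 1]
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 100.0, n_bins + 1)
    mids, accs = [], []
    for k, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        mask = (scores >= lo) & (scores < hi) if k < n_bins - 1 else (scores >= lo)
        if mask.any():
            mids.append((lo + hi) / 2.0)
            accs.append(correct[mask].mean())
    return mids, accs

rng = np.random.default_rng(0)
precog_scores = rng.uniform(size=500)                # placeholder measure values
is_correct = rng.uniform(size=500) < precog_scores   # toy prediction outcomes
mids, accs = binned_accuracy(precog_scores, is_correct)
r, p = pearsonr(mids, accs)   # correlation between the measure and bin accuracies
```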
## 4 Experimental Results and Discussion
Accuracies reported in Fig. 1a and Fig. 1c and used in Tab. 2 are the weighted sum of accuracies in each GLUE task. This guarantees that the 20-point bins have a sufficient set of samples to compute stable accuracies.
PreCog correlates with the accuracy of \(BERT_{FT}\) better than Length and LexCov (see Fig. 1a and Tab. 2). Accuracies of PreCog in the different bins degrade more uniformly than those of the other two measures (red solid line in Fig. 1a). Moreover, the Pearson correlation between PreCog values and the accuracies of \(BERT_{FT}\) is 0.9737 with a p-value of 0.005, higher than that of LexCov (0.9014, p-value 0.037) and of Length, which is not correlated (see Tab. 2).

Figure 1: Accuracy plots of \(BERT_{FT}\) for the weighted sum of accuracies in each GLUE task.
PreCog values better separate examples in the test sets. At first glance, LexCov may seem better at separating samples with high accuracy expectations from those with low ones. Samples with a LexCov value below 40 have low accuracy (see Fig. 1a). However, samples with LexCov between 0 and 40 are rare (Fig. 1b). Better observations are derived by plotting accuracies over bins rescaled according to their coverage (Fig. 1c). Indeed, PreCog separates samples better than LexCov (red solid line vs. dashed blue line in Fig. 1c): samples from 18,000 to 55,000 fall into two bins for PreCog and only one bin for LexCov. Hence, PreCog has better discriminative power than LexCov.
Results are substantially confirmed on a per-task basis: PreCog is a better predictor of task accuracy and a better separator of sample classes (see Tab. 1). Accuracies of \(BERT_{FT}\) are generally higher for samples with PreCog in the interval \([80,100]\) than for samples with the other two measures in the same interval. \(LexCov\) has higher accuracy for samples in \([80,100]\) only for RTE. Moreover, accuracies of samples in the interval \([80,100]\) are always higher than those in the interval \([0,80]\) for both PreCog and LexCov. Yet, PreCog partitions samples more evenly, and the differences in accuracies between the intervals \([80,100]\) and \([0,80]\) are generally higher.
Moreover, domain adaptation does not change the above findings. Accuracies for \(BERT_{DA}\) are generally higher than those without domain adaptation for all the tasks except SST-2 and WNLI (Tab. 1). Moreover, focusing on PreCog, the overall increase in accuracies on CoLA, MNLI, and RTE derives from an increase on the samples in the interval \([80,100]\). This fact suggests that \(BERT_{DA}\) gains a better model for these samples.
As a final observation, BERT seems to behave better on sentences that have been, at least, partially seen during pre-training. Indeed, PreCog is a measure capturing how much the sentence is covered with the pre-training task Masked Language Model (MLM). Typically, BERT overfits on MLM during pre-training. Then, PreCog is a measure telling whether sentences have already been partially seen. Instead, LexCov describes how many words of sentences are covered by BERT's vocabulary. Since there is a great difference in predicting accuracy on tasks between PreCog and LexCov, we can conclude that BERT behaves better when general knowledge of the target sentence is already acquired during pre-training.
## 5 Conclusion
Memorization of pre-training examples plays a very important role in the performance of BERT. Indeed, our PreCog, which measures how much memorized pre-training knowledge covers target examples, is highly correlated with BERT's performance in inference. PreCog can then also be used as a measure of confidence for BERT-based decisions in downstream tasks.
\begin{table}
\begin{tabular}{l c c|c|r c c|r c c|r c c} \hline \hline
 & \multicolumn{2}{c|}{Global} & & \multicolumn{3}{c|}{Length} & \multicolumn{3}{c|}{LexCov} & \multicolumn{3}{c}{PreCog} \\
Task & \(BERT_{FT}\) & \(BERT_{DA}\) & interval & \# samples & \(BERT_{FT}\) & \(BERT_{DA}\) & \# samples & \(BERT_{FT}\) & \(BERT_{DA}\) & \# samples & \(BERT_{FT}\) & \(BERT_{DA}\) \\ \hline
CoLA & 0.920 & 0.935 & [80,100] & 499 & 0.906 & 0.918 & 857 & 0.926 & 0.940 & 577 & 0.951 & 0.972 \\
 & & & [0,80] & 446 & 0.935 & 0.955 & 88 & 0.852 & 0.886 & 368 & 0.850 & 0.878 \\ \hline
MNLI & 0.716 & 0.721 & [80,100] & 7782 & 0.717 & 0.721 & 6512 & 0.729 & 0.7245 & 3508 & 0.759 & 0.770 \\
 & & & [0,80] & 1601 & 0.716 & 0.718 & 2813 & 0.660 & 0.660 & 3603 & 0.690 & 0.690 \\ \hline
MRPC & 0.806 & 0.861 & [80,100] & 59 & 0.780 & 0.831 & 924 & 0.818 & 0.877 & 376 & 0.867 & 0.830 \\
 & & & [0,80] & 1590 & 0.806 & 0.861 & 725 & 0.789 & 0.839 & 1273 & 0.787 & 0.854 \\ \hline
QNLI & 0.808 & 0.829 & [80,100] & 3245 & 0.802 & 0.832 & 3123 & 0.809 & 0.831 & 1769 & 0.832 & 0.846 \\
 & & & [0,80] & 1970 & 0.817 & 0.825 & 2092 & 0.867 & 0.827 & 3446 & 0.796 & 0.821 \\ \hline
QQP & 0.822 & 0.845 & [80,100] & 32728 & 0.820 & 0.8458 & 28962 & 0.823 & 0.843 & 12810 & 0.840 & 0.860 \\
 & & & [0,80] & 3960 & 0.834 & 0.842 & 7886 & 0.816 & 0.850 & 23908 & 0.812 & 0.837 \\ \hline
RTE & 0.646 & 0.653 & [80,100] & 146 & 0.6713 & 0.628 & 155 & 0.716 & 0.728 & 365 & 0.653 & 0.614 \\
 & & & [0,80] & 121 & 0.613 & 0.628 & 113 & 0.549 & 0.538 & 22 & 0.648 & 0.649 \\ \hline
SST-2 & 0.939 & 0.924 & [80,100] & 151 & 0.907 & 0.887 & 607 & 0.951 & 0.946 & 333 & 0.970 & 0.970 \\
 & & & [0,80] & 655 & 0.947 & 0.933 & 199 & 0.905 & 0.839 & 473 & 0.918 & 0.892 \\ \hline
WNLI & 0.565 & 0.594 & [80,100] & 31 & 0.452 & 0.884 & 61 & 0.590 & 0.632 & 39 & 0.590 & 0.645 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Accuracies on the GLUE tasks computed by grouping samples according to the values of the three measures - Length, LexCov, and PreCog - for \(BERT_{FT}\) and \(BERT_{DA}\).
\begin{table}
\begin{tabular}{l|c c} \hline \hline _Measure_ & _Correlation_ & _p-value_ \\ \hline Length & -0.5922 & 0.292 \\ LexCov & 0.9014 & 0.037 \\ PreCog & 0.9737 & 0.005 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pearson’s correlation between the measures and the accuracy bins of \(BERT_{FT}\) for the combined GLUE tasks.
As BERT's success is partially due to simple memorization of examples, and given the overwhelming presence of ChatGPT, one area of future research should be to better understand the relation between actual training examples and inferences, in order to give credit to knowledge producers.
## Limitations
This paper presents a small, focused contribution towards understanding the relation between memorization and performance of pre-trained language models (PTLMs). However, we leave some issues unresolved with respect to this longer-term goal. Indeed, we have explored our idea only for a specific PTLM, that is, BERT, with a specific pre-training task, that is, masked language modeling (MLM). Future analysis should explore whether our findings hold for other PTLMs based on MLM. Moreover, we have not explored to what extent task examples are really covered by the pre-training corpora used by PTLMs. The correlation between PreCog and the actual training examples should be investigated. Finally, PreCog is not suitable for PTLMs based on pre-training tasks other than MLM; other coverage measures should be defined for those cases.
|
2301.07855 | Digital Divide: Empirical Study of CIUS 2020 | As Canada and other major economies consider implementing "digital money" or
Central Bank Digital Currencies, understanding how demographic and geographic
factors influence public engagement with digital technologies becomes
increasingly important. This paper uses data from the 2020 Canadian Internet
Use Survey and employs survey-adapted Lasso inference methods to identify
individual socio-economic and demographic characteristics determining the
digital divide in Canada. We also introduce a score to measure and compare the
digital literacy of various segments of Canadian population. Our findings
reveal that disparities in the use of e.g. online banking, emailing, and
digital payments exist across different demographic and socio-economic groups.
In addition, we document the effects of COVID-19 pandemic on internet use in
Canada and describe changes in the characteristics of Canadian internet users
over the last decade. | Joann Jasiak, Peter MacKenzie, Purevdorj Tuvaandorj | 2023-01-19T02:52:42Z | http://arxiv.org/abs/2301.07855v3 | # Digital Divide: Empirical Study of CIUS 2020+
###### Abstract
As Canada and other major countries investigate implementing "digital money" or Central Bank Digital Currencies (CBDC), important questions need to be answered relating to the effect of demographic and geographic factors on the population's digital literacy. This paper uses the Canadian Internet Use Survey (CIUS) 2020 and survey versions of Lasso inference methods to assess the digital divide in Canada and determine the relevant factors that influence it. We find that a significant divide in the use of digital technologies, e.g., online banking and virtual wallet, continues to exist across different demographic and geographic categories. We also create a digital divide score that measures the survey respondents' digital literacy and provide multiple correspondence analyses that further corroborate these findings.
**Keywords:** Digital divide, inference after selection, Lasso, logistic regression, marginal effect, survey sample.
Introduction
As the Bank of Canada (BoC) investigates transitioning to a Central Bank Digital Currency (CBDC), the effect of the "digital divide" becomes a crucial factor in this transition. The digital divide arises between those who have been able to adapt to new digital technologies and those who have not. Similarly, it refers to the difference between those who are connected and possess enough digital literacy to use the internet and other online technologies, and those who are either not connected or lack the digital literacy to use the internet. The future utility of these new digital currencies and digital modes of payment in Canada depends on internet connectivity and digital literacy. The digital divide is a prime cause of the limited uptake of new financial technologies amongst communities with poor access to the internet (Maniff, 2020).
This paper examines the Canadian Internet Use Survey (CIUS) 2020 \(i.\) to study internet access and the use of the internet in relation to financial technologies in Canada, and \(ii.\) to assess the digital divide in Canada and what effects this divide could have on the use of financial services, digital currencies, and digital payments. Moreover, this paper contributes to the debate on the future digitalization of money in Canada by providing data-based arguments for (or against) policy options such as introducing the CBDC (Christodorescu et al., 2020; Maniff, 2020).
Our empirical study also provides information on the outcomes and progress of the Government of Canada's _High-Speed Access for All: Canada's Connectivity Strategy_ by examining the internet connectivity of Canadians and the factors associated with internet access and usage.1 Through Canada's connectivity strategy, the Canadian government recognizes the importance of high-speed internet for the economic and social well-being of Canadians.
Footnote 1: Available at: [https://ised-isde.canada.ca/site/high-speed-internet-canada/en](https://ised-isde.canada.ca/site/high-speed-internet-canada/en)
Specifically, we use three main data analysis techniques to investigate how gender, age, race, education, income, geographical location, aboriginal identity, immigration status, and language impact connectivity and contribute to the digital divide in Canada. Knowing what factors affect the digital divide will allow for a fact-based assessment of the government's strategy and help policymakers to make informed decisions regarding the
allocation of future resources.
We first perform survey-weighted logistic Lasso variable selection which reduces the dimensionality of the data. The logistic Lasso estimates are not suitable for inference since the Lasso selects the variables that have higher predictive power and trades off bias for variance. To address this issue, we use post-selection methods that are provably valid and allow for inference on the Lasso logit coefficients.
Second, we perform multiple correspondence analysis (MCA) and present the results in graphical form. The variable category groupings produced by MCA will be compared and contrasted with the logistic Lasso regression results.
In addition, we create a digital literacy score to measure the survey respondents' digital literacy. We compute the score of digital inclusion/digital divide and study its distributional properties in the entire and sub-samples of individuals with different demographic characteristics pertaining to the social groups with different origin, gender, age, location, and education level. We then examine the main explanatory variables that make up the score of digital inclusion among the variables in the dataset.
Over the past number of years, the use of physical cash in Canada has decreased dramatically; now, cash is used only in one of every three transactions (Huynh, 2017). This decrease in cash use is primarily due to the increased use of debit and credit cards. Recently, a digital technology known as cryptocurrency has made waves in the financial world. Many believe these cryptocurrencies could replace traditional currencies as the primary means of payment worldwide. Despite this speculation, the actual use of cryptocurrencies in Canada remains relatively low (Huynh et al., 2020).
This paper is motivated by cryptocurrencies and distributed ledger technology (DLT), commonly known as blockchain. According to Barr et al. (2021), DLTs have three main features: \(i.\) a ledger stored in multiple locations, \(ii.\) mechanisms to determine the accuracy of the data, \(iii.\) cryptographic security. These features separate cryptocurrencies like Bitcoin and Ethereum from types of digital currencies like PayPal and prepaid cards. These types of digital currencies do not travel from buyer to seller directly but instead travel through a "storage facility" during the electronic journey from buyer to seller (Bank of Canada, 2020). A possible reason for the slower than expected uptake in cryptocurrencies is that these currencies do not satisfy the traditional definition of money; that is, \(i\). medium of exchange, \(ii\). store of value, \(iii\). unit of account. As a result, cryptocurrencies have so far behaved more as speculative assets in the market, and their stability of value has been questionable (Adrian and Mancini-Griffoli, 2019).
Despite the slower than expected uptake of cryptocurrencies, countries worldwide have noticed increasing interest in these currencies and decreasing use of physical cash. These events have led many countries to start researching and, in some cases, implementing CBDC systems (Christodorescu et al., 2020). A CBDC system can be implemented in several ways. For example, according to Bordo and Levin (2017), a CBDC system could consist of private individuals having an account directly with the central bank, or commercial banks could have specialized accounts that hold the CBDC for individuals.
Many countries are concerned that if cryptocurrency use increases to the extent that these currencies are used in place of banks' credit systems, this could weaken a country's ability to implement monetary policy properly. If private digital currencies decrease banks' role in the monetary system, governments would no longer be able to affect the interest rates in the economy by controlling the rates at which banks borrow and lend (Brunnermeier et al., 2019). A CBDC could be a countermeasure to a scenario like this. Implementing a CBDC would create a direct channel for monetary policy to work through thereby restoring power to the country's monetary authority and not requiring direct regulation in regards to new emerging cryptocurrencies (Brunnermeier et al., 2019).
Since 2014, China, through its central bank, the People's Bank of China (PBOC) has implemented a form of digital currency known as the digital Renminbi (DCEP) (Barr et al., 2021). This CBDC is used in five major cities in China. Besides its CBDC, digital forms of payment are used more than physical forms in China, with WeChat and Alipay's digital wallets being the primary forms of payment (Brunnermeier et al., 2019).
Even if most major countries are not currently using a CBDC, many have begun to study the effects of their implementation and use. For example, the United States Federal Reserve released a study on CBDC in January 2022. The UK has studied the use of CBDC and determined that they are not prepared to transition to an entirely cashless society citing digital inclusion concerns (Barr et al., 2021). If the UK did begin to use a
CBDC they would use it along with physical cash (Barr et al., 2021). Canada has also stated it currently has no plans to introduce a CBDC (Carmichael, 2020).
The BoC has, however, introduced a "contingency plan" to implement a CBDC if physical cash was no longer used at all or if private digital currencies were used more than the Canadian dollar (Bank of Canada, 2020). Right now, however, neither of these two scenarios seems likely in the near future. Very few Canadians are using cryptocurrencies, and 39% of Canadians "would not be able to cope" if cash was no longer used in Canada (Huynh et al., 2020).
There are several potential advantages to using CBDCs compared to the traditional banking model used in Canada. First, a CBDC could be used and held by both individuals and businesses, thus potentially cutting out the commercial intermediary (Brainard, 2019). A safe and secure way to hold money would increase competition with banks for individual deposits. Financial institutions yield a great deal of market power. A CBDC, through direct competition for deposits with these institutions, could be a cheaper and simpler method than developing competition policies (Usher et al., 2021). Also, CBDC could make payments cheaper and faster than the traditional bank wire system (Barr et al., 2021).
It is important to remember when researching the development and use of CBDCs that they cannot be used by or assist individuals that do not have access to the internet or a fair amount of digital literacy (Barr et al., 2021). This is one of the reasons why research in Canada on internet access and usage is essential. Christodorescu et al. (2020) suggest that a "two-tier hierarchical trust infrastructure" with the country's central bank being the main authority and other financial institutions being the intermediary certificate authority would potentially allow for an offline capability of a CBDC, thereby making offline payments possible. However, this would still require the individual to have access to the internet at some point.
High-speed internet increases social progress and improves overall quality of life (Jordan, 2019). It is therefore vital to increase access to high-speed internet in Canada, not only in the case of implementing a CBDC but, more generally, to enhance quality of life and social progress. The government understands this need in Canada and, in the 2019 budget, the
federal government made a 1.7 billion dollar commitment to connecting all Canadians to reliable high-speed internet.
To increase access to high-speed internet we must understand the factors that influence whether an individual has internet access. Friedline et al. (2020) found that rural communities of colour have the lowest fintech rates. There is also a significant rural/urban divide regarding high-speed internet access in Canada, with only 37% of rural households having access to high-speed internet, compared to 97% of urban homes. The digital divide is even more significant for indigenous communities, with just 24% having access to high-speed internet.
Haight et al. (2014) use CIUS 2010 to examine the digital divide in Canada. Using standard regression techniques and several demographic and geographic variables, the authors predict internet access and social networking site usage. The study's main finding is that the digital divide continues to exist in Canada, with income, education, immigration status, urban living, and age all having a statistically significant effect on internet usage.
A lot may have changed over the last ten years regarding the population's digital literacy. Compared to the previous installment considered by Haight et al. (2014), CIUS 2020 provides more refined categories of variables and offers an up-to-date assessment of the current digital divide in Canada. In addition, we account for the dimensionality of the variables and use Lasso variable selection techniques that result in more predictive power for explaining the categorical variables of interest.
We confirm the importance of individual characteristics such as age, income and education. The novelty of our approach is in revealing a significant impact of visible minority status on the use of virtual wallets and, through the use of interaction variables, in determining that older single individuals are significantly impacted by the digital divide. To the authors' knowledge, we are the first to use these interaction variables. The use of these interaction variables, facilitated by the Lasso approach, allows us to show the complexity of the persisting digital divide.
This paper is organized as follows. Section 2 provides a description of the CIUS 2020 data. Section 3 lays out the estimation and testing approach of the paper. Section 4 reports the main results. We conclude in Section 5. Appendix A provides a description of
the sampling and weighting scheme used in CIUS 2020. Appendix B describes the technical aspects of the methods used in the paper. Further details on the digital literacy score are given in Appendix C, and additional inference results are reported in Appendix D.
## 2 Data description
CIUS 2020 is the most current data source on Canadian internet usage and comprises \(17,409\) observations on households across Canada. The survey includes answers from Canadians 15 years of age and older living in one of Canada's ten provinces. The survey has a cross-sectional design, which uses both landline and cellular phone numbers from Statistics Canada's dwelling frame. Statistics Canada uses stratified sampling at the census metropolitan area and census agglomeration level. The survey is filled out online by one member of the household who is 15 years of age or older and the overall response rate to the survey is 41.6%.
The data is appropriately weighted using sample weights. The weight variables are provided by Statistics Canada [see Appendix A for the stratification scheme and survey weights]. Properly weighting the data allows for the sample of the Canadian population used in CIUS 2020 to represent the whole population. However, the data excludes aboriginal Canadians living on reserves and Canadians living in the territories. The sample weight variable used in CIUS 2020 is based on independent estimates from Statistics Canada for each province's various age and sex groups.
A limitation to CIUS 2020 is that it is conducted off reserve. The data on internet use in Northern Canada will be forthcoming in the following Northern CIUS. Therefore, the analysis based on CIUS 2020 can be considered the first step in a long-term project exploring Canada's digital divide.
We perform survey-weighted logistic Lasso variable selection/inference for multiple categorical dependent variables to determine the relevant demographic and geographic factors impacting the digital divide. The dependent variables corresponding to different model specifications and the independent variables are described in Sections 2.1 and 2.2, respectively.
### Dependent variables
We consider the following logistic regression models where the dependent variables come from five questions asked to survey respondents.
* Model 1: "During the past three months have you used the internet from any location?"
* Model 2: "During the past three months have you conducted online banking?"
* Model 3: "During the past three months have you sent and received emails?"
* Model 4: "During the past twelve months have you used a virtual wallet to pay for goods over the internet?"
* Model 5: "During the past twelve months did you use a credit card previously entered or entered at the time of purchase to pay for goods over the internet?"
The internet use variable is a binary variable where respondents answered _Yes_ or _No_. The dependent variables 2-5 are not binary, with each variable having three categories; 1) _Yes_; 2) _No_; 3) _Not stated_. We test the hypothesis of Independence of Irrelevant Alternatives (IIA) to determine if the _Not stated_ category should be included.
The internet use question is used in this analysis to better determine what factors affect whether a person in Canada has access to the internet. Online banking, email use, and credit card dependent variables are used to judge what demographic factors affect a person's digital literacy. Determining what factors play a role in Canadian's digital literacy and internet connectivity could improve policymakers' ability to focus their efforts effectively when trying to reduce the digital divide in Canada. The analysis of these variables could also help the BoC know what groups of people will be affected by transitioning to a CBDC and a cashless economy.
The virtual wallet question determines what factors affect whether Canadians use virtual wallets when making payments. As the BoC investigates implementing a CBDC, knowing the demographic factors that affect whether someone uses virtual wallets plays an important role. The virtual wallet dependent variable question may be restrictive because
many people use virtual wallets to hold digital currencies as speculative assets rather than a liquid currency to spend online.
### Independent variables
The independent variables in this analysis are the same for the logistic regression models 1 to 5. These variables are income, education, employment status, aboriginal identity, visible minority status, immigration status, age, gender, location, type of household, language spoken at home, and province. All of them have two or more categories which are reported in the regression tables.
Some independent variables included an answer category _Not stated_. Unlike the case with the dependent variables, for these independent variables, we have still included this category in the regression. Many respondents who answered _Not stated_ to one question answered many others; therefore, removing their answers may bias results. There are \(12,124\) observations for the logit models 4 and 5. In the email use and online banking models there are \(17,268\) and \(17,135\) observations, and the internet use model includes all \(17,409\) observations in the survey.
The categories associated with a representative individual are omitted in each model as the comparison category for the logistic regression. That representative individual has the following characteristics: urban, age 45-54, male, non-aboriginal, English and non-official language speaker, not employed, some post-secondary education, not a visible minority, family household with children under 18, income of \(\$52,204\)-\(\$92,485\), landed immigrant (recent immigrant), and from the province of Alberta.
## 3 Survey-weight adjusted logit Lasso inference
The survey weights play an important role as they ensure that the results of the survey can be generalized to the entire population of Canadians. However, the existing Lasso-based estimation and inference methods, including the widely-used logit Lasso variable selection, are not directly adjustable for survey weights. To overcome this gap in the literature, this paper uses a survey-weighted logistic Lasso (svy LLasso hereafter) variable selection
for binary choice models, which extends the logistic Lasso to a survey environment. The inference procedures based on the svy LLasso estimator are further studied by Jasiak and Tuvaandorj (2023).
There are 41 independent categorical variables, many of which are expected to have negligible or no effect on the dependent variables considered. Moreover, some of the independent variables may interact with one another; for example, the household type and income variables may have a cross-effect on dependent variables such as internet use and online banking. Taking into account the second-order interactions gives 674 control variables, which is large relative to the sample size. Yet, there is no a priori guidance on which variables should enter the model. For these reasons, we take the logistic Lasso approach, which is well-suited to this problem, known to have optimality properties under a sparsity assumption, and offers automatic variable selection (Belloni et al., 2014; Mullainathan and Spiess, 2017).
Let \(\theta\) denote the parameter vector of the logistic regression including the slope parameters \(\beta\) and intercept \(\alpha\). The (non-negative) tuning parameter used in the Lasso is denoted by \(\lambda\). A survey-weighted logistic Lasso is based on minimizing the weighted negative log-likelihood function \(L(\theta)\) subject to \(\ell_{1}\) penalty on the parameter vector:
\[\min_{\theta=(\alpha,\beta^{\prime})^{\prime}\in\mathbb{R}^{p+1}}\left(-L( \theta)+\lambda\sum_{j=1}^{p}|\beta_{j}|\right), \tag{3.1}\]
where \(L(\theta)=n^{-1}\sum_{i=1}^{n}w_{i}(y_{i}x_{i}^{\prime}\theta-\log(1+\exp(x_{i }^{\prime}\theta)))\), \(x_{i}^{\prime}\theta=\alpha+\tilde{x}_{i}^{\prime}\beta\), and \((y_{i},x_{i}^{\prime})^{\prime}\in\mathbb{R}^{p+1},i=1,\ldots,n\), are the pairs of dependent and independent observations with the corresponding strictly positive survey weights \(w_{i},i=1,\ldots,n\). The sampling scheme used in CIUS 2020 is akin to simple stratified sampling (Cameron and Trivedi, 2009), so we treat \(w_{i}\) as given, and \(\{(y_{i},x_{i}^{\prime})^{\prime}\}_{i=1}^{n}\) as independent.
Note that, as is standard in the Lasso literature, only the "slope" parameters in \(\beta=(\beta_{1},\ldots,\beta_{p})^{\prime}\) are penalized in (3.1). We fit the model (3.1) using the R package glmnet. For the tuning parameter \(\lambda\), we use the package's default value chosen by 10-fold cross validation with the loss function "auc" (area under the ROC curve).
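Although the paper fits (3.1) with the R package glmnet, a rough Python analogue can be sketched with scikit-learn; the mapping \(C=1/(n\lambda)\) between the two penalty parametrisations is our own approximation, and the saga solver leaves the intercept unpenalised, as in (3.1):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def svy_llasso(X, y, w, lam):
    """Survey-weighted L1-penalised logit: minimises an objective proportional
    to -L(theta) + lam * ||beta||_1, with w entering as sample weights."""
    n = X.shape[0]
    clf = LogisticRegression(penalty="l1", solver="saga",
                             C=1.0 / (n * lam), max_iter=5000)
    clf.fit(X, y, sample_weight=w)
    return clf.intercept_[0], clf.coef_.ravel()
```

In practice the tuning parameter \(\lambda\) would still be chosen by cross-validation, as with the glmnet default described above.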
The logistic Lasso estimates are not suitable for inference since the Lasso selects the variables that have higher predictive power and trades off bias for variance. Due to its
computational and conceptual simplicity, we use a survey-version of the debiased Lasso (DB) method proposed by Zhang and Zhang (2014), Javanmard and Montanari (2014) and Xia et al. (2020) as the main inferential tool for the logit coefficients and the average marginal effects (AMEs) after variable selection by svy LLasso. It is based on the following one-step estimator constructed from the initial svy LLasso estimator \(\hat{\theta}\):
\[\tilde{\theta}^{DB}\equiv\hat{\theta}+H(\hat{\theta})^{-1}S(\hat{\theta}),\]
where \(H(\cdot)\) and \(S(\cdot)\) are the (sample) Hessian and the score functions for the full parameter vector in the logistic model. The one-step (or DB) estimator removes the bias of the initial svy LLasso estimator and has an asymptotic normal distribution, thus facilitating standard \(t\)-ratio-based inference.
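A minimal numpy sketch of this one-step update, under the reading that \(H\) is the Hessian of the negative weighted log-likelihood, so that the correction is a single Newton step from the svy LLasso fit:

```python
import numpy as np

def debiased_logit(theta_hat, X, y, w):
    """One-step (debiased) estimator: theta_hat + H(theta_hat)^{-1} S(theta_hat)."""
    Xc = np.column_stack([np.ones(len(y)), X])      # prepend the intercept column
    p = 1.0 / (1.0 + np.exp(-Xc @ theta_hat))       # fitted probabilities
    score = Xc.T @ (w * (y - p)) / len(y)           # weighted score S(theta_hat)
    hess = (Xc * (w * p * (1 - p))[:, None]).T @ Xc / len(y)   # weighted Hessian
    return theta_hat + np.linalg.solve(hess, score)
```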
In addition, we consider the survey-logit versions of the "selective inference" (SI) procedure proposed by Lee et al. (2016) and Taylor and Tibshirani (2018), and the \(C(\alpha)\) (or Neyman orthogonalization) method after Lasso variable selection proposed by Belloni et al. (2016) to make inference on the model parameters and AMEs. The former method is based on a one-step estimator denoted as \(\tilde{\theta}^{SI}\) and the test statistic in the latter is labeled as \(C_{\alpha}\). See Appendix B.1 for a brief description of these methods and Jasiak and Tuvaandorj (2023) for further theoretical analyses.
## 4 Empirical results
This section reports the empirical results. Section 4.1 presents the results from the weight-adjusted Lasso logit estimation of models 1 to 5. svy LLasso estimates and the test results based on the debiased Lasso estimates of the model coefficients and AMEs, \(\tilde{\theta}^{DB}\) and \(\widetilde{\text{AME}}^{DB}\), are reported in Tables 1-5 below. The outcomes of the selective inference and \(C(\alpha)\) test results are generally consistent with the debiased Lasso test results, thus are relegated to Tables 11-15 in Appendix D.
An analysis of possible interaction effects is provided in Section 4.2. We report the outcomes of the multiple correspondence analysis in Section 4.3 and present the digital divide score in Section 4.4.
As stated in Section 2, the online banking, email use, digital wallet, and credit card dependent variables originally had three categories: _Yes_, _No_, and _Not stated_. We use the survey-weighted Hausman-McFadden test for the IIA hypothesis to see if we can remove the _Not stated_ observations from the logistic regression. The online banking model has a Hausman-McFadden statistic of 0.05 with a p-value of 1. The results for this model therefore show strong evidence in favour of IIA, so we use the restricted model specification, removing the _Not stated_ observations from the model. The dependent variables email use, virtual wallet, and credit card use have Hausman-McFadden statistics \(-0.95,-0.77\), and \(-1.29\). Since negative values of the statistic are viewed as evidence in favour of the null hypothesis (Hausman and McFadden, 1984), we proceed with the restricted model with _Not stated_ removed.
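For reference, the Hausman-McFadden statistic takes the familiar quadratic form (in standard notation, not spelled out in the paper)

\[HM=\left(\hat{\beta}_{r}-\hat{\beta}_{u}\right)^{\prime}\left[\hat{V}_{r}-\hat{V}_{u}\right]^{-1}\left(\hat{\beta}_{r}-\hat{\beta}_{u}\right),\]

where the subscripts \(r\) and \(u\) denote the estimates from the restricted and unrestricted choice sets and \(\hat{V}_{r},\hat{V}_{u}\) their estimated covariance matrices. The statistic is asymptotically \(\chi^{2}\) under IIA, and since \(\hat{V}_{r}-\hat{V}_{u}\) need not be positive definite in finite samples, negative values such as those reported above can occur.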
### svy LLasso regressions
Model 1: Internet use. The results from the internet use model reported in Table 1 display the significant variables determining whether or not someone is connected to and using the internet. Unlike the other regression models, which measure respondents' ability to use digital technologies, the internet use model directly determines what variables affect whether or not someone is connected to the internet.
Concerning the location variable, the estimation result shows that those living in rural areas across Canada are less likely to have used the internet in the previous three months. The corresponding AME is equal to \(-0.017\), meaning the probability of internet connectivity decreases by \(1.7\%\) for a rural resident compared to an urban resident. This rural/urban divide in access to the internet is consistent with _Canada's connectivity strategy_ findings and the impetus for the significant investments from the federal government to improve internet connectivity in rural communities. The persistence of this rural/urban divide in internet connectivity despite the large investments already made by the federal government highlights the challenges rural residents face with internet connectivity and the digital divide.
svy LLasso selects all age group categories. Comparing the five age categories to the omitted age group _45-54_, we see that the three younger age groups comprising ages _15-44_ have positive coefficients and the two older age categories comprising _55-64_ and _65 and
_older_ have negative coefficients. Older age groups were less likely to have been connected to the internet than the omitted age category. The age group categories with the largest absolute value in AME are the age group category _25-34_ and the oldest (_65 and older_). The second youngest age group category is 5.4 percentage points more likely to be connected and the oldest age group category is 8.2 percentage points less likely to be connected to the internet than the base age category _45-54_. The AME for both of these age categories are highly significant.
The estimation results indicate a correlation between various demographic factors and internet usage in Canada. Specifically, individuals who were employed, speak English as their primary language, possess university degrees, and have high incomes are found to have a higher likelihood of internet usage within the past three months. Conversely, individuals residing in the province of Quebec, who are older, have a high school education or less, identify as a visible minority, are single, and have low incomes, are found to have a lower likelihood of internet usage. These findings suggest the existence of a persistent digital divide even in terms of basic internet access within Canada.
The results for internet connectivity are generally consistent with and reinforce the findings of past research on internet connectivity in Canada (Haight et al., 2014; Friedline et al., 2020; Jordan, 2019). Younger, highly educated, high-income Canadians are most likely to use the internet.
Model 2: Online banking. The online banking model is used to measure Canadians' ability to use digital technology, specifically digital financial technology. In theory, there would likely be many similarities between the online banking system currently used by the prominent Canadian banks and a system designed by the Canadian government or BoC to run a CBDC system.
The results for the dependent variable online banking reported in Table 2 show that younger, employed, high-income and university-educated Canadians are most likely to use online banking. Low educational attainment, low income, being a visible minority, and being 55 or older negatively affected the likelihood of an individual using online banking. The variables with the largest (in absolute value) AMEs on whether a person uses online banking are the age category _65 and older_, whether or not a person is employed, and if
their educational attainment was a _High school or less_.
People in the oldest age category are found to be 15.4 percentage points less likely than the base age category _45-54_ to use online banking according to the debiased Lasso AME estimates. An employed person was found to be 10.6 percentage points more likely than an unemployed person to use online banking. People with low educational attainment of a _High school or less_ are 11.4 percentage points less likely than someone with _Some post-secondary_ education to use online banking. The variables _Location, Gender, Aboriginal identity_, and _Province_ were not selected by svy LLasso.
Model 3: Email use. Table 3 reports the logit regression models for the dependent variable email use. The email use variable is used in the same way as the online banking variable to determine what factors affect Canadians' digital literacy.
In Table 3, we see many of the same variables selected by svy LLasso as in the internet use and online banking models. However, there are some interesting differences between the models. The category _Rural_ of the location variable is selected and the coefficient is negative in the email use model but not selected in the online banking model. This difference in the two models may be due to geography. Rural Canadians may use online banking if they live far from a bank location. Canadians living in rural locations may also be less likely than urban residents to have employment requiring extensive use of email.
The category _Female_ is selected in the email use model and not the online banking one. However, the variable's debiased Lasso AME is quite small. A possible explanation for this is the number of women working office jobs compared to men, who may be more likely to work blue-collar jobs where email is not as frequently required. svy LLasso chooses more variables in the email use model than the previous two, with every variable having at least one of its categories chosen.
In Table 3, the variable with the largest estimated AME (in absolute value) is the language variable category _English, French, and Non-official language_. However, despite the large AME estimate, the variable is not selected by svy LLasso. The oldest age category, _65 and older_, has the second largest AME, those _65 and older_ are 10 percentage points less likely than those in the age group _45-54_ to send and receive emails.
Educational attainment has a significant effect on the email use model. Those holding
a _University degree_ or higher were much more likely than those with _Some post-secondary education_ to use email and those with a _High school or less_ were much less likely to use email. Email is commonly used in jobs that require a higher degree of education. Workplace requirements could explain the large difference we see in the likelihood of email use depending on a person's educational attainment.
Model 4: Virtual wallet. Whether or not someone has made payments with money from a virtual wallet is one of the most relevant variables in our model concerning research on digital currencies in Canada. The previous regression models have been used to measure Canadians' internet connectivity and digital literacy. The virtual wallet model will show what factors currently affect the uptake of digital forms of payment in Canada.
The logistic Lasso regression results in Table 4 show the coefficients of the variables selected by svy LLasso. Rural Canadians are less likely to use a digital wallet than urban residents. All age group categories, excluding those aged _35-44_, were selected. Younger Canadians have the highest probability of using a virtual wallet. The age group _15-24_ has an 11.2 percentage point higher likelihood of using a virtual wallet than the base age group _45-54_. The older age group categories both have negative coefficients. The age group _65 and older_ is the least likely to use a virtual wallet compared to the age group _45-54_. The debiased Lasso AME for the oldest age group shows that those _65 and older_ are 8.3 percentage points less likely to use a virtual wallet than those _45-54_.
The coefficient for _Visible minority_ is chosen by svy LLasso and has a positive AME on the use of a digital wallet. This result is striking, considering the variable category _Visible minority_ in previous results has either not been selected by svy LLasso or had a negative effect on the dependent variable. The AME shows that a person identifying as a visible minority is 5.2 percentage points more likely to use a virtual wallet than a person who is not a visible minority. The positive _Visible minority_ coefficient might reflect the increased use of foreign digital payment services like Alipay and WeChat Pay by visible minorities in Canada. The only significant income category is the highest. Canadians with income equal to or higher than \(\$146,560\) are found to have a higher probability of using a virtual wallet than the base income category (\(\$52,204\)-\(\$92,485\)).
The age and income variables have the largest AMEs in absolute value. The education
variable _University degree_ is chosen by svy LLasso and has a positive coefficient. Those with a _University degree_ are shown to be 2.8 percentage points more likely to use a virtual wallet than those with _Some post-secondary_ education. In contrast to previous logistic Lasso regression results, the variable employment was not selected by svy LLasso.
Model 5: Credit card. The implementation of a CBDC would likely involve a payment card component similar to the debit and credit cards Canadians use now. Understanding what factors influence whether someone uses a credit card to make purchases online is essential in the context of implementing a CBDC.
The logistic Lasso regression results in Table 5 show that svy LLasso has selected fewer variables than the previous models with the exception of the virtual wallet model. The youngest age group is the only statistically significant category for the age group variable. The age group category _15-24_ has a negative and significant coefficient. The AME shows that the youngest age group is 8.8 percentage points less likely to use a credit card for online purchases than the base age category _45-54_. The youngest age category being less likely to use a credit card than those in _45-54_ is reasonable given that many people do not use credit cards until later in life. svy LLasso selected both categories of the education variable with low educational attainment of a _High school or less_ having a negative coefficient and high educational attainment of a _University degree_ with a positive coefficient.
svy LLasso selected the lowest income category and the province of Quebec. Both have negative coefficients, meaning that people with low income or from Quebec are less likely to use credit cards for online purchases than the comparison categories. The credit card model results show that Canadians from Quebec, with low educational attainment, low income, and who speak French are less likely to use a credit card for an online purchase. In contrast, Canadians who speak English, are employed, have a university degree, live in a family household without children under eighteen, and are from Ontario are more likely to use a credit card for online purchases.
Further remarks. svy LLasso selected at least one of the age variable categories in every regression specification. The younger age categories were more likely to be connected
to the internet and use services such as online banking, email, and a virtual wallet than the comparison age group _45-54_. The oldest age group, _65 years and older_, was less likely to use the internet and other digital services.
The credit card model was the only case where this relationship between the dependent variable and the age group categories did not hold. In this model, the youngest age group category was selected by svy LLasso and found to have a negative effect on credit card usage. Although not consistent with the other models designed to measure a person's digital literacy, this result was not surprising. Younger people are generally less likely to make purchases requiring a credit card than older people who must show good credit scores and credit history to make large purchases such as cars and homes.
svy LLasso selected at least one category from the variables _Employment, Education,_ and _Income_ in almost all of the models. Higher educational attainment is found to have positive effects on the probability of using the internet and having a high degree of digital literacy. People with higher income and education were more likely to be connected to the internet and have sufficient digital literacy to effectively use it. Those employed were also more likely to use the internet and conduct online banking and email. Being employed was not selected by svy LLasso in the virtual wallet model and was selected in the credit card model, but was not significant.
svy LLasso did not select the variable _Immigration status_ in any of the regression models. The lack of significance of the immigration status variable was not expected. It was assumed that new immigrants to Canada would have a more challenging time accessing the internet and may have a lower degree of digital literacy than Canadian-born residents. The variable's lack of significance revealed in our models could be due to Canadian government immigration policies (such as the Global Skills Strategy) helping highly skilled workers immigrate to Canada. It is also possible that most Canadian immigrants may have used the internet and other services like email during the process of becoming a Canadian citizen.
As mentioned in Section 3, the DB test results of this section, and the \(C(\alpha)\) and SI test results reported in Appendix D are based on svy LLasso estimates with a \(\lambda\) chosen by cross-validation. svy LLasso estimates with a fixed \(\lambda\) yielded qualitatively similar results which are available upon request.
\begin{table}
\begin{tabular}{l l r r r r r} \hline \hline Variables & Categories & \multicolumn{1}{c}{svy LLasso} & \multicolumn{1}{c}{\(\tilde{\theta}^{DB}\)} & \multicolumn{1}{c}{p-value} & \multicolumn{1}{c}{\(\widetilde{\mathrm{AME}}^{DB}\)} & \multicolumn{1}{c}{p-value} \\ \hline _Intercept Location_ & & 3.428 & \(3.246^{***}\) & 0.000 & \(-\) & \(-\) \\ & Urban (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Rural & \(-0.225\) & \(-0.287^{***}\) & 0.001 & \(-0.017^{***}\) & 0.001 \\ _Age_ & 15–24 & 0.627 & \(1.235^{***}\) & 0.000 & \(0.054^{***}\) & 0.000 \\ & 25–34 & 0.161 & \(0.683^{**}\) & 0.007 & \(0.033^{*}\) & 0.014 \\ & 35–44 & 0.038 & \(0.548^{*}\) & 0.016 & \(0.027^{*}\) & 0.035 \\ & 45–54 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) \\ & 55–64 & \(-0.721\) & \(-0.527^{**}\) & 0.003 & \(-0.032^{*}\) & 0.014 \\ & 65 and older & \(-1.570\) & \(-1.262^{***}\) & 0.000 & \(-0.082^{***}\) & 0.000 \\ _Gender_ & Male (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Female & 0.013 & \(0.099\) & 0.200 & \(0.006\) & 0.211 \\ _Aboriginal identity_ & Non-aboriginal (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Aboriginal & \(-\) & \(-0.497^{*}\) & 0.021 & \(-0.032^{*}\) & 0.011 \\ _Language_ & English & 0.354 & \(0.598^{*}\) & 0.037 & \(0.035^{*}\) & 0.044 \\ & French & \(-\) & 0.246 & \(0.435\) & 0.013 & 0.464 \\ & Non-official language & \(-\) & 0.065 & \(0.836\) & 0.004 & 0.842 \\ & English and French & \(-\) & 0.793 & \(0.124\) & 0.036 & 0.231 \\ & English and Non-official language (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & French and Non-official language & \(-\) & \(-0.533\) & \(0.544\) & \(-0.035\) & 0.495 \\ & English, French and Non-official language & \(-\) & \(-1.434\) & \(0.193\) & \(-0.118^{*}\) & 0.067 \\ _Employment_ & Employed & 0.514 & \(0.574^{***}\) & 0.000 & \(0.032^{***}\) & 0.000 \\ & Not employed (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Education_ & High school or less & \(-0.911\) & \(-0.971^{***}\) & 0.000 & \(-0.058^{***}\) & 0.000 \\ & Some post-secondary (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & University degree & 0.451 & \(0.519^{***}\) & 0.000 & \(0.027^{***}\) & 0.000 \\ _Visible minority_ & Visible minority & \(-0.048\) & \(-0.352^{*}\) & 0.037 & \(-0.021^{*}\) & 0.034 \\ & Not a visible minority (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Household type_ & Family with children under 18 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Family without children under 18 & \(-\) & \(-0.029\) & 0.872 & \(-0.002\) & 0.875 \\ & Single & \(-0.596\) & \(-0.665^{***}\) & 0.000 & \(-0.043^{***}\) & 0.001 \\ & Other household type & \(-\) & 0.149 & 0.635 & 0.008 & 0.656 \\ _Income_ & \$52,203 and lower & \(-0.536\) & \(-0.475^{***}\) & 0.000 & \(-0.028^{***}\) & 0.000 \\ & \$52,204–892,485 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & \$92,486–8146,559 & \(-\) & 0.092 & 0.469 & 0.005 & 0.486 \\ _Immigration_ & \$146,560 and higher & 0.359 & \(0.547^{***}\) & 0.001 & \(0.028^{***}\) & 0.001 \\ & Landed immigrant (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Non-landed immigrant & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Province_ & NL & \(-\) & \(-0.258\) & 0.176 & \(-0.014\) & 0.211 \\ & PEI & \(-\) & \(-0.31\) & 0.111 & \(-0.019\) & 0.091 \\ & PEI & \(-\) & \(-0.272\) & 0.155 & \(-0.017\) & 0.135 \\ & NS & \(-\) & \(-0.298\) & 0.123 & \(-0.018\) & 0.104 \\ & NB & \(-\) & \(-0.101\) & 0.586 & \(-0.006\) & 0.585 \\ & QC & \(-0.296\) & \(-0.448^{*}\) & 0.026 & \(-0.027^{*}\) & 0.034 \\ & ON 
& 0.039 & \(-0.018\) & 0.911 & \(-0.001\) & 0.913 \\ & MB & \(-\) & \(-0.501^{*}\) & 0.013 & \(-0.032^{**}\) & 0.006 \\ & SK & \(-\) & \(-0.413^{*}\) & 0.037 & \(-0.026^{*}\) & 0.024 \\ & BC & 0.031 & 0.095 & 0.602 & 0.005 & 0.613 \\ & AB (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline \hline \end{tabular} Notes: \(n=17,409\). The comparison category for each variable is labeled omitted in parentheses. \(\tilde{\theta}^{DB}\) and \(\widetilde{\mathrm{AME}}^{DB}\) denote the debiased Lasso estimates of the logit parameter and AME, respectively. “\(-\)” denotes the variables not selected by svy LLasso or not computed because the variable category is used as a comparison. Significance codes are: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1.
\end{table}
Table 1: Lasso Logistic Regression Results for Internet Use Dependent Variable
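As a point of reference for reading these tables: \(\widetilde{\mathrm{AME}}^{DB}\) is the average marginal effect implied by the debiased logit coefficients. A minimal Python sketch of this computation for a dummy regressor is given below; it is illustrative only (the function names and the use of plain survey weights are our assumptions, not the authors' code).

```python
import numpy as np

def logistic(z):
    # Logistic CDF: Lambda(z) = 1 / (1 + exp(-z)).
    return 1.0 / (1.0 + np.exp(-z))

def ame_dummy(X, theta, j, weights=None):
    """Survey-weighted average marginal effect of dummy regressor j in a
    logit model: the weighted mean over respondents i of
    Lambda(x_i' theta | x_ij = 1) - Lambda(x_i' theta | x_ij = 0)."""
    X1, X0 = X.copy(), X.copy()
    X1[:, j], X0[:, j] = 1.0, 0.0
    diff = logistic(X1 @ theta) - logistic(X0 @ theta)
    return np.average(diff, weights=weights)
```

For a dummy variable the AME is a discrete difference in predicted probabilities rather than a derivative, which matches the categorical design matrices used throughout the paper.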
### Interaction effects
The inclusion of interaction terms in svy LLasso can improve the model's ability to capture complex relationships between variables. We examine whether the second-order specification with interaction terms could be more appropriate than the first-order specification in models 1 to 5. To this end, we use the R package polywog to compare the mean-squared 10-fold cross-validation (CV) error of the adaptive Lasso estimator (see Bühlmann and van de Geer (2011) for a detailed treatment) for both specifications.
Table 6 reports the results. The linear specification is selected for models 1, 4, and 5, while the second-order specification is chosen for models 2 and 3. The difference between the two specifications is minimal for models 1 and 3. For models 2 and 3, after fitting the second-order model with 674 variables by svy LLasso, we make inference on the coefficients using the debiased Lasso procedure.
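To make the comparison concrete, the following sketch shows the shape of such an order-selection step. It is a simplified Python analogue, assuming a design matrix `X` of dummy regressors and a binary outcome `y`: the paper's adaptive Lasso (via polywog) is replaced by a plain L1-penalised logit, and the Brier score stands in for the mean-squared CV error, so the numbers in Table 6 would not be reproduced exactly.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def cv_error(X, y, degree):
    """Mean 10-fold CV error of an L1-penalised logit with polynomial
    features of the given degree (1 = main effects only; 2 adds all
    pairwise interactions of the dummy regressors)."""
    model = make_pipeline(
        PolynomialFeatures(degree=degree, interaction_only=True,
                           include_bias=False),
        LogisticRegression(penalty="l1", solver="saga", max_iter=5000),
    )
    # neg_brier_score is minus the mean squared error of the predicted
    # probabilities; flip the sign to obtain a CV error to minimise.
    return -cross_val_score(model, X, y, cv=10,
                            scoring="neg_brier_score").mean()

# Select the specification with the smaller CV error, as in Table 6:
# order = 1 if cv_error(X, y, 1) <= cv_error(X, y, 2) else 2
```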
Tables 7 and 8 display the interaction results for the online banking and email use models, respectively. Only the variables deemed significant at the 5% level based on the estimated p-values for the coefficients are displayed. We did not display the significant interaction variables that involve _Not stated_ answers because of the lack of interpretability. In addition to the two non-constant variables _High school or less_ and _Visible minority_, which had negative effects on online banking, four interaction variables (the age group category _15-24_ interacted with _Family without children under 18_, _65 and older_ interacted with _Single_, and _Female_ interacted with _English_ and with _Employed_) are both selected by svy LLasso and significant. The signs of the selected coefficients appear to be reasonable.
It is clear that most variables that are significant at the 5% level are not significant at 1%. The age group category _15-24_, when interacted with _Family without children under 18_ and _Other household type_, has highly significant positive effects, and when interacted with _English and French_, has a negative effect on online banking. Interestingly, the interactions of _English_ with _High school or less_ and with _Non-landed immigrant_ appear to have highly significant positive effects.
In contrast to the online banking model with interactions, only the interaction of the age category _65 and older_ and the income category _$52,203 and lower_ is both selected by svy LLasso and significant, and two interaction variables _(Rural)\(\times\)(65 and older)_ and _(Visible
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline Variables & Categories & syv LLasso & \(\hat{\theta}^{DB}\) & p-value & \(\widetilde{\text{AME}}^{DB}\) & p-value \\ \hline _Intercept Location_ & & 1.120 & \(0.625^{**}\) & \(0.009\) & \(-\) & \(-\) \\ & Urban (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Rural & \(-\) & \(-0.092\) & \(0.154\) & \(-0.015\) & \(0.167\) \\ _Age_ & 15–24 & \(-\) & \(0.045\) & \(0.721\) & \(0.007\) & \(0.734\) \\ & 25–34 & \(0.414\) & \(0.637^{***}\) & \(0.000\) & \(0.092^{***}\) & \(0.000\) \\ & 35–44 & \(0.267\) & \(0.540^{***}\) & \(0.000\) & \(0.079^{***}\) & \(0.000\) \\ & 45–54 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) \\ & 55–64 & \(-0.071\) & \(-0.324^{***}\) & \(0.000\) & \(-0.052^{***}\) & \(0.001\) \\ & 65 and older & \(-0.718\) & \(-0.873^{***}\) & \(0.000\) & \(-0.154^{***}\) & \(0.000\) \\ _Gender_ & Male (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Female & \(-\) & \(0.089\) & \(0.107\) & \(0.014\) & \(0.123\) \\ _Aboriginal identity_ & Non-aboriginal (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Aboriginal & \(-\) & \(-0.248\) & \(0.105\) & \(-0.040\) & \(0.106\) \\ _Language_ & English & \(0.000\) & \(0.509^{**}\) & \(0.005\) & \(0.081^{**}\) & \(0.007\) \\ & French & \(-\) & \(0.598^{**}\) & \(0.005\) & \(0.086^{*}\) & \(0.012\) \\ & Non-official language & \(-\) & \(0.337\) & \(0.090\) & \(0.050\) & \(0.122\) \\ & English and French & \(-\) & \(0.239\) & \(0.526\) & \(0.036\) & \(0.561\) \\ & English and Non-official language (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & French and Non-official language & \(-\) & \(-0.280\) & \(0.648\) & \(-0.046\) & \(0.647\) \\ & English, French and Non-official language & \(-\) & \(-0.127\) & \(0.858\) & \(-0.020\) & \(0.861\) \\ _Employment_ & Employed & \(0.662\) & \(0.653^{***}\) & \(0.000\) & \(0.106^{***}\) & \(0.000\) \\ & Not employed (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Education_ & High school or less & \(-0.637\) & \(-0.686^{***}\) & \(0.000\) & \(-0.114^{***}\) & \(0.000\) \\ & Some post-secondary & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & University degree & \(0.331\) & \(0.409^{***}\) & \(0.000\) & \(0.062^{***}\) & \(0.000\) \\ _Visible minority_ & Visible minority & \(-0.135\) & \(-0.303^{**}\) & \(0.003\) & \(-0.048^{**}\) & \(0.005\) \\ & Not a visible minority (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Household type_ & Family with children under 18 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Family without children under 18 & \(0.078\) & \(0.305^{***}\) & \(0.000\) & \(0.048^{***}\) & \(0.001\) \\ & Single & \(-0.166\) & \(-0.137\) & \(0.137\) & \(-0.022\) & \(0.167\) \\ & Other household type & \(-\) & \(0.372^{*}\) & \(0.042\) & \(0.054^{*}\) & \(0.068\) \\ _Income_ & \$52,203 and lower & \(-0.265\) & \(-0.252^{***}\) & \(0.001\) & \(-0.041^{**}\) & \(0.002\) \\ & \$52,204–892,485 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & \$92,486–8146,559 & \(-\) & \(0.123\) & \(0.132\) & \(0.019\) & \(0.153\) \\ & \$146,560 and higher & \(0.086\) & \(0.252^{**}\) & \(0.004\) & \(0.039^{**}\) & \(0.006\) \\ _Immigration_ & Landed immigrant (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Non-handed immigrant & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Province_ & NL & \(-\) & \(-0.082\) & \(0.463\) & \(-0.013\) & \(0.486\) \\ & PEI & \(-\) & \(-0.015\) & \(0.905\) & \(-0.002\) & \(0.909\) \\ & PEI & \(-\) & \(-0.068\) & \(0.604\) & \(-0.011\) & \(0.615\) \\ & NS & \(-\) & \(-0.079\) & \(0.540\) & \(-0.012\) & 
\(0.552\) \\ & NB & \(-\) & \(-0.068\) & \(0.603\) & \(-0.011\) & \(0.614\) \\ & QC & \(-\) & \(-0.101\) & \(0.460\) & \(-0.016\) & \(0.474\) \\ & ON & \(-\) & \(-0.032\) & \(0.748\) & \(-0.005\) & \(0.758\) \\ & MB & \(-\) & \(-0.383^{**}\) & \(0.004\) & \(-0.063^{**}\) & \(0.003\) \\ & SK & \(-\) & \(-0.114\) & \(0.377\) & \(-0.018\) & \(0.389\) \\ & BC & \(-\) & \(-0.010\) & \(0.931\) & \(-0.002\) & \(0.934\) \\ & AB (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline \hline \end{tabular} Notes: \(n=17,135\).
\end{table}
Table 2: Lasso Logistic Regression Results for Online Banking Dependent Variable
\begin{table}
\begin{tabular}{l l r r r r r} \hline \hline Variables & Categories & svy LLasso & \(\hat{\theta}^{DB}\) & p-value & \(\widetilde{\text{AME}}^{DB}\) & p-value \\ \hline _Intercept_ & & 1.960 & 1.964*** & 0.000 & \(-\) & \(-\) \\ _Location_ & Urban & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Rural & \(-0.158\) & \(-0.207\)** & 0.005 & \(-0.021\)** & 0.007 \\ & 15–24 & 0.390 & 0.658*** & 0.000 & \(0.058\)*** & 0.000 \\ & 25–34 & 0.444 & 0.742*** & 0.000 & \(0.063\)*** & 0.000 \\ & 35–44 & 0.294 & 0.585* & 0.000 & \(0.051\)*** & 0.000 \\ & 45–54 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & 55–64 & \(-0.425\) & \(-0.343\)** & 0.004 & \(-0.035\)** & 0.009 \\ & 65 and older & \(-1.036\) & \(-0.899\)*** & 0.000 & \(-0.100\)*** & 0.000 \\ _Gender_ & Male (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Female & 0.087 & \(0.151\)* & 0.021 & \(0.015\)* & 0.025 \\ _Aboriginal identity_ & Non-aboriginal (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Aboriginal & \(-\) & \(-0.473\)** & 0.008 & \(-0.051\)** & 0.004 \\ _Language_ & English & 0.402 & 0.301 & 0.179 & 0.030 & 0.207 \\ & French & \(-\) & \(-0.118\) & 0.640 & \(-0.012\) & 0.644 \\ & Non-official & \(-0.047\) & \(-0.225\) & 0.353 & \(-0.023\) & 0.357 \\ & English and French & \(-\) & \(0.426\) & 0.327 & 0.037 & 0.395 \\ & English and Non-official language (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & French and Non-official language & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & English, French and Non-official language & \(-\) & \(-0.302\) & 0.669 & \(-0.032\) & 0.656 \\ _Employment_ & Employed & 0.411 & \(0.457\)*** & 0.000 & \(0.045\)*** & 0.000 \\ & Not employed (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Education_ & High school or less & \(-0.790\) & \(-0.851\)*** & 0.000 & \(-0.088\)*** & 0.000 \\ & Some post-secondary (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & University degree & 0.750 & \(0.828\)*** & 0.000 & \(0.072\)*** & 0.000 \\ _Visible minority_ & Visible minority & \(-0.192\) & \(-0.346\)** & 0.008 & \(-0.035\)* & 0.011 \\ & Not a visible minority (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Household type_ & Family with children under 18 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Family without children under 18 & \(-\) & \(-0.055\) & \(0.644\) & \(-0.005\) & 0.655 \\ & Single & \(-0.456\) & \(-0.571\)*** & 0.000 & \(-0.062\)*** & 0.000 \\ & Other household type & \(-\) & \(-0.052\) & \(0.824\) & \(-0.005\) & 0.828 \\ _Income_ & \$52,203 and lower & \(-0.383\) & \(-0.323\)*** & 0.000 & \(-0.033\)*** & 0.000 \\ & \$52,204–$92,485 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & \$92,486–$146,559 & \(-\) & \(0.088\) & 0.371 & 0.008 & 0.391 \\ & \$146,560 and higher & \(0.329\) & \(0.441\)*** & 0.000 & \(0.040\)*** & 0.000 \\ _Immigration_ & Landed immigrant (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Non-handed immigrant & 0.016 & 0.147 & 0.304 & 0.015 & 0.311 \\ _Province_ & NL & \(-\) & \(-0.240\) & 0.120 & \(-0.025\) & 0.111 \\ & PEI & \(-\) & \(-0.174\) & 0.265 & \(-0.018\) & 0.260 \\ & NS & \(-\) & \(-0.387\)* & 0.012 & \(-0.041\)** & 0.008 \\ & NB & \(-\) & \(-0.251\)* & 0.098 & \(-0.026\) & 0.090 \\ & QC & \(-0.154\) & \(-0.326\)* & 0.044 & \(-0.033\)* & 0.050 \\ & ON & 0.164 & 0.069 & 0.577 & 0.007 & 0.582 \\ & MB & \(-\) & \(-0.466\)** & 0.004 & \(-0.050\)** & 0.002 \\ & SK & \(-\) & \(-0.364\)* & 0.021 & \(-0.038\)* & 0.015 \\ & BC & 0.236 & 0.260\)* & 0.077 & 0.024\)* & 0.073 \\ \hline \hline \end{tabular} Notes: \(n=17,268\).
\end{table}
Table 3: Lasso Logistic Regression Results for Email Use Dependent Variable
\begin{table}
\begin{tabular}{l l r r r r r} \hline \hline Variables & Categories & \multicolumn{1}{c}{sy LLasso} & \multicolumn{1}{c}{\(\hat{\theta}^{DB}\)} & \multicolumn{1}{c}{p-value} & \multicolumn{1}{c}{\(\widetilde{\text{AME}}^{DB}\)} & \multicolumn{1}{c}{p-value} \\ \hline _Intercept_ & & \(-2.038\) & \(-2.650^{***}\) & \(0.000\) & \(-\) & \(-\) \\ _Location_ & Urban (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Rural & \(-0.220\) & \(-0.609^{***}\) & \(0.000\) & \(-0.057^{***}\) & \(0.000\) \\ _Age_ & 15–24 & \(0.300\) & \(0.867^{***}\) & \(0.000\) & \(0.112^{***}\) & \(0.000\) \\ & 25–34 & \(0.207\) & \(0.619^{***}\) & \(0.000\) & \(0.075^{***}\) & \(0.000\) \\ & 35–44 & \(-\) & \(0.334^{**}\) & \(0.005\) & \(0.039^{**}\) & \(0.003\) \\ & 45–54 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) \\ & 55–64 & \(-0.308\) & \(-0.608^{***}\) & \(0.000\) & \(-0.057^{***}\) & \(0.000\) \\ & 65 and older & \(-0.548\) & \(-1.009^{***}\) & \(0.000\) & \(-0.083^{***}\) & \(0.000\) \\ _Gender_ & Male (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Female & \(-\) & \(-0.091\) & \(0.280\) & \(-0.010\) & \(0.277\) \\ _Aboriginal identity_ & Non-aboriginal (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Aboriginal & \(-\) & \(0.040\) & \(0.872\) & \(0.004\) & \(0.869\) \\ _Language_ & English & \(-\) & \(0.129\) & \(0.596\) & \(0.014\) & \(0.597\) \\ & French & \(-\) & \(0.132\) & \(0.653\) & \(0.015\) & \(0.640\) \\ & Non-official language & \(-\) & \(-0.410\) & \(0.116\) & \(-0.040\) & \(0.153\) \\ & English and French & \(-\) & \(0.105\) & \(0.829\) & \(0.012\) & \(0.822\) \\ & English and Non-official language (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & French and Non-official language & \(-\) & \(-0.618\) & \(0.448\) & \(-0.055\) & \(0.534\) \\ & English, French and Non-official language & \(-\) & \(-0.907\) & \(0.359\) & \(-0.073\) & \(0.495\) \\ _Employment_ & Employed & \(-\) & \(0.020\) & \(0.853\) & \(0.002\) & \(0.852\) \\ & Not employed (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Education_ & High school or less & \(-\) & \(-0.066\) & \(0.568\) & \(-0.007\) & \(0.568\) \\ & Some post-secondary (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & University degree & \(0.027\) & \(0.254^{**}\) & \(0.009\) & \(0.028^{**}\) & \(0.008\) \\ _Visible minority_ & Visible minority & \(0.162\) & \(0.453^{***}\) & \(0.001\) & \(0.052^{***}\) & \(0.001\) \\ & Not a visible minority (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Household type_ & Family with children under 18 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Family without children under 18 & \(-\) & \(0.064\) & \(0.547\) & \(0.007\) & \(0.543\) \\ & Single & \(-\) & \(0.033\) & \(0.797\) & \(0.004\) & \(0.793\) \\ & Other household type & \(-\) & \(0.121\) & \(0.621\) & \(0.014\) & \(0.606\) \\ _Income_ & \$52,203 and lower & \(-\) & \(0.080\) & \(0.551\) & \(0.009\) & \(0.541\) \\ & \$52,204–892,485 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \\ & \$92,486–8146,559 & \(-\) & \(0.155\) & \(0.203\) & \(0.017\) & \(0.189\) \\ & \$146560 and higher & \(0.233\) & \(0.563^{***}\) & \(0.000\) & \(0.066^{***}\) & \(0.000\) \\ _Immigration_ & Landed immigrant (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Non-handed immigrant & \(-\) & \(0.129\) & \(0.372\) & \(0.014\) & \(0.380\) \\ _Province_ & NL & \(-\) & \(-0.270\) & \(0.178\) & \(-0.027\) & \(0.214\) \\ & PEI & \(-\) & \(-0.311\) & \(0.131\) & \(-0.030\) & \(0.169\) \\ & NS & \(-\) & \(-0.282\) & \(0.163\) & \(-0.028\) & \(0.198\) \\ & NB 
& \(-\) & \(-0.065\) & \(0.757\) & \(-0.007\) & \(0.760\) \\ & QC & \(-\) & \(-0.126\) & \(0.532\) & \(0.013\) & \(0.539\) \\ & ON & \(-\) & \(0.043\) & \(0.759\) & \(0.005\) & \(0.756\) \\ & MB & \(-\) & \(-0.420^{*}\) & \(0.032\) & \(-0.040\) & \(0.058\) \\ & SK & \(-\) & \(-0.215\) & \(0.270\) & \(-0.022\) & \(0.298\) \\ & BC & \(-\) & \(0.080\) & \(0.636\) & \(0.009\) & \(0.627\) \\ & AB (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline \hline \end{tabular} Notes: \(n=12,124\).
\end{table}
Table 4: Lasso Logistic Regression Results for Virtual Wallet Dependent Variable
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline Variables & Categories & syv LLasso & \(\hat{\theta}^{DB}\) & p-value & \(\widetilde{\text{AME}}^{DB}\) & p-value \\ \hline _Intercept Location_ & & 1.334 & \(1.100^{***}\) & 0.000 & \(-\) & \(-\) \\ & Urban (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Rural & \(-\) & \(-\) & \(0.125\) & 0.134 & \(-0.020\) & 0.140 \\ _Age_ & 15–24 & \(-0.363\) & \(-0.522^{***}\) & 0.000 & \(-0.088^{***}\) & 0.000 \\ & 25–34 & \(-\) & 0.055 & 0.630 & 0.008 & 0.644 \\ & 35–44 & \(-\) & 0.135 & 0.188 & 0.020 & 0.213 \\ & 45–54 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \\ & 55–64 & \(-\) & \(-\) & \(-\) & \(-\) & \\ & 65 and older & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Gender_ & Male (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Female & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Aboriginal identity_ & Non-aboriginal (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Aboriginal & \(-\) & \(0.198\) & 0.306 & 0.029 & 0.347 \\ _Language_ & English & \(0.216\) & \(0.019\) & 0.928 & 0.003 & 0.933 \\ & French & \(-0.192\) & \(-0.679^{**}\) & 0.006 & \(-0.116^{**}\) & 0.005 \\ & Non-official language & \(-\) & \(-0.044\) & 0.844 & \(-0.007\) & 0.849 \\ & English and French & \(-\) & \(-0.185\) & 0.646 & \(-0.030\) & 0.644 \\ & English and Non-official language (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & French and Non-official language & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & English, French and Non-official language & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Employment_ & Employed & \(0.002\) & \(0.148^{*}\) & 0.083 & \(0.023^{*}\) & 0.091 \\ & Not employed (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Education_ & High school or less & \(-0.411\) & \(-0.453^{***}\) & \(0.000\) & \(-0.073^{***}\) & 0.000 \\ & Some post-secondary (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & University degree & \(0.357\) & \(0.490^{***}\) & \(0.000\) & \(0.073^{***}\) & 0.000 \\ _Visible minority_ & Visible minority & \(-\) & \(-0.235^{*}\) & \(0.044\) & \(-0.037^{*}\) & 0.046 \\ & Not a visible minority (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ _Household type_ & Family with children under 18 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Family without children under 18 & \(0.035\) & \(0.335^{***}\) & \(0.000\) & \(0.051^{***}\) & 0.000 \\ & Single & \(-\) & \(0.317^{**}\) & \(0.002\) & \(0.046^{**}\) & 0.005 \\ & Other household type & \(-\) & \(0.161\) & \(0.430\) & \(0.024\) & 0.463 \\ _Income_ & \$52,203 and lower & \(-0.073\) & \(-0.286^{**}\) & \(0.004\) & \(-0.046^{**}\) & 0.005 \\ & \$52,204–892,485 (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & \$92,486–8146,559 & \(-\) & \(0.097\) & \(0.306\) & 0.015 & 0.328 \\ & \$146,560 and higher & \(-\) & \(0.084\) & \(0.393\) & 0.013 & 0.415 \\ _Immigration_ & Landed immigrant (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ & Non-handed immigrant & \(-\) & \(0.151\) & \(0.232\) & \(0.024\) & 0.237 \\ _Province_ & NL & \(-\) & \(-0.287^{*}\) & \(0.081\) & \(-0.047^{*}\) & 0.072 \\ & PEI & \(-\) & \(0.078\) & \(0.637\) & \(0.012\) & 0.655 \\ & NS & \(-\) & \(-0.045\) & \(0.783\) & \(-0.007\) & 0.788 \\ & NB & \(-\) & \(-0.012\) & \(0.944\) & \(-0.002\) & 0.946 \\ & QC & \(-0.112\) & \(-0.042\) & \(0.798\) & \(-0.007\) & 0.810 \\ & ON & \(0.029\) & \(0.241^{*}\) & \(0.042\) & \(0.037^{*}\) & 0.051 \\ & MB & \(-\) & \(0.035\) & \(0.829\) & \(0.005\) & 0.837 \\ & SK & \(-\) & \(0.022\) & \(0.891\) & \(0.003\) & 0.895 \\ & BC & \(-\) & \(0.211\) & 
\(0.131\) & \(0.031\) & 0.161 \\ & AB (omitted) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline \hline \end{tabular} Notes: \(n=12,124\).
\end{table}
Table 5: Lasso Logistic Regression Results for Credit Card Use Dependent Variable
minority)\(\times\)(MB)_ have highly significant effects on email use. Moreover, similarly to the online banking model with interactions, the language and age group categories together have significant cross-effects.
Overall, the second-order interaction terms illustrate the complex relationships between the use of digital technologies and the different demographic characteristics of the user, and point toward the continued presence of a digital divide in Canada.
### Multiple correspondence analysis
Multiple correspondence analysis is used to show the association measures between the various categorical variables in the dataset. We also calculate and study correlations between the quantitative scores evaluated from subsamples of individuals distinguished with respect to their individual characteristics. The coordinate plots, which represent the variable categories in two-dimensional space, are provided in Figures 1 and 2.
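Concretely, MCA amounts to a correspondence analysis of the one-hot indicator matrix of the categorical variables. The following self-contained Python sketch (ours, not the code used for the figures) computes two-dimensional principal coordinates of the variable categories via an SVD of the standardised residual matrix.

```python
import numpy as np
import pandas as pd

def mca_column_coordinates(df, n_components=2):
    """Principal coordinates of the variable categories (the points
    plotted in Figures 1 and 2), via an SVD of the standardised
    residuals of the one-hot indicator matrix."""
    dummies = pd.get_dummies(df)                 # n x J indicator matrix
    Z = dummies.to_numpy(dtype=float)
    P = Z / Z.sum()                              # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)          # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    _, sig, Vt = np.linalg.svd(S, full_matrices=False)
    coords = (Vt.T / np.sqrt(c)[:, None]) * sig  # D_c^{-1/2} V Sigma
    return pd.DataFrame(coords[:, :n_components], index=dummies.columns,
                        columns=[f"dim{k+1}" for k in range(n_components)])
```

Categories whose coordinates lie close together in the first two dimensions are the groupings discussed below.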
**Internet use, email use and online banking.** Figure 1 is a plot of the variable categories from the internet use, email use, and online banking regression models. The groupings of variable categories show the underlying structure of the data. The green labeled variable categories are the supplemental variables in the MCA and the dependent variables in our regression models. The red labeled categories are the explanatory variables in our regression models.
The most apparent grouping of variable categories is in the top left quadrant of the graph. This grouping includes people who did not use the internet, email or online bank
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{CV error} \\ \hline Models & 1 & 2 & 3 & 4 & 5 \\ \hline
1st order & 0.395 & 0.955 & 0.644 & 0.684 & 0.942 \\
2nd order & 0.396 & 0.944 & 0.643 & 0.692 & 0.945 \\ sample size & 17409 & 17135 & 17268 & 12124 & 12124 \\ \hline \hline \end{tabular} Notes: The table reports the mean-squared 10-fold cross-validation error for the first-order model with 41 covariates and the second-order model with 674 covariates, based on the adaptive Lasso estimator obtained using the R package polywog.
\end{table}
Table 6: Order selection
\begin{table}
\begin{tabular}{l l r r r} \hline \hline \multicolumn{1}{c}{ Variables} & \multicolumn{1}{c}{Categories} & \multicolumn{1}{c}{svy LLasso} & \multicolumn{1}{c}{\(\hat{\theta}^{DB}\)} & \multicolumn{1}{c}{p-value} \\ \hline _Intercept_ & & 1.013 & 3.151\({}^{**}\) & 0.006 \\ _Language_ & English & \(-\) & \(-2.663^{**}\) & 0.007 \\ & High school or less & \(-0.597\) & \(-1.554^{*}\) & 0.012 \\ & Visible minority & \(-0.114\) & \(-1.292^{*}\) & 0.050 \\ _Location \(\times\) Immigration_ & (Rural) \(\times\) (Non-landed immigrant) & \(-\) & \(-0.991^{*}\) & 0.029 \\ _Location \(\times\) Province_ & (Rural) \(\times\) (QC) & \(-\) & 0.769\({}^{*}\) & 0.050 \\ & (Rural) \(\times\) (ON) & \(-\) & 0.577\({}^{*}\) & 0.035 \\ _Age \(\times\) Language_ & (15-24) \(\times\) (English) & \(-\) & \(-1.465^{**}\) & 0.049 \\ & (15-24) \(\times\) (English and French) & \(-\) & \(-5.084^{**}\) & 0.006 \\ _Age \(\times\) Employment_ & (15-24) \(\times\) (Employed) & \(-\) & 0.681\({}^{*}\) & 0.024 \\ _Age \(\times\) Education_ & (15-24) \(\times\) (University degree) & \(-\) & 1.514\({}^{*}\) & 0.010 \\ _Age \(\times\) Household type_ & (15-24) \(\times\) (Family without children under 18) & 0.291 & 1.177\({}^{***}\) & 0.000 \\ & (15-24) \(\times\) (Single) & \(-\) & 1.087\({}^{*}\) & 0.025 \\ & (15-24) \(\times\) (Other household type) & \(-\) & 2.096\({}^{**}\) & 0.006 \\ & (65 and older) \(\times\) (Single) & \(-0.065\) & \(-0.857^{*}\) & 0.044 \\ _Gender \(\times\) Language_ & (Female) \(\times\) (English) & 0.068 & 0.752\({}^{*}\) & 0.047 \\ _Gender \(\times\) Employment_ & (Female) \(\times\) (Employed) & 0.153 & 0.342\({}^{*}\) & 0.017 \\ & (Female) \(\times\) (University degree) & \(-\) & \(-0.378^{*}\) & 0.013 \\ _Language \(\times\) Education_ & (English) \(\times\) (High school or less) & \(-\) & 1.643\({}^{***}\) & 0.001 \\ _Language \(\times\) Income_ & (English) \(\times\) ($146,560 and higher) & \(-\) & 1.405\({}^{*}\) & 0.019 \\ _Language \(\times\) Immigration_ & (English) \(\times\) (Non-landed immigrant) & \(-\) & 1.480\({}^{***}\) & 0.001 \\ _Language \(\times\) Education_ & (French) \(\times\) (High school or less) & \(-\) & 1.331\({}^{*}\) & 0.014 \\ _Language \(\times\) Household type_ & (French) \(\times\) (Single) & \(-\) & \(-1.802^{*}\) & 0.012 \\ _Language \(\times\) Immigration_ & (French) \(\times\) (Non-landed immigrant) & \(-\) & 1.254\({}^{*}\) & 0.042 \\ _Language \(\times\) Education_ & (Non-official language) \(\times\) (High school or less) & \(-\) & 1.144\({}^{*}\) & 0.026 \\ _Language \(\times\) Immigration_ & (Non-official language) \(\times\) & \(-\) & 0.963\({}^{*}\) & 0.044 \\ _Language \(\times\) Employment_ & (Non-landed immigrant) & \(-\) & 0.963\({}^{*}\) & 0.044 \\ _Language \(\times\) Employment_ \(\times\) _Income_ & (French and Non-official) \(\times\) (Employed) & \(-\) & 3.790\({}^{*}\) & 0.041 \\ _Employment \(\times\) Income_ & (Employed) \(\times\) ($146,560 and higher) & \(-\) & \(-0.464^{*}\) & 0.036 \\ _Household type \(\times\) Income_ & (Family without children under 18) \(\times\) ($52,203 and lower) & \(-\) & \(-0.650^{*}\) & 0.016 \\ & (\$52,203 and lower) & \(-\) & \(-0.659^{*}\) & 0.013 \\ \hline \hline \end{tabular} Notes: \(n=17,135\). The coefficients shown in this table are found to be significant at the 5% level based on their estimated p-values.
\end{table}
Table 7: Lasso Logistic Regression with Interactions for Online Banking Dependent Variable
ing in the last three months. Grouped with these dependent variable categories are the explanatory categories _65 years and older, Not employed, Single, High school or less_, and people who earn less than \(\$52,204\) a year. These explanatory variables were all statistically significant in our logistic regressions and were chosen by svy LLasso.
In the lower right quadrant of the plot we see another grouping. The dependent variable categories of people who used the internet, email and online banking are grouped in this quadrant relatively close to the variables _University degree_, income of \(\$92,486\)–\(\$146,559\), income greater than \(\$146,559\), _Family with children under 18_, _Employed_, and the age group categories _45-54_, _35-44_, and _25-34_. In Tables 1, 2, and 3, these variables are all statistically significant and have positive coefficients. svy LLasso also selected these variable categories.
The other relevant variable groupings seen in Figure 1 are in the top right quadrant, where we see _Non-official language_ speakers, _Visible minority_, and _Landed immigrant_ grouped together. This grouping of categories makes sense as many new immigrants to Canada are visible minorities and would likely speak a non-official Canadian language.
\begin{table}
\begin{tabular}{l l r r r} \hline \hline \multicolumn{1}{c}{ Variables} & Categories & svy LLasso & \(\hat{\theta}^{DB}\) & p-value \\ \hline _Intercept_ & & \(1.936\) & \(4.597^{**}\) & \(0.002\) \\ _Age_ & \(55\)-\(64\) & \(-0.321\) & \(-2.188^{*}\) & \(0.035\) \\ _Language_ & English & \(-\) & \(-2.799^{*}\) & \(0.028\) \\ & French & \(-\) & \(-5.688^{*}\) & \(0.033\) \\ & English, French and Non-official & \(-\) & \(-33.857^{***}\) & \(0.001\) \\ _Location \(\times\) Age_ & (Rural) \(\times\) (35-44) & \(-\) & \(0.750^{**}\) & \(0.039\) \\ & (Rural) \(\times\) (65 and older) & \(-\) & \(0.745^{**}\) & \(0.008\) \\ _Location \(\times\) Language_ & (Rural) \(\times\) (English, French and Non-official) & \(-\) & \(32.644^{*}\) & \(0.041\) \\ _Age \(\times\) Immigration_ & (25-34) \(\times\) (Non-landed immigrant) & \(-\) & \(-1.195^{*}\) & \(0.035\) \\ _Age \(\times\) Province_ & (25-34) \(\times\) (MB) & \(-\) & \(1.766^{*}\) & \(0.023\) \\ _Age \(\times\) Language_ & (55-64) \(\times\) (English) & \(-\) & \(1.945^{*}\) & \(0.022\) \\ _Age \(\times\) Province_ & (55-64) \(\times\) (French) & \(-\) & \(2.074^{*}\) & \(0.026\) \\ _Age \(\times\) Province_ & (55-64) \(\times\) (MB ) & \(-\) & \(1.439^{*}\) & \(0.022\) \\ _Age \(\times\) Language_ & (65 and older) \(\times\) (English) & \(-\) & \(1.705^{*}\) & \(0.048\) \\ _Age \(\times\) Income_ & (65 and older) \(\times\) (\$52,203 and lower) & \(-0.223\) & \(-0.697^{*}\) & \(0.040\) \\ _Language \(\times\) Income_ & (English) \(\times\) (\$146,560 and higher) & \(-\) & \(1.857^{*}\) & \(0.022\) \\ _Language \(\times\) Province_ & (French) \(\times\) (MB) & \(-\) & \(6.461^{*}\) & \(0.018\) \\ _Visible minority \(\times\) Province_ & (Visible minority) \(\times\) (MB) & \(-\) & \(-1.825^{**}\) & \(0.005\) \\ \hline \hline \end{tabular} Notes: \(n=17,268\). The coefficients shown in this table are found to be significant at the 5% level based on their estimated p-values.
\end{table}
Table 8: Lasso Logistic Regression with Interactions for Email Dependent Variable
**Virtual wallet and credit card use.** Figure 2 is a plot of the variable categories from the virtual wallet and credit card regression models. The dependent variable categories _Used virtual wallet_ and _Did not use credit card_ have obvious groupings of explanatory variable categories around them. On the other hand, the dependent variable categories _Did not use virtual wallet_ and _Used credit card_ are not as well represented in two-dimensional space. These dependent variable categories are grouped in the middle of the plot along with explanatory variables with relatively low contribution to the dimensions of the plot.
In the top right quadrant of the plot, the dependent variable category _Did not use credit card_ is grouped with the explanatory variable categories \(\$52,204\)–\(\$92,485\), _Single_, _High school or less_, _Not employed_, income less than \(\$52,204\), _15-24_, and _65 and older_. In Table 5, we see that svy LLasso has selected the lowest age group category _15-24_ and _High school or less_. The MCA grouping around _No credit card_ usage is relatively consistent with the variables selected by svy LLasso.
The top left quadrant of the plot has the dependent variable category _Used virtual wallet_. The explanatory variables grouped around _Used virtual wallet_ are _Urban, 25-34, ON and AB_. In Table 4, the explanatory variables selected by svy LLasso are all the age group categories, _Rural_, _Visible minority_, the highest income category, and _University degree_. The grouping around the _Used virtual wallet_ is mostly consistent with the variable categories selected by the Lasso.
svy LLasso selected the variable _Visible minority_ and although it is not in the close grouping of variables around virtual wallet, it is in the same quadrant of the graph. _Visible minority_ is closely grouped with _Landed immigrant_, which is consistent with Figure 1. The other explanatory variables selected by svy LLasso but not grouped with _Used virtual wallet_ are grouped together in the bottom left quadrant of the graph. The highest income category is grouped with _Employed_ and the age group category _45-54_, likely due to the fact that people with high incomes tend to be in the older segment of the working age population and people with high incomes are typically employed.
### Digital literacy score
We design and compute a measure (score) of digital inclusion/digital divide and study its distributional properties in the entire sample and in subsamples of individuals with different demographic characteristics, pertaining to social groups of different origin, gender, age, location, and education level.
The digital literacy score is based on the answers of survey respondents to 10 questions from CIUS 2020. Respondents that answer _Yes_ to these questions are given 1 point per
Figure 1: Coordinate plot for Internet Use, Email Use and Online Banking
_Yes_ response. The higher the score (out of 10), the higher the perceived digital literacy of the respondent [see Appendix C for the list of the 10 questions our score comprises]. We take the average scores for respondents grouped using variables from our analysis and display these results in Table 9. For example, the first two rows of the table show the average score out of 10 for respondents of the survey who reside in urban and rural locations.
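The computation behind Table 9 is elementary; a pandas sketch follows, where the indicator columns `q1`, ..., `q10` and the weight column are hypothetical names standing in for the ten CIUS 2020 questions and the survey weights.

```python
import pandas as pd

QUESTIONS = [f"q{i}" for i in range(1, 11)]  # 1 if the answer was Yes, else 0

def literacy_scores(df, group_col, weight_col=None):
    """Average digital literacy score (out of 10) by group, as in
    Table 9: one point per Yes answer, optionally survey-weighted."""
    score = df[QUESTIONS].sum(axis=1)
    if weight_col is None:
        return score.groupby(df[group_col]).mean()
    w = df[weight_col]
    return (score * w).groupby(df[group_col]).sum() / w.groupby(df[group_col]).sum()

# Given the microdata, literacy_scores(cius, "Location") should reproduce
# the first rows of Table 9 (Urban 6.96, Rural 6.58).
```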
The average score from the respondents in Table 9 is 6.88, and the standard deviation is equal to 0.50. Therefore, respondents answered an average of just under seven questions
Figure 2: Coordinate plot for Virtual Wallet and Credit Card Use
with _Yes_. The first characteristic variable we investigate is the location of respondents. Urban residents score slightly higher than the average respondent, and rural slightly lower. This rural/urban divide is consistent with our svy LLasso and MCA results that show a divide, albeit sometimes minor, between rural and urban residents regarding internet connectivity and digital literacy.
The age group variable shows one of the most significant divides regarding digital literacy score. The oldest age group category _65 years and older_ has the lowest digital literacy score in our study. The youngest age group also scores relatively lower than the three middle age categories. Due to the type of questions that make up the digital literacy score, younger respondents may have been less likely to answer _Yes_ to these questions. Many of the questions have to do with making purchases online and using digital technology that may be skewed towards people in the middle age groups.
There is no significant difference between the scores of males and females. The lack of a digital divide across gender is consistent with our svy LLasso results, where svy LLasso only selects the gender variable in the email use regression model. Similarly to the gender variable, _Aboriginal identity_ does not seem to significantly affect a respondent's digital literacy score. The small difference in the scores of aboriginals and non-aboriginals is surprising since we know from previous research that aboriginal people are often marginalized when it comes to internet connectivity and digital technology. A possible reason for this disconnect between the results of our score and previous examples of under-utilization of digital technologies in indigenous communities is that CIUS 2020 was an off-reserve survey and only included the 10 Canadian provinces, not the territories. It may be that the most prominent digital divide between _Aboriginal_ and _Non-aboriginal_ Canadians comes from the aboriginal person's on/off reserve status.
Employment status, educational attainment and income all show significant discrepancies between their variable categories' digital literacy scores. _Employed_ people scored an average of one point higher than _Not employed_. People with a low educational attainment of _High school or less_ score the second lowest, behind only people _65 years and older_, on our digital literacy score test. Educational attainment of a _University degree_ shows an average of almost a two-point difference in digital literacy score compared to those with _High school or less_.
The lowest income category of people making \(\$52,203\)_and lower_ has the lowest digital literacy score. The digital literacy score increases as income categories increase with the highest income category having the highest digital literacy score. These results are very consistent with the Lasso inference results. svy LLasso selected employment status, income, and education variables and the debiased Lasso results showed they affect the dependent variables in almost every regression specification.
Both immigration status and visible minority status have surprising results. The immigration status variable category _Landed immigrant_ is found to have a slightly higher digital literacy score than _Non-landed immigrant_ (non-immigrant/non-recent immigrant). The visible minority status variable also shows that the category _Visible minority_ scores higher on our digital literacy score than the category _Not a visible minority_.
From our MCA results, we know that the variable categories _Landed immigrant_ and _Visible minority_ are grouped together, suggesting that many recent immigrants are also visible minorities. New immigrants to Canada often have to use the internet and online resources when applying to immigrate to Canada and become citizens. These requirements could explain why visible minorities and recent immigrants in our study have slightly higher digital literacy scores than non-visible minorities and non-immigrants.
The digital literacy scores for each province are relatively similar. The maritime provinces, Newfoundland and Labrador (_NL_), Prince Edward Island (_PEI_), Nova Scotia (_NS_) and New Brunswick (_NB_), score the lowest, while British Columbia (_BC_) scores the highest. In each plot's MCA results, we saw that the maritime provinces were often grouped together with the location variable category _rural_. It is then consistent that these provinces would score slightly lower than others on the digital literacy score. Ontario (_ON_), British Columbia (_BC_), and Alberta (_AB_) score the highest out of the provinces and have almost identical scores.
People who have used a virtual wallet score the highest on our digital literacy score with an average score of 9.5. Non-virtual wallet users' scores are practically equivalent to our analysis's average digital literacy score. The similarity between non-virtual wallet users and the average score suggests that the only Canadians currently using digital wallet
technologies are those with very high digital literacy, much higher than the average Canadian. For Canadians to use a newly implemented CBDC, they would likely have to have a much higher degree of digital literacy than they currently possess.
## 5 Concluding remarks
This paper used different methods, including survey-weighted Lasso variable selection and inference techniques, multiple correspondence analysis, and a digital literacy score, to assess the degree of the digital divide in Canada. All methods show consistent results.
Younger working-age Canadians who are employed with high incomes and a high degree of educational attainment have, on average, the highest digital literacy and utilize digital technologies the most. Although somewhat significant, the difference between rural and urban residents does not seem to be the driving factor any longer in the Canadian digital divide. Instead, the leading cause of the digital divide seems to be from the difference in economic class.
These results imply that to implement a CBDC in Canada, significant work and investment are needed to close the digital divide. If a CBDC were implemented today in Canada, people with lower incomes and education would have difficulty adapting to the new monetary system and payment methods. People from lower socioeconomic classes would be negatively impacted by the disappearance of cash, leading to further societal disadvantages. In order to reduce this divide between socioeconomic classes concerning digital literacy and digital financial technologies, the government should focus investments not just in rural Canada but also in lower-income areas, irrespective of where they are located. For a CBDC to be beneficial in Canada, each Canadian needs to be able to understand and use it.
Connecting Canadians to the internet is no longer sufficient to improve the digital divide. For a CBDC to be a valuable tool to all Canadians, significant investments need to be made in education and industry so Canadians who already have internet access can learn how to utilize it properly. To some, switching from a cash economy to a cashless economy through the use of a CBDC would be an easy transition. To others, it would likely be impossible without significant training and investment. As shown in our analysis, with
\begin{table}
\begin{tabular}{r l r} \hline \hline Variables & Categories & Digital Literacy Score \\ \hline \hline _Location_ & Urban & 6.96 \\ & Rural & 6.58 \\ _Age_ & 15–24 & 7.00 \\ & 25–34 & 7.62 \\ & 35–44 & 7.57 \\ & 45–54 & 7.04 \\ & 55–64 & 6.55 \\ & 65 and older & 5.97 \\ _Gender_ & Male & 6.82 \\ & Female & 6.88 \\ _Aboriginal identity_ & Non-aboriginal & 6.86 \\ & Aboriginal & 6.65 \\ _Employment status_ & Employed & 7.22 \\ & Not employed & 6.28 \\ _Education_ & High school or less & 6.00 \\ & Some post-secondary & 6.74 \\ & University degree & 7.56 \\ _Visible minority status_ & Visible minority & 7.16 \\ & Not a visible minority & 6.80 \\ _Household type_ & Family with children under 18 & 7.43 \\ & Single & 6.43 \\ & Family without children under 18 & 6.72 \\ & Other household type & 6.89 \\ _Income_ & \$52,203 and lower & 6.26 \\ & \$52,204–\$92,485 & 6.64 \\ & \$92,486–\$146,559 & 7.06 \\ & \$146,560 and higher & 7.37 \\ _Immigration status_ & Landed immigrant & 7.17 \\ & Non-landed immigrant & 6.81 \\ _Province_ & NL & 6.70 \\ & PEI & 6.66 \\ & NS & 6.72 \\ & NB & 6.49 \\ & QC & 6.89 \\ & ON & 6.94 \\ & MB & 6.85 \\ & SK & 6.83 \\ & BC & 6.93 \\ & AB & 7.03 \\ _Virtual wallet_ & Used virtual wallet & 8.32 \\ & No virtual wallet & 6.72 \\ \hline \hline \end{tabular} Notes: Digital Literacy Score shows the average score out of 10 based on respondents' answers grouped by location, age, gender, aboriginal identity, employment status, education, visible minority status, household type, income, immigration status, province, and virtual wallet use.
\end{table}
Table 9: Digital Literacy Score
the current state of the digital divide in Canada, the implementation of a CBDC would potentially increase the already apparent divide in digital literacy and the use of digital technologies between high and low socioeconomic classes in Canada.
2304.13026 | Symplectic $\mathbb{C}^*$-manifolds I: Filtration on Quantum Cohomology | We define a large new class of open symplectic manifolds, which includes all
Conical Symplectic Resolutions. They come with a pseudoholomorphic
$\mathbb{C}^*$-action, whose $S^1$-part is Hamiltonian, and admit at infinity a
pseudoholomorphic $S^1$-equivariant map to a positive symplectisation. We
construct a filtration by ideals on their quantum cohomology, which is
sensitive to the choice of $\mathbb{C}^*$-action. In particular, this
determines a family of filtrations on singular cohomology for any Conical
Symplectic Resolution. These filtrations can be viewed as a Floer-theoretic
analogue of Atiyah-Bott filtrations, arising from stratifying a manifold by
gradient flowlines of a Morse-Bott function, but they are distinct from those
and they can detect non-topological properties of the quantum product. Our main
tool is the construction and computation of symplectic cohomology for these
spaces. These spaces are rarely exact at infinity, so this construction
involves new foundational methods. | Alexander F. Ritter, Filip Živanović | 2023-04-25T17:55:29Z | http://arxiv.org/abs/2304.13026v3 | # Symplectic \(\mathbb{C}^{*}\)-manifolds I: Filtration on Quantum Cohomology
###### Abstract.
We define a large new class of symplectic manifolds, which includes all Conical Symplectic Resolutions. They come with a pseudoholomorphic \(\mathbb{C}^{*}\)-action, whose \(S^{1}\)-part is Hamiltonian, and admit at infinity a pseudoholomorphic \(S^{1}\)-equivariant map to a positive symplectisation. We construct a filtration by ideals on their quantum cohomology, and we show that this filtration is sensitive to the choice of \(\mathbb{C}^{*}\)-action. In particular, this determines a family of filtrations on singular cohomology for any Conical Symplectic Resolution. These filtrations can be viewed as a Floer-theoretic analogue of Atiyah-Bott filtrations, arising from stratifying a manifold by gradient flowlines of a Morse-Bott function, but they are distinct from those and they can detect non-topological properties of the quantum product. Our main tool is the construction and computation of symplectic cohomology for these spaces.
The second author is supported by ERC Starting Grant 850713 - HMS
###### Contents
* 1 Introduction
* 2 Symplectic \(\mathbb{C}^{*}\)-manifolds
* 3 Torsion, periods, holomorphic spheres, and the attraction graph
* 4 Robbin-Salamon and Maslov index calculations
* 5 Symplectic cohomology associated to a Hamiltonian \(S^{1}\)-action
* 6 Filtration on quantum cohomology
* 7 Filtrations on cohomology of Conical Symplectic Resolutions
* 8 Filtration separating the periods of orbits
* 9 Example: Semiprojective toric manifolds
* 10 Example: the Slodowy variety \(\mathcal{S}_{32}\)
* A Grading for Hamiltonian Floer theory
* B Cotangent bundles and negative vector bundles
## 1. Introduction
### Motivation for the definition of symplectic \(\mathbb{C}^{*}\)-manifolds
We will apply Floer-theoretic techniques to describe the topology and cohomology of a large new class of symplectic manifolds.
This class brings under one umbrella many interesting families of spaces: cotangent bundles of projective varieties, negative complex vector bundles, Conical Symplectic Resolutions (CSRs), Moduli spaces of Higgs bundles, crepant resolutions of quotient singularities (arising in the generalised McKay Correspondence), and many non-compact Fano toric varieties. The family of CSRs itself includes: ALE spaces, hypertoric varieties, Nakajima quiver varieties, and Springer resolutions of Slodowy varieties.
To illustrate some features, recall that a **weight-\(s\) CSR** is a projective \(\mathbb{C}^{*}\)-equivariant resolution \(\pi:\mathfrak{M}\to\mathfrak{M}_{0}\) of a normal affine variety \(\mathfrak{M}_{0}\), whose \(\mathbb{C}^{*}\)-action contracts \(\mathfrak{M}_{0}\) to a point, and having a holomorphic symplectic structure \((\mathfrak{M},\omega_{\mathbb{C}})\) compatible with the \(\mathbb{C}^{*}\)-action: \(t\cdot\omega_{\mathbb{C}}=t^{s}\omega_{\mathbb{C}}\). These spaces are much studied in Geometric Representation Theory, whilst their symplectic topology is less investigated. Known examples of CSRs are hyperkähler: the complex structures \(J,K\) give rise to
cohomology admits a filtration by subspaces [1], the Atiyah-Bott filtration, arising from ordering the summands of (2) essentially by the \(H\)-values of the \(\mathfrak{F}_{\alpha}\). We may assume \(Y\) is connected, so only one summand \(H^{*}(\mathfrak{F}_{\alpha})\) in (2) has \(\mu_{\alpha}=0\): the one containing the unit \(1\in H^{0}(Y)\), and it is \(\mathfrak{F}_{\min}:=\min H\).
The main goal of this paper is to construct a filtration \(\mathcal{F}_{\lambda}^{\varphi}\) by ideals on quantum cohomology \(QH^{*}(Y)\), ordered by \(\lambda\in\mathbb{R}\cup\{\infty\}\). This construction relies on Floer theory: symplectic cohomology \(SH^{*}(Y,\varphi)\), whose chain level generators are loosely the \(S^{1}\)-orbits; and its positive version \(SH^{*}_{+}(Y,\varphi)\), which ignores the constant \(S^{1}\)-orbits in \(\mathfrak{F}\). We anticipate the big picture that relates them:
**Theorem 1.1**.: 2 _The canonical algebra homomorphism \(c^{*}:QH^{*}(Y)\to SH^{*}(Y,\varphi)\) is surjective, equal to localisation at a Gromov-Witten invariant \(Q_{\varphi}\in QH^{2\mu}(Y),\) where \(\mu\) is the Maslov index of \(\varphi\)._
Footnote 2: The technical assumption, needed for Floer theory and to make sense of \(QH^{*}(Y)\), is that \(Y\) satisfies a certain weak+ monotonicity property (Remark 5.7). This includes for example all non-compact Calabi-Yau and non-compact Fano \(Y\).
\[SH^{*}(Y,\varphi)\cong QH^{*}(Y)/E_{0}(Q_{\varphi})\cong QH^{*}(Y)_{Q_{\varphi}}\]
_where \(E_{0}(Q_{\varphi})=\ker c^{*}\subset QH^{*}(Y)\) is the generalised \(0\)-eigenspace of quantum product by \(Q_{\varphi}\). This yields a Floer-theoretic presentation of \(QH^{*}(Y)\) as a \(\mathbb{K}\)-module,_
\[QH^{*}(Y)\cong SH^{*-1}_{+}(Y,\varphi)\oplus SH^{*}(Y,\varphi). \tag{3}\]
_Moreover, for \(N^{+}\in\mathbb{R}\) just above \(N\in\mathbb{N}\), the continuation maps \(c^{*}_{N^{+}}\) (whose direct limit is \(c^{*}\)),_
\[c^{*}_{N+}:QH^{*}(Y)\to HF^{*}(H_{N^{+}}), \tag{4}\]
_can be identified with quantum product \(N\) times by \(Q_{\varphi}\) on \(QH^{*}(Y)\). In particular,_
\[SH^{*}(Y,\varphi)=0\Leftrightarrow(\mathcal{F}_{\lambda}^{\varphi}=QH^{*}(Y) \text{ for some }\lambda<\infty)\Leftrightarrow(Q_{\varphi}\in QH^{2\mu}(Y)\text{ is nilpotent}),\]
_which always occurs if \(c_{1}(Y)=0\), including all CSRs, crepant resolutions of quotient singularities, Higgs moduli spaces, and cotangent bundles of projective varieties.3_
Footnote 3: With the non-exact symplectic structure explained in Example 1.6 (the zero section is symplectic).
For CSRs, \(QH^{*}(Y)\cong H^{*}(Y)\) is ordinary cohomology (with suitable coefficients) so we obtain a \(\varphi\)-dependent filtration on \(H^{*}(Y)\) by ideals with respect to cup-product, and \(H^{*}(Y)\cong SH^{*-1}_{+}(Y,\varphi)\).
In the sequel [10] we use the foundational results from this paper to construct a Morse-Bott-Floer spectral sequence that converges to \(QH^{*}(Y)\), whose \(E_{1}\)-page involves the cohomologies of Morse-Bott manifolds of \(1\)-orbits of the moment map (1) intersected with higher and higher level sets of \(H\). This can be interpreted as a Floer-theoretic generalisation of (2), which instead arises from a Morse-Bott spectral sequence for ordinary Morse-Bott cohomology for the moment map.
In general, there is an obstacle to defining quantum cohomology, let alone Floer cohomology, due to the non-compactness of \(Y\). The danger is the non-compactness of moduli spaces of PDE solutions which escape to infinity. The symplectic form \(\omega\) is very rarely exact at infinity for these spaces, indeed for CSRs in almost all examples one finds closed \(I\)-holomorphic curves appearing at infinity.
We, therefore, require one final condition, and in retrospect arriving at this definition was the decisive idea to open up this large class of examples to Floer-theoretic study going beyond just quiver varieties. We say \(Y\) is a symplectic \(\mathbb{C}^{*}\)-manifold **over a convex base** if on the outside \(Y^{\mathrm{out}}:=Y\setminus\mathrm{int}(Y^{\mathrm{in}})\) of a compact subset \(Y^{\mathrm{in}}\) there is a pseudoholomorphic proper map
\[\Psi:Y^{\mathrm{out}}\to B=\Sigma\times[R_{0},\infty) \tag{5}\]
to the positive symplectisation of a closed contact manifold \(\Sigma\), such that \(X_{S^{1}}\) maps to the Reeb field,
\[\Psi_{*}X_{S^{1}}=\mathcal{R}_{B}. \tag{6}\]
_Remark 1.2_.: The existence of \(\Psi\) automatically ensures that \(\varphi\) is contracting. The condition of \(Y\) being "convex at infinity" would correspond to requiring that \(\Psi\) is a _symplectic isomorphism_. In our setting, the map \(\Psi\) is typically not symplectic. Indeed, if \(Y^{\mathrm{out}}\) admits non-constant closed \(I\)-holomorphic curves at infinity, then \(\Psi\) cannot be a diffeomorphism as Stokes's theorem on \(B\) prohibits such curves.
We also do not require any conditions on the dimension of \(B\), and \(\Psi_{*}\) is allowed to have a large kernel with varying ranks. We do not assume that \(\Psi\) is surjective, so although the Reeb flow will be periodic on the image of \(\Psi\), this need not hold everywhere on \(B\). One can allow a positive constant factor in (6), but after a rescaling argument we may assume (6) holds as written. We caution the reader that level sets of \(H\) are often not preserved by the \(\mathbb{R}_{+}\)-action, so level sets of \(H\) need not map into slices \(\Sigma\times\{R\}\) via \(\Psi\) (an exception are cotangent bundles in Example 1.6 and negative vector bundles).
In (6), \(\mathcal{R}_{B}=X_{R}\) is a Hamiltonian vector field for the radial coordinate \(R\in[R_{0},\infty)\). In this paper, but not in [10], we can allow a more general condition in (6) (useful for instance in Example 1.11):
\[\Psi_{*}X_{S^{1}}=X_{fR}\qquad\text{ for a Reeb-invariant function }f:\Sigma\to(0,\infty). \tag{7}\]
One can view \(Y^{\mathrm{out}}\) as the complement of a disc bundle of a complex line bundle over an orbifold \(\{H=const\}/S^{1}\) (assuming \(H\) is proper), for example by viewing it as a subset of the symplectic cut. However, one usually cannot hope to use this as the space \(B\) in (5), as the identification is rarely pseudoholomorphic [12, 13].
It is sometimes possible to extend (5) to a **globally defined**\(\mathbb{C}^{*}\)-equivariant pseudoholomorphic proper map \(\Psi:Y\to B\) to a non-compact symplectic manifold \(B\) which is convex at infinity (for example, a Liouville manifold), admitting a \(\mathbb{C}^{*}\)-action whose \(S^{1}\)-part generates the Reeb flow at infinity.
### Examples of symplectic \(\mathbb{C}^{*}\)-manifolds over a convex base
**Example 1.3** (Equivariant Resolutions).: Given an affine variety \(X\) with a contracting4 algebraic \(\mathbb{C}^{*}\)-action, any equivariant projective resolution \(Y\to X\) with \(H^{1}(Y;\mathbb{R})=0\) satisfies our assumptions, for a globally defined \(\Psi\). Indeed, the coordinate ring \(\mathbb{C}[X]\) is \(\mathbb{N}\)-graded by the \(\mathbb{C}^{*}\)-action; we choose homogeneous generators \((f_{i})_{i=1}^{N}\) with weights \(w_{i}\geq 1\); and we obtain a proper \(I\)-holomorphic map
Footnote 4: The \(\mathbb{C}^{*}\)-action contracts \(X\) to a single fixed point. Algebraically, the coordinate ring \(\mathbb{C}[X]\) is \(\mathbb{N}\)-graded with \(\mathbb{C}[X]_{0}=\mathbb{C}\).
\[\Psi:Y\to X\hookrightarrow\mathbb{C}^{N}\to\mathbb{C}^{N},\quad y\mapsto\pi(y )\mapsto(f_{1},\ldots,f_{N})|_{\pi(y)}\mapsto(f_{1}^{w/w_{1}},\ldots,f_{N}^{w/ w_{N}})|_{\pi(y)},\]
where \(w:=\mathrm{lcm}_{i}\{w_{i}\}\). The second map above is an embedding, and the third map is a local embedding except at \(0\). The third map ensures that \(\Psi\) is \(\mathbb{C}^{*}\)-equivariant, using the weight-\(w\) diagonal action on \(\mathbb{C}^{N}\). Since \(Y\) is projective above an affine variety, it is a quasi-projective variety.5 Embedding \(Y\subset\mathbb{C}P^{M}\) one can pull-back the Fubini-Study form and \(S^{1}\)-average it, so that the \(S^{1}\)-action becomes symplectic, and thus Hamiltonian as \(H^{1}(Y;\mathbb{R})=0\). Particular examples of this are6 CSRs and crepant resolutions of quotient singularities.
Footnote 5: We can embed \(i:Y\hookrightarrow X\times\mathbb{C}P^{n}\) for some \(n\) ([10, p.103]). Being affine, \(X\hookrightarrow\mathbb{C}^{m}\) for some \(m\). So \(Y\hookrightarrow\mathbb{C}^{m}\times\mathbb{C}P^{n}\).
Footnote 6: \(H^{1}(Y)=0\) for CSRs as they have vanishing odd cohomology, and for crepant resolutions we use [11, Thm.7.8]
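For concreteness, here is the one-line equivariance check for the third map above (a routine verification, with the notation of Example 1.3): since each \(f_{i}\) is homogeneous of weight \(w_{i}\) and \(\pi\) is equivariant,
\[f_{i}\big(\pi(t\cdot y)\big)=t^{w_{i}}\,f_{i}\big(\pi(y)\big)\quad\Longrightarrow\quad f_{i}^{w/w_{i}}\big(\pi(t\cdot y)\big)=t^{w}\,f_{i}^{w/w_{i}}\big(\pi(y)\big),\]
so \(\Psi(t\cdot y)=t^{w}\cdot\Psi(y)\), i.e. \(\Psi\) is equivariant for the weight-\(w\) diagonal action on \(\mathbb{C}^{N}\).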
**Example 1.4** (\(A_{2}\)-singularity).: CSRs of the lowest dimension are ADE resolutions.7 Let us consider the \(A_{2}\) case, the minimal resolution \(\pi:M\to\mathbb{C}^{2}/(\mathbb{Z}/3)\) of the quotient singularity for the action \((x,y)\mapsto(\zeta x,\zeta^{-1}y)\) of third roots of unity \(\zeta\) on \(\mathbb{C}^{2}\). Embedding \(\mathbb{C}^{2}/(\mathbb{Z}/3)\hookrightarrow\mathbb{C}^{3}\), \([x,y]\mapsto(x^{3},y^{3},xy)\), the resolution \(M\) arises from blowing up the image variety \(V(XY-Z^{3})\subset\mathbb{C}^{3}\) at \(0\). The classical McKay \(\mathbb{C}^{*}\)-action on \(M\) is obtained by lifting the \(\mathbb{C}^{*}\)-action induced by \((x,y)\mapsto(tx,ty)\). The \(\mathbb{C}^{*}\)-equivariant map \(\Psi:M\to\mathbb{C}^{3}\) is given by \((X^{2},Y^{2},Z^{3})\), for the weight \(6\) diagonal \(\mathbb{C}^{*}\)-action on \(\mathbb{C}^{3}\).
Footnote 7: Minimal resolutions of quotient singularities \(M\to\mathbb{C}^{2}/\Gamma\), where \(\Gamma\leq SL(2,\mathbb{C})\) is a finite subgroup.
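As a sanity check of the equivariance claim in Example 1.4 (a direct computation from the definitions): under the lift of \((x,y)\mapsto(tx,ty)\), the coordinates \(X=x^{3},Y=y^{3},Z=xy\) have weights \(3,3,2\), so
\[(X^{2},Y^{2},Z^{3})\mapsto(t^{6}X^{2},\,t^{6}Y^{2},\,t^{6}Z^{3}),\]
which is exactly the weight \(6\) diagonal action on \(\mathbb{C}^{3}\), with \(w=\mathrm{lcm}(3,3,2)=6\) as in Example 1.3.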
**Example 1.5** (Submanifolds).: Given any space \(Y\) from our class of spaces, any \(\mathbb{C}^{*}\)-invariant properly embedded \(I\)-pseudoholomorphic submanifold \(j:S\hookrightarrow Y\) will also belong to our class. Indeed, the \(I\)-compatible form \(j^{*}\omega\) makes \(S\) symplectic; the \(S^{1}\)-action is Hamiltonian with moment map \(H\circ j\); and we use \(\Psi\circ j:S^{\mathrm{out}}:=S\cap j^{-1}(Y^{\mathrm{out}})\to\Sigma\times[R_ {0},\infty)\).
**Example 1.6** (Cotangent Bundles).: A simple instance of Example 1.3, and of a weight-\(1\) CSR, is
\[T^{*}\mathbb{C}P^{N}\to\overline{\mathcal{O}_{min}}:=\{A\mid A^{2}=0,\ \mathrm{rk}(A)=1\}\subset\mathfrak{sl}_{N+1}, \tag{8}\]
which is an example of a Springer resolution. Here \(\varphi\) is the standard \(\mathbb{C}^{*}\)-action that contracts fibres, and \(\mathbb{C}^{*}\) acts by dilation on \(\mathfrak{sl}_{N+1}\). Now consider any projective variety \(X\). By definition, it admits an
embedding \(X\hookrightarrow\mathbb{C}P^{N}\) for some \(N\). This induces a natural proper embedding \(j:T^{*}X\hookrightarrow T^{*}\mathbb{C}P^{N}\), which is \(\mathbb{C}^{*}\)-equivariant for the standard \(\mathbb{C}^{*}\)-action on fibres. Thus, \(T^{*}X\) is a symplectic \(\mathbb{C}^{*}\)-manifold globally defined over the convex base \(\mathfrak{sl}_{N+1}\), by Example 1.5. Notice that the symplectic form this yields on \(T^{*}X\) is not the canonical exact form; rather, the zero section is \(I\)-holomorphic and \(\omega\)-symplectic.
An interesting class of such examples consists of Springer resolutions, i.e. cotangent bundles of flag varieties. For example, consider the variety \(\mathcal{B}\) of all flags \(0\subset F_{1}\subset F_{2}\subset\mathbb{C}^{3}\): there is a resolution of singularities \(\nu:T^{*}\mathcal{B}\to\mathcal{N}=\{3\times 3\text{ nilpotent matrices in }\mathfrak{sl}_{3}\}\). The singular affine variety \(\mathcal{N}\) has three strata: (i) the point \(\mathcal{O}_{1,1,1}:=0\) to which \(\mathcal{N}\) contracts under the \(\mathbb{C}^{*}\)-action; (ii) the stratum \(\mathcal{O}_{2,1}\) which is the adjoint orbit of the nilpotent Jordan normal form with blocks of sizes 2,1; and (iii) a generic stratum \(\mathcal{O}_{3}\). The orbit \(\mathcal{O}_{2,1}\) is a singular stratum of \(\mathcal{N}\) that goes to infinity. Any transverse slice to \(\mathcal{O}_{2,1}\) (entering from the generic stratum, and avoiding \(\mathcal{O}_{1,1,1}\)) will be an \(A_{2}\)-singularity; its preimage via \(\nu\) gives its resolution; in particular the fibre above the chosen point in \(\mathcal{O}_{2,1}\) consists of two holomorphic \(\mathbb{CP}^{1}\)'s intersecting transversely. This illustrates the general feature that, in our spaces \(Y\), \(I\)-holomorphic spheres can appear arbitrarily far out at infinity within the fibres of the map (5).
**Example 1.7** (Negative Vector Bundles).: A setup where \(\Psi\) may not extend globally arises for negative complex vector bundles \(Y=\operatorname{Tot}(\pi:E\to B)\)[14, Sec.11], for \(\Psi:E\setminus 0\dasharrow L\setminus 0\) the natural map to the tautological line bundle \(L\to\mathbb{P}(E)\) over the complex projectivisation of \(E\), which is defined away from the zero sections. The natural symplectic form \(\omega\) on \(E\) is inevitably non-exact at infinity when \(\operatorname{rank}_{\mathbb{C}}E\geq 2\), whereas for \(L\) the symplectic form is exact at infinity. This map was used in [14, Sec.11.2] to define quantum cohomology and Floer cohomology, despite the non-exactness of \(\omega\) at infinity. We remark that here as well the zero section is \(I\)-holomorphic and \(\omega\)-symplectic.
In Appendix B.1 we explain how the cotangent bundles from Example 1.6 can be endowed with a negative vector bundle structure in the sense of [14, Lem.70].
**Example 1.8** (Higgs moduli).: Another important class of examples are various moduli spaces \(\mathcal{M}\) of Higgs bundles over a Riemann surface \(\Sigma.\) Roughly speaking, elements of these spaces are (isomorphism classes of) stable pairs \((V,\Phi)\), where \(V\) is a vector bundle over \(\Sigma\) of fixed coprime rank and degree, and \(\Phi\in\operatorname{Hom}(V,V\otimes K_{\Sigma})\) is the so-called Higgs field,8 which potentially can have some poles. The space \(\mathcal{M}\) is a complete hyperkahler manifold, with a natural \(I\)-holomorphic \(\mathbb{C}^{*}\)-action given by \(t\cdot(V,\Phi)=(V,t\Phi)\), whose \(S^{1}\)-part is \(\omega_{I}\)-Hamiltonian with a proper moment map. It also admits the so-called Hitchin fibration, given as the characteristic polynomial of the Higgs field,
Footnote 8: here, \(K_{\Sigma}\) is the canonical bundle of \(\Sigma\).
\[\Psi:\mathcal{M}\to B\cong\mathbb{C}^{N}\quad\text{ where }N:=\tfrac{1}{2} \dim_{\mathbb{C}}\mathcal{M}, \tag{9}\]
which is a proper surjective \(I\)-holomorphic map, and it is \(\mathbb{C}^{*}\)-equivariant for a certain linear \(\mathbb{C}^{*}\)-action on \(B.\) This makes \((\mathcal{M},\omega_{I})\) a symplectic \(\mathbb{C}^{*}\)-manifold over a convex base, with \(\Psi\) globally defined.
**Example 1.9** (Hilbert schemes).: Given a quasi-projective surface \(S\) with a \(\mathbb{C}^{*}\)-action, the Hilbert scheme of \(n\) points on it, \(\operatorname{Hilb}^{n}(S)\), is a symplectic \(\mathbb{C}^{*}\)-manifold. For \(S=\mathbb{C}^{2}\) or an ADE resolution, the corresponding Hilbert schemes are quiver varieties, so CSRs. For \(S\) any parabolic Higgs moduli space (Example 1.30), the corresponding Hilbert scheme is also a Higgs moduli space [11].
Otherwise, we can consider \(S=T^{*}\Sigma\), with the standard \(\mathbb{C}^{*}\)-action, where \(\Sigma\subset\mathbb{C}P^{N}\) is an arbitrary algebraic curve. By Example 1.6, there is a \(\mathbb{C}^{*}\)-equivariant proper map \(\pi:T^{*}\Sigma\hookrightarrow T^{*}\mathbb{C}P^{N}\to\overline{\mathcal{O}_{ min}}\). Thus, composing \(\operatorname{Sym}^{n}(\pi)\) with the Hilbert-Chow morphism
\[\operatorname{Hilb}^{n}(T^{*}\Sigma)\to\operatorname{Sym}^{n}(T^{*}\Sigma) \to\operatorname{Sym}^{n}(\overline{\mathcal{O}_{min}}),\]
and an equivariant embedding of the affine varieties \(\operatorname{Sym}^{n}(\overline{\mathcal{O}_{min}})\subset\mathbb{C}^{M}\) (as in Example 1.3), makes \(\operatorname{Hilb}^{n}(T^{*}\Sigma)\) a \(\mathbb{C}^{*}\)-symplectic manifold globally defined over the convex base \(B=\mathbb{C}^{M}\).
**Example 1.10** (Crepant resolutions of quotient singularities).: Let \(\pi:Y\to\mathbb{C}^{n}/G\) be a crepant resolution,9 where \(G\subset SL(n,\mathbb{C})\) is a finite subgroup. The diagonal \(\mathbb{C}^{*}\)-action on \(\mathbb{C}^{n}/G\) lifts10 to a \(\mathbb{C}^{*}\)-action on \(Y\), so \(\pi\) is a proper \(\mathbb{C}^{*}\)-equivariant \(I\)-holomorphic map.
Footnote 9: a non-singular quasi-projective variety \(Y\) together with a proper, birational morphism \(\pi\) which is a biholomorphism away from the singular locus. The crepant condition here is equivalent to \(c_{1}(Y)=0\).
Footnote 10: by Batyrev [10, Prop.8.2], or [11, Prop.3.6].
Crepant resolutions of singularities are much studied in Algebraic Geometry literature in view of the generalised McKay Correspondence, and we refer to [11] for references. A subset of these examples, when \(n=2k\) and \(G\leq Sp(2k,\mathbb{C})\), can give rise to CSRs \(Y\) (Remark 7.18), which were studied in [12, 1]. Another subset is the case when \(G\) acts freely outside of \(0\in\mathbb{C}^{n}\), so isolated singularities \(\mathbb{C}^{n}/G\), studied in [11] (then \(Y\) is a CSR only11 for \(n=2\)). The isolatedness condition ensured that \(Y\) was convex at infinity, i.e. (5) is a symplectic isomorphism with \(\Sigma=S^{2n-1}/G\) (after conjugating \(G\) to ensure \(G\subset SU(n)\)), which was needed in order to study \(Y\) using Floer theory. Being a special case of Example 1.3, _all_ crepant resolutions fit into our new framework, which makes it possible to apply Floer theory without the isolated singularity condition.
Footnote 11: see the paragraph before Corollary 7.11 for an explanation.
**Example 1.11** (Toric varieties and blow-ups).: Toric varieties \(Y\) contain an open dense algebraic torus \((\mathbb{C}^{*})^{n}\), whose action extends to the whole space, so they naturally admit many holomorphic \(\mathbb{C}^{*}\)-actions. In [11] a class of non-compact Fano toric manifolds was defined, for which symplectic cohomology \(SH^{*}(Y)\cong\operatorname{Jac}(W):=\mathbb{K}[z_{i}^{\pm 1}]/(\partial_{z_{i}}W)\) is the Jacobian ring of the Landau-Ginzburg superpotential12\(W=\sum T^{-\lambda_{i}}z^{e_{i}}\), which is combinatorially determined by the moment polytope \(\Delta=\{x\in\mathbb{R}^{n}:\langle x,e_{i}\rangle\geq\lambda_{i}\}\), the image of the moment map \(\mu:Y\to\Delta\) for the \((S^{1})^{n}\)-action (by contrast, for closed Fano toric manifolds \(M\), \(QH^{*}(M)\cong\operatorname{Jac}(W)\)). The requirements in [11] were that \(Y\) is convex at infinity (so a map \(\Psi\) in (5) which is a _symplectic isomorphism_) and the natural Hamiltonian \(S^{1}\)-rotations \(\varphi_{i}\) around the toric divisors \(D_{i}=\{y\in Y:\langle\mu(y),e_{i}\rangle=\lambda_{i}\}\subset Y\) satisfy (7), allowing more generally \(f_{i}\geq 0\) for \(\varphi_{i}\) but with at least one such rotation \(\varphi_{0}\) having \(f_{0}>0\). The latter ensures that the corresponding \(\mathbb{C}^{*}\)-action \(\varphi_{0}\) is contracting, and \(SH^{*}(Y,\varphi_{0})\cong SH^{*}(Y)\) (the usual symplectic cohomology constructed with Hamiltonians linear in \(R\in[R_{0},\infty)\) at infinity). The classes \(Q_{\varphi_{i}}\in QH^{*}(Y)\) are then well-defined, and via \(c^{*}\) in Theorem 1.1 all \(Q_{\varphi_{i}}\) become invertible in \(SH^{*}(Y,\varphi_{0})\),
Footnote 12: \(z^{e_{i}}:=z_{1}^{e_{i,1}}\cdots z_{n}^{e_{i,n}}\), where \(e_{i}=(e_{i,1},...,e_{i,n})\in\mathbb{Z}^{n}\) are the edges of the fan, so inward normals to the facets of \(\Delta\).
\[c^{*}:QH^{*}(Y)\to SH^{*}(Y,\varphi_{0})\cong\operatorname{Jac}(W),\quad \operatorname{PD}[D_{i}]=Q_{\varphi_{i}}\mapsto c^{*}Q_{\varphi_{i}}\mapsto T ^{-\lambda_{i}}z^{e_{i}}.\]
Also \(D_{i}=\min(H_{i})\) is the minimum locus of the Hamiltonian \(H_{i}\) generating the \(S^{1}\)-action \(\varphi_{i}\); in particular \(D_{i}\) is part of \(\operatorname{Fix}(\varphi_{i})\subset Y\).
By the methods of this paper, it is no longer necessary that \(\Psi\) is a symplectic isomorphism. We will come back to this example in Section 1.10.
In general, the class of spaces in the current paper is closed under blow-ups of compact subvarieties, provided that the \(\mathbb{C}^{*}\)-action lifts to the blow-up, since (7) is only a condition at infinity. In the toric case, [11, Sec.3E-3F] described the effect of such blow-ups on quantum and symplectic cohomology.
**Example 1.12** (Torsion submanifolds).: Given a symplectic \(\mathbb{C}^{*}\)-manifold \(Y\) over a convex base, and an integer \(m\geq 2\), the fixed locus of the subgroup \(\mathbb{Z}/m\leq\mathbb{C}^{*}\) decomposes into finitely many connected \(I\)-pseudoholomorphic submanifolds called **torsion submanifolds**\(Y_{m,\beta}.\) They are symplectic \(\mathbb{C}^{*}\)-submanifolds of \(Y\) over the same convex base as \(Y\), by restricting (5) (we could also replace the restricted \(\mathbb{C}^{*}\)-action by its \(m\)-th root). Each \(Y_{m,\beta}\) contains a subcollection of the \(\mathfrak{F}_{\alpha}\), and has strata that converge to different \(\mathfrak{F}_{\alpha}\). Looking at one such point of convergence \(y_{0}\in\mathfrak{F}_{\alpha}\), and viewing \(T_{y_{0}}Y\) as a complex representation for the linearised \(S^{1}\)-action, one obtains the weight decomposition
\[T_{y_{0}}Y=\oplus_{k\in\mathbb{Z}}H_{k}. \tag{10}\]
It follows that only finitely many values of \(m\) can arise (the weights only depend on the component \(\mathfrak{F}_{\alpha}\), not on the choice of \(y_{0}\in\mathfrak{F}_{\alpha}\)), and that locally near \(\mathfrak{F}_{\alpha}\) the subbundle \(\oplus_{b\in\mathbb{Z}}H_{mb}\) parametrises \(Y_{m,\beta}\).
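To illustrate, using the weights that will be listed in Example 1.24 below for the \(A_{2}\)-resolution with the action (a): at \(p_{1}\) the weight decomposition is \((3,-1)\), so \(\oplus_{b\in\mathbb{Z}}H_{3b}\) is the weight-\(3\) line and the torsion \(3\)-submanifold through \(p_{1}\) is locally a holomorphic curve; at \(p\) the weights are \((1,1)\), so \(\oplus_{b\in\mathbb{Z}}H_{3b}=0\) and \(p\) is an isolated point of the \(\mathbb{Z}/3\)-fixed locus.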
### Floer theory is possible for symplectic \(\mathbb{C}^{*}\)-manifolds
**Proposition 1.13**.: _For symplectic \(\mathbb{C}^{*}\)-manifolds over a convex base, a maximum principle holds on \(Y^{\rm out}\) for the PDEs used in Gromov-Witten theory and in Floer theory, provided one uses the almost complex structure \(I\) and the Hamiltonian vector field on \(Y^{\rm out}\) is a positive constant multiple of \(X_{S^{1}}\)._
We briefly summarise the key idea: locally at infinity we have a PDE
\[\partial_{s}u+I(\partial_{t}u-kX_{S^{1}})=0, \tag{11}\]
where \(z=s+it\) is a holomorphic coordinate for the domain of \(u\). As \(\Psi_{*}\) commutes with \(I\), and satisfies (7), the projected curve \(v=\Psi(u)\) in \(B\) satisfies
\[\partial_{s}v+I(\partial_{t}v-k\,f(v)\cdot\mathcal{R}_{B})=0. \tag{12}\]
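Indeed, applying \(\Psi_{*}\) to (11) and using that \(\Psi_{*}\) commutes with \(I\), together with (7), gives
\[0=\Psi_{*}\big(\partial_{s}u+I(\partial_{t}u-kX_{S^{1}})\big)=\partial_{s}v+I\big(\partial_{t}v-k\,\Psi_{*}X_{S^{1}}\big)=\partial_{s}v+I\big(\partial_{t}v-k\,f(v)\,\mathcal{R}_{B}\big).\]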
By the extended maximum principle due to [13, Sec.C2-C4], \(R\circ v\) cannot have a local maximum.
_Remark 1.14_.: The same argument holds for the PDE equations used to define Lagrangian Floer cohomology and the \(A_{\infty}\)-equations for the (wrapped) Fukaya category. For the purpose of proving a maximum principle as above, one does not require a \(\mathbb{C}^{*}\)-action, one just requires (5) and \(\Psi_{*}X=f\mathcal{R}_{B}\) for the Hamiltonian vector field \(X\) that one wishes to use on \(Y\).
\(H\) in (1) plays a role similar to the radial coordinate \(R\) for symplectic manifolds convex at infinity.13 So it is natural to consider Hamiltonians \(H_{\lambda}\) which at infinity are functions \(H_{\lambda}=c(H)\) with generic constant slope \(c^{\prime}(H)=\lambda\) at infinity. We have control over \(1\)-orbits of \(H_{\lambda}\) arising at slope \(c^{\prime}(H)=\lambda\):
Footnote 13: In the case of Liouville manifolds [10] and of symplectic manifolds convex at infinity [13], one works with the class of Hamiltonians \(H_{\lambda}:M\to\mathbb{R}\) which at infinity are “radial”: they only depend on \(R\) and they are eventually linear in \(R\) with a generic slope \(\lambda>0\). The chain-level generators of Hamiltonian Floer cohomology \(HF^{*}(H_{\lambda})\) are the \(1\)-periodic orbits of \(H_{\lambda}\). Due to a \(1\)-to-\(1\) correspondence between \(T\)-periodic Reeb orbits in \(\Sigma=\{R=1\}\) and \(1\)-orbits of a radial Hamiltonian arising with slope value \(T:=H_{\lambda}^{\prime}(R)\), one has control over those generators. The generic slope \(\lambda\) ensures that there are no \(1\)-orbits in the region at infinity where \(H_{\lambda}\) is linear. The construction is motivated by cotangent bundles \(M=T^{*}N\), where the Reeb flow is the geodesic flow for \(N\) on the sphere bundle \(\Sigma\cong STN\); the slope \(\lambda\) corresponds to the period of the geodesic [13]; and the limit of the \(HF^{*}(H_{\lambda})\) recovers the homology of the free loop space of \(N\).
**Lemma 1.15**.: _By considering the \(1\)-periodic \(S^{1}\)-flow, and the subgroup \(G_{\lambda}:=\langle e^{2\pi i\lambda}\rangle\subset\mathbb{C}^{*}\), we have_
\[(1\text{-orbits of }\lambda H)\stackrel{1:1}{\longleftrightarrow}(\lambda\text{-periodic orbits of the }S^{1}\text{-flow})\stackrel{1:1}{\longleftrightarrow}(G_{\lambda}\text{-fixed points in }Y).\]
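For example, if \(\lambda=\tfrac{k}{m}\) in lowest terms then \(G_{\lambda}=\langle e^{2\pi ik/m}\rangle\cong\mathbb{Z}/m\), whereas for irrational \(\lambda\) the subgroup \(G_{\lambda}\) is dense in \(S^{1}\), so its fixed points are just the \(\mathbb{C}^{*}\)-fixed points, i.e. the constant \(1\)-orbits at points of \(\mathfrak{F}\).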
Call \(\lambda>0\) **generic** if it is not an \(S^{1}\)-period (e.g. irrational \(\lambda\)); this ensures \(H_{\lambda}\) has no \(1\)-orbits at infinity. Non-constant \(1\)-orbits can only arise if \(\lambda=\frac{k}{m}\) is rational, for \(k,m\) coprime, in which case \(G_{\lambda}\cong\mathbb{Z}/m\). The initial points of these \(1\)-orbits define the torsion submanifolds \(Y_{m,\beta}\) from Example 1.12. So only finitely many \(m\in\mathbb{N}\) arise. The non-constant \(1\)-orbits of \(H_{\lambda}\) in fact arise in Morse-Bott families,
\[B_{\frac{k}{m},\,\beta}\cong Y_{m,\beta}\cap\{c^{\prime}(H)=\tfrac{k}{m}\}. \tag{13}\]
These are connected odd-dimensional smooth manifolds. Only the _non-compact_\(Y_{m,\beta}\) arise in (13), which we call **outer torsion manifolds**, and we call the \(c^{\prime}(H)=\frac{k}{m}\) in (13) the **outer \(S^{1}\)-periods**. Compact torsion manifolds \(Y_{m,\beta}\), which lie in \(\operatorname{Core}(Y)\), do not arise in (13), as we ensure the only \(1\)-orbits of \(H_{\lambda}\) near \(\operatorname{Core}(Y)\) are the constant \(1\)-orbits at points of \(\mathfrak{F}=\sqcup\mathfrak{F}_{\alpha}\).
So far, it was not essential that \(H\) in (1) is proper, but from now on we will need to assume this; we show in Lemma 5.5 that one can tweak \(\omega\) so that this holds. We then construct \(HF^{*}(H_{\lambda})\), which at the chain level is generated by the \(1\)-orbits of \(H_{\lambda}\), and we prove the following.
**Corollary 1.16**.: _There is a well-defined symplectic cohomology algebra \(SH^{*}(Y,\varphi):=\varinjlim HF^{*}(H_{\lambda})\) obtained as a direct limit over continuation maps as the generic slopes \(\lambda\to\infty\), admitting a canonical unital algebra homomorphism \(QH^{*}(Y)\to SH^{*}(Y,\varphi)\)._
_Remark 1.17_.: Groman [11] defines a universal symplectic cohomology for symplectic manifolds \(Y\) which are geometrically bounded. The philosophical idea is that rather than choosing a specific growth of the Hamiltonians at infinity, one would like to use as many Hamiltonians as possible, and take a huge direct limit of their Floer cohomologies. A stumbling block for us however was that it is not entirely straightforward to check all the conditions required for this construction, particularly because the Riemannian geometry of our spaces \((Y,\omega)\) at infinity is not so well understood. We, therefore, opted to choose a specific class of Hamiltonians, since our spaces \(Y\) come with a natural choice of \(H\) via (1). This also allowed us to obtain meaningful filtered invariants that were sensitive to the choice of \(\mathbb{C}^{*}\)-action, although we had not initially anticipated this phenomenon.
**Proposition 1.18**.: _When \(c_{1}(Y)=0,\) the \(\varphi\)-symplectic cohomology vanishes: \(SH^{*}(Y,\varphi)=0.\)_
This phenomenon is typical of Hamiltonian \(S^{1}\)-actions seen also for \(\mathbb{C}^{n}\)[10], ALE spaces [12], many non-compact symplectic Calabi-Yau manifolds [12], and crepant resolutions of isolated quotient singularities [12].
### Floer theory induces an \(\mathbb{R}\)-ordered filtration by ideals on quantum cohomology
**Theorem 1.19**.: 14 _There is a filtration by graded ideals of \(QH^{*}(Y)\), ordered by \(p\in\mathbb{R}\cup\{\infty\}\),15_
Footnote 14: We discuss the field of coefficients \(\mathbb{K}\) in Remark 6.6, and mild technical assumptions on \(Y\) in Remark 5.7.
Footnote 15: When \(c_{1}(Y)=0\), \(\mathcal{F}^{\varphi}_{\lambda}=QH^{*}(Y)\) for large enough \(\lambda\), due to the vanishing from Proposition 1.18.
\[\mathcal{F}^{\varphi}_{p}:=\bigcap_{\text{generic}\,\lambda>p}\left(\ker c _{\lambda}^{*}:QH^{*}(Y)\to HF^{*}(H_{\lambda})\right),\qquad\mathcal{F}^{ \varphi}_{\infty}:=QH^{*}(Y), \tag{14}\]
_where \(c_{\lambda}^{*}\) is a continuation map, a grading-preserving \(QH^{*}(Y)\)-module homomorphism._
_As \(\mathcal{F}^{\varphi}_{p}\) are ideals, if \(1\in\mathcal{F}^{\varphi}_{p}\) then \(\mathcal{F}^{\varphi}_{p}=QH^{*}(Y)\) ("unity is the last to die")._
_This filtration is sensitive to the choice of \(\mathbb{C}^{*}\)-action \(\varphi\) (e.g. see Examples 1.24, 1.35, 1.37)._
_In [10] we prove that the filtration satisfies the following_ **stability property**_:_
\[\mathcal{F}^{\varphi}_{\lambda}=\mathcal{F}^{\varphi}_{\lambda^{\prime}}\text { if there are no outer $S^{1}$-periods in the interval $(\lambda,\lambda^{\prime}]$}. \tag{15}\]
_Remark 1.20_.: The real parameter \(p\) of the filtration has a geometric interpretation: \(x\in\mathcal{F}^{\varphi}_{p}(QH^{*}(Y))\) means that \(x\) can be represented as a Floer cocycle involving non-constant \(S^{1}\)-orbits of period \(\leq p\).
For the spaces \(Y\) from Example 1.10, the cohomological McKay correspondence states that the rank of \(H^{2d}(Y)\) equals the number of conjugacy classes in \(G\) with "age grading" \(d\). However, an explicit correspondence between conjugacy classes and elements of \(H^{*}(Y)\) was only obtained for certain families of examples, most notably by Kaledin [13]. In [10] we build, in general, an explicit map \(SH^{*-1}_{(0,\lambda]}(Y,\varphi)\to QH^{*}(Y)\) whose image is \(\mathcal{F}^{\varphi}_{\lambda}\), which is a count of certain "Floer spiked-discs". An interesting feature is that our filtration sometimes refines the filtration by age grading (Remark 7.18).
In algebraic geometry literature and by completely different methods, Bellamy-Schedler [14] constructed vector-space filtrations on cohomologies of Springer fibres (these are examples of cores of certain CSRs). For the examples of \(A_{n}\)-singularities, we observed that using a certain \(\mathbb{C}^{*}\)-action16 our filtration agrees with theirs degree-wise,17 whereas for other actions we get different filtrations.
Footnote 16: See Remark 7.17.
Footnote 17: Their filtrations go from the unit \(1\in H^{0}\) towards the higher classes, whereas ours go the other way around.
The choice of \(\Psi\) in (5) is auxiliary information that does not affect the construction of the continuation maps \(c_{\lambda}^{*}\) in (14), it is only needed to ensure that quantum and Floer cohomology are well-defined. An **isomorphism** of symplectic \(\mathbb{C}^{*}\)-manifolds \(j:Y\to Y^{\prime}\) is a pseudoholomorphic \(\mathbb{C}^{*}\)-equivariant symplectomorphism (without reference to their \(\Psi,\Psi^{\prime}\)). If \(j:Y\to Y^{\prime}\) is a pseudoholomorphic \(\mathbb{C}^{*}\)-equivariant diffeomorphism, then after replacing the symplectic form \(\omega_{Y}\) with \(j^{*}\omega_{Y^{\prime}}\) on \(Y\), \(j\) becomes an isomorphism of symplectic \(\mathbb{C}^{*}\)-manifolds.
**Proposition 1.21**.: \(\mathcal{F}^{\varphi}_{\lambda}(QH^{*}(Y))\) _is an invariant of \(Y\) up to isomorphism of symplectic \(\mathbb{C}^{*}\)-manifolds._
One may wish to weaken the notion of isomorphism. We note however that the standard parametrised moduli space argument, that proves \(QH^{*}(Y)\) is invariant under deformations of the almost complex structure \(I\) or of the symplectic form \(\omega\), applies in our setting provided that this is accompanied by a deformation of the map \(\Psi\) in (5), satisfying (6) or (7), so a maximum principle applies. Otherwise one needs a justification for why the parametrised moduli spaces remain compact during the deformation.
### Effective methods to compute the \(\varphi\)-filtration on \(QH^{*}(Y)\)
Although Floer invariants are notoriously difficult to compute, we have developed two efficient tools to compute the \(\varphi\)-filtration. One tool is developed in [10]: a Morse-Bott-Floer spectral sequence, whose \(E_{1}\)-page essentially consists of \(H^{*}(Y)\) together with the ordinary cohomologies \(H^{*}(B_{k/m,\beta})[-\mu(B_{k/m,\beta})]\) of the slices from Equation (13), i.e. the Morse-Bott manifolds of period-\(k/m\) \(S^{1}\)-orbits. This spectral sequence is induced by the period-filtration we construct on Floer chain complexes. The columns of the spectral sequence are labelled by the slope values \(\lambda=k/m\), so the \(\varphi\)-filtration value of a class in \(H^{*}(Y)\) is the smallest slope needed so that the columns up to that slope have gathered enough cohomology to kill the given class in the spectral sequence. This method relies on knowing information about the torsion \(m\)-submanifolds from Example 1.12, and an analysis of the Morse-Bott-Floer indices \(\mu(B_{k/m,\beta})\).
In this paper, we instead develop a method based on decomposing and computing the continuation map \(c_{\lambda}^{*}\) from Theorem 1.19 for Hamiltonians of type \(H_{\lambda}=\lambda H\). This involves knowing information about the fixed loci \(\mathfrak{F}_{\alpha}\) and analysing certain indices \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) which are a generalisation of the Morse-Bott indices \(\mu_{\alpha}\), that are dependent on the slope \(\lambda.\) The precise definition of these indices in terms of the weight decompositions (10), and the study of their properties, is carried out in Section 4.
In Section 10 we will explicitly describe the filtration for the Springer resolution of the Slodowy variety \(\mathcal{S}_{32}\), and we compare the continuation method with the spectral sequence method from [10].
To simplify this introductory discussion, we suppose \(Y\) has no odd degree cohomology. This holds for example for all CSRs, and all crepant resolutions of quotient singularities.
**Theorem 1.22**.: _Suppose \(H^{*}(Y)\) lies in even degrees, then_
\[HF^{*}(H_{\lambda})\cong\oplus H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\lambda}( \mathfrak{F}_{\alpha})]. \tag{16}\]
_Thus, using (2), the continuation map in (14) for generic \(\lambda>0\) becomes_
\[c_{\lambda}^{*}:QH^{*}(Y)\cong\bigoplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[ -\mu_{\alpha}]\longrightarrow\bigoplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[ -\mu_{\lambda}(\mathfrak{F}_{\alpha})]. \tag{17}\]
So knowing the fixed components \(\mathfrak{F}_{\alpha}\) and their \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\), we obtain lower bounds on the ranks of the filtration \(\mathcal{F}_{\lambda}^{\varphi}\) as follows; these bounds hold also without the above simplification.
**Corollary 1.23**.: _Assume \(c_{1}(Y)=0\). Let \(t=\max\{d:H^{d}(Y)\neq 0\}\), so \(0\leq t\leq 2\dim_{\mathbb{C}}Y-2\). Then for generic \(\lambda>0\), the rank of the filtration \(\mathcal{F}_{\lambda}^{\varphi}\) in cohomological degree \(k\) satisfies_
\[\text{rk}(\mathcal{F}_{\lambda}^{\varphi})_{k}\geq\sum_{\alpha}b_{k-\mu_{ \alpha}}(\mathfrak{F}_{\alpha})-b_{k-\mu_{\lambda}(\mathfrak{F}_{\alpha})}( \mathfrak{F}_{\alpha}), \tag{18}\]
_where \(b_{i}(\mathfrak{F}_{\alpha}):=\text{rk}\,H^{i}(\mathfrak{F}_{\alpha})\) are the Betti numbers. Moreover, for any integer \(N>0\),_
\[\text{rk}\,(\mathcal{F}_{N}^{\varphi})_{k}\geq b_{k}(Y)-b_{k+2N \mu}(Y)\ \ \text{for all}\ k;\] \[(\mathcal{F}_{N}^{\varphi})_{k}=QH^{k}(Y)\ \ \text{for}\ k\geq t+1-2N\mu;\] \[\mathcal{F}_{\lambda}^{\varphi}=QH^{*}(Y)\ \ \text{for}\ \lambda\geq\lceil\tfrac{t+1}{2\mu}\rceil. \tag{19}\]
_If the action \(\varphi\) is the \(m\)-th power of a \(\mathbb{C}^{*}\)-action, the above also holds for \(N\in\tfrac{1}{m}\mathbb{Z}_{>0}\)._
Above, (19) follows from (18) because \(\mu_{N+\delta}(\mathfrak{F}_{\alpha})=\mu_{\alpha}-2N\mu\) for small \(\delta>0\), so
\[c_{N+\delta}^{*}:QH^{*}(Y)\cong HF^{*}(H_{\delta})\to HF^{*}(H_{N+\delta}) \cong QH^{*}(Y)[2N\mu]\]
is the continuation map for "\(N\) full-rotations" (see Equation (4)).
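To illustrate the last line of (19) (using facts recalled in Example 1.27 below): for a weight-\(s\) CSR one has \(2\mu=s\cdot\dim_{\mathbb{C}}Y\) and \(t\leq\dim_{\mathbb{C}}Y\), so
\[\mathcal{F}_{2}^{\varphi}=QH^{*}(Y)\ \text{ when }s=1,\qquad\mathcal{F}_{1}^{\varphi}=QH^{*}(Y)\ \text{ when }s\geq 2,\]
consistent with the computations in the next example ((a) is a weight-2 CSR, (b-c) are weight-1).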
**Example 1.24** (\(A_{2}\)-singularity).: In the minimal resolution \(M\to\mathbb{C}^{2}/(\mathbb{Z}/3)\) from Example 1.4, the core \(\mathfrak{L}=\pi^{-1}(0)=S_{1}^{2}\cup S_{2}^{2}\) consists of two copies of \(S^{2}\) intersecting transversely at a point \(p\). There are three natural \(\mathbb{C}^{*}\)-actions, obtained by lifting via \(\pi\) the following actions on \(V(XY-Z^{3})\subset\mathbb{C}^{3}\):
(a) \((t^{3}X,t^{3}Y,t^{2}Z)\); \(\mathfrak{F}=p_{1}\sqcup p\sqcup p_{2}\) (3 points), where \(p=\mathfrak{F}_{\min}\) and \(p_{i}\in S_{i}^{2}\).
The map (5) is \(\Psi=(X^{2},Y^{2},Z^{3}):M\to\mathbb{C}^{3}\), for the weight 6 diagonal \(\mathbb{C}^{*}\)-action on \(\mathbb{C}^{3}\).
(b) \((tX,t^{2}Y,tZ)\); \(\mathfrak{F}=S_{1}^{2}\sqcup p_{2}\), where \(S_{1}^{2}=\mathfrak{F}_{\min}\).
The map (5) is \(\Psi=(X^{2},Y,Z^{2}):M\to\mathbb{C}^{3}\), for the weight 2 diagonal \(\mathbb{C}^{*}\)-action on \(\mathbb{C}^{3}\).
(c) \((t^{2}X,tY,tZ)\); \(\mathfrak{F}=p_{1}\sqcup S_{2}^{2}\), where \(S_{2}^{2}=\mathfrak{F}_{\min}\).
The map (5) is \(\Psi=(X,Y^{2},Z^{2}):M\to\mathbb{C}^{3}\), for the weight 2 diagonal \(\mathbb{C}^{*}\)-action on \(\mathbb{C}^{3}\).
Case (a) is a CSR of weight 2, cases (b-c) are CSRs of weight 1 and \(\mathfrak{F}_{\min}\) is the minimal \(\omega_{J}\)-Lagrangian mentioned earlier. The weight decomposition in (10) has 1-dimensional summands with weights:
(a) \((1,1)\) at the point \(\mathfrak{F}_{\min}=p\), and \((3,-1)\) at both \(p_{1}\) and \(p_{2}\);
(b-c) \((0,1)\) at the sphere \(\mathfrak{F}_{\min}\), and \((2,-1)\) at \(p_{i}\).
Below we show the presentation (2) by Frankel (thus the Atiyah-Bott filtration) and our presentation (16), writing \(\lambda^{+}\) for a slope just above \(\lambda\), abbreviating \(\mathbb{K}_{m}:=\mathbb{K}[-m]\) (a field summand in degree \(m\)).
\[\begin{array}{ll@{\qquad}ll}
\text{(a)}\ \ H^{*}(M): & \mathbb{K}_{0}\oplus\mathbb{K}_{2}\oplus\mathbb{K}_{2} & \text{(b-c)}\ \ H^{*}(M): & H^{*}(S^{2})\oplus\mathbb{K}_{2}\\
HF^{*}(\tfrac{1}{3}^{+}H): & \mathbb{K}_{0}\oplus\mathbb{K}_{0}\oplus\mathbb{K}_{0} & HF^{*}(\tfrac{1}{2}^{+}H): & H^{*}(S^{2})\oplus\mathbb{K}_{0}\\
HF^{*}(\tfrac{2}{3}^{+}H): & \mathbb{K}_{0}\oplus\mathbb{K}_{-2}\oplus\mathbb{K}_{-2} & HF^{*}(1^{+}H): & H^{*}(S^{2})[2]\oplus\mathbb{K}_{0}=H^{*}(M)[2]\\
HF^{*}(1^{+}H): & \mathbb{K}_{-4}\oplus\mathbb{K}_{-2}\oplus\mathbb{K}_{-2}=H^{*}(M)[4] & HF^{*}(2^{+}H): & H^{*}(S^{2})[4]\oplus\mathbb{K}_{-2}=H^{*}(M)[4]
\end{array}\]
The maps (17) are composites of grading-preserving vertical downward maps between the above groups (they need not preserve summands). We deduce the following information:
\[\begin{array}{llll}
\text{(a)} & \mathcal{F}_{1/3}^{\varphi}\supset H^{2}(M), & \mathcal{F}_{1}^{\varphi}=H^{*}(M). & \\
\text{(b-c)} & \operatorname{rk}\left(\mathcal{F}_{1/2}^{\varphi}\right)_{2}\geq 1, & \mathcal{F}_{1}^{\varphi}\supset H^{2}(M), & \mathcal{F}_{2}^{\varphi}=H^{*}(M).
\end{array}\]
We will come back to this later, to show that \(\mathcal{F}_{N}^{\varphi}\) distinguishes all three cases.
In particular, we will show \(\mathcal{F}_{1}^{\varphi}\neq H^{*}(M)\) in (b-c), so the filtration is different from the classical Atiyah-Bott filtrations of \(H^{*}(M)\) (reviewed in Remark 2.15). In those filtrations, the unit is not separated from the rest of \(H^{*}(\mathfrak{F}_{\min})\), whereas our filtration distinguishes the unit.
### Computing the continuation map for full rotations: the \(Q_{\varphi}\) invariant
Ideally one would like to compute the continuation maps \(c_{\lambda}^{*}\), but such computations in Floer theory are usually impossible. An exception is the family of full-rotation maps \(c_{N^{+}}^{*}\) in (4), which we discuss in Section 6.4. Namely, \(c_{N^{+}}^{*}\) is the \(N\)-fold quantum product by \(Q_{\varphi}\in QH^{2\mu}(Y)\), where \(\mu>0\) is the Maslov index of the \(S^{1}\)-action (Section 4.1), which is easily computable from (10). Thus, denoting quantum product by \(\star\),
\[\mathcal{F}_{N}^{\varphi}=\ker\left(Q_{\varphi}^{\star N}\,\star\,\cdot\,:QH^{ *}(Y)\to QH^{*}(Y)[2N\mu]\right). \tag{20}\]
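For readers who like to experiment, (20) reduces the integer-time filtration to linear algebra once the matrix of \(Q_{\varphi}\,\star\,\cdot\) on a basis of \(QH^{*}(Y)\) is known. The following minimal sketch (in Python, with a purely hypothetical toy matrix, not computed from any actual \(Y\)) illustrates the resulting computation:

```python
import numpy as np

# Hypothetical toy matrix for quantum product by Q_phi on a 3-dimensional
# QH^*(Y); it is nilpotent, as happens whenever c_1(Y) = 0 (Theorem 1.1).
Q = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

def filtration_rank(Q, N):
    """rk F_N = dim ker(Q^N) = dim QH^*(Y) - rank(Q^N), by Equation (20)."""
    return Q.shape[0] - np.linalg.matrix_rank(np.linalg.matrix_power(Q, N))

for N in (1, 2, 3):
    print(N, filtration_rank(Q, N))  # 1, 2, 3: the ideals grow until all of QH^*(Y)
```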
In general, it is difficult to compute \(Q_{\varphi}\). If the intersection product \(H_{2\mu}(Y)\otimes H_{2\dim_{\mathbb{C}}Y-2\mu}(Y)\to\mathbb{K}\) is zero and \(c_{1}(Y)=0\), we show that \(Q_{\varphi}=0.\) However, we prove the following non-vanishing result.
**Proposition 1.25**.: _Suppose \(Y\) is Kahler with \(c_{1}(Y)=0\) (or non-compact Fano), and \(\mathfrak{F}_{\min}\) only has weights \(0\) and \(1\). If the Euler class \(e_{\min}\) of the normal bundle of \(\mathfrak{F}_{\min}\subset Y\) is non-zero,18_
Footnote 18: here PD stands for the Poincaré–Lefschetz dual of the locally-finite cycle \([\mathfrak{F}_{\min}]\in H^{\mathrm{lf}}_{2\dim_{\mathbb{C}}Y-2\mu}(Y)\).
\[Q_{\varphi}=PD[\mathfrak{F}_{\min}]+(\text{linearly independent classes})+(\text{terms with }T^{>0})\neq 0\in QH^{2\mu}(Y). \tag{21}\]
_In particular, \(\mathcal{F}_{1}^{\varphi}\neq QH^{*}(Y)\)._
### Illustration of the \(\varphi\)-filtration in examples
**Example 1.26** (Cotangent bundles).: The filtration is not only sensitive to the \(\mathbb{C}^{*}\)-action, but also to the global topology of the symplectic manifold \(Y\), in a way that the Atiyah-Bott filtration is not. Coming back to Example 1.6 where \(Y=T^{*}X\) for a projective variety \(X\), the \(\varphi\)-filtration detects the vanishing of the Euler characteristic \(\chi(T^{*}X)=\chi(X).\) Namely, we have \(0\subset\mathcal{F}_{1}^{\varphi}=H^{*}(X)\) if \(\chi(X)=0\)
and \(0\subset\mathcal{F}_{1}^{\varphi}=H^{\geq 1}(X)\subset\mathcal{F}_{2}^{\varphi}=H^{ *}(X)\) if \(\chi(X)\neq 0\), due to Proposition 1.25 and Equation (20). By contrast, the Atiyah-Bott filtration is \(0\subset H^{*}(X)\) in both cases.
**Example 1.27** (CSRs).: For weight-\(s\) CSRs, \(c_{1}(Y)=0\) as they are holomorphic symplectic. Also, \(QH^{*}(Y)=H^{*}(Y)\) is ordinary cohomology,19 which lies in even degrees in \([0,\dim_{\mathbb{C}}Y]\), and \(2\mu=s\cdot\dim_{\mathbb{C}}Y\). For \(s=2\), \(Q_{\varphi}=0\) for degree reasons, so \(\mathcal{F}_{1}^{\varphi}=H^{*}(Y)\) (this can also be deduced from filtration rank estimates in (19)). For \(s=1\), the minimal \(\omega_{J}\)-Lagrangian \(\mathfrak{F}_{\min}\) has \(\chi(\mathfrak{F}_{\min})\neq 0\),20 and it has only21 weights \(0\) and \(1\). Hence \(Q_{\varphi}\neq 0\in H^{\dim_{\mathbb{C}}Y}(Y).\) Thus,
Footnote 19: Since by [20], \((Y,I)\) can be deformed to an affine algebraic variety.
Footnote 20: Due to the fact that CSRs have only even cohomology; the same then holds for the fixed loci by Equation (2).
Footnote 21: due to a weight-\(1\) duality \(H_{k}\leftrightarrow H_{1-k}\) induced by \(\omega_{\mathbb{C}}\) on the weight spaces in (10), and all \(k\geq 0\) at the minimum.
\[\mathcal{F}_{1}^{\varphi}=H^{\geq 2}(Y)\quad\text{ and }\quad\mathcal{F}_{2}^{ \varphi}=H^{*}(Y).\]
So the filtration separates the unit \(1\in H^{0}(Y)\). In Example 1.34 we show \(\mathcal{F}_{\lambda}^{\varphi}\neq H^{\geq 2}(Y)\) for all \(\lambda<1\). In the simple Example 1.24, \(\mathcal{F}_{1}^{\varphi}=H^{*}(M)\) for the action in (a), whereas \(\mathcal{F}_{1}^{\varphi}=H^{\geq 2}(M)\) for (b-c).
**Example 1.28** (Higgs moduli).: Unlike CSRs, Higgs moduli in Example 1.8 can have odd cohomology and their intersection form is degenerate. In particular, the moduli \(\mathcal{M}:=\mathcal{M}_{G}(d,g)\) of ordinary22 \(G\in\{GL_{n},SL_{n}\}\)-Higgs bundles of degree \(d\) coprime to \(n\) over a Riemann surface of genus \(g\) have vanishing intersection form.23 Thus \(Q_{\varphi}=0\), and \(\mathcal{F}_{1}^{\varphi}(\mathcal{M})=H^{*}(\mathcal{M}).\) We note that here the minimal \(\omega_{J}\)-Lagrangian \(\mathfrak{F}_{\min}\) is the moduli space of stable bundles.
Footnote 22: Meaning that the Higgs field does not have poles.
Footnote 23: due to [16] for \(SL(n,\mathbb{C})\) and to [17] for \(GL(n,\mathbb{C})\).
**Example 1.29** (An example with \(c_{1}(Y)\neq 0\)).: When \(c_{1}(Y)\neq 0\), one needs some caution as \(SH^{*}(Y,\varphi)\) is not \(\mathbb{Z}\)-graded. If \(Y\) is non-compact Fano (or monotone, see Remark 5.7) one can get a \(\mathbb{Z}\)-grading if one suitably grades the formal variable \(T\) used to define the Novikov field \(\mathbb{K}\), over which \(QH^{*}(Y)\) and \(SH^{*}(Y)\) are defined. A simple example is the total space \(Y\) of the negative line bundle \(\pi:\mathcal{O}(-k)\rightarrow\mathbb{CP}^{m}\) for \(1\leq k\leq m\). Here \(c_{1}(Y)=(1+m-k)x\neq 0\) where \(x=\pi^{*}\omega_{\mathbb{CP}^{m}}\). For a \(\mathbb{Z}\)-grading one places \(T\) in grading \(|T|=2(1+m-k)\). For simplicity, take \(\mathbb{K}=\mathbb{Q}(\!(T)\!)\), the field of Laurent series in \(T\). Abbreviate \(y:=x^{1+m-k}-(-k)^{k}T\). By [18], for the standard \(\mathbb{C}^{*}\)-action \(\varphi\) on the fibres,
\[QH^{*}(Y)=\mathbb{K}[x]/(x^{k}y),\quad Q_{\varphi}=-kx,\quad E_{0}(Q_{\varphi })=\langle y\rangle\subset QH^{*}(Y),\quad\text{ and }\quad SH^{*}(Y)\cong \mathbb{K}[x]/(y).\]
Our \(\varphi\)-filtration by ideals on \(QH^{*}(Y)\) is:
\[0\subset\langle x^{k-1}y\rangle\subset\langle x^{k-2}y\rangle\subset\cdots \subset\langle xy\rangle\subset E_{0}(Q_{\varphi})=\langle y\rangle\subset QH^ {*}(Y).\]
This filtration is not topological; it _depends_ on \(\omega\): ordinary cohomology \(H^{*}(Y)=\mathbb{K}[x]/(x^{1+m-k})\) with cup-product gives \(x\cdot(x^{k-1}y)=-(-k)^{k}Tx^{k}\), which is not in the ideal \(\langle x^{k-1}y\rangle=\mathbb{K}x^{k-1}y\subset QH^{*}(Y)\).
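As a consistency check of the displayed chain (using only the relation \(x^{k}y=0\) in \(QH^{*}(Y)\) and \(Q_{\varphi}=-kx\)):
\[Q_{\varphi}^{\star N}\star(x^{k-N}y)=(-k)^{N}\,x^{N}\,x^{k-N}\,y=(-k)^{N}\,x^{k}y=0,\]
so \(\langle x^{k-N}y\rangle\subset\mathcal{F}_{N}^{\varphi}\) by (20), matching the chain above.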
By the methods of Venkatesh [19, Thm.1], using a suitable refinement of symplectic cohomology involving a certain Archimedean norm, for negative line bundles \(Y=\operatorname{Tot}(E\to X)\) one should get a filtration by certain other sums of generalised eigensummands of the \(c_{1}(E)\)-action on \(QH^{*}(Y)\).
**Example 1.30**.: (Higgs moduli) We come back to Example 1.8 of Higgs moduli spaces \(\Psi:\mathcal{M}\to B\). Their cohomology has the canonical "\(P=W\)" filtration [1, 10, 14, 15] obtained as the Perverse filtration of the map \(\Psi\), or as the Weight filtration of the corresponding twisted character variety \(\mathcal{M}^{\prime}\), where \(\mathcal{M}\cong\mathcal{M}^{\prime}\) by the nonabelian Hodge correspondence. In [14] we compare the \(P=W\) filtration with the \(\varphi\)-filtration for parabolic Higgs bundles given as crepant resolutions
\[\mathcal{M}_{\Gamma}\rightarrow(T^{*}E)/\Gamma,\]
of finite group quotients of the cotangent bundle of an elliptic curve \(E\), where \(\Gamma\in\{0,\mathbb{Z}/2,\mathbb{Z}/3,\mathbb{Z}/4,\mathbb{Z}/6\}\) is a group of automorphisms on \(E.\) The core of \(\mathcal{M}_{\Gamma}\) is an affine Dynkin tree of spheres, apart from the case \(\Gamma=\{0\}\) when it is equal to \(E.\) We prove the following:
**Proposition 1.31**.: _[_10_]_ _The filtration \(\mathcal{F}_{\lambda}(H^{2}(\mathcal{M}_{\Gamma}))\) is a refinement of the P=W filtration. Moreover, its ranks can be described in terms of the root system of the corresponding affine Dynkin graph._
### Lowest order approximations of the continuation maps
Corollary 1.23 gives estimates on the ranks of the filtration, but one would like to compute the actual ideals from the continuation maps. Although one usually cannot compute the general continuation map \(c_{\lambda}^{*}\) in (17), we are sometimes able to describe the lowest order \(T\)-terms.25 We preface that continuation maps factorise well, so if \(\lambda<\gamma\) are generic slopes, then there is a continuation map \(\psi_{\gamma,\lambda}\) such that:
Footnote 25: \(T\) is a formal variable in the Novikov field \(\mathbb{K}\), and Quantum/Floer solutions are counted with a factor \(T^{\mathrm{energy}\geq 0}\). For continuation maps, under suitable conditions the factor is \(T^{>0}\) except for constant solutions it is \(T^{0}\) (Remark 6.13).
\[c_{\gamma}^{*}=\psi_{\gamma,\lambda}\circ c_{\lambda}^{*}:QH^{*}(Y)\to HF^{*}(\lambda H)\to HF^{*}(\gamma H), \tag{22}\]
\[\psi_{\gamma,\lambda}:HF^{*}(\lambda H)\cong\oplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\lambda}(\mathfrak{F}_{\alpha})]\to\oplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\gamma}(\mathfrak{F}_{\alpha})]\cong HF^{*}(\gamma H). \tag{23}\]
**Proposition 1.32**.: _Suppose \(H^{*}(Y)\) lies in even degrees, and that for each weight \(k\) of \(\mathfrak{F}_{\alpha}\) in (10) there are no integers in the interval \((|k|\lambda,|k|\gamma)\)._
_Then \(\mu_{\lambda}(\mathfrak{F}_{\alpha})=\mu_{\gamma}(\mathfrak{F}_{\alpha})\) and the part of the map (23) given by \(H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\lambda}(\mathfrak{F}_{\alpha})]\to H^{*}( \mathfrak{F}_{\alpha})[-\mu_{\gamma}(\mathfrak{F}_{\alpha})]\) is injective and equal to the identity map up to higher order \(T\)-terms._
**Corollary 1.33**.: _Let \(\lambda_{\alpha}:=1/(\text{maximal absolute weight of $\mathfrak{F}_{\alpha}$})\). If \(H^{*}(Y)\) lies in even degrees, then_
\[\mathcal{F}_{\lambda}^{\varphi}\subset\bigoplus\ \{H^{*}(\mathfrak{F}_{\alpha} )[-\mu_{\alpha}]:\lambda_{\alpha}\leq\lambda\}.\]
_If \(c_{1}(Y)=0\) then, without assumptions on \(H^{*}(Y)\), \(1\notin\mathcal{F}_{\lambda}^{\varphi}\) if \(\lambda<\lambda_{\min}\) and \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\geq 0\) for all \(\alpha\)._
**Example 1.34**.: In Example 1.27, for any weight-\(1\) CSR, all weights at \(\mathfrak{F}_{\min}\) are \(0\) or \(1\). Thus
\[\mathcal{F}_{\lambda}^{\varphi}\subset\oplus_{\alpha\neq\min}H^{*}(\mathfrak{ F}_{\alpha})[-\mu_{\alpha}]\qquad\text{ for }\lambda<1.\]
In degree \(d=\dim_{\mathbb{C}}Y\), we have \(\mathcal{F}_{1^{-}}^{\varphi}(H^{d}(Y))=\oplus_{\alpha\neq\min}H^{d-\mu_{\alpha}}( \mathfrak{F}_{\alpha})\). In other words, \(H^{d}(\mathfrak{F}_{\min})\subset H^{d}(Y)\) is the last to die. This is because \(\mu_{\alpha}\geq 2\) for \(\alpha\neq\min\), and \(\mu_{1^{-}}(\mathfrak{F}_{\alpha})=0\) (using Lemma 4.14(5)).
For weight-\(s\) CSRs with \(s\geq 2\), \(\mathcal{F}_{1}^{\varphi}=H^{*}(Y)\). For \(s=2\) with \(\mathfrak{F}_{\min}=\{\text{point}\}\), we prove \(\mathcal{F}_{1^{-}}^{\varphi}\neq H^{*}(Y)\).
**Example 1.35** (\(A_{2}\)-singularity).: Continuing Example 1.24, Corollary 1.33 yields the sharper results:
\[\begin{array}{ll}\text{(a)}&\mathcal{F}_{1/3^{-}}^{\varphi}=0,\quad \mathcal{F}_{1/3}^{\varphi}=H^{2}(M)=\mathcal{F}_{1^{-}}^{\varphi},\quad \mathcal{F}_{1}^{\varphi}=H^{*}(M).\\ \text{(b-c)}&\mathcal{F}_{1/2^{-}}^{\varphi}=0,\quad\mathcal{F}_{1/2}^{\varphi }=\mathbb{K}p_{i}=\mathcal{F}_{1^{-}}^{\varphi},\quad\quad\mathcal{F}_{1}^{ \varphi}=H^{\geq 2}(M),\quad\mathcal{F}_{2}^{\varphi}=H^{*}(M),\end{array}\]
where \(i=1,2\) for (c),(b), respectively, and we used Example 1.27 to compute \(\mathcal{F}_{1}^{\varphi}\) for (b-c).
We abusively wrote \(p_{i}\) to denote the summand \(H^{*}(p_{i})[-2]\cong\mathbb{K}_{2}\) from the presentation (2). In (b), \(H^{*}(M)=H^{*}(S_{1}^{2})\oplus H^{*}(p_{2})[-2]\). By considering the Atiyah-Bott filtration, \(H^{*}(p_{2})[-2]\) arises from the unstable cell of \(p_{2}\) which yields the class \(H^{2}(S_{2}^{2})\). So \(H^{*}(M)=H^{*}(S_{1}^{2})\oplus H^{2}(S_{2}^{2})\), using pull-backs to identify classes. Similarly in case (c), \(H^{*}(M)=H^{2}(S_{1}^{2})\oplus H^{*}(S_{2}^{2})\).
Thus (b) & (c) are distinguished since \(\mathcal{F}_{1/2}^{\varphi_{(b)}}=H^{2}(S_{2}^{2})\) whereas \(\mathcal{F}_{1/2}^{\varphi_{(c)}}=H^{2}(S_{1}^{2})\).
In Proposition 1.32, when \(\lambda\) passes a "critical time" \(\frac{k}{m}\), where \(m\) is a weight of \(\mathfrak{F}_{\alpha}\), the index \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) often drops. When \(\mathfrak{F}_{\alpha}\) does not have negative weights \(-mb\) for \(b\in\mathbb{N}\), it is the minimal locus of the restricted moment map \(H|_{Y_{m,\beta}}\) for the torsion \(m\)-submanifold containing \(\mathfrak{F}_{\alpha}\), so we call \(\mathfrak{F}_{\alpha}\) **\(m\)-minimal**. In that case, \(Y_{m,\beta}\) near \(\mathfrak{F}_{\alpha}\) is a complex vector bundle over \(\mathfrak{F}_{\alpha}\), and we denote by \(\mathrm{rk}(Y_{m,\beta})\) its complex rank. We show that \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) drops precisely by \(2\mathrm{rk}(Y_{m,\beta})\) when \(\lambda\) passes the critical time. Working within that bundle, there is an \(m\)-th root \(\varphi^{1/m}\) of the action on \(Y_{m,\beta}\), which corresponds to a full rotation in \(Y_{m,\beta}\), thus it has an associated class \(Q_{\varphi^{1/m}}\in QH^{*}(Y_{m,\beta})\). One expects from [11] that \(Q_{\varphi^{1/m}}\) is the Euler class \(e(Y_{m,\beta})\), at least up to higher order \(T\) terms. So we expect the following, which is a generalisation of the result for \(\mathfrak{F}_{\min}\) in (21):
**Conjecture 1.36**.: _For \(m\)-minimal \(\mathfrak{F}_{\alpha}\), if \((m\lambda,m\gamma)\) only contains one integer \(k\), and \(k\) is coprime to \(m\), then the above map \(H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\lambda}(\mathfrak{F}_{\alpha})]\to H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\gamma}(\mathfrak{F}_{\alpha})]\) is cup-product by the Euler class of the local complex vector bundle \(Y_{m,\beta}\to\mathfrak{F}_{\alpha}\) up to higher order \(T\) terms._
### Different \(\varphi\)-actions and their filtrations
In summary, given any symplectic \(\mathbb{C}^{*}\)-manifold satisfying (5), and (6) or (7), we have built a map
\[\{\text{contracting $\mathbb{C}^{*}$-actions on $Y$}\}\to\{\mathbb{R}_{\infty} \text{-ordered filtrations of $QH^{*}(Y)$}\},\ \ \varphi\mapsto\mathcal{F}_{\lambda}^{\varphi}. \tag{24}\]
A power \(\varphi^{N}\) causes a dilation in the ordering: \(\mathcal{F}_{\lambda}^{\varphi^{N}}=\mathcal{F}_{N\lambda}^{\varphi}\). We have seen in Examples 1.24, 1.35 that our filtration distinguished actions with different fixed loci. Even when the fixed loci and the \(\mathcal{F}\)-ideals are the same, the \(\mathbb{R}_{\infty}\)-ordering is additional information that sometimes distinguishes two actions:
**Example 1.37**.: (\(A_{2}\)-singularity) Continuing Example 1.35, consider the following two new actions on the minimal resolution \(M\to\mathbb{C}^{2}/(\mathbb{Z}/3)\) given by lifting \(\varphi_{1}:=(t^{5}X,tY,t^{2}Z)\) and \(\varphi_{2}:=(t^{7}X,t^{2}Y,t^{3}Z)\). Using the same notation as before, we see that both these actions have the same fixed loci \(\mathfrak{F}=p_{1}\sqcup p\sqcup p_{2}\), with their weight decompositions respectively equal to \((5,-3),(3,-1),(1,1)\) for \(\varphi_{1}\) and \((7,-4),(4,-1),(1,2)\) for \(\varphi_{2}.\) In particular, they have the same Morse-Bott indices (being \(2,2,0\) respectively). By Proposition 1.32, \(c_{1/5^{+}}^{*}(ap_{1}+bp)=bp+(T^{>0}\text{-terms})\) for \(\varphi_{1}\), thus \(\mathcal{F}_{1/5}^{\varphi_{1}}=\ker c_{1/5^{+}}^{*}=\mathbb{K}p_{1}\). Similarly, \(\mathcal{F}_{1/7}^{\varphi_{2}}=\mathbb{K}p_{1}\). Thus, with no filtered levels in between,26
Footnote 26: Moreover, since \(Q_{\varphi_{1}}\in H^{4}(M)=0\), and using Proposition 1.32 for \(p_{2}\), the unit \(p_{2}\) dies at time \(\lambda=1\) for \(\varphi_{1}\). By grading considerations and Proposition 1.32 for \(p_{2}\), it dies at some time \(\lambda\in\{1/2,4/7\}\) for \(\varphi_{2}\).
\[\mathcal{F}_{1/5}^{\varphi_{1}}=\mathbb{K}p_{1},\ \ \mathcal{F}_{2/5}^{ \varphi_{1}}=\mathbb{K}p_{1}\oplus\mathbb{K}p\ \ \text{whereas}\ \ \mathcal{F}_{1/7}^{\varphi_{2}}=\mathbb{K}p_{1},\ \ \mathcal{F}_{2/7}^{\varphi_{2}}=\mathbb{K}p_{1}\oplus\mathbb{K}p.\]
Hence, the \(\mathbb{R}\)-ordering distinguishes the filtrations for the two actions. Notice that the Atiyah-Bott filtration does not distinguish the two actions, as they have the same fixed loci and Morse-Bott indices.
In general, the spaces \(Y\) can have several different \(\mathbb{C}^{*}\)-actions \(\varphi\), inducing several different filtrations \(\mathcal{F}^{\varphi}\) by ideals on \(QH^{*}(Y)\). Given two such filtrations by ideals \(I_{i},J_{j}\) of \(QH^{*}(Y)\), where \(i,j\) are ordered indices, algebraically one can consider the bigraded filtration \(K_{ij}=I_{i}\cap J_{j}\) of ideals of \(QH^{*}(Y)\), so \(K_{ij}\subset K_{i^{\prime}j^{\prime}}\) for \(i\leq i^{\prime}\), \(j\leq j^{\prime}\), which leads to a collection of filtrations of \(QH^{*}(Y)\) by ideals. Geometrically, the various filtrations can partly be related. Generalising the idea from Example 1.11, if \(\varphi^{\prime}\) is any \(\mathbb{C}^{*}\)-action on \(Y\) which satisfies (7) (here one can allow \(f\geq 0\) rather than \(f>0\)), then one obtains a class \(Q_{\varphi^{\prime}}\in QH^{*}(Y)\) which via \(c^{*}\) becomes invertible in \(SH^{*}(Y,\varphi)\cong QH^{*}(Y)/E_{0}(\varphi)\). If \(\varphi,\varphi^{\prime}\) commute, their composition \(\varphi^{\prime}\circ\varphi\) is a \(\mathbb{C}^{*}\)-action satisfying (7) (with \(f>0\)) for which a full rotation continuation map corresponds to the quantum product by \(Q_{\varphi^{\prime}\circ\varphi}=Q_{\varphi^{\prime}}\star Q_{\varphi}\). Thus
\[\mathcal{F}_{N}^{\varphi^{\prime}}+\mathcal{F}_{N}^{\varphi}\subset\mathcal{F} _{N}^{\varphi^{\prime}\circ\varphi}\quad\text{for $N\in\mathbb{N}$}.\]
When \(\varphi^{\prime}\) satisfies (7) but we drop sign-requirements (so \(f:\Sigma\to\mathbb{R}\) is Reeb-invariant), the class \(Q_{\varphi^{\prime}}\) may not be well-defined in \(QH^{*}(Y)\), but it still27 exists in \(SH^{*}(Y,\varphi)\) by [14, 15]. This is of course only interesting when \(SH^{*}(Y,\varphi)\neq 0\), e.g. the non-compact Fano examples from [14, 15].
Footnote 27: to illustrate this, this is also the reason why the \(Q_{\varphi}\)-class need not be invertible in \(QH^{*}(Y)\) but it is in \(SH^{*}(Y)\), because for the \(SH^{*}\)-class one can consider the inverse \(\varphi^{-1}\) of the action.
**Example 1.38** (\(A_{2}\)-singularity).: Given the minimal resolution \(M\to\mathbb{C}^{2}/(\mathbb{Z}/3)\), notice that the actions from Example 1.24 satisfy \(\varphi_{(a)}=\varphi_{(b)}\circ\varphi_{(c)}\), and that by Example 1.35 we have
\[\mathcal{F}_{1/2}^{\varphi_{(b)}}+\mathcal{F}_{1/2}^{\varphi_{(c)}}=H^{2}(M)= \mathcal{F}_{1/3}^{\varphi_{(a)}},\ \ \ \ \mathcal{F}_{1}^{\varphi_{(b)}}+\mathcal{F}_{1}^{\varphi_{(c)}}=H^{\geq 2}(M) \subsetneq H^{*}(M)=\mathcal{F}_{1}^{\varphi_{(a)}}.\]
Finally, we remark that CSRs typically contain many different \(\mathbb{C}^{*}\)-actions, arising from compositions \(\varphi:=\phi^{k}\circ G\) of a canonically given contracting \(\mathbb{C}^{*}\)-action \(\phi\) and the \(\mathbb{C}^{*}\)-subgroups \(G\) of a maximal torus \(T\leq\operatorname{Symp}(Y,\omega_{\mathbb{C}})\), which is usually non-trivial.28 Subgroups \(G\) for which \(\varphi\) is contracting, for fixed \(k\), constitute a convex subset \(K_{k}\) of the lattice \(\Lambda\) of all \(\mathbb{C}^{*}\)-subgroups of \(T.\) Thus via (24) we get a \(K_{k}\)-labelled family of filtrations on \(H^{*}(Y)\) by cup-product ideals, where \(\cup_{k\in\mathbb{N}}K_{k}=\Lambda\) exhausts the whole lattice. We discuss this explicitly in the example \(T^{*}\mathbb{C}P^{n}\) in [10].
### Example: semiprojective toric varieties
A class of examples having a natural family of \(\mathbb{C}^{*}\)-actions which we wish to compare are toric varieties. As mentioned in Example 1.11, we can lift the restrictive conditions from [11] as long as we can construct the map in (5).
A natural subclass to consider are **semiprojective toric manifolds**, introduced by Hausel-Sturmfels [10]: non-compact toric manifolds \(Y\) for which the affinisation map \(\pi:Y\to Y_{0}:=\operatorname{Spec}(H^{0}(Y,\mathcal{O}_{Y}))\) is projective, and \(Y\) has at least one torus-fixed point (i.e. the moment polytope29\(\Delta\) has at least one vertex). We can construct a map \(\Psi\) as in (5) from this affinisation using Example 1.3 for any of the contracting \(\mathbb{C}^{*}\)-actions \(\varphi_{v}\) which we describe below.
Footnote 29: We abusively call it polytope, although it is really an unbounded rational polyhedron [10, p.498].
Recall a general fact about toric varieties. There is a fan \(\Sigma\) associated to any toric variety, and we denote by \(|\Sigma|\subset\mathbb{Z}^{n}\) the union of the cones of the fan. To \(v\in\mathbb{Z}^{n}\), one can associate a \(1\)-parameter subgroup \(\varphi_{v}:\mathbb{C}^{*}\to\mathbb{T}\). Those which generate contracting actions precisely correspond to \(v\in|\Sigma|\), and for these we can construct \(\Psi\) as mentioned above. As a consequence of Theorem 1.1, for each \(v\in|\Sigma|\),
\[c^{*}:QH^{*}(Y)\to SH^{*}(Y,\varphi_{v})\]
is surjective and corresponds to localisation at \(Q_{\varphi_{v}}\). Thus \(SH^{*}(Y,\varphi_{v})\) can in general yield different algebras, depending on \(v\in|\Sigma|.\) In Section 9 we prove the following generalisation of [11]:
**Proposition 1.39**.: _If the semiprojective toric manifold \(Y\) is monotone, then after an \(SL(n,\mathbb{Z})\)-transformation applied to the fan \(\Sigma\) we obtain the presentations_
\[\mathbb{K}[x_{1},\dots,x_{r}]/\mathcal{J} \cong QH^{*}(Y),\;x_{i}\mapsto D_{i},\] \[\mathbb{K}[x_{1},\dots,x_{r},x^{\pm v}]/\mathcal{J} \cong SH^{*}(Y,\varphi_{v}),\;x_{i}\mapsto c^{*}(D_{i}),\]
_where \(r\geq n\) is the number of toric divisors, \(x^{v}:=x_{1}^{v_{1}}x_{2}^{v_{2}}\cdots x_{n}^{v_{n}}\), and \(\mathcal{J}\) is the ideal generated by the linear relations and the quantum Stanley-Reisner relations (combinatorially determined by \(\Delta\))._
_If all \(v_{i}>0\) then \(SH^{*}(Y,\varphi_{v})\cong\operatorname{Jac}(W)\) recovers the Jacobian ring associated to the superpotential \(W\) for \(\Delta\) (compare Example 1.11), in particular it is independent of \(v\)._
In the last statement above, inverting \(x^{v}\) inverts all \(x_{i}\): \(\mathbb{K}[x_{1},\dots,x_{r},x^{\pm v}]=\mathbb{K}[x_{1}^{\pm 1},\dots,x_{r}^{ \pm 1}]\). Similarly to Example 1.29, we deduce information about the \(\varphi_{v}\)-filtration:
**Corollary 1.40**.: _In the notation of Proposition 1.39, \(E_{0}:=E_{0}(Q_{\varphi_{v}})\) is the generalised \(0\)-eigenspace of \(x^{v}=Q_{\varphi_{v}}\), and let \(N\) be its order of nilpotency.30 The integer-time \(\varphi_{v}\)-filtration is:_
Footnote 30: i.e. minimal \(N\in\mathbb{N}\) such that \(x^{Nv}E_{0}=0\).
\[0\subset\ \mathcal{F}_{1}^{\varphi_{v}}=x^{(N-1)v}E_{0}\ \subset\ \mathcal{F}_{2}^{\varphi_{v}}=x^{(N-2)v}E_{0}\ \subset\cdots\subset\ \mathcal{F}_{N-1}^{\varphi_{v}}=x^{v}E_{0}\ \subset\mathcal{F}_{N}^{\varphi_{v}}=E_{0}\subset \mathcal{F}_{\infty}^{\varphi_{v}}=QH^{*}(Y).\]
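To make the filtration concrete, here is a toy linear-algebra illustration of ours (under the assumptions of the corollary): suppose \(x^{v}\) acts on \(E_{0}\cong\mathbb{K}^{3}\) by a single nilpotent Jordan block, \(x^{v}e_{3}=e_{2}\), \(x^{v}e_{2}=e_{1}\), \(x^{v}e_{1}=0\). Then \(N=3\) and
\[\mathcal{F}_{1}^{\varphi_{v}}=x^{2v}E_{0}=\mathbb{K}e_{1}\ \subset\ \mathcal{F}_{2}^{\varphi_{v}}=x^{v}E_{0}=\mathbb{K}e_{1}\oplus\mathbb{K}e_{2}\ \subset\ \mathcal{F}_{3}^{\varphi_{v}}=E_{0},\]
so the filtration records how deep a class sits inside the images of powers of \(x^{v}\).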
Outside of the monotone setup, there is a precise description of quantum cohomology for semiprojective toric manifolds due to Smith [10], and it is in general a difficult question to determine which monomials are being inverted when passing to symplectic cohomology [10, Rmk.1.8]. For almost any \(v\in|\Sigma|\), we show that a large multiple of the moment map for \(\varphi_{v}\) can be made larger than the moment map of any other \(\varphi_{v^{\prime}}\). This allows us to prove the following:
**Proposition 1.41**.: _For any semiprojective toric manifold \(Y\), for almost any \(v\in|\Sigma|\), \(SH^{*}(Y,\varphi_{v})\) is the localisation of \(QH^{*}(Y)\) at all toric divisors \(D_{i}\), in particular it is independent of \(v\)._
### \(S^{1}\)-equivariant symplectic cohomology
We will also apply our methods to the \(S^{1}\)-equivariant symplectic cohomology \(ESH^{*}(Y,\varphi)\), which involves letting \(S^{1}\) act by reparametrisation on the Hamiltonian orbits that generate Hamiltonian Floer cohomology. Following conventions from [10], one obtains a \(\mathbb{K}[u]\)-module \(ESH^{*}(Y,\varphi)\) with a canonical \(\mathbb{K}[u]\)-module homomorphism
\[Ec^{*}:H^{*}(Y)\otimes_{\mathbb{K}}\mathbb{F}\to ESH^{*}(Y,\varphi).\]
Here \(u\) is a degree two formal variable (the equivariant parameter), and at the chain level each \(1\)-orbit contributes a copy of the \(\mathbb{K}[u]\)-module \(\mathbb{F}:=\mathbb{K}(\!(u)\!)/u\mathbb{K}[\![u]\!]\cong H_{-\ast}(\mathbb{C} \mathbb{P}^{\infty})\).
When \(c_{1}(Y)=0\), we have \(ESH^{\ast}(Y,\varphi)=0\) for the same grading reasons that prove \(SH^{\ast}(Y,\varphi)=0\). Thus \(ESH^{\ast-1}_{+}(Y,\varphi)\) is a free \(\mathbb{F}\)-module isomorphic to \(H^{\ast}(Y)\otimes\mathbb{F}\) as a \(\mathbb{K}[u]\)-module.
**Theorem 1.42**.: _There is an \(\mathbb{R}_{\infty}\)-ordered filtration by graded \(\mathbb{K}[u]\)-submodules of \(H^{\ast}(Y)\otimes_{\mathbb{K}}\mathbb{F}\),_
\[E\mathcal{F}^{\varphi}_{p}:=\bigcap_{\mathrm{generic}\,\lambda>p}\left( \ker Ec_{\lambda}^{\ast}:H^{\ast}(Y)\otimes_{\mathbb{K}}\mathbb{F}\to EHF^{ \ast}(H_{\lambda})\right),\qquad E\mathcal{F}^{\varphi}_{\infty}:=H^{\ast}(Y )\otimes_{\mathbb{K}}\mathbb{F},\]
_where \(Ec_{\lambda}^{\ast}\) is an equivariant continuation map, a grading-preserving \(\mathbb{K}[u]\)-linear map._
_In general, \(\mathcal{F}^{\varphi}_{\lambda}\subset E\mathcal{F}^{\varphi}_{\lambda}\). If \(H^{\ast}(Y)\) lies in even degrees (e.g. CSRs), then_
\[\mathcal{F}^{\varphi}_{\lambda}=H^{\ast}(Y)\cap E\mathcal{F}^{\varphi}_{ \lambda}.\]
The reason the inclusion \(\mathcal{F}^{\varphi}_{\lambda}\subset E\mathcal{F}^{\varphi}_{\lambda}\) may be strict is that the image \(c_{\lambda}^{\ast}(x)\) may lie in the \(\mathbb{K}\)-span of the images of the higher-order maps \(\delta_{j}\), \(j\geq 1\), involved in the equivariant Floer differential \(d=\delta_{0}+\sum u^{j}\delta_{j}\) (for example \(\delta_{1}=\Delta\) induces the BV-operator on \(SH^{\ast}(Y,\varphi)\), see [10]). In that case, \(Ec_{\lambda}^{\ast}(x)=0\).
We will see in [11] that the \(S^{1}\)-equivariant spectral sequence for \(ESH^{\ast}_{+}(Y,\varphi)\) often collapses on the \(E_{1}\)-page, making this approach computationally easier, but it comes at the cost of needing to analyse the presence of repeated copies of generators coming from the parameters \(u^{-j}\). We illustrate an explicit example in Section 10 for a CSR, the Springer resolution of the Slodowy variety \(\mathcal{S}_{32}\), and we show what happens for the non-equivariant and the \(S^{1}\)-equivariant spectral sequences.
### Acknowledgements
We thank Alexandre Minets, Paul Seidel, Ivan Smith, and Nicholas Wilkins for helpful conversations. The first author is grateful to the Mathematics Department of Stanford University for their hospitality during the author's sabbatical year, where the paper was completed. Part of this work is contained in the second author's DPhil thesis [12], and he acknowledges the support of Oxford University, St Catherine's College, and the University of Edinburgh.
## 2. Symplectic \(\mathbb{C}^{\ast}\)-manifolds
### Moment map, fixed locus, convergence points, and contracting actions
**Definition 2.1**.: A **symplectic \(\mathbb{C}^{\ast}\)-manifold**\((Y,\omega,I,\varphi)\) consists of a connected32 symplectic manifold \((Y,\omega)\), a choice of \(\omega\)-compatible almost complex structure \(I\) on \(Y\) (so \(g(\cdot,\cdot):=\omega(\cdot,I\cdot)\) is a Riemannian metric), and a **pseudoholomorphic33**\(\mathbb{C}^{\ast}\)-action \(\varphi\) on \((Y,I)\), such that its \(S^{1}\)-part is Hamiltonian. A **symplectic \(\mathbb{C}^{\ast}\)-submanifold**\(W\subset Y\) is a connected \(\mathbb{C}^{\ast}\)-invariant submanifold which is \(I\)-pseudoholomorphic, so \(\omega\)-symplectic.34 Thus \((W,\omega|_{W},I|_{W},\varphi|_{W})\) is a symplectic \(\mathbb{C}^{\ast}\)-manifold.
Footnote 32: This condition is inessential, but we assume it for convenience.
_Remark 2.2_.: If we only had an \(I\)-pseudoholomorphic35\(S^{1}\)-action, \(\psi_{t}:=\varphi_{e^{2\pi it}}:Y\to Y\), then this locally extends to a \(\mathbb{C}^{\ast}\)-action. The Lie derivative of its vector field \(X_{S^{1}}\) satisfies \(\mathcal{L}_{X_{S^{1}}}(I)=0\), so \(X_{S^{1}}\) and \(X_{\mathbb{R}_{+}}:=-IX_{S^{1}}\) commute.36 So we get a partially defined pseudoholomorphic map \(\varphi:\mathbb{C}^{\ast}\times Y\to Y\), \(\varphi_{e^{2\pi(s+it)}}=\mathrm{Flow}^{\mathrm{s}}_{X_{\mathbb{R}_{+}}}\circ \psi_{t}\). If \(X_{\mathbb{R}_{+}}\) is integrable then this \(\varphi\) becomes a globally defined \(\mathbb{C}^{\ast}\)-action.
Footnote 33: pseudoholomorphicity of \(\varphi\) means the differential of \(\varphi:\mathbb{C}^{\ast}\times Y\to Y\) is \(((i,I),I)\)-holomorphic.
Footnote 34: \(\omega(v,Iv)=g(v,v)\neq 0\) for \(v\neq 0\).
Footnote 35: meaning \((\psi_{t})_{\ast}\circ I=I\circ(\psi_{t})_{\ast}\) for \(t\in\mathbb{R}\).
Instead of "pseudoholomorphic," we will abusively just use the term "holomorphic" from now on. If \(H^{1}(Y)=0\), asking for the \(S^{1}\)-action to be Hamiltonian is equivalent to it being symplectic.37
Let \(\varphi^{y}:\mathbb{C}^{*}\to Y\), \(\tau\mapsto\tau\cdot y\), denote the orbit map of the \(\mathbb{C}^{*}\)-action at \(y\in Y\). By definition, \(\varphi^{y}\) is \((i,I)\)-holomorphic. We call \(\mathbb{R}_{+}\) the subgroup \(\mathbb{R}\hookrightarrow\mathbb{C}^{*}\), \(s\mapsto e^{2\pi s}\), and \(S^{1}\) the subgroup \(\mathbb{R}/\mathbb{Z}\hookrightarrow\mathbb{C}^{*}\), \(s\mapsto e^{2\pi is}\). The flows of the \(\mathbb{R}_{+}\)**-part** and \(S^{1}\)**-part** of \(\varphi\) are given by the vector fields
\[X_{\mathbb{R}_{+}}(y)=\partial_{s}|_{s=0}\,\varphi^{y}(e^{2\pi s})\quad\text{ and }\quad X_{S^{1}}(y)=\partial_{s}|_{s=0}\,\varphi^{y}(e^{2\pi is}). \tag{25}\]
As we parametrise \(S^{1}\) by \(s\in\mathbb{R}/\mathbb{Z}\), the \(S^{1}\)-flow is \(1\)-periodic in \(s\). By assumption, this \(S^{1}\)-action is generated by a Hamiltonian vector field \(X_{S^{1}}=X_{H}\) for a smooth function \(H\) as in Equation (1), called the **moment map** of the action (our convention is that \(\omega(\cdot,X_{H})=dH\)).
**Lemma 2.3**.: \(X_{\mathbb{R}_{+}}=\nabla H\) _and \(X_{S^{1}}=X_{H}=I\nabla H=IX_{\mathbb{R}_{+}}\)._
Proof.: By definition, \(\omega(\cdot,I\nabla H)=g(\cdot,\nabla H)=dH=\omega(\cdot,X_{H})\) so \(X_{H}=I\nabla H\). As \(\varphi^{y}:\mathbb{C}^{*}\to Y\) is holomorphic, \(X_{S^{1}}(y)=d\varphi^{y}(2\pi i)=I\,(d\varphi^{y}(2\pi))=IX_{\mathbb{R}_{+}}(y)\).
**Lemma 2.4**.: _The fixed locus of the \(\mathbb{C}^{*}\)-action equals the fixed locus of the \(S^{1}\)-action,_
\[\mathfrak{F}:=\operatorname{Fix}(\varphi)=\operatorname{Fix}(\varphi|_{S^{1}} )=\operatorname{Crit}(H)=\operatorname{Zeros}(X_{\mathbb{R}_{+}}).\]
_This is a closed subset of \(Y\), and if the \(-X_{\mathbb{R}_{+}}\) flow from \(y\in Y\) converges then its limit lies in \(\mathfrak{F}\)._
Proof.: Since \(X_{\mathbb{R}_{+}}=-IX_{S^{1}}\), a fixed point of the \(S^{1}\)-action is automatically a fixed point of the \(\mathbb{C}^{*}\)-action, so the two fixed loci agree and coincide with \(\operatorname{Crit}(H)=\operatorname{Zeros}(X_{S^{1}})=\operatorname{Zeros}(X_{\mathbb{R}_{+}})\), since \(X_{S^{1}}=X_{H}\).
**Lemma 2.5**.: \(\mathfrak{F}\) _is a symplectic \(I\)-pseudoholomorphic submanifold, whose connected components \(\mathfrak{F}_{\alpha}\) are the critical submanifolds of the Morse-Bott function \(H\). The Morse-Bott indices and Morse-Bott coindices of the \(\mathfrak{F}_{\alpha}\) are all even._
Proof.: By Kobayashi [10], the fixed locus of the Hamiltonian \(S^{1}\)-action is a smooth oriented even-dimensional submanifold, since by an averaging argument we can pick an \(S^{1}\)-invariant Riemannian metric so the \(S^{1}\)-action gives rise to a Killing vector field as required in [10]. Indeed we can recall the short proof. The \(S^{1}\)-invariance of the metric ensures that at a fixed point \(y\), a neighbourhood of \(y\) can be \(S^{1}\)-equivariantly identified with a neighbourhood of \(0\in T_{y}Y\), using the linearised \(S^{1}\)-action on \(T_{y}Y\) [15, Ch.VI, Thm.2.2]. The fixed locus near \(y\) is parametrised via \(\exp_{y}\) by the vector subspace \(H_{0}\) of all \(v\in T_{y}Y\) with \(d_{y}\varphi_{t}\cdot v=v\) for all \(t\in S^{1}\). The holomorphicity assumption on \(\varphi\) ensures that \(IH_{0}=H_{0}\). Viewing \(T_{y}Y\) as an \(S^{1}\)-representation via the action of \(d_{y}\varphi\), the orthogonal complement of \(H_{0}\subset T_{y}Y\) decomposes orthogonally into a direct sum of two-dimensional planes \(T_{i}\) which are rotated with speed \(w_{i}\in\mathbb{Z}\). Thus, as \(Y\) is even-dimensional and oriented, so is \(H_{0}\). Although we do not assume that \(Y\) is Kahler, Frankel's argument that the Hamiltonian is Morse-Bott in the proof of [15, Lem.1] nevertheless applies also in our case by letting \(\phi\) (in his notation) be our Hamiltonian \(H\), \(X\) be our \(S^{1}\)-vector field, and using our holomorphicity assumption for \(\varphi\) to get the equation \(IS=SI\) needed in that proof. By definition, the Morse-Bott index \(\mu_{\alpha}\) of a component \(\mathfrak{F}_{\alpha}\subset\mathfrak{F}\) equals the sum of the dimensions of the planes \(T_{i}\) which are rotated with negative speeds \(w_{i}<0\). The coindex \(\dim_{\mathbb{R}}Y-\dim_{\mathbb{R}}\mathfrak{F}_{\alpha}-\mu_{\alpha}\) is also even as \(Y,\mathfrak{F}_{\alpha}\) are symplectic, hence have even dimensions.
_Remark 2.6_.: The critical points of the Morse-Bott function \(H\) are hyperbolic (the time-one \(\mathbb{R}_{+}\)-flow on the complex line \(T_{j}\) above is multiplication by \(e^{2\pi w_{j}}\), which contributes two eigenvalues of the differential of the flow of modulus \(e^{2\pi w_{j}}\neq 1\), since the normal weights satisfy \(0\neq w_{j}\in\mathbb{Z}\)), so the unstable/stable manifolds are non-properly embedded Euclidean spaces (a classical theorem by Hadamard-Perron and Hartman-Grobman). The non-properness occurs if the flowlines converge to a critical point that does not belong to that unstable/stable manifold, or if a flowline goes to infinity in finite time (which can only occur if \(H\) is not proper).
**Definition 2.7**.: If \(\varphi_{t}(y)\) has a limit as \(\mathbb{C}^{*}\ni t\to 0\), we say \(y\in Y\) has **convergence point**\(y_{0}\), where
\[y_{0}:=\lim_{\mathbb{C}^{*}\ni t\to 0}t\cdot y\ \in\mathfrak{F}.\]
**Lemma 2.8**.: _A \(y\in Y\) has a convergence point if and only if the \(-\nabla H\) flowline from \(y\) converges. The subset of points \(y\in Y\) with a given convergence point \(y_{0}\in\mathfrak{F}\) is the stable manifold \(W^{s}_{-\nabla H}(y_{0})\)._
Proof.: This essentially follows from Lemma 2.4. The \(-\nabla H=-X_{\mathbb{R}_{+}}\) flowline \(\gamma(s)\) with \(\gamma(0)=y\) corresponds to the action \(\gamma(s)=e^{-2\pi s}\cdot y\) by \(-s\in\mathbb{R}_{+}\). So the convergence assumption on the flowline is equivalent to assuming \(\lim(t\cdot y)\) exists for \(\mathbb{R}_{+}\ni t\to 0\). If that limit exists then, by continuity of the \(S^{1}\)-action, \(e^{2\pi i\theta}\lim(t\cdot y)=\lim((e^{2\pi i\theta}t)\cdot y)\) as \(\mathbb{R}_{+}\ni t\to 0\). It follows that the whole circle \(S^{1}\cdot y\) will uniformly converge to the same limit point under the action of \(\mathbb{C}^{*}\ni t\to 0\).
**Definition 2.9**.: For a symplectic \(\mathbb{C}^{*}\)-manifold \((Y,\omega,I,\varphi)\), the \(\mathbb{C}^{*}\)-action \(\varphi\) is **contracting** if there is a compact subdomain \(Y^{\rm in}\subset Y\) such that the \(-X_{\mathbb{R}_{+}}\) flow starting from any point \(y\in Y\) will eventually enter and stay in \(Y^{\rm in}\). In particular, \(\mathfrak{F}\subset Y^{\rm in}\).
**Lemma 2.10**.: _The following are equivalent:_
1. \(\varphi\) _is contracting._
2. \(H\) _is bounded below and_ \(\mathfrak{F}\) _is compact._
3. \(\mathfrak{F}\) _is compact and all points have convergence points._
Proof.: If \(\mathfrak{F}\) is compact, we can pick a compact neighbourhood \(Y^{\rm in}\) of \(\mathfrak{F}\). The flow of \(-\nabla H/\|\nabla H\|^{2}\) starting from \(y\notin\mathfrak{F}={\rm Crit}(H)\) will decrease \(H\) at the rate of \(1\) in time \(1\) unless the flowline reaches \(\mathfrak{F}\). So if \(H\) is bounded below then this flowline must reach \(Y^{\rm in}\) in finite time. Thus \(\varphi\) is contracting. Conversely, suppose \(\varphi\) is contracting. As \(\mathfrak{F}\subset Y^{\rm in}\) is closed and \(Y^{\rm in}\) is compact, also \(\mathfrak{F}\) is compact. There are no zeros of \(X_{\mathbb{R}_{+}}\) outside of \(Y^{\rm in}\) by the contracting assumption, so \(H\) achieves a global minimum inside \(Y^{\rm in}\) as \(-\nabla H\)-flowlines are eventually in \(Y^{\rm in}\). So \(H\) is bounded below. Finally, the assumption on convergence points in (3) is equivalent to saying \(-\nabla H\)-flowlines converge to points in \(\mathfrak{F}\), which is equivalent to saying the absolute minima of \(H\) lie in \(\mathfrak{F}\) (assuming \(\mathfrak{F}\) is compact).
**Assume henceforth that \((Y,\omega,I,\varphi)\) is a symplectic \(\mathbb{C}^{*}\)-manifold with contracting \(\varphi.\)** By condition (3), these are symplectic generalisations of smooth semiprojective varieties [10].
### The topology of the core
For any symplectic \(\mathbb{C}^{*}\)-manifold \(Y\) with contracting \(\varphi\), define
\[\operatorname{Core}(Y):=\{y\in Y\mid\lim_{\mathbb{C}^{*}\ni t\to\infty}t\cdot y \text{ exists}\}=\{y\in Y\mid\text{the }+\nabla H\text{-flowline from $y$ converges}\}.\]
Thus the forward \(X_{\mathbb{R}_{+}}\)-flow of a core point \(y\) converges to some \(y_{\infty}\in\mathfrak{F}\), and the unstable manifold of the \(-\nabla H\) flow of a point \(y_{\infty}\in\mathfrak{F}\) consists of those \(y\in\operatorname{Core}(Y)\) that limit to it.
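Two guiding examples (our illustrations, in the conventions above): for the weighted scaling action \(t\cdot(z_{1},\dots,z_{n})=(t^{w_{1}}z_{1},\dots,t^{w_{n}}z_{n})\) on \(\mathbb{C}^{n}\) with all \(w_{i}>0\), the limit \(\lim_{t\to\infty}t\cdot y\) exists only for \(y=0\), so \(\operatorname{Core}(\mathbb{C}^{n})=\mathfrak{F}=\{0\}\). For the total space of \(\mathcal{O}(-1)\to\mathbb{P}^{1}\) with \(\varphi\) scaling the fibres, \(\mathfrak{F}\) is the zero section and \(\operatorname{Core}(Y)=\mathfrak{F}\cong\mathbb{P}^{1}\), since the \(+\nabla H\)-flow of any point off the zero section escapes to infinity along its fibre.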
If \(Y\) is compact, then \(Y=\operatorname{Core}(Y)\). For non-compact \(Y\), we now prove \(\operatorname{Core}(Y)\) is compact. We first need a technical lemma about \(X_{\mathbb{R}_{+}}\)-flowlines, which arise as \(\gamma(t)=e^{2\pi t}\cdot\gamma(0)\) with flow-time \(t\in\mathbb{R}\).
**Lemma 2.11**.: _Given any compact subset \(K\subset Y\), there cannot be \(-X_{\mathbb{R}_{+}}\)-flowlines \(\gamma_{n}:[a_{n},b_{n}]\to Y\) with endpoints \(\gamma_{n}(a_{n}),\gamma_{n}(b_{n})\in K\), with \(c_{n}\in[a_{n},b_{n}]\) such that38 \(\gamma_{n}(c_{n})\to\infty\)._
Footnote 38: i.e. given any compact subset \(K^{\prime}\subset Y\), for all large enough \(n\) the \(\gamma_{n}\) leave \(K^{\prime}\) at some time \(c_{n}\).
_If \(\gamma\) is an \(X_{\mathbb{R}_{+}}\)-flowline with \(\gamma(0)=p\) and \(\gamma(c_{n})\to\infty\) for some \(c_{n}\to\infty\), then there is a neighbourhood \(p\in U_{p}\) such that for every compact \(K\subset Y\) there is a time \(\tau_{K}>0\) such that \(\varphi_{t}(U_{p})\cap K=\emptyset\) for all \(\tau_{K}<t\in\mathbb{R}_{+}\) (i.e. "flowlines uniformly diverge to infinity near a divergent flowline")._
Proof.: Consider an exhaustion39 of \(Y\) by a sequence of compact subsets \(\emptyset=K_{-1}\subset K_{0}\subset K_{1}\subset\cdots\). By redefining the \(K_{n}\), we may assume that \(Y^{\rm in}\subset K_{0}=K\) and that \(K_{n}\subset{\rm Int}(K_{n+1})\) (note that any compact \(K^{\prime}\subset Y\) lies inside some \(K_{n}\), otherwise \(({\rm Int}(K_{n+2})\setminus K_{n})\cap K^{\prime}\) is an open cover of \(K^{\prime}\) with no finite subcover). Suppose the \(\gamma_{n}\) are as in the claim. By \(\mathbb{R}\)-reparametrisation (so \(\gamma_{n}(\cdot+c_{n})\) and translating \([a_{n},b_{n}]\) by \(-c_{n}\)) we may assume that \(p^{\prime}_{n}:=\gamma_{n}(0)\in K_{n}\setminus K_{n-1}\). Let \(\tau_{n}\geq 0\) be the smallest time for which \(p_{n}:=\gamma_{n}(-\tau_{n})\in K_{1}\). After passing to a subsequence, we may assume that \(p_{n}\to p\) in \(K_{1}\). Let \(\gamma:\mathbb{R}\to Y\) be the (unique) \(-\nabla H\) flowline with \(\gamma(0)=p\). By \(\mathbb{R}\)-reparametrisation, we may now assume \(p_{n}=\gamma_{n}(0)\) and \(p^{\prime}_{n}=\gamma_{n}(\tau_{n})\). By construction, \(\gamma_{n}|_{[0,\tau_{n}]}\subset Y\setminus K_{0}\subset Y\setminus Y^{\rm in}\). By smooth dependence of ODE solutions on initial conditions, given a compact interval \([0,T]\subset\mathbb{R}\), for large enough \(n\) the restrictions of \(\gamma,\gamma_{n}\) to \([0,T]\) are arbitrarily close. If the \(\tau_{n}\) are unbounded, then it
follows that \(\gamma(t)\in Y\setminus Y^{\mathrm{in}}\) for \(t\geq 0\), contradicting the contracting assumption on \(\varphi\), as \(\gamma(0)\) would not flow into \(Y^{\mathrm{in}}\) via \(-X_{\mathbb{R}_{+}}=-\nabla H\). By passing to a subsequence, we may assume \(\tau_{n}\to\tau>0\). Then the image \(C\) of \(\gamma:[0,\tau]\to Y\) is compact, yet it is not contained in any \(K_{n}\) (since \(\gamma(\tau_{n})\) is arbitrarily close to \(p^{\prime}_{n}\in K_{n}\setminus K_{n-1}\)), so the sets \(\mathrm{Int}(K_{n+2})\setminus K_{n}\) form an open cover of \(C\) with no finite subcover, contradiction.
To prove the second claim, we first check the sub-claim that \(\gamma(t)\notin K\) for all \(t\geq T_{K}\), for some \(T_{K}\geq 0\) depending on \(K\). In the notation above, we may assume \(\gamma(c_{n})\in K_{n}\setminus K_{n-1}\), and by contradiction \(\gamma(b_{n})\in K\) for some \(b_{n}>c_{n}\). Then \(\gamma_{n}:=\gamma|_{[0,b_{n}]}\) contradicts the first claim (for the compact subset \(K\cup\{p\}\)). Now suppose the second claim fails, so there is a compact subset \(K\) of \(Y\) and there are \(p_{n}\to p\), such that the flowlines \(\gamma_{n}\) of \(X_{\mathbb{R}_{+}}\) starting at \(\gamma_{n}(0)=p_{n}\) satisfy \(\gamma_{n}(b_{n})\in K\) for some \(b_{n}\to\infty\). By smooth dependence of ODE solutions on initial conditions, given \(T>0\), the \(\gamma_{n},\gamma\) restricted to \([0,T]\) are arbitrarily close for sufficiently large \(n\). Thus, after passing to a subsequence of the flowlines \(\gamma_{n}\) and renumbering, we may assume \(\gamma,\gamma_{n}\) are very close on \([0,c_{n}]\), thus: \(\gamma_{n}(c_{n}),\gamma(c_{n})\) are both in \(K_{n}\setminus K_{n-1}\) and both \(\gamma_{n}(t),\gamma(t)\notin K\) for \(t\in[T_{K},c_{n}]\). As \(b_{n}\to\infty\) and \(\gamma_{n}(b_{n})\in K\), this implies that \(b_{n}>c_{n}\) for sufficiently large \(n\). Thus \(\gamma_{n}|_{[0,b_{n}]}\) contradicts the first claim (for the compact subset \(K\cup\{p\}\cup\{p_{n}:n\geq 1\}\)).
**Lemma 2.12**.: _The core is a \(\mathbb{C}^{*}\)-invariant compact subset of \(Y\). It is the union of all unstable manifolds,_
\[\mathrm{Core}(Y)=\bigcup_{y\in\mathfrak{F}}W^{u}_{-\nabla H}(y)=\bigcup_{ \alpha}W^{u}_{-\nabla H}(\mathfrak{F}_{\alpha}).\]
_The complement \(Y^{\prime}:=Y\setminus\mathrm{Core}(Y)\) is a trivial real line bundle over the smooth compact manifold \(\Sigma:=Y^{\prime}/\mathbb{R}_{+}\), whose \(\mathbb{R}\)-fibres parameterise the \(X_{\mathbb{R}_{+}}\)-flowlines in \(Y^{\prime}\). We can identify \(\Sigma\) with a smooth real hypersurface in \(Y^{\prime}\) along which \(X_{\mathbb{R}_{+}}\) is everywhere strictly pointing outward (to infinity)._
Proof.: Recall the convergence point \(y_{0}=\lim_{t\to 0}t\cdot y\in\mathfrak{F}\), and clearly \(\mathfrak{F}\subset\mathrm{Core}(Y)\). So the core is the union of all \(-\nabla H\) flowlines that converge at both ends,40 including asymptotics. Such flowlines must be trapped in some large enough compact subset of \(Y\), otherwise we could build a sequence of them that contradict Lemma 2.11. As they are all trapped in a compact subset, standard arguments about breakings of families of negative gradient flowlines (or arguing directly, by smooth dependence on initial conditions of ODE solutions of \(\gamma^{\prime}(t)=-\nabla H\)) imply that \(\mathrm{Core}(Y)\) is compact.
Footnote 40: Convergence at the negative end is automatic by the contracting assumption.
As \(\mathfrak{F}\subset\mathrm{Core}(Y)\) is disjoint from \(Y^{\prime}\), the \(\mathbb{R}_{+}\)-action on \(Y^{\prime}\) is free. To show that \(Y^{\prime}/\mathbb{R}_{+}\) is a smooth manifold and \(Y^{\prime}\to Y^{\prime}/\mathbb{R}_{+}\) is a smooth principal \(\mathbb{R}_{+}\)-bundle, it is therefore enough to show that this action is also proper.41 One characterisation of properness42 is that whenever we have convergent sequences \(y_{n}\to p\), \(r_{n}\cdot y_{n}\to q\) in \(Y^{\prime}\), then \(r_{n}\in\mathbb{R}_{+}\) needs to have a convergent subsequence. Assuming the contrary, there is a subsequence \(r_{n}\to\infty\) (switching the roles of \(y_{n}\) and \(r_{n}\cdot y_{n}\) changes \(r_{n}\) to \(1/r_{n}\), so this also deals with the case \(r_{n}\to 0\)). By the second claim in Lemma 2.11, \(r_{n}\cdot y_{n}\to\infty\) since \(r_{n}\cdot p\to\infty\) as \(r_{n}\to\infty\). This contradicts that \(r_{n}\cdot y_{n}\to q\).
Footnote 41: This is a standard theorem, see e.g. [1, Thm.1.11.4.].
Footnote 42: Given e.g. in [1, Prop.21.5(b)].
Hence we have a smooth principal \(\mathbb{R}_{+}\)-bundle \(Y^{\prime}\to Y^{\prime}/\mathbb{R}_{+}\), which is thus trivial and has a section.43 Identifying \(\mathbb{R}\cong\mathbb{R}_{+}\) by the exponential map identifies it with a real line bundle whose zero section is the image of the above section. The claim now follows immediately, up to the compactness of \(\Sigma:=Y^{\prime}/\mathbb{R}_{+}\). Since \(\mathrm{Core}(Y)\) is compact, by enlarging \(Y^{\mathrm{in}}\) we may assume that its interior contains \(\mathrm{Core}(Y).\) Then \(\Sigma\) is compact being the image of a surjective quotient map \(\partial Y^{\mathrm{in}}\to Y^{\prime}/\mathbb{R}_{+}\) from the compact \(\partial Y^{\mathrm{in}}\).
Footnote 43: Principal \(\mathbb{R}_{+}\)-fibre bundles are classified up to isomorphism by homotopy classes of maps \(Y^{\prime}/\mathbb{R}_{+}\to B\mathbb{R}_{+}\) into the classifying space. But we can take \(E\mathbb{R}_{+}=\mathbb{R}_{+}\) with the standard \(\mathbb{R}_{+}\)-action, so \(B\mathbb{R}_{+}=(E\mathbb{R}_{+})/\mathbb{R}_{+}=\mathrm{point}\).
**Lemma 2.13**.: _We may always assume \(Y^{\mathrm{in}}\) contains \(\mathrm{Core}(Y)\) in its interior, \(\Sigma:=\partial Y^{\mathrm{in}}\) is a smooth \(S^{1}\)-invariant hypersurface along which \(X_{\mathbb{R}_{+}}\) points strictly outwards, and there is an isomorphism_
\[\Psi:Y^{\mathrm{out}}=Y\setminus\mathrm{int}(Y^{\mathrm{in}})\cong\Sigma\times[ 1,\infty),\quad\rho\cdot y\leftrightarrow(y,\rho).\]
_In particular, \(\rho\) is an \(S^{1}\)-invariant function._
Proof.: If we ignore the \(S^{1}\)-invariance claims, then the claim follows immediately from Lemma 2.12. As \(S^{1}\) is compact, we can ensure that \(S^{1}\cdot Y^{\mathrm{in}}\) does not intersect \(\Sigma\times[\rho_{1},\infty)\) for some finite \(\rho_{1}\), and we redefine \(Y^{\mathrm{out}}\) to be the latter set. The \(S^{1}\)-invariant function \(\widetilde{H}(y)=\int_{S^{1}}\rho(e^{2\pi is}\cdot y)\ ds\) is an exhausting (i.e. proper and bounded below) smooth function on \(Y\), where we extend \(\rho\equiv 1\) on \(Y^{\mathrm{in}}.\) By computing \(d\widetilde{H}(\nabla H)\), using that \(\nabla H=X_{\mathbb{R}_{+}}\), \(d\varphi_{e^{2\pi is}}\cdot X_{\mathbb{R}_{+}}=X_{\mathbb{R}_{+}}\), and \(d\rho(X_{\mathbb{R}_{+}})>0\) by construction, we deduce that \(d\widetilde{H}(\nabla H)>0\). Thus the preimage of a large regular value \(M\) of \(\widetilde{H}\) is a smooth \(S^{1}\)-invariant compact hypersurface that avoids the compact set \(S^{1}\cdot\partial Y^{\mathrm{out}}\), and \(X_{\mathbb{R}_{+}}\) is strictly outward pointing along it. This hypersurface can be taken as the new definition of \(\Sigma:=\widetilde{H}^{-1}(M)=\partial Y^{\mathrm{in}}\) (so the new \(Y^{\mathrm{out}}\) is the forward \(X_{\mathbb{R}_{+}}\)-flow of this \(\Sigma\)). The final claim follows as the \(S^{1}\) and \(\mathbb{R}_{+}\) actions commute.
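For instance (our illustration, in the conventions above): for the weighted scaling action on \(\mathbb{C}^{n}\) with all \(w_{i}>0\), one checks with our sign conventions that the moment map is \(H=\pi\sum_{i}w_{i}|z_{i}|^{2}\), so we may take \(Y^{\mathrm{in}}=\{H\leq 1\}\). Then \(\Sigma=\{H=1\}\) is an ellipsoid along which \(X_{\mathbb{R}_{+}}=\nabla H\) points strictly outwards, and \(\Psi\) identifies \(Y^{\mathrm{out}}=\{H\geq 1\}\) with \(\Sigma\times[1,\infty)\) via the \(\mathbb{R}_{+}\)-flow.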
We now show that the core captures the topology of \(Y\) in all reasonable cases.
**Proposition 2.14**.: _If \((Y,\mathrm{Core}(Y))\) is a CW-pair,44 then \(Y\) deformation retracts onto \(\mathrm{Core}(Y)\). This assumption holds when \(I\) is integrable and \(\mathrm{Core}(Y)\subset Y\) is cut out by analytic equations (e.g. all CSRs)._
Footnote 44: More generally, it suffices that \((Y,\mathrm{Core}(Y))\) is a neighbourhood deformation retract pair, in other words the inclusion \(\mathrm{Core}(Y)\to Y\) is a cofibration [11, Sec.6.4].
Proof.: First we observe that \(Y\) can be deformation retracted onto an arbitrarily small neighbourhood \(U\) of \(A:=\mathrm{Core}(Y)\). Namely, by rescaling \(-X_{\mathbb{R}_{+}}\) by a cut-off function vanishing near the core,45 the flow of the resulting vector field can be used to define that deformation. Secondly, recall that for any CW pair \((Y,A)\), \(A\) is a deformation retract of some neighbourhood \(N_{A}\) of \(A\subset Y\)[10, Prop.A.5].
Footnote 45: We remark that the map defined by taking the limits of the \(-X_{\mathbb{R}_{+}}\) flow is usually a discontinuous map.
Now combine the two steps: deform \(Y\) onto an open neighbourhood \(U\) with \(A\subset U\subset N_{A}\), then restrict to \(U\) the deformation of \(N_{A}\) onto \(A\) to get a deformation retraction of \(U\) onto \(A\).
The CW assumption on \((Y,A)\) holds for any compact analytic subset \(A\) of any analytic manifold \(Y\): by Giesecke and Lojasiewicz [11, 12] the pair \((Y,A)\) can even be triangulated.46 This applies to CSRs \(Y=\mathfrak{M}\): the (compact) core is the preimage, under the analytic map \(\pi:\mathfrak{M}\to\mathfrak{M}_{0}\), of the unique \(\mathbb{C}^{*}\)-fixed point in \(\mathfrak{M}_{0}\), so it is a compact analytic subset of the analytic manifold \(\mathfrak{M}\).
Footnote 46: by analytic simplices, so the homeomorphisms from the simplices are analytic on the interiors of the simplices.
_Remark 2.15_ (**Atiyah-Bott Filtration**).: Unlike the fixed locus \(\mathfrak{F}\subset Y,\) the core is in general singular. As in Atiyah-Bott's discussion [1, Prop.1.1.9], \(Y\) has a \(\mathbb{C}^{*}\)-equivariant finite Morse stratification: \(Y\) is the disjoint union of the locally closed submanifolds given by the stable manifolds \(U_{\alpha}:=W^{s}_{-\nabla H}(\mathfrak{F}_{\alpha})\) of points whose \(-\nabla H\) flowlines converge to the component \(\mathfrak{F}_{\alpha}\subset\mathfrak{F}=\mathrm{Crit}(H)\). There is a partial order on the indices: \(\alpha<\beta\) if a sequence of \(-\nabla H\)-trajectories \(\gamma_{n}:\mathbb{R}\to Y\), with \(\gamma_{n}(0)\in U_{\alpha}\), converges to a broken trajectory, and one of the breakings occurs at a point in \(\mathfrak{F}_{\beta}\). This gives the closure condition:
\[\overline{U_{\alpha}}\subset\sqcup_{\alpha\leq\beta}U_{\beta}. \tag{26}\]
The partial order is constructed inductively by starting with a local minimum \(\mathfrak{F}_{\alpha}\), for which \(\alpha\) will be minimal w.r.t. \(<\), and investigating breakings. In particular, \(\alpha<\beta\) implies \(H(\mathfrak{F}_{\alpha})<H(\mathfrak{F}_{\beta})\), but the partial order may be "finer" than the total order by \(H\)-values (for the latter, we replace \(\mathfrak{F}_{\alpha}\) by the disjoint union of all \(\mathfrak{F}_{\alpha}\) with the same \(H(\mathfrak{F}_{\alpha})\) value so that antisymmetry holds for the order \(\leq\)). By making the partial ordering "coarser" to obtain a total ordering (taking disjoint unions of \(\mathfrak{F}_{\alpha}\)'s if necessary), we obtain a filtration of \(Y\) by subspaces \(\emptyset=W_{0}\subset W_{1}\subset\cdots\subset Y\). This induces a filtration of \(H^{*}(Y)\) by cup-product ideals by considering \(\ker(H^{*}(Y)\to H^{*}(W_{i}))\). These ideals arise as sums of sub-collections of summands from (28), e.g. those above a given \(H\)-value if we order by \(H\)-values.
We also obtain a stratification of \(\mathrm{Core}(Y)\) by unstable manifolds \(D_{\alpha}:=W^{u}_{-\nabla H}(\mathfrak{F}_{\alpha})\), which could be extended to a stratification of \(Y\) by artificially allowing as a "stable manifold" the open subset \(Y\setminus\mathrm{Core}(Y)\) of points whose \(+\nabla H\) flow goes to infinity. So \(\mathrm{Core}(Y)\) is a compact stratified subspace of \(Y\): the union of the \(W^{u}_{-\nabla H}(\mathfrak{F}_{\alpha})=W^{s}_{+\nabla H}(\mathfrak{F}_{\alpha})\) by Lemma 2.12. Note that to reduce to the previous discussion we would switch the sign to \(+\nabla H\), so if we keep the original partial order \(<\) then (26)
changes inequality-direction:
\[\overline{D}_{\alpha}\subset\sqcup_{\alpha\geq\beta}D_{\beta}. \tag{27}\]
If the stratification by unstable manifolds \(D_{\alpha}\) were a Whitney stratification (thus a Thom-Mather stratification), then \((Y,\operatorname{Core}(Y))\) would be a CW pair by Goresky [11]. If the Morse-Smale condition47 holds then unstable manifolds determine a Whitney stratification by Nicolaescu [13, Ch.4],48 so Proposition 2.14 applies. We are unsure whether \((Y,\operatorname{Core}(Y))\) is a CW pair in complete generality.
Footnote 47: \(W^{s}_{-\nabla H}(p)\) is transverse to \(W^{u}_{-\nabla H}(q)\) for any critical points \(p,q\in\operatorname{Crit}(H)\).
Footnote 48: And proved originally by Laudenbach [12], under an additional hypothesis.
Atiyah-Bott call a subset \(J\) of the indices **open** if \(\lambda\in J\) implies that any index \(\alpha<\lambda\) is also in \(J\) (and an index-subset is **closed** if the complement is open). Then \(U_{J}=\cup_{\alpha\in J}U_{\alpha}\) is open if and only if \(J\) is open (and analogously for the closed case). For \(J\) open, \(U_{J}\subset Y\) is an open symplectic \(\mathbb{C}^{*}\)-submanifold with \(\operatorname{Core}(U_{J})=D_{J}=\cup_{\alpha\in J}D_{\alpha}\), but if \(Y\) lives over a convex base (Definition 5.1), this does not immediately imply that \(U_{J}\) lies over a convex base because the restriction \(\Psi|_{U_{J}}\) is typically not proper. In the subsequent discussion we will use
\[J^{+}:=J\cup\{\lambda\},\]
where \(\lambda\) is a minimal index in the complement \(J^{\prime}\) of \(J\), so \(U_{\lambda}\subset U_{J^{+}}\) is closed, and \(D_{\lambda}\subset D_{J^{+}}\) is open.
One can also filter \(\operatorname{Core}(Y)\) by "stable subspaces",
\[U_{\alpha}^{C}:=U_{\alpha}\cap\operatorname{Core}(Y),\]
so that we obtain a closure condition like in (26), \(\overline{U}_{\alpha}^{C}\subset\sqcup_{\alpha\leq\beta}U_{\beta}^{C}.\) In particular, \(U_{\alpha}^{C}\subset U_{J}^{C}:=\cup_{\alpha\in J}U_{\alpha}^{C}\) is closed (for open \(J\)). However, the Thom isomorphism argument of [1, Eq.(1.16)] need not apply to the \(U_{J}^{C}\) because in general there is no clear way to deformation retract a neighbourhood \(U_{\lambda}^{tub,C}\) of \(U_{\lambda}\subset U_{J^{+}}\) onto \(\mathfrak{F}_{\lambda}^{tub}:=U_{\lambda}^{tub,C}\cap D_{\lambda}\)_while staying within \(\operatorname{Core}(Y)\)_, unlike the easy case of a neighbourhood \(U_{\lambda}^{tub}\) of \(U_{\lambda}\subset U_{J^{+}}\) deformation retracting to \(\mathfrak{F}_{\lambda}^{tub}\) within \(Y\) by combining a rescaling of the \(-\nabla H\) flow and then exploiting \(\exp_{p}\) for an \(S^{1}\)-invariant metric, for \(p\in\mathfrak{F}_{\lambda}\)[13, Sec.5.1 p.54].
**Definition 2.16**.: A space \(C\) is **HLC** (homologically locally connected) if for each neighbourhood \(U\subset C\) of any \(c\in C\), \(c\) has a neighbourhood \(V\subset U\) making \(\widetilde{H}_{*}(V)\to\widetilde{H}_{*}(U)\) vanish in reduced singular homology. A space is **HLC+** if it is HLC, paracompact49 and Hausdorff. CW complexes are HLC+ spaces [12, Prop.A.4], so this includes all topological manifolds (possibly with boundary) and all real or complex algebraic (or semi-algebraic) varieties [14, 15]. Examples of HLC spaces are locally contractible spaces and any space homotopy equivalent to a CW complex. For paracompact Hausdorff spaces \(X\), Alexander-Spanier cohomology \(H^{*}_{AS}(X)\) and Čech cohomology \(\check{H}^{*}(X)\) are isomorphic [11, Cor.6.8.8] (isomorphic to sheaf cohomology of \(X\) for the sheaf of locally constant coefficients). For \(X\) HLC+ these are isomorphic to singular cohomology \(H^{*}(X)\)[11, Cor.6.9.5].
Footnote 49: A useful fact is that a locally compact, Hausdorff, second countable space is paracompact.
**Proposition 2.17**.: _For any symplectic \(\mathbb{C}^{*}\)-manifold,_
\[H^{*}(Y)\cong H^{*}_{AS}(\operatorname{Core}(Y))\cong\check{H}^{*}( \operatorname{Core}(Y)).\]
_If \(\operatorname{Core}(Y)\) is HLC, then these are isomorphic to \(H^{*}(\operatorname{Core}(Y))\)._
\(\operatorname{Core}(Y)\) _is HLC if and only if each \(\mathfrak{F}_{\alpha}\) admits an HLC neighbourhood in \(\operatorname{Core}(Y)\)._
Proof.: Abbreviate \(A:=\operatorname{Core}(Y)\). By [11, Cor.6.9.9], \(H^{*}_{AS}(A)\cong\varinjlim H^{*}(U)\) over restrictions for all neighbourhoods \(U\) of \(A\). We can pick a cofinal sequence of such \(U\) using the \(\mathbb{C}^{*}\)-action, then it is a direct limit over isomorphisms, so \(H^{*}_{AS}(A)\cong H^{*}(U)\cong H^{*}(Y)\). Note \(A\) is paracompact (since \(A\) is closed and \(Y\) is paracompact) and Hausdorff (since \(Y\) is).
For the last claim, only one direction is non-trivial. Given a neighbourhood \(c\in U\subset\operatorname{Core}(Y)\), \(\varphi_{t}(c)\) converges towards some \(\mathfrak{F}_{\alpha}\) as \(t\to\infty\). So \(c^{\prime}:=\varphi_{H}^{t}(c)\) lies in an HLC neighbourhood of \(\mathfrak{F}_{\alpha}\) for large \(t\in\mathbb{R}_{+}\). Pick neighbourhoods \(c^{\prime}\in V^{\prime}\subset W^{\prime}\) with vanishing \(\widetilde{H}_{*}(V^{\prime})\to\widetilde{H}_{*}(W^{\prime})\) and \(W:=\varphi_{H}^{-t}(W^{\prime})\subset U\). Then \(c\in V:=\varphi_{H}^{-t}(V^{\prime})\subset U\) and \(\widetilde{H}_{*}(V)\to\widetilde{H}_{*}(W)\to\widetilde{H}_{*}(U)\) vanishes as the first map vanishes.
**Corollary 2.18**.: \(\mathrm{Core}(Y)\) _is connected, indeed path-connected._
Proof.: \(H^{0}(Y)\cong H^{0}_{AS}(\mathrm{Core}(Y))\) has rank one as \(Y\) is connected, and Alexander-Spanier cohomology in degree \(0\) detects connectedness [11, Cor.6.4.7]. Path-connectedness will follow from Lemma 2.22 and Corollary 3.16: any \(\mathfrak{F}_{\alpha}\) can be path-connected to \(\mathfrak{F}_{\min}:=\min H\) (which is a connected manifold and thus path-connected) by a succession of \(\mathbb{C}^{*}\)-invariant holomorphic spheres in \(\mathrm{Core}(Y)\) (see Corollary 3.16), and any point of \(\mathrm{Core}(Y)\) lies on such a sphere.
_Remark 2.19_.: In the setup of smooth semiprojective varieties, Hausel-Rodriguez-Villegas [10, Thm.1.3.1 and Cor.1.3.6] studied the Bialynicki-Birula decomposition of the variety, which corresponds in our context to the stable/unstable manifolds above. They first prove that \(H^{*}(Y)\cong H^{*}(\mathrm{Core}(Y))\); secondly, they prove that \(\pi_{*}(\mathrm{Core}(Y))\cong\pi_{*}(Y)\); and in the end, they use that \(\mathrm{Core}(Y)\) is a CW complex in their setting to conclude that the inclusion \(\mathrm{Core}(Y)\to Y\) is a homotopy equivalence by Whitehead's theorem. Their proof, as written, only relies on classical algebraic topology arguments involving the \(U_{\alpha},D_{\alpha}\) from Remark 2.15 and using the Thom isomorphism from [1, Eq.(1.16)]. We believe there is an imprecision in their proof. The second Thom isomorphism in [10, p.119] is claimed for \(\mathrm{Core}(Y)\) using the filtration by \(D_{\alpha}\), but this involves the different closure condition (27). Perhaps the misapprehension was assuming that \(D_{\lambda}\subset D_{J^{+}}\) is closed: this would explain why in [10, Eq.(1.3.9)] they claim to use an embedding \(F^{tub}_{\lambda}\cap(U_{J}\cap D_{J^{+}})\to D_{J}\) which does not in fact exist50 (e.g. in Example 1.4 using \(D_{\lambda}=S^{2}\setminus p\), for the minimum point \(D_{J}=\{p\}=\min H\)). One can salvage both proofs of [10, Thm.1.3.1 and Cor.1.3.6] (respectively the statement that the inclusion induces an isomorphism \(H^{*}(\mathrm{Core}(Y))\cong H^{*}(Y)\) and isomorphisms \(\pi_{*}(\mathrm{Core}(Y))\cong\pi_{*}(Y)\)) for any symplectic \(\mathbb{C}^{*}\)-manifold for which a neighbourhood \(V:=D_{J^{+}}\cap\{H<H(\mathfrak{F}_{\lambda})-(\text{small constant})\}\) of \(D_{J}\) in \(D_{J^{+}}\) is homotopy equivalent to \(D_{J}\) (for example, this holds if a neighbourhood of \(D_{J}\) in \(\mathrm{Core}(Y)\) deformation retracts to \(D_{J}\)). Indeed, one can then excise \(D_{J}\) from \(V\) so: \(H^{*}(D_{J^{+}},D_{J})\cong H^{*}(D_{J^{+}},V)\cong H^{*}(D_{J^{+}}-D_{J},V-D_ {J})\cong H^{*-\mu_{\alpha}}(\mathfrak{F}_{\lambda})\), using the Thom isomorphism by viewing \(D_{\lambda}\) as a vector bundle over \(\mathfrak{F}_{\lambda}\), and the rest of the proof of [10, Thm.1.3.1] would hold. In the proof of [10, Cor.1.3.6], one makes the following replacements in [10, Eq.(1.3.9)]: \(\pi_{1}(D_{\lambda}^{tub})\) by \(\pi_{1}(\mathfrak{F}_{\lambda}^{tub})\); \(\pi_{1}(D_{J})\) by \(\pi_{1}(U_{J}\cap D_{J^{+}})\); then one uses a suitable rescaling of \(-\nabla H\) to obtain a flow inducing \(\pi_{1}(U_{J}\cap D_{J^{+}})\cong\pi_{1}(V)\), and the assumption on \(V\) gives \(\pi_{1}(V)\cong\pi_{1}(D_{J})\). In the context of smooth semiprojective varieties, we are dealing with analytic subvarieties \(D_{J}\subset D_{J^{+}}\), so one could argue the existence of \(V\) as in the proof of Proposition 2.14; and for the cohomological claim it is enough to use the proof of Proposition 2.17 to argue \(H^{*}(D_{J^{+}},D_{J})\cong\varinjlim H^{*}(D_{J^{+}},V_{n})\) over a sequence of \(V_{n}\) which shrink to \(D_{J}\), obtained from \(V\) by flowing with \(-\nabla H\). However, if one already uses those proofs, one might as well bypass the Atiyah-Bott filtration.
Footnote 50: If \(D_{J}\) was a typo for \(U_{J}\cap D_{J^{+}}\), one still needs to prove that \(\pi_{1}(U_{J}\cap D_{J^{+}})\cong\pi_{1}(D_{J})\), which is not obvious as the question of whether \(U_{J}\cap D_{J^{+}}\) deformation retracts onto \(D_{J}\) is not clear without appealing to CW complex structures. One can salvage the proof if there existed a map \(\pi_{1}(U_{J}\cap D_{J})\to\pi_{1}(D_{J})\) that can be inserted in the second diagram of [10, Eq.(1.3.9)], so \(\pi_{1}(\mathfrak{F}_{\lambda}^{tub}\cap(U_{J}\cap D_{J^{+}}))\to\pi_{1}(U_{J} \cap D_{J^{+}})\to\pi_{1}(D_{J})\), making the diagram commute.
### Topology via the moment map, and dealing with its possible non-properness
**Lemma 2.20**.: _There is an exhausting (i.e. proper and bounded from below) Morse-Bott function \(f:Y\to\mathbb{R}\) which equals \(H\) on \(Y^{\mathrm{in}}\), with \(df(X_{\mathbb{R}_{+}})>0\) on \(Y^{\mathrm{out}}\). One can choose \(\Sigma\) in Lemma 2.13 to be a (sufficiently large) level set of \(f\), and \(\nabla f\) is strictly outward pointing on each hypersurface \(\Sigma\times\{\rho\}\). The \(\mathrm{Core}(Y)\) is unaffected, and the Morse-Bott cohomologies of \(H\) and \(f\) coincide at chain level._
Proof.: \(H\) is bounded below, but we may need to modify it on \(Y^{\mathrm{out}}\) to make it proper. In the proof of Lemma 2.13, we built \(\widetilde{H}\) on \(Y^{\mathrm{out}}\). We interpolate \(H\) with \(\widetilde{H}\) on \(\Sigma\times[1,\infty)\), \(f:=(1-\psi(\rho))H+\psi(\rho)\widetilde{H}\), where \(\psi:[1,\infty)\to[0,1]\) is a cut-off function such that \(\psi\) increases from \(\psi=0\) near \(\rho=1\) to \(\psi=1\) for \(\rho\geq 2\). We may assume that \(\widetilde{H}\geq H\) in the region \(\rho\in[1,2]\) (by rescaling \(\widetilde{H}\) by a large constant). A simple calculation shows \(df(X_{\mathbb{R}_{+}})>0\) on \(Y^{\mathrm{out}}\) (using that \(dH,d\widetilde{H},d\rho\) are strictly positive on \(X_{\mathbb{R}_{+}}\) on \(Y^{\mathrm{out}}\) and \(\psi^{\prime}\geq 0\)). Regarding \(\Sigma\): in the proof of Lemma 2.13, we picked \(\Sigma\) to be a level set of \(\widetilde{H}\), and we may choose this level set to lie in the region \(\rho\geq 2\) where \(f=\widetilde{H}\).
**Corollary 2.21**.: _Recall we always assume \(Y\) is connected. If \(H\) is proper, then the fibres of \(H:Y\to\mathbb{R}\) are connected. If \(H\) is not proper,51 then the minimum of \(H\) is connected and so are all level sets \(H=c\) with52\(c<\liminf H\). Moreover, the \(\Sigma\) in Lemma 2.13 is connected._
Footnote 51: e.g. this often occurs naturally for the open subsets \(U_{J}\subset Y\) from Remark 2.15, due to the removal of \(U_{\alpha}\)’s from \(Y\).
Footnote 52: For any sequence of points \(p_{n}\to\infty\in Y\) (meaning \(p_{n}\) leaves any \(K_{i}\) of a given exhaustion of \(Y\) by compact subsets \(K_{i}\)), let \(H_{p}:=\liminf H(p_{n})\). Then \(\liminf H:=\inf\{H_{p}:\text{all sequences $p_{n}\to\infty$}\}\). So \(H\leq c\) is compact for \(c<\liminf H\).
Proof.: The moment map of a Hamiltonian \(S^{1}\)-action on a connected compact symplectic manifold has connected fibres [1, Lemmas 2.1 and 2.2]. That same proof goes through for connected open symplectic manifolds if one assumes that the moment map is exhausting.
If \(H\) is not proper, then consider instead the Morse-Bott function \(f\) in Lemma 2.20. The connectivity lemma in Nicolaescu [12, Lem.3.62] proves the claim for Morse-Bott functions on closed connected manifolds whose indices and co-indices are even. We can adjust his proof to work for exhausting Morse-Bott functions as follows. Suppose \(f\geq c\) is disconnected, for large \(c>0\). By the same handle-attachment argument as in Nicolaescu's proof, \(Y\) is obtained from \(f\geq c\) by handle-attachments which do not change the number of connected components, so this contradicts the connectedness of \(Y\). So \(f\geq c\) is connected. The flow of \(\frac{\nabla f}{\|\nabla f\|^{2}}\) is defined for \(f\geq c\) (there are no critical points), so yields a diffeomorphism \(\{f\geq c\}\cong\{f=c\}\times[c,\infty)\), with the second coordinate being the value of \(f\). So all level sets of \(f\) above \(c\) are connected (it follows in particular that \(\Sigma\) is connected). The rest of Nicolaescu's proof applies, after replacing any occurrences of "\(f=f_{\text{max}}\)" by \(f=c\). The second claim now follows by ensuring that \(H=f\) on a sufficiently large compact set that includes \(H\leq c\) in the interior (and we know that outside of this set, \(df(X_{\mathbb{R}_{+}})>0\) by Lemma 2.20 so \(f>c\) there).
**Lemma 2.22**.: _Over any field of coefficients, we have a module isomorphism_
\[H^{*}(Y)\cong\bigoplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\alpha}]. \tag{28}\]
_Moreover, over field or integer coefficients, \(Y\) has no odd degree cohomology if and only if none of the \(\mathfrak{F}_{\alpha}\) have odd degree cohomology, and \(Y\) has no torsion in cohomology if and only if none of the \(\mathfrak{F}_{\alpha}\) does. As \(Y\) is connected, there is a unique_ **minimal component**_\(\mathfrak{F}_{\text{min}}:=\mathfrak{F}_{\alpha}\) with \(\mu_{\alpha}=0\), which generates \(1\in H^{0}(Y)\). It is the locus of absolute minima of \(H\) (there are no other local minima)._
Proof.: By Frankel [11, Sec.4],53 a Hamiltonian \(S^{1}\)-action whose moment map \(H\) is proper and bounded below is a perfect Morse-Bott function, and its fixed loci \(\mathfrak{F}_{\alpha}\) are symplectic submanifolds satisfying (28). In particular, a spectral sequence argument (whose Floer analogue is Corollary 6.9) yields (28). If \(H\) is not proper, we use \(f\) as in Lemma 2.20 instead, which does not affect the argument. Then (28) follows by using the filtration by values of \(H\) to run the argument [11, Sec.4] (it only relies on \(H\) being the moment map of an \(S^{1}\)-action in the vicinity of the critical locus \(\mathfrak{F}\)). The claim about odd cohomology follows immediately, using the fact that the Morse-Bott indices are even. The statement about odd cohomology and torsion follows by universal coefficients [11, Cor.1].
Footnote 53: [11] deals with compact Kähler manifolds, but we imposed on \(Y\) the conditions needed in the proof. Alternatively, Kirwan [10] and Nakajima [14, Sec.5.1] prove it for Hamiltonian torus actions on symplectic manifolds.
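For illustration (a standard example, stated in our conventions): for \(Y=T^{*}\mathbb{P}^{1}\), viewed as the resolution of \(\mathbb{C}^{2}/\!\pm\!1\), with the \(\mathbb{C}^{*}\)-action induced by \(t\cdot(z_{1},z_{2})=(tz_{1},t^{2}z_{2})\), the fixed locus consists of two isolated points on the zero section, of Morse-Bott indices \(0\) and \(2\). So (28) reads \(H^{*}(Y)\cong\mathbb{K}\oplus\mathbb{K}[-2]\cong H^{*}(\mathbb{P}^{1})\), consistent with \(Y\) retracting onto its core, the zero section (Proposition 2.14).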
## 3. Torsion, periods, holomorphic spheres, and the attraction graph
### Weight spaces
In the proof of Lemma 2.5 we obtained a unitary54 decomposition
Footnote 54: Using the Hermitian form \(\langle\cdot,\cdot\rangle=g(\cdot,\cdot)+i\omega(\cdot,\cdot)\) on \(T_{p}Y\), where \(g(\cdot,\cdot)=\omega(\cdot,I\cdot)\).
\[T_{y}Y=\oplus_{i}T_{i}\ \ \text{at}\ y\in\mathfrak{F} \tag{29}\]
into two-planes \(T_{i}\) of **weight55**\(w_{i}\in\mathbb{Z}\). Collecting summands of equal weights,
Footnote 55: Explicitly, \(v\neq 0\in T_{y}Y\) has **weight**\(w\in\mathbb{Z}\) means \(d_{y}\varphi_{t}\cdot v=t^{w}v\) for \(t=e^{2\pi is}\) and \(s\in\mathbb{R}/\mathbb{Z}\).
\[T_{y}Y=\oplus_{k\in\mathbb{Z}}H_{k}\ \ \text{at}\ y\in\mathfrak{F}, \tag{30}\]
where the complex vector subspace \(H_{k}=\oplus\{T_{i}:w_{i}=k\}\subset T_{y}Y\) is the **weight space** for weight \(k\in\mathbb{Z}\).
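For example, for the linear action \(t\cdot(z_{1},z_{2})=(t^{w_{1}}z_{1},t^{w_{2}}z_{2})\) on \(\mathbb{C}^{2}\) (the illustration after Lemma 2.13), at the fixed point \(y=0\) the decomposition (30) is \(T_{0}\mathbb{C}^{2}=H_{w_{1}}\oplus H_{w_{2}}\), where \(H_{w_{i}}\) is the \(z_{i}\)-axis; if \(w_{1}=w_{2}=w\) these collapse to the single weight space \(H_{w}=\mathbb{C}^{2}\).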
**Lemma 3.1**.: _The \(H_{k}\) are symplectically orthogonal symplectic subspaces for \(\omega\)._
Proof.: Let \(t\in S^{1}\), then \(\varphi_{t}^{*}\omega=\omega\) since \(\varphi_{t}\) is symplectic. So if \(a\in H_{k}\), \(b\in H_{k^{\prime}}\), then
\[\omega(a,b)=\omega(d\varphi_{t}\cdot a,d\varphi_{t}\cdot b)=\omega(t^{k}\,a,t^ {k^{\prime}}\,b)=\cos((k^{\prime}-k)\theta)\omega(a,b)+\sin((k^{\prime}-k) \theta)\omega(a,Ib),\]
where \(t=e^{i\theta}\). As this holds for all \(\theta\in\mathbb{R}\), this implies \(\omega(a,b)=0\) and \(\omega(a,Ib)=0\) unless \(k=k^{\prime}\). Non-degeneracy of \(\omega\) implies the rest of the claim.
**Lemma 3.2**.: _The set of all weights of \(T_{y}Y\) is constant on a connected component \(\mathfrak{F}_{\alpha}\) of \(\mathfrak{F}\), and (30) induces a bundle decomposition \(T_{\mathfrak{F}_{\alpha}}Y=\oplus_{k}H_{k}\) over \(\mathfrak{F}_{\alpha}.\) This yields well-defined numbers_
\[h_{k}^{\alpha}:=\dim_{\mathbb{C}}H_{k}. \tag{31}\]
Proof.: Using a weight decomposition of \(T_{y}Y\), \(d_{y}\varphi_{\tau}\) can be represented by a diagonal matrix with diagonal entries \(\tau^{w_{i}}\) where \(w_{i}\) are the (possibly repeated) weights \(w\in\mathbb{Z}\). Along any path in \(\mathfrak{F}\), the eigenvalues of \(d_{y}\varphi_{\tau}\) vary continuously in \(y\). Our eigenvalues \(\tau^{w_{i}}\) depend on a discrete parameter \(w_{i}\in\mathbb{Z}\), so the \(w_{i}\) are constant on (path-)connected components of \(\mathfrak{F}\). The bundle decomposition follows, since \(H_{k}\) is determined by the property of being the \(\tau^{k}\)-eigenspace of the endomorphism \(d\varphi_{\tau}\) of \(T_{\mathfrak{F}_{\alpha}}Y\).
_Remark 3.3_.: The zero weight subspace at \(y\in\mathfrak{F}_{\alpha}\) corresponds to the fixed component \(H_{0}=T_{y}\mathfrak{F}_{\alpha}\). The negative weight subspace \(H_{-}=\oplus_{k<0}H_{k}\) corresponds (via the exponential map) to \(-\nabla H\) flowlines coming out of \(\mathfrak{F}_{\alpha}\), so they lie in \(\operatorname{Core}(Y)\). For the positive weight subspace \(H_{+}=\oplus_{k>0}H_{k}\) it is more complicated. Consider a small sphere subbundle \(S_{+}\) of \(H_{+}\), and let \(p=\exp_{y}(v)\) for \(v\in S_{+}\). For a subset \(P_{in}\) of \(v\in S_{+}\), the \(+\nabla H\)-flow of \(p\) stays in \(\operatorname{Core}(Y)\) (i.e. converges to a point), whereas for \(P_{out}=S_{+}\setminus P_{in}\) it flows to infinity. By Lemma 2.11, \(P_{out}\) is an open subset of \(S_{+}\) (a point \(p^{\prime}\) sufficiently close to \(p\) will also flow out of \(Y^{in}\) and thus not belong to the core). So \(P_{in}=S_{+}\setminus P_{out}\) is closed. In general, the subsets \(P_{in},P_{out}\) can be horrible, as we do not assume the Morse-Smale property for \(H\). We call the weights occurring in the subset \(P_{out}\subset S_{+}\subset H_{+}\) the **outer weights** of the action \(\varphi\).
### Torsion points and torsion submanifolds \(Y_{m,\beta}\)
**Definition 3.4**.: Identify \(\mathbb{Z}/m\) with the subgroup \(\mathbb{Z}/m\hookrightarrow S^{1}\subset\mathbb{C}^{*}\), \(k\mapsto e^{2\pi ik/m}\). For \(m\geq 2\), a \(\mathbb{Z}/m\)**-torsion point** is a \(\mathbb{Z}/m\)-fixed point. Such points define a \(\mathbb{C}^{*}\)-invariant smooth submanifold56\(Y_{m}\subset Y\). We can decompose \(Y_{m}=\sqcup_{\beta}Y_{m,\beta}\) into connected components \(Y_{m,\beta}\) called \(\mathbb{Z}/m\)**-torsion submanifolds**. We show below that each \(Y_{m,\beta}\) converges via the \(\mathbb{C}^{*}\)-flow to (possibly several) fixed components \(\mathfrak{F}_{\alpha}\), and that each \(Y_{m,\beta}\) contains a subcollection of the \(\mathfrak{F}_{\alpha}\).
Footnote 56: a fixed locus of a compact Lie group action on a smooth manifold is a smooth submanifold [4, p.108].
**Lemma 3.5**.:
1. \(Y_{m}\) _contains all_ \(Y_{mb}\) _for integers_ \(b\geq 1\)_;_
2. \(Y_{m}\subset Y\) _is a closed subset, with a relatively open dense stratum_ \(Y_{m}\setminus\cup_{b\geq 2}Y_{mb}\)_;_
3. _each_ \(Y_{m,\beta}\) _is a symplectic_ \(\mathbb{C}^{*}\)_-submanifold of_ \(Y\)_, and its_ \(\mathbb{C}^{*}\)_-action admits an_ \(m\)_-th root;_
4. _if_ \(Y\) _is a symplectic_ \(\mathbb{C}^{*}\)_-manifold over a convex base (Definition_ 5.1_), then so is each_ \(Y_{m,\beta}\) _with either action from (_3_)._
Proof.: By continuity, a sequence of \(\mathbb{Z}/m\)-fixed points converges to a \(\mathbb{Z}/m\)-fixed point (which may also be a \(\mathbb{Z}/mb\)-fixed point for \(b\geq 2\)). So (1) and (2) are immediate. The linearised \(\mathbb{Z}/m\) action on \(TY|_{Y_{m}}\) decomposes it \(I\)-holomorphically into \(\mathbb{Z}/m\)-weight spaces, and the zero weight space is \(TY_{m}\), so \(Y_{m}\) is \(I\)-pseudoholomorphic, and thus \(\omega\)-symplectic. So (3) follows (the \(m\)-th root of the \(\mathbb{C}^{*}\)-action is well-defined as \(m\)-th roots of unity act as the identity on \(Y_{m}\)). Claim (4) follows by restricting the map from Equation (5) to \(\Psi|_{Y_{m,\beta}}:Y_{m,\beta}\to B=\Sigma\times[R_{0},\infty)\). If we use the \(m\)-th root of the \(\mathbb{C}^{*}\)-action, we just need to rescale the data \(\alpha,R,R_{0}\) on \(B\) from Definition 5.1 to \(m\alpha\), \(R/m\), \(R_{0}/m\) (this rescales the Reeb field \(\mathcal{R}_{B}\) by \(1/m\): note the definition does not require \(B\) to admit an \(S^{1}\)-action).
**Lemma 3.6**.: _At \(p\in\mathfrak{F}_{\alpha}\) the tangent space \(T_{p}Y_{m}\) is the \(\mathbb{Z}/m\)-fixed locus of the linearised action, so_
\[T_{p}Y_{m}=\oplus_{b\in\mathbb{Z}}H_{mb}\subset T_{p}Y. \tag{32}\]
_In a sufficiently small neighbourhood of \(\mathfrak{F}_{\alpha}\), \(Y_{m}\) is the image \(Y_{\alpha,m}^{\rm loc}\) via \(\exp_{\mathfrak{F}_{\alpha}}\) of a small neighbourhood of the zero section in \(\oplus_{b}H_{mb}\subset T_{\mathfrak{F}_{\alpha}}Y\). Globally \(Y_{m}=\cup_{\alpha}(\mathbb{C}^{*}\cdot Y_{\alpha,m}^{\rm loc})\)._
Proof.: The possible proper isotropy groups (i.e. stabilisers) of the \(S^{1}\)-action which contain \(\mathbb{Z}/m\) are \(\mathbb{Z}/(mb)\) for \(b\in\mathbb{N}\). The claim therefore follows by the proof of Lemma 2.5, since near \(\mathfrak{F}_{\alpha}\) the points whose isotropy is \(\mathbb{Z}/(mb)\) correspond via the exponential map to tangent vectors in \(TY|_{\mathfrak{F}_{\alpha}}\) fixed by \(\mathbb{Z}/(mb)\) (the case \(b=0\) gives \(H_{mb}=H_{0}=T_{p}\mathfrak{F}_{\alpha}\), corresponding to isotropy \(S^{1}\), but that case is already dealt with by \(x=p=\exp_{p}(0)\)). The final claim also follows, noting that the summand \(H_{mb}\) for \(b<0\) corresponds to points with isotropy \(\mathbb{Z}/(|b|m)\) but which flow to \(\mathfrak{F}_{\alpha}\) via \(+X_{\mathbb{R}_{+}}\) instead of \(-X_{\mathbb{R}_{+}}\).
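Continuing our linear illustration: for \(t\cdot(z_{1},z_{2})=(tz_{1},t^{2}z_{2})\) on \(\mathbb{C}^{2}\), the generator of \(\mathbb{Z}/2\subset S^{1}\) acts by \((z_{1},z_{2})\mapsto(-z_{1},z_{2})\), so \(Y_{2}=\{0\}\times\mathbb{C}\) with \(T_{0}Y_{2}=H_{2}\), as in (32). The induced action \(t\cdot z_{2}=t^{2}z_{2}\) admits the square root \(t\cdot z_{2}=tz_{2}\) (Lemma 3.5(3)), and since \(Y_{2}\) converges to the single fixed component \(\mathfrak{F}=\{0\}\), it is a torsion bundle \(\mathcal{H}_{2}=H_{2}\to\{0\}\) in the sense of Definition 3.12 below.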
**Corollary 3.7**.: _There is a finite number of \(Y_{m,\beta},\) each of which is a union of various \(\mathbb{C}^{*}\cdot Y_{\alpha,m}^{\rm loc}\)._
Proof.: By Lemma 3.6, locally near \(\mathfrak{F}_{\alpha}\) there is a unique \(Y_{m,\beta}\) (if it exists), so for fixed \(m\) we have \(\#\{Y_{m,\beta}\}\leq\#\{\mathfrak{F}_{\alpha}\}\), which is finite as \(\mathfrak{F}=\sqcup\mathfrak{F}_{\alpha}\) is compact. There are also only finitely many possible torsion groups \(\mathbb{Z}/m\), as each such \(m\) is the absolute value of a weight of some \(\mathfrak{F}_{\alpha}\).
**Lemma 3.8**.: _The stable manifold \(U_{\rm min}:=W^{s}_{-\nabla H}(\mathfrak{F}_{\rm min})\) of the minimal component \(\mathfrak{F}_{\rm min}:=\min H\) is open, connected and dense. Moreover, \(U_{\rm min}\) is diffeomorphic to the normal bundle of \(\mathfrak{F}_{\rm min}\)._
Proof.: \(U_{\rm min}=\mathbb{R}_{+}\cdot V\) for any neighbourhood \(V\subset Y\) of \(\mathfrak{F}_{\rm min}\). So \(U_{\rm min}\subset Y\) is open and connected. Any \(\mathfrak{F}_{\gamma}\neq\mathfrak{F}_{\rm min}\) has some negative weight by Lemma 2.22, so its stable manifold has real codimension \(\geq 2\). As \(Y\) is the disjoint union of the finitely many stable manifolds, it follows that \(U_{\rm min}\) is dense.
**Definition 3.9**.: Viewing \(Y_{m,\beta}\) as a symplectic \(\mathbb{C}^{*}\)-manifold, by Lemma 3.8 it contains a unique minimal component \(\min(H|_{Y_{m,\beta}}:Y_{m,\beta}\to\mathbb{R})\) which must arise as some component \(\mathfrak{F}_{\alpha}\) of \(Y\). By minimality, it is the only \(\mathfrak{F}_{\alpha}\subset Y_{m,\beta}\) with the property that all weights \(mb\) in (32) are non-negative.
Call \(\mathfrak{F}_{\alpha}\)\(m\)**-minimal** if it is the minimal component of some \(Y_{m,\beta}\), equivalently if \(\mathfrak{F}_{\alpha}\) has at least one non-zero weight divisible by \(m\) and all such weights are positive (\(T_{p}Y_{m,\beta}=T_{p}\mathfrak{F}_{\alpha}\oplus\bigoplus_{b\geq 1}H_{mb}\) in (32)).
The \(U_{\rm min}\)-locus of \(Y_{m,\beta}\) is \(U_{\alpha}\cap Y_{m,\beta}\), where \(U_{\alpha}=W^{s}_{-\nabla H}(\mathfrak{F}_{\alpha})\subset Y\) and \(\mathfrak{F}_{\alpha}\) is the minimal component of \(Y_{m,\beta}\). We call \(U_{\rm min}\) the **generic locus** or locus of **generic points** of \(Y_{m,\beta}\). By Lemma 3.8, the generic \(y\in Y_{m,\beta}\) are precisely those points admitting a neighbourhood in \(Y_{m,\beta}\) which converges to the same fixed component in \(Y\) (which must be the minimal component \(\mathfrak{F}_{\alpha}\) of \(Y_{m,\beta}\)).
**Corollary 3.10**.: _The intersection \(Y_{m,\beta}\cap{\rm Core}(Y)\) is path-connected._
Proof.: This follows by Corollary 2.18, as \({\rm Core}(Y_{m,\beta})=Y_{m,\beta}\cap{\rm Core}(Y)\).
**Proposition 3.11**.: _If \(Y_{m,\beta}\) contains a single \(\mathfrak{F}_{\alpha}\), then \(Y_{m,\beta}\) is diffeomorphic to the weight \(m\)-part \(\oplus_{b}H_{mb}=\oplus_{b\geq 0}H_{mb}\) of the normal bundle of \(\mathfrak{F}_{\alpha}\), and \(Y_{m,\beta}\cap{\rm Core}(Y)=\mathfrak{F}_{\alpha}\)._
Proof.: \(\mathfrak{F}_{\alpha}=\min H|_{Y_{m,\beta}}\), so \(Y_{m,\beta}\) equals the \(U_{\rm min}\)-set of \(Y_{m,\beta}\) viewed as a symplectic \(\mathbb{C}^{*}\)-manifold. Let \(p\in Y_{m,\beta}\cap{\rm Core}(Y)\). Then \(\lim_{t\to\infty}t\cdot p\in Y_{m,\beta}\cap\mathfrak{F}_{\alpha}\). But \(H(t\cdot p)\) is non-decreasing in \(t\in\mathbb{R}_{+}\), so \(H(t\cdot p)\leq H(\mathfrak{F}_{\alpha})\); on the other hand the limit as \(t\to 0\) also lies in \(\mathfrak{F}_{\alpha}\) (the only fixed component), so \(H(t\cdot p)\geq H(\mathfrak{F}_{\alpha})\). This forces \(H(t\cdot p)=\min H|_{Y_{m,\beta}}\) for all \(t\), so \(t\cdot p\in\mathfrak{F}_{\alpha}\) for all \(t\). Thus \(Y_{m,\beta}\cap{\rm Core}(Y)=\mathfrak{F}_{\alpha}\).
**Definition 3.12**.: If \(Y_{m,\beta}\) converges to a single \(\mathfrak{F}_{\alpha}\) we call it a **torsion bundle**, \(\mathcal{H}_{m}\to\mathfrak{F}_{\alpha}\).
### \(S^{1}\)-orbits, holomorphic spheres, Hamiltonian periods, and generic slopes
An \(S^{1}\)**-orbit** is a closed orbit \(x:S^{1}\to Y\) of \(X_{S^{1}}\). As the \(S^{1}\)-flow has period \(1\), non-constant orbits have minimal period \(1/m\) for some positive \(m\in\mathbb{N}\). These consist of fixed points of the \(\mathbb{Z}/m\)-action; however, the \(\mathbb{Z}/m\)-action also fixes orbits of minimal period \(1/(mb)\) for any \(b\in\mathbb{N}\).
**Corollary 3.13**.: \(Y_{m}\) _is the image of all \(1\)-periodic Hamiltonian orbits of \(\frac{1}{m}H:Y\to\mathbb{R}\)._
Proof.: The \(1\)-orbits of \(\lambda H\) for \(\lambda=1/m\) have period \(1/m\), so minimal period \(1/(mb)\) for some \(b\in\mathbb{N}\). Near \(p\in\mathfrak{F}_{\alpha}\), evaluating \(\exp_{p}\) on a small neighbourhood of the zero section of \(\oplus_{b\in\mathbb{Z}}H_{mb}\) yields an \(S^{1}\)-invariant submanifold near \(p\), consisting precisely of the points near \(p\) with period57\(1/m\), which are precisely the \(\mathbb{Z}/m\)-fixed points.
Footnote 57: if we wanted _minimal_ period \(1/m\), we would need \(\exp_{p}(v)\) with \(v\in\oplus_{b\in\mathbb{Z}}H_{mb}\) having non-zero entry in \(H_{-m}\oplus H_{m}\).
**Lemma 3.14**.: _Any \(y\in Y\) yields a pseudoholomorphic disc \(\psi_{y}:\mathbb{D}\to Y\), \(\psi_{y}(z)=\varphi_{z}(y)\), \(\psi_{y}(0)=y_{0}\). A unitary basis \(v_{i}\) for \(T_{y_{0}}Y\) induces a canonical unitary (so symplectic) trivialisation \(v_{i}(z)\) of \(\psi_{y}^{*}TY\) with \(v_{i}(0)=v_{i}\). The trivialisation is \(S^{1}\)-equivariant in \(y\) in the sense that \(v_{i}(tz)\) is the canonical trivialisation of \(\psi_{ty}^{*}TY\) induced by \(v_{i}\), for any \(t\in S^{1}\)._
_If the \(S^{1}\)-orbit of \(y\) has minimal period \(1/m\), for \(m\in\mathbb{N}\), then \(\psi_{y}(z)\) is an \(m\)-fold cover of a pseudoholomorphic disc \(\hat{\psi}_{y}:\mathbb{D}\to Y\), and \(v_{i}(z)\) is induced by a canonical trivialisation of \(\hat{\psi}_{y}^{*}TY\), in particular it is \(\mathbb{Z}/m\)-equivariant: \(v_{i}(\zeta z)=v_{i}(z)\) whenever \(\zeta^{m}=1\)._
Proof.: Define \(\psi_{y}:\{z\in\mathbb{C}:0<|z|\leq 1\}\to Y\) by \(\psi_{y}(z)=\varphi_{z}(y).\) Observe that \(\psi_{y}\) extends continuously over \(0\) via the convergence point \(\psi_{y}(0):=y_{0}\). If this were a complex holomorphic setup (i.e. for an integrable complex structure), we could argue that in local coordinates \(\psi\) extends holomorphically over \(0\) in each coordinate by the classical removable singularity theorem. In the almost complex setup, Gromov's removable singularity theorem (e.g. see [11, Thm.4.1.2]) implies the same result provided we show that the energy of \(\psi_{y}\) is bounded. As \(y_{0}\) is a hyperbolic fixed point (Remark 2.6), the flow converges exponentially to \(y_{0}\). Thus \(\psi_{y}\), viewed as a pseudoholomorphic cylinder, converges exponentially near \(z=0\), and therefore it has bounded energy as required.
The bundle \(\psi_{y}^{*}TY\) is a complex vector bundle over \(\mathbb{D}\). Taking the \(S^{1}\)-invariant almost Kähler metric \(g(\cdot,\cdot)=\omega(\cdot,I\cdot)\), we can trivialise \(\psi_{y}^{*}TY\) unitarily by parallel transporting a unitary basis \(v_{i}\) of \(T_{y_{0}}Y\) radially outwards from the centre of the disc.58 This trivialisation is canonical up to a choice of unitary basis for \(T_{y_{0}}Y\), and it is also symplectic with respect to \(\omega\) since \((Y,g,\omega,I)\) is almost Kähler. The final two claims follow by the construction (noting that \(\hat{\psi}_{y}(z^{m})=\psi_{y}(z)\)).
Footnote 58: Recall that parallel transport for an almost Kähler manifold \((Y,g,\omega,I)\) is unitary.
**Corollary 3.15**.: _Any \(S^{1}\)-orbit \(x=x(t)\) with minimal period \(\lambda\) yields a holomorphic map \(c=\psi_{x(0)}:\mathbb{C}\to Y\), \(c(e^{2\pi(s+it)})=\varphi_{e^{2\pi s}}(x(t))\), converging to a fixed point \(p=c(0)=x(0)_{0}\in\mathfrak{F}\), and \(1/\lambda\) is a weight of \(T_{p}Y\). In particular, the minimal periods of all orbits of \(X_{S^{1}}\) form a finite subset of \(S^{1}\)._
Proof.: The tangent space to the filling disc \(c\) in the claim lies in \(TY_{m}\), where \(m=1/\lambda\). By Lemma 3.2, there are finitely many weights for each of the finitely many components of \(\mathfrak{F}\).
**Corollary 3.16**.: \(\operatorname{Core}(Y)\) _is covered by copies of \(\mathbb{CP}^{1}\) arising as the closures of \(\mathbb{C}^{*}\)-orbits. These spheres are embedded except for the constant spheres that lie at fixed points (where several different \(\mathbb{CP}^{1}\) may meet). The \(\mathbb{C}^{*}\)-orbit closure of any \(y\in\operatorname{Core}(Y)\) determines a pseudoholomorphic sphere_
\[u_{y}:\mathbb{CP}^{1}\to Y,\ \ u_{y}([1:t])=\varphi_{t}(y),\ \ u_{y}([1:0])=y_{0},\ \ u _{y}([0:1])=y_{\infty},\]
_where \(y_{0}\) is the convergence point of \(y\), and \(y_{\infty}\) is the limit of \(\varphi_{t}(y)\) as \(|t|\to+\infty\). In particular,_
\[\mathbb{C}\cong\operatorname{im}du_{y}|_{[1:0]}\subset T_{y_{0}}Y\ \ \text{and}\ \ \mathbb{C}\cong \operatorname{im}du_{y}|_{[0:1]}\subset T_{y_{\infty}}Y\]
_are weight subspaces of opposite weights \(k\) and \(-k\) respectively, for some integer \(k\geq 0\) (with \(k=0\) precisely if \(u_{y}\equiv y\in\mathfrak{F}\) is constantly equal to a fixed point)._
Proof.: The first part follows by Lemma 3.14, where we can use the inverse action by \(s=t^{-1}\in\mathbb{C}^{*}\) to deal with the extension over \(y_{\infty}\) (which exists as \(y\in\operatorname{Core}(Y)\)). The statement about weight subspaces follows from the observation that \(y_{0},y_{\infty}\) are fixed points and the image of \(u_{y}\) is a subspace preserved by the \(\mathbb{C}^{*}\)-action, so the tangent spaces to those subspaces at \(y_{0},y_{\infty}\) are \(\mathbb{C}^{*}\)-invariant. The fact that the weights are opposite is due to the above-mentioned change of local variables \(s=t^{-1}\) on \(\mathbb{P}^{1}\). That \(k\geq 0\) is because \(t^{k}\) must be contracting on \(\operatorname{im}du_{y}|_{[1:0]}\) as \(y_{0}\) is the convergence point.
**Definition 3.17**.: We call \(\lambda\in[0,\infty)\) a **(\(\varphi\)-)generic slope** if \(\lambda\) is not equal to a period of an \(S^{1}\)-orbit; equivalently \(\lambda\notin\cup_{i}\mathbb{Z}\cdot\frac{1}{w_{i}}\subset\mathbb{Q}\), where \(w_{i}\) are the weights of \(\varphi.\) Such \(\lambda\) define a generic set in \([0,\infty)\) as there are only finitely many possible weights. We call \(\lambda>0\) an **outer \(S^{1}\)-period** if \(\lambda=\frac{k}{m}\), for coprime positive integers \(k,m\), such that \(Y_{m}\) is non-compact, or equivalently \(m\) is the outer weight of some \(\mathfrak{F}_{\alpha}\) (see Remark 3.3).59
Footnote 59: Only the outer \(S^{1}\)-periods arise when we consider the Morse-Bott manifolds \(B_{k/m,\beta}\) (see the introduction). One can allow a Hamiltonian \(H_{\lambda}\) to have non-generic slope \(\lambda\), if \(\lambda\) is not an outer \(S^{1}\)-period. Our definition of generic \(\lambda\) is the most suitable one for stating results about the indices in Section 4. If \(\lambda\) is an \(S^{1}\)-period which is not an outer \(S^{1}\)-period, one must be cautious that the Hamiltonian \(\lambda H\) will detect the \(1\)-orbits in \(\operatorname{Core}(Y)\) of “inner” \(S^{1}\)-period \(\lambda\).
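To illustrate the definition with hypothetical weights (not tied to any particular \(Y\)): if the weights of \(\varphi\) are \(w_{i}\in\{\pm 1,\pm 2,\pm 3\}\), then the non-generic slopes form the set \[\cup_{i}\,\mathbb{Z}\cdot\tfrac{1}{w_{i}}=\tfrac{1}{2}\mathbb{Z}\cup\tfrac{1}{3}\mathbb{Z},\] so e.g. \(\lambda=\frac{5}{7}\) is a generic slope, whereas \(\lambda=\frac{2}{3}\) is not.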
**Corollary 3.18**.: _For generic slopes \(\lambda\), the only \(1\)-periodic orbits of the rescaled Hamiltonian \(\lambda\cdot H\) are the constant orbits at points \(x\in\mathfrak{F}\), in particular there are no \(1\)-periodic orbits in \(Y^{\operatorname{out}}=Y\setminus Y^{\operatorname{in}}.\)_
### Attraction graphs
The geometry of the \(\mathbb{C}^{*}\)-flow within the core is depicted as follows:
**Definition 3.19**.: The **attraction graph**\(\Gamma_{\varphi}\) is a directed graph whose vertices represent the \(\mathfrak{F}_{\alpha}\), and there are \(N\) edges from \(\alpha_{1}\) to \(\alpha_{2}\) if the space of \(\mathbb{C}^{*}\)-flowlines from \(\mathfrak{F}_{\alpha_{1}}\) to \(\mathfrak{F}_{\alpha_{2}}\) (as \(t\to\infty\), so \(H\) increases) has \(N\) connected components. The **leaves** of \(\Gamma_{\varphi}\) are the vertices with no outgoing edges, i.e. those \(\alpha\) for which \(\mathfrak{F}_{\alpha}\) is a local maximum of \(H|_{\operatorname{Core}(Y)}\). A leaf is \(m\)**-minimal** if the corresponding \(\mathfrak{F}_{\alpha}\) is \(m\)-minimal (Definition 3.9).
**Lemma 3.20**.: \(\Gamma_{\varphi}\) _is a connected directed acyclic graph._
Proof.: \(H\) is strictly increasing along the \(\mathbb{R}_{+}\)-action as \(X_{\mathbb{R}_{+}}=\nabla H\), so there is no directed cycle. \(\Gamma_{\varphi}\) is connected as there is a path of edges from \(\mathfrak{F}_{\min}\) to any \(\mathfrak{F}_{\alpha}\) (by Lemma 2.22 and Corollary 3.16).
**Proposition 3.21**.: _Each \(m\)-minimal leaf for \(m\geq 2\) has an \(m\)-torsion bundle converging to it (thus the action \(\varphi\) is not free outside of \(\operatorname{Core}(Y)\))._
_For a CSR with \(\#\text{Vertices}(\Gamma_{\varphi})\geq 2\), every leaf \(\alpha\) is \(m_{\alpha}\)-minimal for the largest weight \(m_{\alpha}\geq 2\) of \(\mathfrak{F}_{\alpha}\)._
_If \(c_{1}(Y)=0\), \(\#\text{Vertices}(\Gamma_{\varphi})\geq 2\) and \(\operatorname{Core}(Y)\subset Y\) is equicodimensional near \(\mathfrak{F}_{\min}\) and near some leaf, then \(\varphi\) does not act freely outside of \(\operatorname{Core}(Y)\)._
Proof.: Let \(\alpha\) be any leaf, so \(TY|_{\mathfrak{F}_{\alpha}}=T\mathfrak{F}_{\alpha}\oplus H_{-}\oplus H_{+}\), and \(\exp|_{\mathfrak{F}_{\alpha}}\) maps a neighbourhood of the zero section of \(T\mathfrak{F}_{\alpha}\oplus H_{-}\) into \(\operatorname{Core}(Y)\), whereas for \(H_{+}\setminus\{0\}\) it maps into \(Y\setminus\operatorname{Core}(Y)\).
Suppose a leaf \(\alpha\) is \(m\)-minimal, and let \(Y_{m,\beta}\) be the torsion submanifold containing \(\mathfrak{F}_{\alpha}\). Near \(\mathfrak{F}_{\alpha}\), \(Y_{m,\beta}\) only intersects \(\operatorname{Core}(Y)\) at \(\mathfrak{F}_{\alpha}\) (by \(m\)-minimality there are no weights \(mb\) with \(b<0\), and all positive weight directions point out of the core at a leaf). By Corollary 3.10, \(Y_{m,\beta}\) contains a single fixed component, \(\mathfrak{F}_{\alpha}\), so it is a torsion bundle over \(\mathfrak{F}_{\alpha}\).
For the second claim we use (52): the \(\omega_{\mathbb{C}}\)-duality isomorphism \(H_{s-(-m)}=H_{s+m}\cong H_{-m}\) for all \(m\), where \(s\geq 1\) is the weight of the CSR. For \(m_{\alpha}\) as in the claim, there cannot be a weight \(-m_{\alpha}b\) for \(b\geq 1\) as \(\omega_{\mathbb{C}}\)-duality would imply there is a weight \(s-(-m_{\alpha}b)=s+m_{\alpha}b>m_{\alpha}\).
For the third claim, we prove more generally that if \(\varphi\) is free outside of \(\operatorname{Core}(Y)\), and \(c_{1}(Y)=0\), then the Maslov index \(\mu\) (Section 4.1) satisfies the following at leaves \(\alpha\), for \(H_{+}\) as in Remark 3.3,
\[\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\min}\leq\mu\leq c_{\alpha}:= \dim_{\mathbb{C}}H_{+}=(\text{local complex codimension of $\operatorname{Core}(Y)$ near $\mathfrak{F}_{\alpha}$}),\]
and the second inequality is strict if \(\#\text{Vertices}(\Gamma_{\varphi})\geq 2\). By the freeness assumption, all positive weights at \(\mathfrak{F}_{\alpha}\) must be \(+1\) (as any \(m\)-torsion submanifold (for \(m\geq 2\)) would contradict freeness). So \(\mu\leq c_{\alpha}\) by definition, with strict inequality if \(\mathfrak{F}_{\alpha}\) has a negative weight, equivalently when \(\alpha\) is not an isolated leaf (equivalently, by connectedness in Lemma 3.20, \(\#\text{Vertices}(\Gamma_{\varphi})\geq 2\)). At \(\mathfrak{F}_{\min}\) all non-zero weights are positive, so the Maslov index satisfies \(\mu\geq\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\min}\). The third claim follows.
**Definition 3.22**.: The **extended attraction graph**\(\widetilde{\Gamma}_{\varphi}\) decorates \(\Gamma_{\varphi}\) with an outward-pointing arrow at a vertex \(\alpha\) for each torsion bundle \(\mathcal{H}_{m}\to\mathfrak{F}_{\alpha}\).
**Example 3.23**.: Let \(Y\) be the minimal resolution of the \(A_{4}\)-singularity \(\pi:X_{\mathbb{Z}/5}\to\mathbb{C}^{2}/(\mathbb{Z}/5)\). The standard weight-\(2\) \(\mathbb{C}^{*}\)-action on this CSR is induced from the weight-\(1\) diagonal action on \(\mathbb{C}^{2}\). The core \(\pi^{-1}(0)\) consists of a Dynkin \(A_{4}\) tree of spheres that intersect transversely. There are five \(\mathbb{C}^{*}\)-fixed
points, three of which are intersections of spheres. The other two are the leaves of \(\widetilde{\Gamma}_{\varphi}\) in Figure 1. Each leaf has one torsion bundle attached to it, so \(\widetilde{\Gamma}_{\varphi}\) has two outward arrows labelled with the weight of the torsion bundle (in blue). Each directed edge of \(\Gamma_{\varphi}\) is labelled by the positive outgoing weight and negative incoming weight, which are opposite by Corollary 3.16. That pairs of edges at each vertex have weights summing to \(2\) is due to this CSR having weight \(s=2\), and using (52).
## 4. Robbin-Salamon and Maslov index calculations
### Maslov index
Let \((Y,\omega,I,\varphi)\) be a symplectic \(\mathbb{C}^{*}\)-manifold with non-trivial \(\mathbb{C}^{*}\)-action. We now assume \(c_{1}(Y)=0\), so Hamiltonian Floer cohomology (and thus \(SH^{*}(Y,\varphi)\)) can be \(\mathbb{Z}\)-graded by making a choice of trivialisation of the canonical bundle \(\Lambda_{\mathbb{C}}^{\text{top}}T^{*}Y\) (see Appendix A and [13, Sec.3.6]). The Hamiltonian \(S^{1}\)-action \(\varphi\) admits a **Maslov index**60\(\mu=\mu(\varphi)\)[10]. At a fixed point of the \(S^{1}\)-action, its linearisation is a loop of unitary matrices \(U_{t}\) and the Maslov index \(\mu\) equals61 the degree in \(\mathbb{Z}\) of the loop of determinants \(\det U_{t}\subset S^{1}\). Equivalently,62\(\varphi_{t}^{*}\) acts on \(\Lambda_{\mathbb{C}}^{\text{top}}T^{*}Y\) by rotation with speed \(\mu\). For the definition of Robbin-Salamon indices we refer to Appendix A.
Footnote 60: In general, this involves a certain choice of lift of the action to the cover formed by capping discs, but we will always choose the lift which preserves the constant disc at a (hence any, as \(c_{1}(Y)[\pi_{2}(Y)]=0\)) fixed point of the \(S^{1}\)-action.
Footnote 61: compare [13, Lemma 48 and 71].
Footnote 62: compare [13, Thm.48].
**Lemma 4.1**.: \(\mu=\sum w_{i}>0\) _for the weights \(w_{i}\) of the \(S^{1}\)-action at any fixed point \(x\in\mathfrak{F}\subset Y\)._
Proof.: This follows from (29). At \(p\in\mathfrak{F}_{\text{min}}\) all non-zero weights are positive, and there must be a positive weight (otherwise \(Y=\mathfrak{F}_{\text{min}}\), \(H\equiv 0\), and the \(\mathbb{C}^{*}\)-action is trivial); indeed \(\mu\geq\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\text{min}}\geq 1\).
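As a quick illustration, for the \(\mathbb{C}^{*}\)-action of Example 4.9 below the weight sums at the two isolated fixed points are \[\mu=3+(-2)=1=2+(-1),\] the same at both, as the lemma requires.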
**Proposition 4.2**.: _For a generic slope \(\lambda\), the only \(1\)-periodic orbits of the Hamiltonian \(\lambda H\) are the constant orbits, i.e. the fixed points \(x\in\mathfrak{F}\), and their Robbin-Salamon indices satisfy (uniformly in \(x\))_
\[\lim_{\lambda\to+\infty}RS(x,\lambda H)=+\infty. \tag{33}\]
Proof.: By Corollary 3.18 we just need to consider \(x\in\mathfrak{F}\). By (29), on \(T_{x}Y=\oplus T_{i}\) the linearisation of the flow of \(\lambda H\) is rotation in \(T_{i}\) with speed \(\lambda w_{i}\). The latter rotation contributes \(\mathbb{W}(\lambda w_{i})\) to the Robbin-Salamon index, in the notation of Theorem A.1. Using Equation (75):
\[RS(x,\lambda H)=\sum_{i}\mathbb{W}(\lambda w_{i})\geq\sum_{i}(2\lambda w_{i}-1 )=2\lambda\sum_{i}w_{i}-\dim_{\mathbb{C}}Y=2\lambda\mu-\dim_{\mathbb{C}}Y,\]
(we did not assume genericity of \(\lambda\) here). Since \(\mu>0\), this diverges to \(\infty\) uniformly in \(x\) as \(\lambda\to\infty\).
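To make the estimate concrete, assume the standard values \(\mathbb{W}(a)=2\lfloor a\rfloor+1\) for \(a\in(0,\infty)\setminus\mathbb{Z}\) and \(\mathbb{W}(a)=2a\) for \(a\in\mathbb{Z}_{\geq 0}\) (these are consistent with the properties of \(\mathbb{W}\) used above, Equations (74)-(75)). Then for an isolated fixed point \(x\) with weights \(w_{1}=1\), \(w_{2}=2\) in a \(Y\) with \(\dim_{\mathbb{C}}Y=2\) (so \(\mu=3\)), taking \(\lambda=\frac{5}{2}\), \[RS(x,\tfrac{5}{2}H)=\mathbb{W}(\tfrac{5}{2})+\mathbb{W}(5)=5+10=15\geq 2\cdot\tfrac{5}{2}\cdot 3-2=13,\] in agreement with the displayed inequality.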
_Remark 4.3_.: For a non-constant \(1\)-orbit \(x\) of \(\lambda H\) for non-generic \(\lambda\), by homotopy invariance of Robbin-Salamon indices we can first push \(x\) towards the core by using the \(\mathbb{R}_{+}\)-action. Thus the index of \(x\) equals the index of the convergence point \(x_{0}\), whose index we computed above. If \(x\) arises as a \(1\)-orbit of a Hamiltonian \(F\) in a region where \(F=c(H)\), for a smooth function \(c\), the Robbin-Salamon index of \(x\) agrees with that computed for \(\lambda H\), where \(\lambda=c^{\prime}(H(x))\), up to a shift in the index by \(\pm\frac{1}{2}\) depending on the sign of \(c^{\prime\prime}(H(x))\), due to a shear [1, Sec.3.3]. Thus the Robbin-Salamon indices also diverge to \(+\infty\) for \(1\)-orbits \(x\) of \(c(H)\) arising with higher and higher slopes \(c^{\prime}(H(x))\).
### The indices \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\)
Our **grading convention** for Floer cohomology follows [13], namely a \(1\)-periodic orbit \(x\) of a Hamiltonian \(F\) has grading
\[|x|:=\dim_{\mathbb{C}}Y-RS(x,F).\]
Our Floer theoretic grading conventions ensure that for \(C^{2}\)-small Morse Hamiltonians the Floer grading agrees with the Morse index grading.
Figure 1. Extended attraction graph for the minimal resolution of the \(A_{4}\)-singularity
For the Hamiltonian \(\lambda H\), the **grading** of a connected component \(\mathfrak{F}_{\alpha}\) of \(\mathfrak{F}\) is defined as
\[\boxed{\mu_{\lambda}(\mathfrak{F}_{\alpha}):=\dim_{\mathbb{C}}\ Y-\dim_{ \mathbb{C}}\,\mathfrak{F}_{\alpha}-RS(x,\lambda H)}\]
which is independent of the choice of \(x\in\mathfrak{F}_{\alpha}\). This \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) is the Floer grading of \(\mathfrak{F}_{\alpha}\) seen as a Morse-Bott manifold of Hamiltonian \(1\)-orbits of \(\lambda H\) (see [10, Appendix A]).
**Lemma 4.4**.: _The \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) are even integers for positive \(\lambda\notin\cup_{i}\mathbb{Z}\cdot\frac{1}{w_{i}}\subset\mathbb{Q}\), where \(w_{i}\) are weights of \(\mathfrak{F}_{\alpha}\)._
Proof.: The linearised flow for \(\lambda H\) acts on \(H_{k}\) by rotation with speed \(\lambda k\), so its contribution to the RS-index is \(\mathbb{W}(\lambda k)\cdot\dim_{\mathbb{C}}H_{k}\). For \(k\neq 0\), the assumption on \(\lambda\) implies \(\lambda k\not\in\mathbb{Z}\) for the weights \(k\) of \(\mathfrak{F}_{\alpha}\), thus \(\mathbb{W}(\lambda k)\) is odd, and for \(k=0\) we have \(\mathbb{W}(\lambda k)=\mathbb{W}(0)=0\) (see Equation (75)). Thus,
\[\mu_{\lambda}(\mathfrak{F}_{\alpha})=\dim_{\mathbb{C}}Y-\dim_{\mathbb{C}}\, \mathfrak{F}_{\alpha}-\sum_{k\neq 0}\dim_{\mathbb{C}}(H_{k})\mathbb{W}( \lambda k)\stackrel{{\mathrm{mod}\,2}}{{=}}\dim_{\mathbb{C}}Y- \dim_{\mathbb{C}}H_{0}-\sum_{k\neq 0}\dim_{\mathbb{C}}(H_{k})=0.\qed\]
By definition and the proof of Lemma 2.5, the Morse-Bott index \(\mu_{\alpha}\) of \(\mathfrak{F}_{\alpha}\) is twice the number of two-planes \(T_{i}\) in (29) on which the linearised \(S^{1}\)-action rotates clockwise, so it is automatically even. In the notation \(h_{k}^{\alpha}=|H_{k}|\) of (31), where \(|V|:=\dim_{\mathbb{C}}\,V\), we can rewrite \(\mu_{\alpha}\) and the Maslov index \(\mu\):
\[\mu_{\alpha}=2\sum_{k<0}h_{k}^{\alpha}\quad\text{ and }\quad\mu=\sum_{k}k \cdot h_{k}^{\alpha}=\sum_{k>0}k(h_{k}^{\alpha}-h_{-k}^{\alpha}).\]
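For instance, at the fixed point \(\mathfrak{F}_{\alpha}\) of Example 4.9 below, with weight spaces \(H_{3}\oplus H_{-2}\) and \(h_{3}^{\alpha}=h_{-2}^{\alpha}=1\), these formulas give \[\mu_{\alpha}=2h_{-2}^{\alpha}=2\qquad\text{and}\qquad\mu=3(h_{3}^{\alpha}-h_{-3}^{\alpha})+2(h_{2}^{\alpha}-h_{-2}^{\alpha})=3-2=1.\]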
**Corollary 4.5**.: \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\leq 2|Y|-|\mathfrak{F}_{\alpha}|-2\lambda\mu\) _for any \(\lambda>0\), so \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\to-\infty\) as \(\lambda\to\infty\), and_
(34) \[\mu_{\lambda}(\mathfrak{F}_{\alpha})=|Y|-|\mathfrak{F}_{\alpha}|-\sum_{k}\mathbb{W}(\lambda k)\,h_{k}^{\alpha}=\sum_{k\neq 0}(1-\mathbb{W}(\lambda k))\,h_{k}^{\alpha}=\mu_{\alpha}-\sum_{k>0}(\mathbb{W}(\lambda k)-1)\,(h_{k}^{\alpha}-h_{-k}^{\alpha}).\]
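For the reader's convenience, the last equality in (34) can be checked directly, using \(\mathbb{W}(-\lambda k)=-\mathbb{W}(\lambda k)\) (as in Example 4.9) and \(\mu_{\alpha}=2\sum_{k>0}h_{-k}^{\alpha}\): \[\sum_{k\neq 0}(1-\mathbb{W}(\lambda k))\,h_{k}^{\alpha}=\sum_{k>0}\big[(h_{k}^{\alpha}+h_{-k}^{\alpha})-\mathbb{W}(\lambda k)(h_{k}^{\alpha}-h_{-k}^{\alpha})\big]=\mu_{\alpha}-\sum_{k>0}(\mathbb{W}(\lambda k)-1)(h_{k}^{\alpha}-h_{-k}^{\alpha}).\]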
**Example 4.9**.: We illustrate how the indices vary for an example of a \(\mathbb{C}^{*}\)-action on the minimal resolution of the \(A_{3}\)-singularity \(\{XY-Z^{4}=0\}\subset\mathbb{C}^{3}\), obtained by lifting the \(\mathbb{C}^{*}\)-action
\[t\cdot(X,Y,Z)=(tX,t^{3}Y,tZ).\]
The fixed locus consists of \(\mathbb{C}P^{1}:=\mathfrak{F}_{\beta}\) and two points \(\mathfrak{F}_{\alpha}\), \(\mathfrak{F}_{\gamma}\), with weight decompositions \(H_{3}\oplus H_{-2}\), \(H_{2}\oplus H_{-1}\) where all \(|H_{i}|=1\). Then \(\mu_{\lambda}(F_{\alpha})=2-\mathbb{W}(3\lambda)+\mathbb{W}(2\lambda)\) and \(\mu_{\lambda}(F_{\gamma})=2-\mathbb{W}(2\lambda)+\mathbb{W}(\lambda)\) (using \(\mathbb{W}(-2\lambda)=-\mathbb{W}(2\lambda)\)). The \(0\)-th critical times are the \(\lambda<\frac{1}{3}\), the other critical times are \(\frac{1}{3}\), \(\frac{1}{2}\), \(\frac{2}{3}\), \(1\), etc. At critical times \(\mu_{\lambda}(F_{\alpha})\) is (\(\mu_{\alpha}=2\)), \(1\), \(1\), \(1\), \(0\), etc., and \(\mu_{\lambda}(F_{\gamma})\) is (\(\mu_{\gamma}=2\)), \(2\), \(1\), \(0\), \(0\), etc. If we allow values of \(\lambda\) in between critical times, then the first sequence fails to be non-increasing as \(\mathfrak{F}_{\alpha}\) is not \(2\)-minimal (Definition 3.9): \(\mu_{\lambda}(F_{\alpha})\) is (\(\mu_{\alpha}=2\)), \(1\), \(\mathbf{0}\), \(1\), \(\mathbf{2}\), \(1\), \(\mathbf{0}\), \(0\), etc. (in bold the values in between critical times). Whereas \(\mu_{\lambda}(F_{\gamma})\) is (\(\mu_{\gamma}=2\)), \(2\), \(\mathbf{2}\), \(1\), \(\mathbf{0}\), \(0\), \(\mathbf{0}\), etc. There is a torsion submanifold \(Y_{3}\) containing \(\mathfrak{F}_{\alpha}\), \(Y_{2}^{\prime}\) containing \(\mathfrak{F}_{\alpha}\), \(Y_{2}\) containing \(\mathfrak{F}_{\gamma}\). Then \(\operatorname{rk}(Y_{m})=1\) for \(m=2,3\). There is a flowline from \(\mathfrak{F}_{\alpha}\) to \(\mathfrak{F}_{\gamma}\) following the \(H_{-2}\) direction while staying within \(Y_{2}^{\prime}\) (Definition 3.9), so in fact \(Y_{2}=Y_{2}^{\prime}\) are the \(\mathbb{Z}/2\)-torsion points of \(Y\). Note \(\mu_{\lambda}(F_{\alpha})=\mu_{\lambda}(F_{\gamma})\) for \(\lambda=\frac{1}{2}\) as \(Y_{2}=Y_{2}^{\prime}\) connects \(\mathfrak{F}_{\alpha}\), \(\mathfrak{F}_{\gamma}\). This fails for general \(\lambda\) as we cannot connect \(\mathfrak{F}_{\alpha}\) to \(\mathfrak{F}_{\beta}\) by \(1\)-orbits of \(\lambda H\).
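As a sanity check, using the standard values of \(\mathbb{W}\) recalled after Proposition 4.2: \[\mu_{1/2}(F_{\alpha})=2-\mathbb{W}(\tfrac{3}{2})+\mathbb{W}(1)=2-3+2=1,\qquad\mu_{1/2}(F_{\gamma})=2-\mathbb{W}(1)+\mathbb{W}(\tfrac{1}{2})=2-2+1=1,\] in agreement with the two sequences above and with the equality \(\mu_{1/2}(F_{\alpha})=\mu_{1/2}(F_{\gamma})\) forced by \(Y_{2}=Y_{2}^{\prime}\).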
We seek a reasonable condition (which holds for all CSRs) ensuring that \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) is bounded above by \(\mu_{\alpha}\). At \(\mathfrak{F}_{\alpha}\), order the non-zero-weight spaces by decreasing absolute weight: \((H_{k_{1}}\oplus H_{-k_{1}})\oplus(H_{k_{2}}\oplus H_{-k_{2}})\oplus\cdots\) where \(k_{1}>k_{2}>\cdots>0\) (we allow the possibility that one of \(H_{k_{j}},H_{-k_{j}}\) is zero). For a general \(Y\), we cannot exclude that \(h^{\alpha}_{-k_{1}}>h^{\alpha}_{k_{1}}\), which would cause \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) to jump above \(\mu_{\alpha}\) at the critical time \(\frac{1}{k_{1}}\). So the condition involves the behaviour of the differences \(h^{\alpha}_{k_{j}}-h^{\alpha}_{-k_{j}}\).
**Definition 4.10**.: We call \(Y\)**compatibly-weighted** if \(\delta^{\alpha}_{r}\geq 0\) for all \(r\) and all \(\alpha\), where
\[\delta^{\alpha}_{r}:=\sum_{j=1}^{r}(h^{\alpha}_{k_{j}}-h^{\alpha}_{-k_{j}}).\]
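For instance, at the fixed point \(\mathfrak{F}_{\alpha}\) of Example 4.9 (weights \(3,-2\), so \(k_{1}=3\), \(k_{2}=2\)): \[\delta_{1}^{\alpha}=h_{3}^{\alpha}-h_{-3}^{\alpha}=1\geq 0,\qquad\delta_{2}^{\alpha}=\delta_{1}^{\alpha}+(h_{2}^{\alpha}-h_{-2}^{\alpha})=1-1=0\geq 0,\] consistent with Example 4.11 below, since that \(Y\) is a CSR.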
**Example 4.11**.: All CSRs are compatibly-weighted. We just need the fact (52) about CSRs: the \(\omega_{\mathbb{C}}\)-duality isomorphism \(H_{s-(-m)}=H_{s+m}\cong H_{-m}\) for all \(m\), where \(s\geq 1\) is the weight of the CSR. Thus \(h^{\alpha}_{s+m}-h^{\alpha}_{-m}=0.\) It easily follows that \(\delta^{\alpha}_{r}\geq 0\): if \(h^{\alpha}_{-m}\) appears in \(\delta^{\alpha}_{r}\) for some \(m\geq 1\), then so must \(h^{\alpha}_{s+m}\), since \(s+m\geq|-m|\). In Example 4.9 we had a CSR \(Y\) with \(s=1\), and the weight \(s+m=3\) ensured that \(-m=-2\) (causing the problematic \(\mathbb{W}(-2\lambda)\)) did not make \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) jump beyond \(\mu_{\alpha}\).
**Proposition 4.12**.: _If \(Y\) is compatibly-weighted (Definition 4.10) then \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\leq\mu_{\alpha}\) for all \(\lambda>0\)._
Proof.: Recall \(\mu_{\alpha}-\mu_{\lambda}(\mathfrak{F}_{\alpha})=\sum_{k>0}(\mathbb{W}( \lambda k)-1)(h^{\alpha}_{k}-h^{\alpha}_{-k})\). We will use the property that \(\mathbb{W}(a)\geq\mathbb{W}(b)\) for \(a\geq b\) repeatedly, and that \(\mathbb{W}(\lambda k_{j})-1\geq 0\) since \(k_{j}\geq 1\). We claim by induction on \(r\geq 1\) that
\[\sum_{j=1}^{r}(\mathbb{W}(\lambda k_{j})-1)(h^{\alpha}_{k_{j}}-h^{\alpha}_{-k_{j }})\geq(\mathbb{W}(\lambda k_{r})-1)\delta^{\alpha}_{r}.\]
This implies the claim: taking \(r\) to be the final value, we get \(\mu_{\alpha}-\mu_{\lambda}(\mathfrak{F}_{\alpha})\geq(\mathbb{W}(\lambda k_{r} )-1)\delta^{\alpha}_{r}\geq 0\). For \(r=1\) the inductive claim is an equality. Now the inductive step, using \(\delta^{\alpha}_{r}\geq 0\):
\[\sum_{j=1}^{r+1}(\mathbb{W}(\lambda k_{j})-1)(h^{\alpha}_{k_{j}}-h ^{\alpha}_{-k_{j}}) \geq(\mathbb{W}(\lambda k_{r})-1)\delta^{\alpha}_{r}+(\mathbb{W}( \lambda k_{r+1})-1)(h^{\alpha}_{k_{r+1}}-h^{\alpha}_{-k_{r+1}})\] \[\geq(\mathbb{W}(\lambda k_{r+1})-1)\delta^{\alpha}_{r}+(\mathbb{W}( \lambda k_{r+1})-1)(h^{\alpha}_{k_{r+1}}-h^{\alpha}_{-k_{r+1}})\] \[=(\mathbb{W}(\lambda k_{r+1})-1)\delta^{\alpha}_{r+1}.\]
**Notation 4.13**.: We write \(\lambda^{+}\) to mean a non-critical time just above a given critical time \(\lambda=\frac{k_{0}}{m}\) (\(k_{0},m\) coprime), with no critical times between \(\lambda\) and \(\lambda^{+}\). Similarly for \(\lambda^{-}\) below \(\lambda\). Abbreviate
\[f_{m}^{\alpha}:=(h_{m}^{\alpha}-h_{-m}^{\alpha}),\quad\text{ and }\quad F_{m}^{ \alpha}:=\sum_{b\geq 1}f_{mb}^{\alpha}\qquad(\text{thus }\delta_{r}^{\alpha}=\sum_{m\geq k_{r}}f_{m}^{ \alpha}),\]
\[N_{m}\mathcal{I}:=\text{double-count}63\text{ of the number of integers in the interval }m\cdot\mathcal{I},\]
\[C_{m}\mathcal{I}:=\text{double-count of the number of integers in the interval }m\cdot\mathcal{I}\text{ coprime to }m.\]
Footnote 63: if an integer arises in the interior of the interval we count it twice, but it only contributes one if it arises at a boundary point. Example: \(N_{6}([\frac{1}{2},1])=1+2+2+1=6\) due to \(3,4,5,6\in[3,6]=6\cdot[\frac{1}{2},1]\), whereas \(N_{6}((\frac{1}{2},1])=2+2+1=5\).
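For example, at the fixed point \(\mathfrak{F}_{\alpha}\) of Example 4.9 (weights \(3,-2\)) we get \(f_{2}^{\alpha}=-1\), \(f_{3}^{\alpha}=1\) and all other \(f_{m}^{\alpha}=0\), so \[F_{3}^{\alpha}=1,\qquad F_{2}^{\alpha}=-1,\qquad F_{1}^{\alpha}=f_{2}^{\alpha}+f_{3}^{\alpha}=0=|H_{+}|-|H_{-}|.\]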
**Lemma 4.14**.:
1. \(F_{m}^{\alpha}=\mu_{\lambda^{-}}(\mathfrak{F}_{\alpha})-\mu_{\lambda}( \mathfrak{F}_{\alpha})\) _at a critical time_ \(\lambda=\frac{k_{0}}{m}\)_,_ \(\gcd(k_{0},m)=1\)_;_
2. \(F_{m}^{\alpha}=\mu_{\lambda}(\mathfrak{F}_{\alpha})-\mu_{\lambda^{+}}( \mathfrak{F}_{\alpha})\) _at a critical time_ \(\lambda=\frac{k_{0}}{m}\)_,_ \(\gcd(k_{0},m)=1\)_;_
3. \(2F_{m}^{\alpha}=\mu_{\lambda^{-}}(\mathfrak{F}_{\alpha})-\mu_{\lambda^{+}}( \mathfrak{F}_{\alpha})\) _at a critical time_ \(\lambda=\frac{k_{0}}{m}\)_,_ \(\gcd(k_{0},m)=1\)_;_
4. _if_ \(\mathfrak{F}_{\alpha}\) _is_ \(m\)_-minimal (Definition_ 3.9_), then_ \(F_{m}^{\alpha}=\operatorname{rk}(Y_{m,\beta})\)_;_
5. _In particular, for_ \(H_{\pm}\) _as in Remark_ 3.3_, and_ \(N\in\mathbb{N}\)_,_ \[\mu_{N^{-}}(\mathfrak{F}_{\alpha})=\mu_{\alpha}-2N\mu+2(|H_{+}|-|H_{-}|).\] _E.g. for weight-1 CSRs,_ \(\mu_{1^{-}}(\mathfrak{F}_{\alpha})=0\)_._
6. _for all_ \(\lambda>0\)_,_ \[\mu_{\lambda}(\mathfrak{F}_{\alpha})=\mu_{\alpha}-\sum N_{m}(0,\lambda]\cdot f_{m}^{\alpha}=\mu_{\alpha}-\sum C_{m}(0,\lambda]\cdot F_{m}^{\alpha}. \tag{35}\]
Proof.: (1)-(3) follow from \(\mathbb{W}(\lambda mb)-\mathbb{W}(\lambda^{-}mb)=1=\mathbb{W}(\lambda^{+}mb)-\mathbb{W}(\lambda mb)\) (by Equation (74)). In (4), \(h_{-mb}^{\alpha}=0\) so \(F_{m}^{\alpha}=\operatorname{rk}(Y_{m,\beta})\) by Definition 3.9. The first part of (5) follows from Corollary 4.7, applying (3) in the case \(\frac{k_{0}}{m}=\frac{N}{1}\), using \(F_{1}^{\alpha}=|H_{+}|-|H_{-}|.\) The claim for weight-1 CSRs follows from Lemma 7.9 (so \(2\mu=\dim_{\mathbb{C}}Y\)) and Equation (52) (which implies that \(|H_{+}|-|H_{-}|=h_{1}^{\alpha}=h_{0}^{\alpha}=\dim_{\mathbb{C}}\mathfrak{F}_{\alpha}\)), so \(\mu_{N^{-}}(\mathfrak{F}_{\alpha})=2\dim_{\mathbb{C}}\mathfrak{F}_{\alpha}+\mu_{\alpha}-N\dim_{\mathbb{C}}Y\), then use Lemma 7.7(5). (6) follows from (1)-(3), and the observation that fractions \(\frac{\text{integer}}{b}\) in an interval \(\mathcal{I}\) correspond (by rescaling by \(b\)) to integers in \(b\cdot\mathcal{I}\).
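As a quick check of (1) and (3) against Example 4.9: at the critical time \(\lambda=\frac{1}{2}\) (so \(k_{0}=1\), \(m=2\)) we have \(F_{2}^{\alpha}=-1\), and the values listed there give \[\mu_{\lambda^{-}}(F_{\alpha})-\mu_{\lambda}(F_{\alpha})=0-1=F_{2}^{\alpha},\qquad\mu_{\lambda^{-}}(F_{\alpha})-\mu_{\lambda^{+}}(F_{\alpha})=0-2=2F_{2}^{\alpha}.\]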
**Example 4.15**.: We illustrate (35) for \(k_{1}=11\), \(k_{2}=10\), \(k_{3}=7\), \(k_{4}=6\) with \(h_{11}^{\alpha}=3\), \(h_{-10}^{\alpha}=3\), \(h_{7}^{\alpha}=1\), \(h_{-6}^{\alpha}=1\) (all other \(h_{\pm k}^{\alpha}=0\)). Thus \(F_{6}^{\alpha}=F_{3}^{\alpha}=-1\), \(F_{7}^{\alpha}=1\), \(F_{10}^{\alpha}=F_{5}^{\alpha}=-3\), \(F_{11}^{\alpha}=3\). Suppose we already knew \(\mu_{2/7}(\mathfrak{F}_{\alpha})=1\), and we want to compute \(\mu_{3/7}(\mathfrak{F}_{\alpha})=\mu_{2/7}(\mathfrak{F}_{\alpha})-\sum C_{m}[2/7,3/7]\,F_{m}^{\alpha}\). The critical times in \(\mathcal{I}:=[\frac{2}{7},\frac{3}{7}]\) are \(\frac{2}{7}<\frac{3}{10}<\frac{2}{6}<\frac{4}{11}<\frac{4}{10}<\frac{3}{7}.\) By definition \(C_{3}\mathcal{I}=2\) due to the interior point \(\frac{2}{6}=\frac{1}{3}\), \(C_{5}\mathcal{I}=2\) due to the interior point \(\frac{4}{10}=\frac{2}{5}\), \(C_{6}\mathcal{I}=0\) (the \(\frac{2}{6}\) is already accounted for in \(C_{3}\mathcal{I}\)), \(C_{7}\mathcal{I}=2\) due to \(\frac{2}{7}\), \(\frac{3}{7}\) being boundary points (each counting once), \(C_{10}\mathcal{I}=2\) due to the interior point \(\frac{3}{10}\) (not \(\frac{4}{10}\), which is accounted for in \(C_{5}\mathcal{I}\)), and \(C_{11}\mathcal{I}=2\) due to the interior point \(\frac{4}{11}\). Thus, as expected, \(\sum C_{m}[2/7,3/7]F_{m}^{\alpha}=2F_{3}^{\alpha}+2F_{5}^{\alpha}+2F_{7}^{\alpha}+2F_{10}^{\alpha}+2F_{11}^{\alpha}=-6.\) So \(\mu_{3/7}(\mathfrak{F}_{\alpha})=\mu_{2/7}(\mathfrak{F}_{\alpha})+6=7\) jumps up, because unexpectedly there were more fractions in \([\frac{2}{7},\frac{3}{7}]\) with the lower denominator \(10\) than with \(11\). In the following Proposition it is more frequent to observe "drops" \(f(p)\geq f(p+1)\); nevertheless, even for positively-weighted \(Y\) a jump \(f(p)<f(p+1)\) is possible, and several consecutive jumps are possible.
**Proposition 4.16**.: _Fix \(\alpha\). Let \(\lambda_{p}=\frac{p}{m}\) be a sequence of \(\alpha\)-critical times for fixed \(m\geq 1\), for \(p=1,2,3,4,\ldots\) not necessarily coprime to \(m\). Let \(f(p)\) denote any of the following three functions,_
\[p\mapsto\mu_{\lambda_{p}^{-}}(\mathfrak{F}_{\alpha}),\ \ p\mapsto\mu_{\lambda_{p} }(\mathfrak{F}_{\alpha}),\ \ p\mapsto\mu_{\lambda_{p}^{+}}(\mathfrak{F}_{\alpha}),\]
_letting \(f(0):=\mu_{\alpha}\). Then \(K_{j}:=N_{k_{j}}[\lambda_{p},\lambda_{p+1}]\in\mathbb{N}\) is the double-count of multiples of \(m\) in the list of consecutive integers \(pk_{j},pk_{j}+1,\ldots,(p+1)k_{j}\), and_
\[f(p)-f(p+1)=\sum_{j=1}^{r}K_{j}\cdot f_{kj}^{\alpha}=K_{r}\delta_{r}^{\alpha}+(K _{r-1}-K_{r})\delta_{r-1}^{\alpha}+\cdots+(K_{1}-K_{2})\delta_{1}^{\alpha}, \tag{36}\]
_where \(K_{j}=2\) if \(k_{j}=m\); \(K_{j}\in\{0,1,2\}\) for \(k_{j}<m;\) and the brackets \((K_{j-1}-K_{j})\in\{-1,0,1,2,\ldots\}\), where \(-1\) cannot occur if \(mb+m-2\geq k_{j-1}>k_{j}\geq mb\) fails for all \(b\in\mathbb{N}\) (e.g. it fails if \(k_{j-1}-k_{j}\geq m-1\))._
_If \(Y\) is compatibly-weighted, and \(\delta^{\alpha}_{j-1}=0\) when \(K_{j-1}-K_{j}=-1\), then \(f\) is non-increasing._
Proof.: The fractions \(\frac{\operatorname{integer}}{b}\) in \([\frac{p}{m},\frac{p+1}{m}]\) correspond (by rescaling by \(mb\)) to the integers in \([pb,(p+1)b]\) divisible by \(m\). By (35), \(\mu_{\lambda_{p}}(\mathfrak{F}_{\alpha})-\mu_{\lambda_{p+1}}(\mathfrak{F}_{\alpha})=\sum_{j=1}^{r}N_{k_{j}}[\lambda_{p},\lambda_{p+1}]\cdot f^{\alpha}_{k_{j}}\) so (36) holds for \(f(p)=\mu_{\lambda_{p}}(\mathfrak{F}_{\alpha})\). The other two cases follow by Lemma 4.14.(1)-(2). The number of multiples of \(m\) in \(N\) consecutive integers is either \(\lfloor\frac{N}{m}\rfloor\) or \(\lceil\frac{N}{m}\rceil\) (these are equal when \(m\) divides \(N\)). If \(k_{j-1}=a>k_{j}=c\) but \(K_{j-1}<K_{j}\), then \(\lfloor\frac{1+c}{m}\rfloor\leq\lfloor\frac{1+a}{m}\rfloor<\lceil\frac{1+c}{m}\rceil\), so \(K_{j-1}-K_{j}=-1\). That condition means \(m(b+1)>1+a>1+c>mb\) for some \(b\in\mathbb{N}\). Rearranging gives the claim about \(-1\) occurrences. The other claims are easy.
### Relating weight spaces of \(\mathbb{C}^{*}\)-related fixed loci
Each \(Y_{m,\beta}\) is a symplectic \(\mathbb{C}^{*}\)-submanifold, so we can associate a Maslov index \(\mu_{m,\beta}\) to the \(S^{1}\)-action as in Seidel [11, Lem.2.6] (see also [13, Sec.7.8]), despite often having \(c_{1}(Y_{m,\beta})\neq 0\) even when \(c_{1}(Y)=0\). In practice, at \(p\in Y_{m,\beta}\), \(\mu_{m,\beta}(o_{p},c_{p})\) depends not just on the \(S^{1}\)-orbit \(o_{p}\) of period \(\frac{1}{m}\), with \(o_{p}(0)=p\), but also on a choice of capping \(c_{p}:\mathbb{D}\to Y_{m,\beta}\), \(c_{p}|_{S^{1}}=o_{p}\). The \(\mu_{m,\beta}(o_{p},c_{p})\) is constant if one varies \((o_{p},c_{p})\) continuously.
Suppose \(Y_{m,\beta}\) contains \(\mathfrak{F}_{\alpha}\). For the constant \(p\equiv o_{p}:[0,1/m]\to Y\) with constant \(c_{p}\equiv p\),
\[\mu_{m,\beta}(p,p)=\sum_{b}mbf^{\alpha}_{mb}=\sum_{b}mb(h^{\alpha}_{mb}-h^{ \alpha}_{-mb}).\]
Suppose that there is a \(-\nabla H\) flowline in \(Y_{m,\beta}\) from \(\mathfrak{F}_{\alpha}\) to \(\mathfrak{F}_{\gamma}\) (so also \(\mathfrak{F}_{\gamma}\subset Y_{m,\beta}\)). By Corollary 3.16, there is a \(\mathbb{C}^{*}\)-invariant pseudo-holomorphic sphere \(S_{q}\) in \(Y_{m,\beta}\) from \(p\in\mathfrak{F}_{\alpha}\) to \(q\in\mathfrak{F}_{\gamma}\). To simplify the discussion, suppose \(\mathfrak{F}_{\alpha}\) and \(\mathfrak{F}_{\gamma}\) do not have weights \(\pm bm\) for \(b>1\). Then, using [11, Lem.2.6],
\[mf^{\alpha}_{m}=\mu_{m,\beta}(p,p)=\mu_{m,\beta}(q,S_{q})=mf^{\gamma}_{m}+c_{1 }(Y_{m,\beta})[S_{q}].\]
Abbreviate \(|X|:=\dim_{\mathbb{C}}X\). Using \(|\mathfrak{F}_{\alpha}|+h^{\alpha}_{m}+h^{\alpha}_{-m}=|Y_{m,\beta}|=|\mathfrak{ F}_{\gamma}|+h^{\gamma}_{m}+h^{\gamma}_{-m}\), we deduce
\[h^{\alpha}_{m} =h^{\gamma}_{m}+\tfrac{1}{2}(|\mathfrak{F}_{\gamma}|-|\mathfrak{ F}_{\alpha}|)+\tfrac{1}{2m}c_{1}(Y_{m,\beta})[S_{q}],\] \[h^{\alpha}_{-m} =h^{\gamma}_{-m}+\tfrac{1}{2}(|\mathfrak{F}_{\gamma}|-|\mathfrak{ F}_{\alpha}|)-\tfrac{1}{2m}c_{1}(Y_{m,\beta})[S_{q}].\]
Now inductively proceed with \(Y_{k,\beta}\supset Y_{m,\beta}\) for a divisor \(k|m\), assuming by induction that we already related \(h^{\alpha}_{\pm k^{\prime}}\) with \(h^{\beta}_{\pm k^{\prime}}\) for \(k^{\prime}>k\). This method determines all \(h^{\alpha}_{\pm k}\) in terms of \(h^{\gamma}_{\pm k}\) and \(c_{1}(Y_{k,\beta})[S_{q}]\) for all divisors \(k|m\). The inductive equations, summing over divisors \(n|m\) with \(n>k\), are
\[h^{\alpha}_{k} =h^{\gamma}_{k}+\tfrac{1}{2}(|\mathfrak{F}_{\gamma}|-|\mathfrak{ F}_{\alpha}|)+\tfrac{1}{2k}\sum((k+n)(h^{\gamma}_{n}-h^{\alpha}_{n})+(k-n)(h^{\gamma}_{-n}-h^{ \alpha}_{-n})+c_{1}(Y_{n,\beta})[S_{q}]),\] \[h^{\alpha}_{-k} =h^{\gamma}_{-k}+\tfrac{1}{2}(|\mathfrak{F}_{\gamma}|-|\mathfrak{ F}_{\alpha}|)+\tfrac{1}{2k}\sum((k-n)(h^{\gamma}_{n}-h^{\alpha}_{n})+(k+n)(h^{ \gamma}_{-n}-h^{\alpha}_{-n})-c_{1}(Y_{n,\beta})[S_{q}]).\]
## 5. Symplectic cohomology associated to a Hamiltonian \(S^{1}\)-action
### Symplectic \(\mathbb{C}^{*}\)-manifolds over a convex base
We will assume the reader is familiar with Hamiltonian Floer theory and symplectic cohomology (e.g. [11, 12, 13]), in particular we use the notation, terminology and conventions from [13] unless stated otherwise.
**Definition 5.1**.: \((Y,\omega,I,\varphi)\) is a **symplectic \(\mathbb{C}^{*}\)-manifold over a convex base** if there is an \((I,I_{B})\)-pseudoholomorphic proper64 map
Footnote 64: meaning that preimages via \(\Psi\) of compact subsets in \(B\) are compact in \(Y\).
\[\Psi:Y^{\operatorname{out}}=Y\setminus\operatorname{int}(Y^{\operatorname{in}}) \to B^{\operatorname{out}}=\Sigma\times[R_{0},\infty),\]
where \(R_{0}\in\mathbb{R}\) is any constant; \((\Sigma,\alpha)\) is a closed contact manifold; \(I_{B}\) is a \(d(R\alpha)\)-compatible almost complex structure on \(B^{\operatorname{out}}\) of contact type such that
\[\Psi_{*}X_{S^{1}}=X_{fR} \tag{37}\]
where \(f:\Sigma\to(0,\infty)\) is a Reeb-invariant function (i.e. \(df(\mathcal{R}_{B})=0\)). Here \(I\) and \(I_{B}\) are _almost_ complex structures (but we often abusively say \(\Psi\) is holomorphic).
We also assume that \(B^{\mathrm{out}}\) is geometrically bounded at infinity (see Remark 5.2 for explanations).
We often abusively write \(\Psi:Y\to B\) even though the map \(\Psi\) is only defined at infinity, and we sometimes write \(B^{\mathrm{out}}\) instead of \(B\) even though \(B^{\mathrm{out}}\) is not required to have a filling \(B\).
A stronger condition is to be a symplectic \(\mathbb{C}^{*}\)-manifold **globally defined over a convex base**: we mean there is a pseudoholomorphic proper \(S^{1}\)-equivariant map \(\Psi:(Y,I)\to(B,I_{B})\) defined on all of \(Y\), whose target is a symplectic manifold \((B,\omega_{B},I_{B})\) convex at infinity, with a Hamiltonian \(S^{1}\)-action, whose Reeb flow at infinity agrees with the \(S^{1}\)-action. It is understood that \(I_{B}\) is an \(\omega_{B}\)-compatible almost complex structure, of contact type at infinity. The definition in fact implies that \(\Psi\) is also \(\mathbb{C}^{*}\)-equivariant (see Remark 5.2).
_Remark 5.2_.: \(B\) being **convex at infinity** means there is a compact subdomain \(B^{\mathrm{in}}\subset B\) outside of which we have a **conical end**\(B^{\mathrm{out}}:=B\setminus\mathrm{int}(B^{\mathrm{in}})\cong\Sigma\times[R _{0},\infty)\) such that the symplectic form becomes \(\omega_{B}=d(R\alpha)\). The radial coordinate \(R\in[R_{0},\infty)\) yields the Reeb vector field \(\mathcal{R}_{B}\) for the contact hypersurface \((\Sigma,\alpha)\), \(\Sigma:=\{R=R_{0}\}\) (defined by \(d\alpha(\mathcal{R}_{B},\cdot)=0\) and \(\alpha(\mathcal{R}_{B})=1\)). So \(\mathcal{R}_{B}=X_{R}\) is the Hamiltonian vector field for the function \(R\). After increasing \(R_{0}\) if necessary, we can always assume that \(I_{B}\) is \(\omega_{B}\)-compatible and of **contact type** on \(B^{\mathrm{out}}\), meaning65\(I_{B}Z_{B}=\mathcal{R}_{B}\), where \(Z_{B}=R\partial_{R}\) is the **Liouville vector field** defined by \(\omega_{B}(Z_{B},\cdot)=R\alpha\) on \(B^{\mathrm{out}}\). If \(I_{B}\) does not depend on \(R\), then clearly \(B^{\mathrm{out}}\) is geometrically bounded at infinity due to the radial symmetry. We allow non-radially invariant \(I_{B}\) as it imposes fewer constraints on the pseudoholomorphicity assumption on \(\Psi\). If \(I_{B}\) depends on \(R\) (on the \(\xi=\ker d\alpha\) orthogonal summand of \(TB^{\mathrm{out}}=\xi\oplus\mathbb{R}Z_{B}\oplus\mathbb{R}\mathcal{R}_{B}\)), then it is desirable to require that \(B^{\mathrm{out}}\) is geometrically bounded at infinity. This assumption is needed to prove that Floer solutions "consume \(F\)-filtration" if they go far out at infinity on a long region on which \(c^{\prime}(H)\) is linear (the \(F\)-filtration is constructed in Section 8, but this property is proved in [10]). This property is needed in Proposition 6.4, in the construction of the \(Q_{\varphi}\) class (see the footnote to Theorem 6.17), and it is used in [10] to ensure a certain consistency between Morse-Bott-Floer spectral sequences so that we can take the direct limit over slopes \(\lambda\).
Footnote 65: Equivalently \(dR=R\alpha\circ I_{B}\), so \(I_{B}\) preserves the contact distribution \(\xi=\ker\alpha\subset T\Sigma\). The \(\omega_{B}\)-compatibility condition ensures that \(d\alpha\) is an \(I_{B}\)-compatible symplectic form on \(\xi\). By [10, Lemma C.9], it suffices to assume \(a(R)dR=R\alpha\circ I_{B}\) for a positive smooth function \(a\), equivalently \(I_{B}Z_{B}=a(R)\mathcal{R}_{B}\).
If \(f\equiv w>0\) is a constant in (7), one may as well assume Equation (6) by rescaling \(R,\alpha,R_{0}\) to \(wR,\alpha/w,wR_{0}\) (leaving \(\omega_{B}=d(R\alpha)\) and the Liouville form \(\theta=R\alpha\) unchanged on \(B^{\mathrm{out}}\)).
If Equation (6) holds, then it follows that the Reeb flow on the image of \(\Psi\) is an \(S^{1}\)-action and the map \(\Psi:Y^{\mathrm{out}}\to\mathrm{Im}(\Psi)\) is \(S^{1}\)-equivariant. In the more general case of (7),
\[\Psi_{*}X_{S^{1}}=X_{fR}=f\mathcal{R}_{B}+RX_{f}\]
generates an \(S^{1}\)-action on \(\mathrm{Im}(\Psi)\). As we do not assume that \(\Psi\) is surjective, in both cases it is not necessary for those flows to arise from an \(S^{1}\)-action defined on all of \(B\). On the image \(\mathrm{Im}(\Psi)\subset B^{\mathrm{out}}\) the \(S^{1}\)-action in fact extends to a (partially defined) \(\mathbb{C}^{*}\)-action. Indeed, in \(Y\) the vector fields \(X_{\mathbb{R}_{+}}\), \(X_{S^{1}}\) commute, therefore the same holds for their \(\Psi_{*}\)-pushforwards \(\Psi_{*}X_{\mathbb{R}_{+}}\), \(\Psi_{*}X_{S^{1}}\) on \(\mathrm{Im}(\Psi)\), where
\[\Psi_{*}X_{\mathbb{R}_{+}}=\Psi_{*}(-IX_{S^{1}})=-I_{B}\Psi_{*}X_{S^{1}}=-I_{B }X_{fR}=\nabla(fR).\]
Also, the integrability of \(X_{\mathbb{R}_{+}},X_{S^{1}}\) combined with the \(\Psi\)-projection of their flow implies the integrability of those \(\Psi_{*}\)-pushforwards on \(\mathrm{Im}(\Psi)\). If (6) holds, then \(\Psi_{*}X_{\mathbb{R}_{+}}=\nabla R=Z_{B}\) is the Liouville field.
_Remark 5.3_ (Examples).: When a symplectic \(\mathbb{C}^{*}\)-manifold \(Y\) is a Liouville manifold whose Reeb flow at infinity is the \(S^{1}\)-action, it lies over a convex base with \(\Psi\) being the restriction to \(Y^{\mathrm{out}}\) of the identity map \(\mathrm{id}:Y\to B=Y\) (or of a suitable Liouville flow map if \(\Psi_{*}X_{S^{1}}=w\mathcal{R}_{B}\) and one wants to get rid of the constant \(w>0\) by the rescaling trick). In this case, symplectic cohomology \(SH^{*}(Y,\varphi)\) (defined later) agrees with the usual symplectic cohomology \(SH^{*}(Y)\)[12, 13]. The analogous statement holds when a symplectic \(\mathbb{C}^{*}\)-manifold \(Y\) is convex at infinity. Examples of non-Liouville but convex examples are negative complex line bundles (e.g. see [10]). Examples of non-convex symplectic \(\mathbb{C}^{*}\)-manifolds \(Y\) globally defined over non-Liouville convex bases are negative complex vector bundles \(E\to M\) over closed symplectic manifolds \(M\), using the natural \(\mathbb{C}^{*}\)-action on the fibres [10, Sec.11.2].
**Lemma 5.4**.: _The action \(\varphi\) on a symplectic \(\mathbb{C}^{*}\)-manifold over a convex base is contracting._
Proof.: This follows by the properness of \(\Psi\) and that on \(B^{\rm out}\) we have \(\Psi_{*}X_{\mathbb{R}_{+}}=\nabla(fR)\), with \(f>0\).
Note that \(H_{B}:=fR\) is a Hamiltonian for the \(S^{1}\)-flow on \({\rm Im}(\Psi)\). By Definition 5.1, using \(X_{S^{1}}=X_{H}\) on \(Y\), and \(\mathcal{R}_{B}=X_{R}\) on \(B^{\rm out}\), we deduce that on \(B^{\rm out}\)
\[\Psi_{*}X_{H}=X_{fR}=f\mathcal{R}_{B}+RX_{f}=X_{H_{B}}. \tag{38}\]
However, there may be no relationship between \(\omega\) and \(\Psi^{*}\omega_{B}\), so \(H\) need not be related to \(H_{B}\circ\Psi\).
We now show that one can "twist" the symplectic structure on \(Y\) without affecting the class \([\omega]\in H^{2}(Y;\mathbb{R})\) in order to get a _proper_ moment map. This will be useful in later sections.
**Lemma 5.5**.: _For \(\Psi:Y^{\rm out}\to B^{\rm out}=\Sigma\times[R_{0},\infty)\) any symplectic \(\mathbb{C}^{*}\)-manifold over a convex base,_
\[\omega_{\phi}:=\omega+d(\Psi^{*}(\phi(R)\alpha))\]
_is a symplectic form cohomologous to \(\omega\) for which the \(S^{1}\)-action is Hamiltonian and has a proper moment map. Here \(\phi:[R_{0},\infty)\to[0,\infty)\) is any non-decreasing smooth function vanishing near \(R=R_{0}\)._
_If \(\Psi:Y\to B\) is a symplectic \(\mathbb{C}^{*}\)-manifold globally defined over a convex base, \(\omega+\Psi^{*}\omega_{B}\) is a symplectic form for which the \(S^{1}\)-action is Hamiltonian and has a proper moment map._
Proof.: We first prove the second statement: by holomorphicity, \((\Psi^{*}\omega_{B})(v,Iv)=\omega_{B}(\Psi_{*}v,I_{B}\Psi_{*}v)\geq 0\) as \(I_{B}\) is \(\omega_{B}\)-compatible. Similarly, \((\Psi^{*}\omega_{B})(v_{1},Iv_{2})=(\Psi^{*}\omega_{B})(v_{2},Iv_{1})\). So \(\omega+\Psi^{*}\omega_{B}\) is \(I\)-compatible, thus symplectic. As \(\Psi\) is \(S^{1}\)-equivariant and \(X_{S^{1},B}=X_{H_{B}}\) is Hamiltonian (with \(H_{B}=R\) at infinity),
\[(\Psi^{*}\omega_{B})(\cdot,X_{S^{1}})=\omega_{B}(\Psi_{*},X_{H_{B}})=\Psi^{* }dH_{B}=d(H_{B}\circ\Psi).\]
Hence \(H+H_{B}\circ\Psi\) is the Hamiltonian for \(X_{S^{1}}\) on \((Y,\omega+\Psi^{*}\omega_{B})\) outside of a compact subset of \(Y\). As \(H\) is bounded below, on the sublevel set \(Y_{\leq C}=\{H+H_{B}\circ\Psi\leq C\}\) (for \(C\in\mathbb{R}\)) the function \(H_{B}\circ\Psi\) is bounded, so \(Y_{\leq C}\) lies in the \(\Psi\)-preimage of a sublevel set of \(H_{B}\) in \(B\). As \(H_{B}\) is proper, since \(H_{B}=R\) at infinity, and \(\Psi\) is proper, it follows that \(Y_{\leq C}\) is compact. Properness of \(H+H_{B}\circ\Psi\) follows.
The form \(\omega_{\phi}=\omega+\Psi^{*}(d(\phi(R)\alpha))\) is well-defined on \(Y\) even though \(\Psi\) is only defined on \(Y^{\rm out}\), because \(\phi\) vanishes near \(R=R_{0}\) so \(\Psi^{*}(d(\phi(R)\alpha))\) extends by zero over \(Y^{\rm in}\). Also \(\omega_{\phi}\) is \(I\)-compatible because \(d(\phi(R)\alpha)=\phi^{\prime}(R)\,dR\wedge\alpha+\phi(R)d\alpha\) satisfies \((d(\phi(R)\alpha))(v,I_{B}v)\geq 0\) (using that \(I_{B}\) is of contact type and \(\phi^{\prime}\geq 0\)). The \(S^{1}\)-flow on \(Y\) is symplectic for \(\omega+\Psi^{*}(d(\phi(R)\alpha))\) because \((d(\phi(R)\alpha))(\cdot,X_{R})=\phi^{\prime}(R)\,dR=d(\phi(R))\) (using that \(X_{R}\) is the Reeb vector field). The Hamiltonian is now \(H+\phi(R\circ\Psi)\), and the proof of properness is analogous, using that \(Y^{\rm in}\) is compact so \(H|_{Y^{\rm in}}\) is proper.
_Remark 5.6_.: [Weak convexity condition at infinity] At the cost of some intuition, Definition 5.1 need not make any reference to \(B\) or \((\Sigma,\alpha)\) as follows. We pull-back all data via \(\Psi\) to \(Y\),
\[\Theta:=\Psi^{*}(R\alpha),\qquad\Omega:=\Psi^{*}\omega_{B}=d\Theta,\qquad\rho: =R\circ\Psi,\qquad F:=f\circ\Psi.\]
By [11, Lemma C.6], the Hamiltonians \(H_{B}=fR\), with \(f\) as in Equation (7), are characterised by the conditions \(H_{B}\geq 0\), \(dR(X_{H_{B}})=0\), \(R\alpha(X_{H_{B}})=H_{B}\). The proof in [11, Theorem C2, Lemma C7] of the extended maximum principle for Floer solutions in \(B=\Sigma\times[R_{0},\infty)\) for such Hamiltonians only requires the contact type condition \(R\alpha=-dR\circ I_{B}\). We can rephrase the above conditions on \(Y\) as: \(\Theta=-d^{c}\rho\) where \(d^{c}\rho:=d\rho\circ I\), so \(\Omega=-dd^{c}\rho\); the condition \(d\rho(X_{S^{1}})=0\), equivalently \(S^{1}\)-invariance of \(\rho\); the relation \(d\Theta(\cdot,X_{S^{1}})=d(F\rho)\) corresponds in \(B\) to \(\omega_{B}(\cdot,X_{H_{B}})=d(fR)\); and finally \(\Theta(X_{S^{1}})=F\rho\).
The condition \(d\Theta(\cdot,X_{S^{1}})=d(F\rho)\) can be rewritten as a Lie derivative condition \(\mathcal{L}_{X_{S^{1}}}\Theta=0\): using \(\Theta(X_{S^{1}})=F\rho\) in Cartan's formula, \(d\Theta(\cdot,X_{S^{1}})=-i_{X_{S^{1}}}d\Theta=d(i_{X_{S^{1}}}\Theta)-\mathcal{L}_{X_{S^{1}}}\Theta=d(F\rho)-\mathcal{L}_{X_{S^{1}}}\Theta\).
Thus we propose the definition: a symplectic manifold \(Y\) with a Hamiltonian \(S^{1}\)-action is **weakly convex at infinity** if outside of a compact subset there is an exhausting \(S^{1}\)-invariant function
\[\rho:Y^{\rm out}\to\mathbb{R},\]
giving rise to a semi-positive \((1,1)\)-form \(-dd^{c}\rho\), such that
\[\Theta(X_{S^{1}})=F\rho\qquad\text{and}\quad\mathcal{L}_{X_{S^{1}}}\Theta=0,\]
where \(\Theta:=-d^{c}\rho\), and \(F:Y^{\rm out}\to[0,\infty)\) is some smooth function.
Note \(-dd^{c}\rho=d\Theta\) need not be symplectic on \(Y^{\rm out}\). It is closed and semi-positive: \(-dd^{c}\rho(v,Iv)\geq 0\) for all \(v\in TY^{\rm out}\). The class of Hamiltonians to use for Floer theory on \(Y\) should66 equal \(\lambda H+\text{constant}\) at infinity, for a constant \(\lambda>0\).
Footnote 66: In the presence of the map \(\Psi:Y^{\rm out}\to B\), a local Floer solution \(u\) in \(Y\) for such a Hamiltonian (using \(I\)) maps to a local Floer solution \(v=\Psi\circ u\) in \(B\) for \((\lambda H_{B},I_{B})\), for which the extended maximum principle applies at infinity.
### Maximum principle for admissible Hamiltonians
_Remark 5.7_ (Technical symplectic assumptions on \(Y\)).: We always tacitly assume that our symplectic manifold \(Y\) is **weakly-monotone** so that transversality arguments in Floer theory can be dealt with by the methods of Hofer-Salamon [10]. This means one of the following holds:
1. \(c_{1}(Y)(A)=0\) when we evaluate on any spherical class \(A\in\pi_{2}(Y)\), or
2. \(\omega(A)=0\) when we evaluate on any spherical class \(A\in\pi_{2}(Y)\), or
3. for some \(k>0\) we have \(c_{1}(Y)(A)=k\cdot\omega(A)\) for all \(A\in\pi_{2}(Y)\), or
4. the smallest positive value of \(c_{1}(Y)(A)\) over \(A\in\pi_{2}(Y)\) is \(\geq n-2\), where \(\dim_{\mathbb{R}}Y=2n\).
Case (2) holds if \(\omega\) is exact; (3) is the **monotone** case. In Section 6.4 we use **weak\(+\) monotonicity**[11, Sec.2.2] which means the same as above except \(n-2\) in (4) becomes \(n-1\).
We do not use the Floer cohomology of the base \(B\), so we do not require that \(B\) is weakly-monotone when \(Y\) is globally defined over a convex base (of course \(B^{\rm out}\) is exact).
**Definition 5.8**.: For \(\Psi:Y\to B\) a symplectic \(\mathbb{C}^{*}\)-manifold over a convex base, \(\mathcal{H}(Y,\varphi)\) is the class of \(\varphi\)**-admissible Hamiltonians**: any smooth function \(F:Y\to\mathbb{R}\) which at infinity is a linear function \(\lambda\cdot H\) of the moment map for a generic slope \(\lambda>0\). A \(\varphi\)**-admissible homotopy**\(F_{s}\) means
1. \(F_{s}=F_{-}\) for \(s\ll 0\) and \(F_{s}=F_{+}\) for \(s\gg 0\), where \(F_{\pm}\in\mathcal{H}(Y,\varphi)\);
2. \(F_{s}=\lambda_{s}\cdot H\) outside of a compact subset of \(Y\) independent of \(s\), with \(\lambda_{s}>0\) possibly not generic;
3. \(\partial_{s}\lambda_{s}\leq 0\).
The class of \(\varphi\)-admissible Hamiltonians is natural in the sense that an isomorphism of symplectic \(\mathbb{C}^{*}\)-manifolds (see Section 1.4) yields an isomorphism of the corresponding admissible Hamiltonians (via composition with the isomorphism). The class depends on the choice of \(\varphi\), which determines \(H\).
**Theorem 5.9**.: _For any symplectic \(\mathbb{C}^{*}\)-manifold \(Y\) over a convex base, Hamiltonian Floer cohomology \(HF^{*}(F)\) for \(F\in\mathcal{H}(Y,\varphi)\) is well-defined. These groups form a directed system via Floer continuation maps \(HF^{*}(F_{+})\to HF^{*}(F_{-})\) induced by \(\varphi\)-admissible homotopies \(F_{s}\) (which exist whenever \(F_{+}\leq F_{-}\) at infinity, equivalently for slopes \(\lambda_{+}\leq\lambda_{-}\)). The continuation map is independent of the choice of \(F_{s}\): for slopes \(\lambda_{-}=\lambda_{+}\) there are continuation maps in both directions and they are inverse to each other; so the groups \(HF^{*}(F)\) up to continuation isomorphism only depend on the slope \(\lambda\) at infinity of \(F\), and we write \(\mathbf{HF^{*}(F_{\lambda})}\) for that isomorphism class without specifying \(F_{\lambda}\) except for its generic slope \(\lambda\)._
Proof.: Let \(x_{\pm}\) be any given \(1\)-periodic Hamiltonian orbits for a given \(F\in\mathcal{H}(Y,\varphi)\). We will prove that there is a compact subset \(C\subset Y,\) depending only on \(x_{\pm}\) and \(F\), such that all Floer trajectories for \(F\) converging to \(x_{\pm}\) are contained in \(C\). This ensures that compactness and transversality arguments required to construct \(HF^{*}(F)\) can be dealt with just as in the compact setting of Hofer-Salamon [10], using that \(Y\) is weakly monotone (Remark 5.7).
Let \(u\) be such a Floer trajectory, i.e. a smooth map \(u:\mathbb{R}\times S^{1}\to Y\) with \(u(-\infty,t)=x_{-}(t)\), \(u(+\infty,t)=x_{+}(t)\), satisfying the Floer equation
\[\partial_{s}u+I(\partial_{t}u-X_{F})=0. \tag{39}\]
By admissibility, \(X_{F}=\lambda X_{H}\) outside of a compact subset. Using (37), letting \(H_{B}:=fR\),
\[\Psi_{*}X_{F}=\lambda\,X_{H_{B}}. \tag{40}\]
By holomorphicity of \(\Psi\), outside a compact subset of \(B\), \(v=\Psi\circ u\) satisfies the Floer equation for \(\lambda H_{B}\),
\[\partial_{s}v+I_{B}(\partial_{t}v-\lambda X_{H_{B}})=0. \tag{41}\]
We apply the extended maximum principle for \(v\) in \(B\)[11, Theorem C2, Lemma C7], which prohibits \(v\) from leaving a compact subset of \(B\) which is determined by \(\Psi\circ x_{\pm}\). When we assume instead (6), the simpler maximum principle from [11, App.D] can be applied. Finally, as \(\Psi\) is proper, this implies that \(u\) also lies in a compact subset of \(Y\) determined by \(x_{\pm}\), as required.
To achieve transversality for moduli spaces of Floer solutions, one may need to perturb \(I\) in regions that those solutions cross, which would ruin (41). However, the above argument showed that Floer solutions do not enter a neighbourhood at infinity, so we do not need to perturb \(I\) at infinity.
For the Floer continuation maps, one replaces \(X_{F}\) in equation (39) by \(X_{F_{s}}\), in particular \(x_{\pm}\) are now \(1\)-periodic orbits of \(X_{F_{\pm}}\) respectively. The projection \(v=\Psi\circ u\) outside of a compact subset of \(B\) satisfies equation (41) with \(\lambda\) replaced by \(\lambda_{s}\). The decreasing condition \(\partial_{s}\lambda_{s}\leq 0\) is precisely what ensures that the maximum principle still holds by [11, Thm.C.11] (or the simpler [11, App.D] when (6) holds). The other properties in the claim follow by standard Floer theory arguments.
_Remark 5.10_.: We will not review in detail the chain-level construction of \(HF^{*}(F)\) (see e.g. [11]), but we remind the reader that the chain complex \(CF^{*}(F)\) is a module over a certain Novikov field \(\mathbb{K}\). When \(c_{1}(Y)=0\), we will in fact work over the **Novikov field**
\[\mathbb{K}=\{\sum n_{j}T^{a_{j}}:a_{j}\in\mathbb{R},a_{j}\to\infty,n_{j}\in \mathbb{B}\}, \tag{42}\]
where \(T\) is a formal variable in grading zero, and \(\mathbb{B}\) is any choice of base field. In the monotone case, the same Novikov field can be used but \(T\) will have a non-zero grading [11, Sec.2A]. In other situations, e.g. the weakly-monotone setup, the Novikov field is more complicated [11, Sec.5B]. In general, the chain complex \(CF^{*}(F)\) is a free \(\mathbb{K}\)-module generated by the \(1\)-periodic orbits of a generic \(C^{2}\)-small time-dependent perturbation of \(F\) supported away from the region at infinity where \(F\) is linear. The perturbation of \(F\) ensures, among other things, that the chain complex is finitely generated (and recall that our condition on generic slopes at infinity ensured that no generators existed at infinity, so no perturbation is required there). As the perturbation does not change the slopes at infinity, up to a continuation isomorphism the group \(HF^{*}(F)\) does not depend on the choice of perturbation of \(F\). The Morse-Bott-Floer approach which avoids perturbing \(F\) is explained in [10, Appendix A].
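For instance (purely as an illustration of (42)): \[\sum_{j\geq 0}T^{j\sqrt{2}}\in\mathbb{K},\qquad\text{whereas}\qquad\sum_{j\geq 1}T^{1/j}\notin\mathbb{K},\] since in the first series the exponents \(a_{j}=j\sqrt{2}\to\infty\), while in the second they accumulate at \(0\).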
The above proof relied on the trick that (40) yields \(\Psi_{*}X_{F}=\lambda X_{H_{B}}=X_{\lambda H_{B}}\), a Hamiltonian vector field on \(B\). If \(F=c(H)\) were a function of \(H\) then, in that equation, \(\lambda=c^{\prime}(H(u(s,t)))\) becomes a function dependent on the domain coordinates \((s,t)\in\mathbb{R}\times S^{1}\). The maximum principle cannot be ensured for domain-dependent Hamiltonians unless \(\partial_{s}\lambda\leq 0\) (see [11, Thm.C.11] and [11, App.D]). Even assuming (6), instances where that maximum principle must fail can be constructed [10, Rmk.6.2.5] for the minimal resolution \(\mathfrak{M}=X_{\mathbb{Z}/5}\) of the Du Val singularity \(\mathfrak{M}_{0}=\mathbb{C}^{2}/(\mathbb{Z}/5)\cong V(XY-Z^{5})\subset\mathbb{C}^{3}\) (Example 3.23).
### Symplectic cohomology \(SH^{*}(Y,\varphi)\) for a \(\mathbb{C}^{*}\)-action \(\varphi\)
**Definition 5.11**.: By Theorem 5.9, to any symplectic \(\mathbb{C}^{*}\)-manifold \((Y,\omega,I,\varphi)\) over a convex base we may associate the \(\varphi\)**-symplectic cohomology**, given by the direct limit
\[SH^{*}(Y,\omega,I,\varphi):=\lim_{F\in\mathcal{H}(Y,\varphi)}HF^{*}(F). \tag{43}\]
This is a vector space over the Novikov field \(\mathbb{K}\) (the Floer continuation maps are \(\mathbb{K}\)-linear maps). When \(c_{1}(Y)=0\), there is a well-defined \(\mathbb{Z}\)-grading by the Robbin-Salamon index on \(HF^{*}(F)\) (by Lemma A.2, using that Floer continuation maps are grading-preserving). The above construction holds more generally if we just assume that \(Y\) is weakly convex at infinity with \(S^{1}\)-action \(\varphi\) as in Remark 5.6.
**Proposition 5.12**.: _The \(\varphi\)-symplectic cohomology \(SH^{*}(Y,\omega,I,\varphi)\) is a unital \(\mathbb{K}\)-algebra admitting a unital \(\mathbb{K}\)-algebra homomorphism from the quantum cohomology of \(Y\),_
\[c^{*}:QH^{*}(Y,I)\to SH^{*}(Y,\omega,I,\varphi).\]
_The product is the_ **pair-of-pants product**_, which arises as the direct limit of the pair-of-pants products_
\[HF^{*}(F_{\lambda_{1}})\otimes HF^{*}(F_{\lambda_{2}})\to HF^{*}(F_{\lambda_{1 }+\lambda_{2}}).\]
_If \(c_{1}(Y)=0\), those cohomology groups are \(\mathbb{Z}\)-graded and the morphisms are compatible with the grading._
Proof.: For a detailed discussion of the pair-of-pants product on symplectic cohomology and of gradings we refer to [11, 11]. The new ingredient here is to explain why pair-of-pants solutions \(u:S\to Y\) do not escape to infinity. This follows by the same projection trick we used in the proof of Theorem 5.9, by using the maximum principle for pair-of-pants solutions \(v=\Psi\circ u\) in symplectic manifolds that are convex at infinity such as \(B\)[11, Lemma C7 and the comments under Rmk.C.10]. Explicitly (as explained in [11, Appendix D]) in a complex coordinate \(z=s+\sqrt{-1}t\) on the pair-of-pants surface \(S\) the Floer equations are: \(\partial_{t}u=X_{F}\beta_{t}+I\partial_{s}u-IX_{F}\beta_{s}\) and \(\partial_{s}u=X_{F}\beta_{s}-I\partial_{t}u+IX_{F}\beta_{t}\), where \(\beta\) is a certain auxiliary one-form on \(S\). So the projection trick still applies: at infinity, the projected solution \(v=\Psi\circ u\) in \(B\) satisfies the same equations with \(X_{F}\) and \(I\) replaced respectively by \(\lambda X_{H_{B}}\) and \(I_{B}\). When we assume instead (6), the simpler maximum principle from [11, Appendix D] applies.
### Vanishing of symplectic cohomology when \(c_{1}(Y)=0\)
**Proposition 5.13**.: _For any symplectic \(\mathbb{C}^{*}\)-manifold \((Y,\omega,I,\varphi)\) over a convex base, with \(c_{1}(Y)=0\),_
\[SH^{*}(Y,\omega,I,\varphi)=0.\]
Proof.: The following mimics [11, Thm.48] and [12, Sec.2.6]. We compute \(SH^{*}=\varinjlim HF^{*}(\lambda H)\) as a direct limit using the cofinal sequence of admissible Hamiltonians67\(\lambda H\) for generic slopes \(\lambda\to\infty\). As \(c_{1}(Y)=0\), the maps in the direct limit are \(\mathbb{Z}\)-grading preserving maps. The claim follows by Proposition 4.2, as \(HF^{*}(\lambda H)\) is supported in arbitrarily negative degrees68 for large \(\lambda\).
Footnote 67: Technical remark: note that the 1-periodic orbits of \(\lambda H\) are typically degenerate since they are not isolated, so by convention \(HF^{*}(\lambda H)\) actually means that \(\lambda H\) is first perturbed (usually in a time-dependent way), or that a Morse–Bott model is used. In our case, the orbits are constant, so an autonomous perturbation suffices. By a standard continuation argument, the choice of (compactly supported) perturbation does not matter up to an isomorphism on Floer cohomology.
Footnote 68: A technical remark: in a Morse–Bott model for Floer cohomology, the generators are graded by an index that can differ from the index we calculated for our Morse–Bott manifolds \(\mathfrak{F}_{\alpha}\) of orbits by up to \(\dim_{\mathbb{R}}\mathfrak{F}_{\alpha}\leq 2\dim_{\mathbb{C}}Y\), so the indices still diverge as \(\lambda\to\infty\). There is also a generic perturbation of the Hamiltonian whose 1-periodic orbits are graded as in the Morse–Bott model, so the indices also diverge in a perturbation model for Floer cohomology (Proposition 6.7(4)).
## 6. Filtration on quantum cohomology
### PSS-morphism into symplectic cohomology
For \((Y,\omega,I,\varphi)\) a symplectic \(\mathbb{C}^{*}\)-manifold over a convex base, and \(F_{\lambda}\) any admissible Hamiltonian of slope \(\lambda\) in \(H\) at infinity, [10, 11] yields:
**Proposition 6.1**.: _When \(\delta>0\) is smaller than any period of \(X_{H},\) we have an isomorphism of \(\mathbb{K}\)-algebras_
\[HF^{*}(F_{\delta})\cong QH^{*}(Y).\]
_Note \(HF^{*}(F_{\delta})\cong H^{*}(Y;\mathbb{K})\) as \(\mathbb{K}\)-vector spaces. When \(c_{1}(Y)=0\), all those isomorphisms are \(\mathbb{Z}\)-graded._
Composing the above isomorphism with the continuation map from \(F_{\delta}\) to \(F_{\lambda}\), for generic \(\lambda>\delta\):
\[c_{\lambda}^{*}:QH^{*}(Y)\to HF^{*}(F_{\lambda}). \tag{44}\]
### Filtration of \(QH^{*}(Y)\) by ideals
Now suppose \(SH^{*}(Y,\varphi)=0\) (e.g. when \(c_{1}(Y)=0\), by Proposition 5.13). As \(SH^{*}(Y,\varphi)=0\) is a direct limit of continuation maps, every class in \(QH^{*}(Y)\cong HF^{*}(F_{\delta})\) must map to zero in \(HF^{*}(F_{\lambda})\) via (44) for large enough \(\lambda\). This determines a filtration:
**Definition 6.2**.: The \(\varphi\)**-filtration** is \(\mathcal{F}_{\lambda}^{\varphi}:=\cap\{\ker c_{\mu}^{*}:\mu>\lambda\text{ is generic}\}\), and \(\mathcal{F}_{\infty}^{\varphi}:=QH^{*}(Y)\).
_Remark 6.3_.: The above does not strictly require \(SH^{*}(Y,\varphi)=0\). The same construction yields a filtration on \(\ker(c^{*}=\varinjlim c_{\lambda}^{*}:QH^{*}(Y)\to SH^{*}(Y))\), with the same properties as those we explain below. One can then extend this to a filtration of \(QH^{*}(Y)\) via \(\ker c^{*}\subset QH^{*}(Y)=:\mathcal{F}_{\infty}^{\varphi}\).
Observe the following basic properties:
1. \(\mathcal{F}_{\lambda}^{\varphi}=0\) for \(\lambda\leq 0\).
2. \(\mathcal{F}_{\lambda}^{\varphi}\subset\mathcal{F}_{\lambda^{\prime}}^{\varphi}\) for \(\lambda\leq\lambda^{\prime}\).
3. \(\mathcal{F}_{\lambda}^{\varphi}\subset QH^{*}(Y)\) is a graded69\(\mathbb{K}\)-vector subspace (since \(c_{\mu}^{*}\) is grading-preserving).
Footnote 69: When \(c_{1}(Y)=0\), we refer to a choice of \(\mathbb{Z}\)-grading. In the monotone setting, gradings can be taken in a certain finite group. In general, however, there is only a \(\mathbb{Z}/2\)-grading.
**Proposition 6.4**.: \(\mathcal{F}_{\lambda}^{\varphi}=\mathcal{F}_{\lambda^{\prime}}^{\varphi}\) _if there is no outer70\(S^{1}\)-period in \((\lambda,\lambda^{\prime}]\)._
Footnote 70: recall Definition 3.17
Proof.: This is not immediate: it is a consequence of the \(F\)-filtration defined in Section 8, whose detailed construction is carried out in [10]. The Hamiltonians \(H_{\lambda},H_{\lambda^{\prime}}\) can be constructed to have the same \(1\)-orbits, by modifying \(H_{\lambda}\) in the region at infinity where it has slope \(\lambda\), and increasing the slope to \(\lambda^{\prime}\). In [10] we prove they admit a continuation map \(\psi_{\lambda^{\prime},\lambda}:HF^{*}(H_{\lambda})\to HF^{*}(H_{\lambda^{\prime}})\) which is an inclusion at the level of complexes. Indeed Floer solutions \(u\) which enter the region where \(H_{\lambda}\neq H_{\lambda^{\prime}}\) must cross a large region where the homotopy \(H_{s}\) has slope \(\lambda\), forcing the drop in \(F\)-filtration \(-\int dF(\partial_{s}u)\,ds\) to be larger than the a priori bound \(F(x_{-})-F(x_{+})\) at the asymptotics (compare Theorem 8.3 and Lemma 8.6). Once Floer continuation solutions \(u\) are trapped in the region where \(H_{s}=H_{\lambda}=H_{\lambda^{\prime}}\), they cannot be rigid due to the \(s\)-translation symmetry, so the \(u\) are \(s\)-independent and the continuation map is an inclusion. A similar argument shows that the Floer trajectories counted by the Floer differential for \(H_{\lambda^{\prime}}\) are also trapped in the region where \(H_{\lambda}=H_{\lambda^{\prime}}\), so the complexes \(CF^{*}(H_{\lambda})\) and \(CF^{*}(H_{\lambda^{\prime}})\) are identified. Finally, continuation maps compose compatibly on cohomology, so \(c_{\lambda^{\prime}}^{*}=\psi_{\lambda^{\prime},\lambda}\circ c_{\lambda}^{*}\), thus
\[\mathcal{F}_{\lambda^{\prime}}^{\varphi}=\ker c_{\lambda^{\prime}}^{*}=\ker \psi_{\lambda^{\prime},\lambda}\circ c_{\lambda}^{*}=\ker c_{\lambda}^{*}= \mathcal{F}_{\lambda}^{\varphi},\]
since we showed that \(\psi_{\lambda^{\prime},\lambda}\) is an isomorphism.
**Proposition 6.5**.: _The \(\varphi\)-filtration is a filtration by ideals on the \(\mathbb{K}\)-algebra \(QH^{*}(Y)\). In particular, if the unit \(1\in\mathcal{F}_{\lambda}^{\varphi}\) then \(\mathcal{F}_{\lambda}^{\varphi}=QH^{*}(Y)\) ("unity is the last to die")._
Proof.: By compatibility of continuation maps with pair-of-pants products [11], and Proposition 6.1, the diagram

\[\begin{CD}QH^{*}(Y)\otimes QH^{*}(Y)@>{\star}>>QH^{*}(Y)\\ @V{c_{\delta}^{*}\otimes c_{k}^{*}}VV@VV{c_{k+\delta}^{*}}V\\ HF^{*}(F_{\delta})\otimes HF^{*}(F_{k})@>>>HF^{*}(F_{k+\delta})\end{CD}\tag{45}\]

commutes, where the bottom map is the pair-of-pants product. Let \(q\in QH^{*}(Y)\) and \(x\in\mathcal{F}_{\lambda}^{\varphi}\). Then \(x\in\ker c_{\lambda+\delta}^{*}\) for any \(\delta>0\), so taking \(k=\lambda+\delta\) in the diagram we deduce \(q\star x\in\ker c_{\lambda+2\delta}^{*}\). As \(\delta>0\) was arbitrarily small, we deduce \(q\star x\in\mathcal{F}_{\lambda}^{\varphi}\).
_Remark 6.6_ (**Choice of coefficients)**.: The Novikov field \(\mathbb{K}\) from (42) is a \(\mathbb{B}\)-vector space, after making a choice of the base field \(\mathbb{B}\). It is flat over \(\mathbb{B}\), so \(H^{*}(Y;\mathbb{K})\cong H^{*}(Y;\mathbb{B})\otimes_{\mathbb{B}}\mathbb{K}\). This yields an (\(\omega\)- and \(\varphi\)-dependent) filtration on \(H^{*}(Y;\mathbb{B})\) by \(\mathbb{B}\)-vector subspaces for any field \(\mathbb{B}\). Floer/quantum cohomology is also defined over a Novikov ring \(\mathbb{K}\) using any underlying ring \(\mathbb{B}\)[11] (by Remark 5.7), and Proposition 6.5 still holds. If one forgoes the multiplicative structure (the filtration by ideals), one can more generally replace \(\mathbb{B}\) by any abelian group, yielding a filtration on \(H^{*}(Y;\mathbb{B})\) by abelian subgroups.
### Description of the map \(c_{\lambda}^{*}:QH^{*}(Y)\to HF^{*}(F_{\lambda})\)
**Proposition 6.7**.: _Let \(Y\) be a symplectic \(\mathbb{C}^{*}\)-manifold over a convex base. Let \(f_{\alpha}:\mathfrak{F}_{\alpha}\to\mathbb{R}\) be any Morse function on \(\mathfrak{F}_{\alpha}\). One can construct an admissible Hamiltonian \(\widetilde{F}:Y\to\mathbb{R}\) of slope \(\lambda\) by making an autonomous perturbation of \(F=\lambda H\) supported in disjoint neighbourhoods of the \(\mathfrak{F}_{\alpha}\), such that:_
1. \(\widetilde{F}\) _is Morse and its critical points are precisely the critical points_ \(p\) _of the_ \(f_{\alpha}\)_._
2. _The_ \(1\)_-periodic orbits of_ \(\widetilde{F}\) _are the (isolated) constant orbits_ \(x_{\alpha,p}\) _at the_ \(p\in\operatorname{Crit}(f_{\alpha})\)_._
3. _Their gradings_71 _in_ \(HF^{*}(\widetilde{F})\) _are_ \[|x_{\alpha,p}|=\mu_{f_{\alpha}}(p)+\mu_{\lambda}(\mathfrak{F}_{\alpha}),\] _where_ \(\mu_{f_{\alpha}}(p)\) _is the Morse index of_ \(p\) _and the_ \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) _are as in Section_ 4.2_._ Footnote 71: A \(\mathbb{Z}\)-grading if \(c_{1}(Y)=0\); a grading in a certain finite group if \(Y\) is monotone; a \(\mathbb{Z}/2\)-grading otherwise.
4. \(|x_{\alpha,p}|\to-\infty\) _as_ \(\lambda\to+\infty\)_._
5. _For generic choices of_ \(f_{\alpha}\)_, the function_ \(\widetilde{F}\) _is Morse-Smale, in particular the Morse trajectories of_ \(-\varepsilon_{\alpha}\nabla f_{\alpha}\) _in_ \(\mathfrak{F}_{\alpha}\) _are regular Morse trajectories of_ \(-\nabla\widetilde{F}\) _in_ \(Y\)_._
6. _There is a continuation isomorphism_ \[BHF^{*}(F;f_{\alpha})\cong HF^{*}(\widetilde{F}),\] _where_ \(BHF^{*}(F;f_{\alpha})\) _is the Morse-Bott-Floer cohomology of_ \(F:Y\to\mathbb{R}\) _using the auxiliary Morse functions_ \(f_{\alpha}\) _on_ \(\operatorname{Crit}(F)=\mathfrak{F}=\sqcup\mathfrak{F}_{\alpha}\)__\((\)_see_ _[_11_, Appendix A.1]_\()\)_._
7. _There is a convergent spectral sequence_ (46) \[\bigoplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha};\mathbb{K})[-\mu_{\lambda}(\mathfrak{F}_{\alpha})]\Rightarrow BHF^{*}(F;f_{\alpha})\cong HF^{*}(\widetilde{F}),\] _where_ \(H^{*}(\mathfrak{F}_{\alpha};\mathbb{K})[-\mu_{\lambda}(\mathfrak{F}_{\alpha})]\) _arises from the direct product of the Morse complexes for_ \(f_{\alpha}:\mathfrak{F}_{\alpha}\to\mathbb{R}\) _(shifted in grading) arising as the low-energy local Morse-Bott-Floer cohomology_ \((\)_involving only simple cascades in the Morse-Bott submanifolds_ \(\mathfrak{F}_{\alpha}\)_, see_ _[_11_, Appendix A.1]_\()\)_. Equivalently, it arises as the low-energy local Floer cohomology of_ \(\widetilde{F}\) _near_ \(\mathfrak{F}_{\alpha}\)_._
Proof.: Following [1], we pick bump functions \(\rho_{\alpha}\) supported near the \(\mathfrak{F}_{\alpha}\) and we define
\[\widetilde{F}=F+\sum\varepsilon_{\alpha}\rho_{\alpha}\widetilde{f}_{\alpha}, \tag{47}\]
where \(\widetilde{f}_{\alpha}\) is an extension of \(f_{\alpha}\), constant in normal directions to \(\mathfrak{F}_{\alpha}\) (after parametrising a tubular neighbourhood of \(\mathfrak{F}_{\alpha}\) by its normal bundle in \(Y\) via the exponential map). Claims (1)-(2) now follow from a standard perturbation argument. Since \(F\) has only constant \(1\)-periodic orbits (as \(\lambda\) is generic), the Morse-Bott property of \(F\) ensures that the Floer action functional of any sufficiently small autonomous perturbation \(\widetilde{F}\) of \(F\) is still Morse-Bott and its \(1\)-periodic orbits are still constant orbits at the critical points of \(\widetilde{F}\). So (1) follows for sufficiently small constants \(\varepsilon_{\alpha}>0\), and (2) follows from (1). We have \(RS(x_{\alpha,p},\widetilde{F})=RS(p,F)+\frac{1}{2}\dim_{\mathbb{R}}\mathfrak{F}_{\alpha}-\mu_{f_{\alpha}}(p)\) by [1, Sec.3.3], so we get (3) by our grading conventions (Section 4.2). Claim (4) follows by Corollary 4.5. Claim (5) is a standard transversality result, using that \(F\) is Morse-Bott for \(\mathfrak{F}_{\alpha}\). Claim (6) follows by a Morse-Bott-Floer continuation argument (see [11, Appendix A.2]). Claim (7) follows by an energy-spectral sequence argument (see [11, Appendix B.2], where in our current setting we do not need \(c_{1}(Y)=0\) as we can use constant cappings at constant orbits, and the triviality of the local system follows from the complex-linearity of the linearised \(S^{1}\)-flow). One can alternatively prove this by using [11, Theorem C.9] (without the need to use \(p_{t}\), so take \(p_{t}:=\operatorname{id}\)), to show that for small \(\varepsilon_{\alpha}>0\) the low-energy local Floer cohomology of \(\widetilde{F}\) near \(\mathfrak{F}\) is the direct product of the Morse cohomologies of the \(f_{\alpha}:\mathfrak{F}_{\alpha}\to\mathbb{R}\) with degree shifts by \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\); and this in turn has an energy-spectral sequence converging to \(HF^{*}(\widetilde{F})\).
_Remark 6.8_.: By the proof of Theorem 5.9 all Floer trajectories for \(F=\lambda H\) are trapped in a compact region of \(Y\), and those for \(\widetilde{F}\) in (47) are arbitrarily close to that compact region if we make the support of the \(\rho_{\alpha}\) sufficiently close to the \(\mathfrak{F}_{\alpha}\). Recall that in the Floer differential a Floer trajectory \(u\) is counted with a Novikov weight \(T^{E(u)}\), where \(E(u)\) is the energy. When \(u\) converges at the ends to critical points \(p_{\pm}\) of the relevant Hamiltonian \(G:Y\to\mathbb{R}\), then this energy is in fact determined by the ends and the spherical class \(A\in\pi_{2}(Y)\) of \(u:\mathbb{R}\times S^{1}\to Y\) extended continuously at \(\pm\infty\) using its asymptotics \(p_{\pm}\),
\[E(u):=\int_{\mathbb{R}\times S^{1}}\|\partial_{s}u\|^{2}\,ds\,dt=\omega[A]+G(p _{-})-G(p_{+}).\]
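The identity is worth unpacking (a sketch of the standard computation, under the conventions \(dG=\omega(\cdot,X_{G})\) and \(\partial_{s}u+I(\partial_{t}u-X_{G}(u))=0\); signs vary with other conventions): pointwise,

\[\|\partial_{s}u\|^{2}=\omega(\partial_{s}u,I\partial_{s}u)=\omega(\partial_{s}u,\partial_{t}u-X_{G}(u))=u^{*}\omega(\partial_{s},\partial_{t})-dG(\partial_{s}u),\]

and integrating over \(\mathbb{R}\times S^{1}\) the first term contributes \(\omega[A]\), while the second term integrates to \(G(p_{-})-G(p_{+})\) since \(G\) is autonomous and \(u\) converges to the constants \(p_{\pm}\) at the ends.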
From now on, ordinary cohomology is understood to be taken with coefficients in \(\mathbb{K}\).
**Corollary 6.9**.: _Suppose \(Y\) (equivalently, each \(\mathfrak{F}_{\alpha}\)) has no odd cohomology. Then for generic \(\lambda>0\) the vector space \(HF^{*}(\lambda H)\) is supported in even degrees, and the \(\mathbb{K}\)-linear map from (44) becomes:_
\[c_{\lambda}^{*}:QH^{*}(Y)\cong\bigoplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[- \mu_{\alpha}]\to\bigoplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\lambda}( \mathfrak{F}_{\alpha})]\cong HF^{*}(\lambda H). \tag{48}\]
_For small \(\lambda=\delta>0\), \(\mu_{\alpha}=\mu_{\delta}(\mathfrak{F}_{\alpha})\) and the isomorphism becomes (28), so this reformulates Proposition 6.1._
Proof.: In Proposition 6.7.(7) the \(E_{1}\)-page is concentrated in even total degrees because \(H^{*}(\mathfrak{F}_{\alpha})\) lives in even degrees by assumption, and by Lemma 4.4\(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) is even for generic \(\lambda\). The differentials in the spectral sequence increase the total grading by one, so all differentials vanish after the \(E_{1}\)-page, so the spectral sequence has already converged. The claim about \(\mu_{\alpha}=\mu_{\delta}(\mathfrak{F}_{\alpha})\) is Corollary 4.7.
_Remark 6.10_.: Suppose \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\leq\mu_{\alpha}\) for all \(\lambda>0\), \(\alpha\) (e.g. see Proposition 4.12). Then even if \(Y\) had odd cohomology, the above argument would imply that \(HF^{*}(\lambda H)\) is supported in degrees
\[*\in\left[\min_{\alpha}\mu_{\lambda}(\mathfrak{F}_{\alpha}),\max_{\alpha} \left(\mu_{\lambda}(\mathfrak{F}_{\alpha})+2|\mathfrak{F}_{\alpha}|\right) \right]\subset\left[\min_{\alpha}\mu_{\lambda}(\mathfrak{F}_{\alpha}),\max_{ \alpha}\left(\mu_{\alpha}+2|\mathfrak{F}_{\alpha}|\right)\right],\]
where \(|V|:=\dim_{\mathbb{C}}V\). So \(HF^{*}(\lambda H)\) has grading \(\leq\) the top supported grading of \(H^{*}(Y)\), by Lemma 2.22.
_Remark 6.11_ (Rank drops of \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) yield lower bounds on the filtration).: Observe that the total ranks of the vector spaces \(QH^{*}(Y)\cong\oplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\alpha}]\) and \(HF^{*}(\lambda H)\cong\oplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\lambda}(\mathfrak{F}_{\alpha})]\) agree, but the map \(c_{\lambda}^{*}\) in (48) is grading-preserving. Thus the more \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) drops below \(\mu_{\alpha}\) as \(\lambda\) increases (without a compensating jump from a different value of \(\alpha\)), the better the lower bound on the rank of \(\ker c_{\lambda}^{*}\) (this also works without the odd-cohomology vanishing assumption of Corollary 6.9, by considering Proposition 6.7.(7)). This yields Equation (18) and Equation (19).
### Computation of the continuation maps
**Proposition 6.12**.: _Suppose \(H^{*}(Y)\) lies in even degrees. If for each weight \(k\) of \(\mathfrak{F}_{\alpha}\) in (10) there are no integers in the interval \((|k|\lambda,|k|\gamma)\), then \(\mu_{\lambda}(\mathfrak{F}_{\alpha})=\mu_{\gamma}(\mathfrak{F}_{\alpha})\) and the part of the map (23) given by \(H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\lambda}(\mathfrak{F}_{\alpha})]\to H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\gamma}(\mathfrak{F}_{\alpha})]\) is the identity map up to higher order \(T\) terms._
Proof.: We will use the Morse-Bott model for Floer cohomology, which counts "cascades": this is explained in detail in the Appendix in [10]. Continuation maps in the Morse-Bott model count cascades in which a Floer continuation cylinder must be present. The homotopy of Hamiltonians used is \(\lambda_{s}H\) where \(\lambda_{s}=\gamma\) for \(s\ll 0\); \(\lambda_{s}=\lambda\) for \(s\gg 0\); and \(\partial_{s}\lambda_{s}\leq 0\). The latter condition implies that \(\partial_{s}H_{s}\leq 0\), so continuation cascades will be counted with a factor given by a non-negative power of \(T\), and the constant continuation solution is the only one counted with \(T^{0}\), by the following remark.
_Remark 6.13_.: If \(\partial_{s}H_{s}\leq 0\), the weight \(T^{E_{0}(u)}\) with which Floer continuation solutions \(u\) are counted involves a non-negative quantity \(E_{0}(u)\geq 0\) (e.g. see [10, Sec.3.3]) related to the energy of \(u\) by \(E(u)=E_{0}(u)+\int_{\mathbb{R}\times S^{1}}\partial_{s}H_{s}(u)\,ds\wedge dt\). Thus, \(E_{0}(u)\geq E(u)>0\) unless \(u\) is an \(s\)-independent \(1\)-orbit.
Thus, to prove the claim, it suffices to explain why the constant cascade is regular. We consider the constant continuation cylinder at each constant \(1\)-orbit at \(x\in\operatorname{Crit}(f_{\alpha})\subset\mathfrak{F}_{\alpha}\), where \(f_{\alpha}:\mathfrak{F}_{\alpha}\to\mathbb{R}\) is the auxiliary Morse function used in the Morse-Bott-Floer model. The local model near \(x\) is described by the weight decomposition:
\[\mathbb{C}^{d}\oplus\bigoplus_{i}\mathbb{C}_{w_{i}}\]
where \(d=\dim_{\mathbb{C}}\mathfrak{F}_{\alpha}\), and \(\mathbb{C}_{w_{i}}\) denotes a copy of \(\mathbb{C}\) with the weight \(w_{i}\neq 0\) action. Thus
\[H_{s}=\pi\lambda_{s}\sum w_{i}|z_{i}|^{2}.\]
The Floer equation therefore decouples, and we reduce to considering the continuation map in \(\mathbb{C}\),
\[HF^{*}(\mathbb{C};\pi w_{i}\lambda|z_{i}|^{2})\to HF^{*}(\mathbb{C},\pi w_{i} \gamma|z_{i}|^{2})\]
using the homotopy \(\pi w_{i}\lambda_{s}|z_{i}|^{2}\). One approach is to verify regularity of the constant Floer continuation solution at \(0\) directly. Indirectly, we just need to show that the map, viewed as a map of local low-energy Floer cohomologies, is an isomorphism. But this is known:72 the map is an isomorphism precisely if the \(1\)-periodic \(S^{1}\)-action
on \(\mathbb{C}\) does not have periodic orbits with period inside \((w_{i}\lambda,w_{i}\gamma)\), equivalently \((w_{i}\lambda,w_{i}\gamma)\cap\mathbb{Z}=\emptyset\). The latter holds by the assumption.
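To make the period condition concrete (a sketch in the local model, under standard conventions and up to the sign of the rotation): on \((\mathbb{C},\omega_{\mathrm{std}})\) the Hamiltonian \(h(z)=\pi w\lambda|z|^{2}\) generates the rotation

\[z\mapsto e^{2\pi iw\lambda t}z,\]

so its non-constant \(1\)-periodic orbits occur precisely when \(w\lambda\in\mathbb{Z}\). Hence, as the slope varies between \(\lambda\) and \(\gamma\) along the homotopy, no \(1\)-periodic orbit is created or destroyed in the factor \(\mathbb{C}_{w_{i}}\) exactly when \((w_{i}\lambda,w_{i}\gamma)\cap\mathbb{Z}=\emptyset\), which is the hypothesis of the Proposition.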
Let \(\lambda_{\alpha}:=\min\{\frac{1}{|k|}:h_{k}^{\alpha}\neq 0\text{ for }k\in \mathbb{Z}\backslash\{0\}\}=1/(\text{maximal absolute weight of }\mathfrak{F}_{\alpha})\) (see Corollary 4.7).
**Lemma 6.14**.: \(\lambda_{min}=1\Leftrightarrow\mu=\operatorname{codim}_{\mathbb{C}}\mathfrak{F }_{\min}\Leftrightarrow(\text{all nonzero weights of }\mathfrak{F}_{\min}\text{ are }+1)\)_._
Proof.: There are \(\dim_{\mathbb{C}}Y-\dim_{\mathbb{C}}\mathfrak{F}_{\min}\) non-zero weights for \(\mathfrak{F}_{\min}\), they are positive integers (as \(\mathfrak{F}_{\min}=\min H\)) and their sum is \(\mu\) (Lemma 4.1). So \(\mu=\dim_{\mathbb{C}}Y-\dim_{\mathbb{C}}\mathfrak{F}_{\min}\) holds precisely when all the non-zero weights equal \(+1\), which in turn is equivalent to \(\lambda_{\min}=1\), since \(\lambda_{\min}\) is the reciprocal of the maximal absolute weight of \(\mathfrak{F}_{\min}\).
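To illustrate the dichotomy numerically: if the non-zero weights of \(\mathfrak{F}_{\min}\) were \((1,1,2)\), then

\[\mu=1+1+2=4>3=\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\min},\qquad\lambda_{\min}=\tfrac{1}{2},\]

whereas weights \((1,1,1)\) give \(\mu=3=\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\min}\) and \(\lambda_{\min}=1\).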
**Corollary 6.15**.: _If \(H^{*}(Y)\) lies in even degrees, then_
\[\mathcal{F}_{\lambda}^{\varphi}\subset\bigoplus\ \{H^{*}(\mathfrak{F}_{ \alpha})[-\mu_{\alpha}]:\lambda_{\alpha}\leq\lambda\}.\]
_If \(c_{1}(Y)=0\) then, without assumptions on \(H^{*}(Y)\), \(1\notin\mathcal{F}_{\lambda}^{\varphi}\) if \(\lambda<\lambda_{\min}\) and \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\geq 0\) for all \(\alpha\)._
Proof.: The first claim follows by Proposition 6.12: \(\varphi_{\lambda}^{*}:H^{*}(\mathfrak{F}_{\beta})[-\mu_{\beta}]\to HF^{*}( \lambda H)\) is injective for \(\lambda<\lambda_{\beta}\), so \(\mathcal{F}_{\lambda}^{\varphi}\subset\bigoplus_{\alpha\neq\beta}H^{*}( \mathfrak{F}_{\alpha})[-\mu_{\alpha}]\) for \(\lambda<\lambda_{\beta}\). In the second claim, the unit is a cycle in \(QH^{*}(Y)\) so its image \(c_{\lambda}^{*}(1)=T^{0}\cdot 1+T^{>0}\)-terms (by the proof of Proposition 6.12, using \(\lambda<\lambda_{\alpha}\)) is also a cycle in \(CF^{*}(\lambda H)\). But we need to ensure that it is not a boundary. The conditions \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\geq 0\) ensure no chains in \(CF^{*}(\lambda H)\) have negative grading, in particular grading \(-1\) (which could give rise to a non-trivial differential killing the unit due to high-energy Floer trajectories).
**Corollary 6.16**.: _Let \(Y\) be a weight-2 CSR with \(\mathfrak{F}_{\min}=\{\text{point}\}\), or a weight-1 CSR (so \(\dim_{\mathbb{C}}\mathfrak{F}_{\min}=\frac{1}{2}\dim_{\mathbb{C}}Y\)). Then \(\mathcal{F}_{\lambda}^{\varphi}\subset\bigoplus_{\alpha\neq\min}H^{*}( \mathfrak{F}_{\alpha})[-\mu_{\alpha}]\) for \(\lambda<1\), in particular \(1\in H^{0}(Y)\) survives till time-1._
Proof.: Let \(d:=\frac{1}{2}\dim_{\mathbb{C}}Y\). For a weight-\(s\) CSR, \(\mu=sd\) (Lemma 7.9). When \(s=1\), \(\mathfrak{F}_{\min}\) is an \(\omega_{J}\)-Lagrangian (the "minimal Lagrangian" of [22]) so \(\dim_{\mathbb{C}}\mathfrak{F}_{\min}=d\). By the assumptions, Lemma 6.14 applies, so the claim follows from Corollary 6.15 (\(\lambda<1\) excludes \(\alpha=\min\) as \(\lambda_{\min}=1\)).
**Theorem 6.17** ([14, Sec.1.12]).: _73 The \(S^{1}\)-action \(\varphi_{t}:Y\to Y\) induces a commutative diagram:_
Footnote 73: it is not difficult to verify that the construction and properties of this continue to hold in our setup, by using the maximum principle from Section 5.2, and to carry out [14, Sec.5.4] we use the same monotonicity-lemma arguments for the projection to the convex base as in [23], when we show that Floer solutions consume \(F\)-filtration when they cross large linearity regions. For this part of the argument, we need the assumption that \(B^{\text{out}}\) is geometrically bounded.
\[\begin{CD}SH^{*}(Y)@>{\mathcal{R}_{\widetilde{\varphi}}}>{\sim}>SH^{*+2I(\widetilde{\varphi})}(Y)\\ @A{\varinjlim}AA@AA{\varinjlim}A\\ HF^{*}(\lambda H)@>>>HF^{*+2I(\widetilde{\varphi})}(\lambda H)\\ @A{c_{\lambda}^{*}}AA@AA{c_{\lambda}^{*}}A\\ QH^{*}(Y)@>{r_{\widetilde{\varphi}}}>>QH^{*+2I(\widetilde{\varphi})}(Y)\end{CD}\]

_where the middle horizontal map is the composite of \(\mathcal{S}_{\widetilde{\varphi}}:HF^{*}(\lambda H)\to HF^{*+2I(\widetilde{\varphi})}((\lambda-1)H)\) with the continuation map \(HF^{*+2I(\widetilde{\varphi})}((\lambda-1)H)\to HF^{*+2I(\widetilde{\varphi})}(\lambda H)\)._
_Here \(\mathcal{R}_{\widetilde{\varphi}}\) and \(\mathcal{S}_{\widetilde{\varphi}}\) are \(\mathbb{K}\)-module isomorphisms, and \(r_{\widetilde{\varphi}}\) is a \(\mathbb{K}\)-module homomorphism, given as quantum product by a (typically non-invertible) Gromov-Witten invariant \(Q_{\varphi}:=r_{\widetilde{\varphi}}(1)\in QH^{2I(\widetilde{\varphi})}(Y)\), and \(\mathcal{R}_{\widetilde{\varphi}}\) is pair-of-pants product by the invertible element \(c^{*}(Q_{\varphi})\in SH^{2I(\widetilde{\varphi})}(Y)\)._
_The construction depends on a certain choice of lift \(\widetilde{\varphi}\) of the \(S^{1}\)-action, and this choice can be made [14, Sec.3.1 and Sec.7.8] so that \(I(\widetilde{\varphi})=\mu\) is the Maslov index of \(\varphi\) from Section 4.1._
_It follows that \(c^{*}:QH^{*}(Y)\to SH^{*}(Y)\) is a quotient map, inducing \(\mathbb{K}\)-algebra isomorphisms_
\[SH^{*}(Y)\cong QH^{*}(Y)/E_{0}(Q_{\varphi})\cong QH^{*}(Y)_{Q_{\varphi}},\]
_where \(E_{0}(Q_{\varphi})\) is the generalised \(0\)-eigenspace of quantum product by \(Q_{\varphi}\), and \(QH^{*}(Y)_{Q_{\varphi}}\) denotes localisation at \(Q_{\varphi}\) of the \(\mathbb{K}\)-algebra \(QH^{*}(Y).\) For the latter isomorphism, see [12, Lem.4.3]._
_The association of \(\varphi\) to the above \(\mathbb{K}\)-homomorphisms respects group multiplication, so \(Q_{\varphi^{N}}=Q_{\varphi}^{\star N}.\)_
When \(QH^{*}(Y)\) is ordinary cohomology or when74\(c_{1}(Y)=0\), it follows that \(SH^{*}(Y)=0\), because \(\mathcal{R}_{\tilde{\varphi}}\) is a \(\mathbb{K}\)-module isomorphism of non-zero degree \(2I(\widetilde{\varphi})=2\mu>0\) on a finite rank \(\mathbb{K}\)-module \(SH^{*}(Y)\) (as it is a quotient of \(QH^{*}(Y)\), which has finite rank). This gives an alternative proof of Proposition 5.13.
Footnote 74: The theorem holds in the weak+ monotone setup [12, Sec.2.2], being cautious that the grading is no longer a \(\mathbb{Z}\)-grading outside of the \(c_{1}(Y)=0\) setup.
The Theorem implies that the full-rotation continuation maps
\[c_{N+\delta}^{*}:QH^{*}(Y)\cong HF^{*}(F_{\delta})\to HF^{*}(F_{N+\delta}),\]
can be identified with quantum product \(N\) times by \(Q_{\varphi}\in QH^{2\mu}(Y)\):
\[r_{\tilde{\varphi}}^{N}=Q_{\varphi}^{\star N}\star\cdot\,:QH^{*}(Y)\to QH^{*+2 N\mu}(Y).\]
**Corollary 6.18**.: _For any integer \(N>0\), \(\mathcal{F}_{N}^{\varphi}=\ker(Q_{\varphi}^{\star N}\star\cdot),\) thus \(\mathcal{F}_{N}^{\varphi}=E_{0}(Q_{\varphi})\) for large \(N\). In particular, \(\mathcal{F}_{N}^{\varphi}=QH^{*}(Y)\) if \(Q_{\varphi}^{\star N}=0\). (Compare Corollary 1.23). \(\blacksquare\)_
**Lemma 6.19**.: _Let \(Y\) be a symplectic \(\mathbb{C}^{*}\)-manifold, and \(\mathfrak{F}_{\min}\) its minimal component (Lemma 2.22)._
\[\operatorname{PD}[\mathfrak{F}_{\min}]\neq 0\in H^{*}(Y)\iff e(U_{\min}) \neq 0\in H^{*}(\mathfrak{F}_{\min}), \tag{49}\]
_where \(e(U_{\min})\) is the Euler class of the normal bundle of \(\mathfrak{F}_{\min}\) (see Lemma 3.8)._
_In particular, \(\dim_{\mathbb{C}}\mathfrak{F}_{\min}\geq\frac{1}{2}\dim_{\mathbb{C}}Y\) is a necessary condition for the non-vanishing (49)._
Proof.: The last claim follows for degree reasons: \(e(U_{\min})\in H^{2\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\min}}( \mathfrak{F}_{\min})\). The identification of \(U_{\min}:=W^{s}_{-\nabla H}(\mathfrak{F}_{\min})\) with the normal bundle of \(\mathfrak{F}_{\min}\) follows from Proposition 3.11 (letting \(m=1\)).
By Poincare-Lefschetz duality, \(\operatorname{PD}[\mathfrak{F}_{\min}]\neq 0\in H^{*}(Y)\) if and only if the locally finite (lf) cycle \([\mathfrak{F}_{\min}]\in H^{lf}_{*}(Y)\) has non-trivial intersection number with some cycle \(C\in H_{*}(Y)\). We can perturb the \(\mathfrak{F}_{\min}\) in \(U_{\min}\) as the image of a generic smooth section of the normal bundle to \(\mathfrak{F}_{\min}\). We can perturb the metric in the complement of a small neighbourhood of \(\mathfrak{F}_{\min}\) to make the flow of \(H\) Morse-Smale, and we can perturb \(H\) to make it Morse away from \(\mathfrak{F}_{\min}\). Then by Morse theory the cycles in \(H_{*}(Y)\) that do not come from the inclusion \(H_{*}(\mathfrak{F}_{\min})\to H_{*}(Y)\) can be represented as pseudo-cycles by linear combinations of the unstable manifolds of the critical points of \(H\) not in \(\mathfrak{F}_{\min}\) (the algebro-geometrical analogue of this is described in [23, Sec.2.2]). These unstable manifolds correspond to submanifolds75 in \(N^{+}\) of real codimension at least \(2\), where \(N^{+}\subset TY|_{\mathfrak{F}_{\min}}\) is the subbundle where \(\operatorname{Hess}(H)\) is positive definite. It follows that we can construct a smooth section \(\mathfrak{F}_{\min}\to U_{\min}\) of the normal bundle that avoids the closures of those pseudo-cycles. Therefore, we have built an lf-homologous perturbation of \([\mathfrak{F}_{\min}]\) which can only intersect the cycles in \(\mathfrak{F}_{\min}\). This implies that \([\mathfrak{F}_{\min}]\neq 0\in H^{lf}_{*}(Y)\) if and only if \([\mathfrak{F}_{\min}]\neq 0\in H^{lf}_{*}(U_{\min})\). Finally, by the proof of [12, Thm.67], PD of \([\mathfrak{F}_{\min}]\in H^{lf}_{2\operatorname{dim}_{\mathbb{C}}\mathfrak{F}_{ \min}}(U_{\min})\) equals the pull-back in \(H^{2\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\min}}(U_{\min})\) of \(e(U_{\min})\). \(\blacksquare\)
Footnote 75: using the Morse-Smale property, and for the codimension claim we use that the dimension of the unstable manifold of a Morse critical point \(p\) equals its Morse index, and that Morse indices will be \(\leq 2\dim_{\mathbb{C}}Y-2\) by Lemma 2.22 and non-compactness of \(Y\).
**Proposition 6.20**.: _Suppose \(Y\) is Kahler with \(c_{1}(Y)=0\) (it also holds for \(Y\) non-compact Fano76) and_
Footnote 76: see [12, Lem.1.6] or Section 4.4 for how to correctly interpret the Maslov index when \(c_{1}(Y)\neq 0\).
\[\mu=\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\min}\ \ \text{and}\ \ \ \operatorname{PD}[\mathfrak{F}_{\min}]\neq 0\in H^{2\mu}(Y).\]
_Then \(Q_{\varphi}=\operatorname{PD}[\mathfrak{F}_{\min}]+(\text{linearly independent classes})+(\text{terms with }T^{>0})\neq 0\in QH^{2\mu}(Y).\)_
Proof.: This will rely on [12, Lem.1.6] (that was stated in the non-compact Fano case, but the written proof works also in the non-compact CY case). The contribution \(\mathfrak{F}_{\min}\) arises from certain constant sections counted by the GW-interpretation of \(Q_{\varphi}\), but there can be other moduli spaces of constant sections that sweep out \(\operatorname{lf}\)-cycles inside the other fixed components \(\mathfrak{F}_{\beta}\) (after cutting down the
moduli space using obstruction bundle techniques [12, Sec.8.4-8.6]). Those other lf-cycles have zero intersection number with (compact) cycles in \(\mathfrak{F}_{\min},\) being disjoint, unlike the lf-cycle \([\mathfrak{F}_{\min}]\) which gives a non-zero intersection number by the proof of Lemma 6.19. This implies that \(\operatorname{PD}[\mathfrak{F}_{\min}]\) is linearly independent from the other contributions coming from constant sections. The non-constant sections arise with a factor of \(T^{>0}\) so are automatically linearly independent over the base field \(\mathbb{B}\) (not the Novikov field \(\mathbb{K},\) see Remark 6.6). Thus \(Q_{\varphi}\neq 0.\)
**Corollary 6.21**.: _For any weight-\(1\) CSR, \(Q_{\varphi}\neq 0\), so \(\mathcal{F}_{1}^{\varphi}\neq H^{*}(Y).\) For any weight-\(s\) CSR with \(s\geq 2\), \(Q_{\varphi}=0\), so \(\mathcal{F}_{1}^{\varphi}=H^{*}(Y).\) In addition, \(\mathcal{F}_{\lambda}^{\varphi}\neq H^{*}(Y)\) for \(\lambda<1\) for weight-\(2\) CSRs with \(\mathfrak{F}_{\min}=\{\operatorname{point}\}.\)_
Proof.: For \(s\geq 2\), \(Q_{\varphi}\in QH^{2\mu}(Y)=0\) as \(2\mu=s\dim_{\mathbb{C}}Y>\dim_{\mathbb{C}}Y\), using Lemma 7.9 and Corollary 7.4. For \(Y\) a weight-\(1\) CSR, recall from Corollary 6.16 that \(\mathfrak{F}_{\min}\) is an \(\omega_{J}\)-Lagrangian and \(\mu=\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\min}.\) As \(\mathfrak{F}_{\min}\) is \(\omega_{J}\)-Lagrangian, its normal and cotangent bundle can be identified, so \(e(U_{\min})=-e(\mathfrak{F}_{\min}).\) The latter class is non-zero as the Euler characteristic of \(\mathfrak{F}_{\min}\) is non-zero (its cohomology lies in even degree, by Lemma 2.22 and Corollary 7.4). The claim follows by (49) and the previous Proposition.
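For instance (anticipating the CSR examples of Section 7, and consistent with Corollary 6.23 below): for \(T^{*}\mathbb{P}^{1}\) with the fibre-scaling action, a weight-\(1\) CSR, the zero section \(\mathfrak{F}_{\min}=\mathbb{P}^{1}\) is the \(\omega_{J}\)-Lagrangian minimal component, and

\[\chi(\mathbb{P}^{1})=2\neq 0\ \Rightarrow\ e(U_{\min})=-e(\mathfrak{F}_{\min})\neq 0\ \Rightarrow\ Q_{\varphi}\neq 0,\]

so \(\mathcal{F}_{1}^{\varphi}\neq H^{*}(T^{*}\mathbb{P}^{1})\).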
An alternative proof of Corollary 6.21 follows by non-degeneracy of the intersection pairing which holds for CSRs (Proposition 7.5). When the intersection pairing is trivial, we get the opposite:
**Proposition 6.22**.: _If \(c_{1}(Y)=0\) and the intersection product \(H_{2\mu}(Y)\otimes H_{2\dim_{\mathbb{C}}Y-2\mu}(Y)\to\mathbb{K}\) is trivial, then \(Q_{\varphi}=0.\)_
Proof.: Viewing \(Q_{\varphi}\) as a class in \(H_{2\dim_{\mathbb{C}}Y-2\mu}^{lf}(Y)\) by Poincare-Lefschetz duality, it is a \(\mathbb{K}\)-linear combination of lf-cycles of dimension \(2\dim_{\mathbb{C}}Y-2\mu\) (using that \(c_{1}(Y)=0\), so the Novikov parameter is in degree zero). But those lf-cycles are all compact cycles as (by definition) they arise from evaluation maps on compact moduli spaces [12]. Indeed the lf-cycles are supported close to \(\operatorname{Core}(Y)\), as the maximum principle prevents the sections counted by \(Q_{\varphi}\) from entering the region at infinity where it applies. The assumption on the triviality of the intersection product implies that \(Q_{\varphi}\) has zero intersection product with \(H_{2\mu}(Y)\). On the other hand, the intersection product \(H_{2\mu}(Y)\otimes H_{2\dim_{\mathbb{C}}Y-2\mu}^{lf}(Y)\to\mathbb{K}\) is non-degenerate by Poincare-Lefschetz duality. Therefore \(Q_{\varphi}=0.\)
Another example where a global topological property impacts the filtration \(\mathcal{F}\) is the following. Note that the Atiyah-Bott filtration (which is just \(0\subset H^{*}(X)\) here) does not distinguish the two cases.
**Corollary 6.23**.: _For \(Y=T^{*}X,\) the filtration with respect to the fibre-contraction action is_
\[0\subset\mathcal{F}_{1}=H^{*}(X),\text{ when }\chi(X)=0,\]
\[0\subset\mathcal{F}_{1}=H^{\geq 1}(X)\subset\mathcal{F}_{2}=H^{*}(X),\text{ when } \chi(X)\neq 0,\]
_assuming that \(Y\) is a symplectic \(\mathbb{C}^{*}\)-manifold over a convex base (e.g. for all projective varieties \(X\))._
Proof.: Recall the Euler characteristic \(\chi(X)=-[X]\cdot[X],\) where the intersection number is computed in \(T^{*}X.\) The Maslov index \(\mu=\dim_{\mathbb{C}}X\) (the weight decomposition at \(X\) is \(H_{0}\oplus H_{1}\)) satisfies the assumption of Proposition 6.22 for \(d=\dim_{\mathbb{R}}X.\) Thus, when \(\chi(X)=0,\) we get \(Q_{\varphi}=0,\) and \(\mathcal{F}_{1}=H^{*}(X).\) Notice that the action is free so there are no intermediate filtration levels between \(\mathcal{F}_{0}=0\) and \(\mathcal{F}_{1}.\) Assuming \(\chi(X)\neq 0,\) by Proposition 6.20 we have \(Q_{\varphi}\neq 0,\) thus \(\mathcal{F}_{1}\neq H^{*}(X).\) Thus, for degree reasons, \(\mathcal{F}_{1}\supset H^{\geq 1}(X)\) and \(\mathcal{F}_{2}=H^{*}(X).\) For the claim about projective varieties, see Example 1.6.
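Two worked instances of the corollary (assuming in each case that the hypotheses hold, e.g. viewing \(S^{2}\) as \(\mathbb{P}^{1}\) and \(T^{2}\) as an elliptic curve): for \(X=T^{2}\) we have \(\chi(T^{2})=0\), giving \(0\subset\mathcal{F}_{1}=H^{*}(T^{2})\); for \(X=S^{2}\) we have \(\chi(S^{2})=2\neq 0\), giving

\[0\subset\mathcal{F}_{1}=H^{\geq 1}(S^{2})=H^{2}(S^{2})\subset\mathcal{F}_{2}=H^{*}(S^{2}).\]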
There is another family of spaces to which Proposition 6.22 can be applied. Consider the moduli space \(\mathcal{M}_{G}(d,g)\) of \(G\)-Higgs bundles of degree \(d\) over a Riemann surface of genus \(g\) (e.g. see [12]), for \(G\in\{GL(n,\mathbb{C}),SL(n,\mathbb{C})\},\) and \(d\geq 0\) coprime to \(n\). Recall these are hyperkahler manifolds satisfying the weight-\(1\) condition \(\varphi_{t}^{*}\omega_{\mathbb{C}}=t\,\omega_{\mathbb{C}}\) for the canonical \(\mathbb{C}^{*}\)-action \(\varphi.\)
**Corollary 6.24**.: \(Q_{\varphi}=0\) _for \(\mathcal{M}_{G}(d,g)\), so in particular \(\mathcal{F}_{1}^{\varphi}=H^{*}(\mathcal{M}_{G}(d,g)).\)_
Proof.: Let \(d:=\dim_{\mathbb{C}}(\mathcal{M})\). We have \(c_{1}(\mathcal{M})=0\) and \(2\mu=d\) by the same proofs as in Lemma 7.2 and Lemma 7.9. So \(Q_{\varphi}=0\) follows by Proposition 6.22 together with the fact (due to [12] for \(SL(n,\mathbb{C})\) and to [12] for \(GL(n,\mathbb{C})\)) that \(H_{d}(\mathcal{M})\otimes H_{d}(\mathcal{M})\to\mathbb{K}\) is trivial (using \(2\mu=d=2d-2\mu\)).
### \(S^{1}\)-equivariant symplectic cohomology
Applying the methods from [14], \(S^{1}\)-equivariant symplectic cohomology \(ESH^{*}(Y,\varphi)\) is a \(\mathbb{K}[u]\)-module, with a canonical \(\mathbb{K}[u]\)-module homomorphism
\[Ec^{*}:EH^{*}(Y)\cong H^{*}(Y)\otimes_{\mathbb{K}}\mathbb{F}\to ESH^{*}(Y,\varphi).\]
Here \(u\) is a degree two formal variable, and at chain level each \(1\)-orbit contributes a copy of the \(\mathbb{K}[u]\)-module \(\mathbb{F}:=\mathbb{K}(\!(u)\!)/u\mathbb{K}[\![u]\!]\cong H_{-*}(\mathbb{CP}^{ \infty})\) where we identify \(u^{-j}=[\mathbb{CP}^{j}]\), and \(H^{*}(\mathbb{CP}^{\infty})=\mathbb{K}[u]\) acts by the nilpotent cap product action. We recall the notation for locally finite \(S^{1}\)-equivariant homology,
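Concretely (a routine check of the module structure): \(\mathbb{F}\) has \(\mathbb{K}\)-basis the classes \(u^{-j}\), \(j\geq 0\), and

\[u\cdot u^{-j}=u^{-(j-1)}\ (j\geq 1),\qquad u\cdot u^{0}=0,\]

so \(u\) acts surjectively on \(\mathbb{F}\) with kernel the \(\mathbb{K}\)-span of \(u^{0}\); this is the mechanism behind the Gysin-sequence argument in the proof of Theorem 6.25 below.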
\[EH^{*}(Y):=H^{lf,S^{1}}_{2\dim_{\mathbb{C}}Y-*}(Y),\]
which in this case becomes \(H^{*}(Y)\otimes_{\mathbb{K}}\mathbb{F}\) as it arises from constant \(1\)-orbits, and \(S^{1}\) is only acting by \(S^{1}\)-reparametrisation on \(1\)-orbits (not on the space \(Y\)). Theorem 1.19 becomes:
**Theorem 6.25**.: _There is an \(\mathbb{R}_{\infty}\)-ordered filtration by graded \(\mathbb{K}[u]\)-submodules of \(H^{*}(Y)\otimes_{\mathbb{K}}\mathbb{F}\),_
\[E\mathcal{F}_{p}^{\varphi}:=\bigcap_{\text{generic }\lambda>p}\left(\ker Ec_{\lambda}^{*}:H^{*}(Y)\otimes_{\mathbb{K}}\mathbb{F}\to EHF^{*}(H_{\lambda})\right),\qquad E\mathcal{F}_{\infty}^{\varphi}:=H^{*}(Y)\otimes_{\mathbb{K}}\mathbb{F}, \tag{50}\]
_where \(Ec_{\lambda}^{*}\) is an equivariant continuation map, a grading-preserving \(\mathbb{K}[u]\)-linear map._
_In general, \(\mathcal{F}_{\lambda}^{\varphi}\subset E\mathcal{F}_{\lambda}^{\varphi}\). If \(H^{*}(Y)\) lies in even degrees (e.g. CSRs), then_
\[\mathcal{F}_{\lambda}^{\varphi}=QH^{*}(Y)\cap E\mathcal{F}_{\lambda}^{\varphi}.\]
Proof.: The first part is analogous to the non-equivariant case, so we will just explain the second part. In general, we have the following commutative diagram, where \(in:=\operatorname{id}\otimes_{\mathbb{K}}u^{0}\) is the inclusion of the \(u^{0}\)-part, yielding an injective left-vertical arrow below,

\[\begin{CD}QH^{*}(Y)@>{c_{\lambda}^{*}}>>HF^{*}(H_{\lambda})\\ @V{in}VV@VV{in}V\\ H^{*}(Y)\otimes_{\mathbb{K}}\mathbb{F}@>{Ec_{\lambda}^{*}}>>EHF^{*}(H_{\lambda})\end{CD}\]
Thus \(\mathcal{F}_{\lambda}^{\varphi}\subset E\mathcal{F}_{\lambda}^{\varphi}\). Now consider the Gysin sequence from [14, Sec.4.5]:
\[\cdots\longrightarrow HF^{*}(H_{\lambda},\varphi)\xrightarrow{in}EHF^{*}(H_{ \lambda},\varphi)\xrightarrow{u\cdot}EHF^{*+2}(H_{\lambda},\varphi) \xrightarrow{b}HF^{*+1}(H_{\lambda},\varphi)\longrightarrow\cdots\]
(the connecting map \(b\) at chain level yields images of higher equivariant differentials \(\delta_{j}\), \(j\geq 1\)). So
\[QH^{*}(Y)\cap E\mathcal{F}_{\lambda}^{\varphi}=\ker\big(QH^{*}(Y)\xrightarrow{c_{\lambda}^{*}}HF^{*}(H_{\lambda})\longrightarrow HF^{*}(H_{\lambda})/b(EHF^{*+1}(H_{\lambda}))\big).\]
The analogue of Equation (16), when \(H^{*}(Y)\) lies in even degrees, is
\[EHF^{*}(H_{\lambda})\cong\oplus H^{*}(\mathfrak{F}_{\alpha})\otimes_{\mathbb{ K}}\mathbb{F}[-\mu_{\lambda}(\mathfrak{F}_{\alpha})], \tag{51}\]
since the \(S^{1}\)-reparametrisation action on the constant orbits in \(\mathfrak{F}_{\alpha}\) is trivial. So (51) is a direct sum of shifted copies of \(\mathbb{F}\) in even degrees, on which \(u\cdot\) acts surjectively, so \(b=0\) and \(in:HF^{*}(H_{\lambda})\to EHF^{*}(H_{\lambda})\) is injective. Thus \(\mathcal{F}_{\lambda}^{\varphi}=QH^{*}(Y)\cap E\mathcal{F}_{\lambda}^{\varphi}\) follows. Also \(HF^{*}(H_{\lambda})=\ker(u:EHF^{*}(H_{\lambda})\to EHF^{*}(H_{\lambda}))\) recovers (16).
## 7. Filtrations on cohomology of Conical Symplectic Resolutions
### Topological properties of CSRs
We refer to Braden-Proudfoot-Webster [1] where CSRs were introduced and studied in detail, although these spaces were also previously considered by various authors, most notably Kaledin [11, 12] and Namikawa [19, 18]. In particular, [1, Sec.2] lists large families of examples of CSRs, including Nakajima quiver varieties, and hypertoric varieties. We recall:
**Definition 7.1**.: A **Conical Symplectic Resolution (CSR)** is a projective resolution77\(\pi:\mathfrak{M}\to\mathfrak{M}_{0}\) of a normal affine variety \(\mathfrak{M}_{0}\), where \((\mathfrak{M},\omega_{\mathbb{C}})\) is a holomorphic symplectic manifold and \(\pi\) is equivariant with respect to \(\mathbb{C}^{*}\)-actions on \(\mathfrak{M}\) and \(\mathfrak{M}_{0}\) (both denoted by \(\varphi\)). These actions satisfy two conditions:
Footnote 77: Meaning: \(\mathfrak{M}\) is a smooth variety, and \(\pi\) is an isomorphism over the smooth locus of \(\mathfrak{M}_{0}\).
1. The complex symplectic form \(\omega_{\mathbb{C}}\) has a **weight**\(s\in\mathbb{N}\), so \(\varphi_{t}^{*}\omega_{\mathbb{C}}=t^{s}\omega_{\mathbb{C}}\) for all \(t\in\mathbb{C}^{*}\).
2. The action \(\varphi\) contracts \(\mathfrak{M}_{0}\) to a single fixed point \(x_{0}\), so \(\forall x\in\mathfrak{M}_{0},\lim_{t\to 0}t\cdot x=x_{0}.\) Algebraically, \(\mathbb{C}[\mathfrak{M}_{0}]=\bigoplus_{n\geq 0}\mathbb{C}[\mathfrak{M}_{0}]^{n}\) and \(\mathbb{C}[\mathfrak{M}_{0}]^{0}=\mathbb{C}\), where \(\mathbb{C}[\mathfrak{M}_{0}]^{n}\) denotes the \(n\)-weight space.78 Footnote 78: Explicitly \(\mathbb{C}[\mathfrak{M}_{0}]^{n}=\{f\in\mathbb{C}[\mathfrak{M}_{0}]\mid(t \cdot f)(x)=f(t\cdot x)=t^{n}f(x)\}\).
We call **weight-\(s\) conical actions** such actions \(\varphi\). A CSR may have many conical actions, of possibly different weights \(s\), so we emphasize the choice by writing \((\mathfrak{M},\varphi)\). We denote the **core** of \((\mathfrak{M},\varphi)\) by \(\mathfrak{L}:=\{p\in\mathfrak{M}\mid\lim_{\mathbb{C}^{*}\ni t\to\infty}t\cdot p\) exists\(\}\) (see Section 2.2).
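A standard example to keep in mind (stated informally): the Springer resolution \(\pi:T^{*}\mathbb{P}^{1}\to\mathcal{N}\) of the nilpotent cone \(\mathcal{N}=\{A\in\mathfrak{sl}_{2}(\mathbb{C}):A^{2}=0\}\cong\mathbb{C}^{2}/(\mathbb{Z}/2)\), with \(\mathbb{C}^{*}\) scaling the cotangent fibres and acting on \(\mathcal{N}\) by \(A\mapsto tA\). The canonical holomorphic symplectic form satisfies

\[\varphi_{t}^{*}\omega_{\mathbb{C}}=t\,\omega_{\mathbb{C}},\]

so this is a weight-\(1\) conical action, and the core is the zero section, \(\mathfrak{L}=\mathbb{P}^{1}\).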
We remark that our normality assumption on \(\mathfrak{M}_{0}\) is equivalent79 to the condition that \(\pi:\mathfrak{M}\to\mathfrak{M}_{0}\) is the affinisation map,80 that is given in the original definition in [1, Sec.2].
Footnote 79: For the proof of this equivalence see [12, Lem.3.16].
Footnote 80: \(\mathfrak{M}\to\operatorname{Aff}(\mathfrak{M}):=Spec(H^{0}(\mathfrak{M}, \mathfrak{O}_{\mathfrak{M}})),\ p\mapsto\{f\mid f(p)=0\}\).
Footnote 81: Part (1) is immediate (for a proof see e.g. [12, Lem.3.15.]); (4) is due to [1, proof of Prop.2.5], see also Proposition 2.14; (5) is due to Kaledin [19, Prop.2.12], and (2,3) go back to Nakajima [20, Thm.5.8], and one can find the detailed proof in [12, Lem.3.3].
**Lemma 7.2**.: _Any CSR \(\mathfrak{M}\) satisfies \(c_{1}(T\mathfrak{M},I)=0,\) where \(I\) is its complex structure._
Proof.: The top exterior power of the complex symplectic form \(\omega_{\mathbb{C}}\) trivialises the canonical bundle \(\Lambda_{\mathbb{C}}^{top}T^{*}\mathfrak{M}.\) Now recall that \(c_{1}(\Lambda_{\mathbb{C}}^{top}T^{*}\mathfrak{M})=c_{1}(T^{*}\mathfrak{M})=- c_{1}(T\mathfrak{M}).\)
In the following Theorem we summarise some well-known facts about CSRs.81
**Theorem 7.3**.: _Let \(\pi:\mathfrak{M}\to\mathfrak{M}_{0}\) be a weight-s CSR._
1. _Its core is the central fibre,_ \(\mathfrak{L}=\pi^{-1}(0).\)__
2. \(\mathfrak{L}\) _is an_ \(\omega_{\mathbb{C}}\)_-isotropic subvariety (but usually singular)._
3. _When_ \(s=1\)_,_ \(\mathfrak{L}\) _is a_ \(\frac{1}{2}\dim_{\mathbb{C}}\mathfrak{M}\)_-equidimensional variety, so_ \(\omega_{\mathbb{C}}\)_-Lagrangian (usually singular)._
4. _The inclusion_ \(\mathfrak{L}\subset\mathfrak{M}\) _is a homotopy equivalence._
5. _Any fibre_ \(F\) _of_ \(\pi\) _has_ \(H^{odd}(F,\mathbb{B})=0,\) _for any field_ \(\mathbb{B}\) _of characteristic zero._
**Corollary 7.4**.: _The cohomology of a weight-s CSR \(\mathfrak{M}\) over characteristic zero fields is supported in even degrees, and at most up to the degree \(\dim_{\mathbb{C}}\mathfrak{M}.\) Moreover, when \(s=1\), \(H^{\dim_{\mathbb{C}}\mathfrak{M}}(\mathfrak{M})\neq 0.\)_
We prove the non-degeneracy of the intersection form of any CSR.
**Proposition 7.5**.: _Any CSR \(\mathfrak{M}\) has a definite intersection form \(H_{\dim_{\mathbb{C}}\mathfrak{M}}(\mathfrak{M})\times H_{\dim_{\mathbb{C}} \mathfrak{M}}(\mathfrak{M})\to\mathbb{Q}.\)_
Proof.: By [1, Cor.2.1.14] the intersection pairing for fibres above relevant strata of semismall resolutions is definite over \(\mathbb{Q}\) coefficients. Symplectic resolutions are semismall [13, Prop.1.2], so this applies to \(\mathfrak{L}=\pi^{-1}(0)\). Finally, the inclusion \(\mathfrak{L}=\pi^{-1}(0)\subset\mathfrak{M}\) is a homotopy equivalence for any CSR \(\pi:\mathfrak{M}\to\mathfrak{M}_{0},\) (Theorem 7.3(4)).
By Kaledin-Verbitsky [16, Thm.1.1] and Namikawa [20] it is also known that any CSR \(\mathfrak{M}\) has a (topologically trivial) deformation whose base is \(H^{2}(\mathfrak{M},\mathbb{C})\) and whose generic fibre is an affine algebraic variety. In the latter variety, there are no non-constant \(I\)-holomorphic spheres82 so the quantum product is equal to the usual cup product. As the quantum product is preserved under deformations of the complex structure (where quantum cohomology is defined over the Novikov field \(\mathbb{K}\) over any base field), we deduce:
Footnote 82: The affine variety embeds into some affine space \(\mathbb{C}^{N}\), and there are no non-constant holomorphic functions on \(\mathbb{C}P^{1}\).
**Proposition 7.6**.: _For any CSR, there is a ring isomorphism \(QH^{*}(\mathfrak{M})\cong H^{*}(\mathfrak{M},\mathbb{K}).\) _
The next lemma describes the basic information on the fixed locus of the \(\mathbb{C}^{*}\)-action on a CSR.
**Lemma 7.7**.: _Consider a weight-\(s\) CSR \((\mathfrak{M},\varphi).\) We have the following:_
1. _Its fixed locus_ \(\mathfrak{F}:=\mathfrak{M}^{\varphi}\) _is a smooth subvariety contained in the core_ \(\mathfrak{L}.\)__
2. \(\mathfrak{F}\) _is a proper_83 _variety which breaks into finitely many connected components_ \(\mathfrak{F}=\sqcup_{\alpha}\mathfrak{F}_{\alpha}.\)__ Footnote 83: Meaning: compact in the analytic topology.
3. _Given a fixed point_ \(p\in\mathfrak{F}_{\alpha},\) _the induced_ \(\mathbb{C}^{*}\)_-action on_ \(T_{p}\mathfrak{M}\) _has a weight decomposition_ \[T_{p}\mathfrak{M}=\oplus_{k\in\mathbb{Z}}H_{k},\ H_{k}:=\{v\in T_{p}\mathfrak{ M}\mid t\cdot v=t^{k}v\}.\]
4. _The weight_ \(s\in\mathbb{N}\) _condition_ \(t\cdot\omega_{\mathbb{C}}=t^{s}\omega_{\mathbb{C}}\) _ensures that for each_ \(k\in\mathbb{Z}\) _and each_ \(x\in\mathfrak{F}_{\alpha}\) _the following pairing is non-degenerate,_ (52) \[\omega_{\mathbb{C}}:H_{k}\oplus H_{s-k}\to\mathbb{C}.\qquad(\omega_{\mathbb{C} }\text{-duality})\]
5. _When_ \(s=1\)_, we have_ \(2\dim_{\mathbb{C}}\mathfrak{F}_{\alpha}+\mu_{\alpha}=\dim_{\mathbb{C}} \mathfrak{M}\quad\text{and}\quad\#\{\mathfrak{F}_{\alpha}\}=\operatorname{ rk}(H^{\dim_{\mathbb{C}}\mathfrak{M}}(\mathfrak{M}))\)_._
Proof.: The fixed locus of a reductive group action on a smooth variety is smooth;84 in particular, it applies to \(\mathfrak{F}=\mathfrak{M}^{\varphi}.\) It is contained in the core \(\mathfrak{L}=\pi^{-1}(0)\), the preimage of the only fixed point \(0\in\mathfrak{M}_{0}\), since \(\pi\) is equivariant. Being closed in a proper variety \(\mathfrak{L},\) the fixed locus is proper itself, thus indeed breaks into finitely many connected components \(\mathfrak{F}=\sqcup_{\alpha}\mathfrak{F}_{\alpha}.\)
Footnote 84: e.g. see [11, Lem.5.11.1].
The \(\mathbb{C}^{*}\)-action at a fixed point \(p\in\mathfrak{F}_{\alpha}\) yields a representation \(\mathbb{C}^{*}\curvearrowright T_{p}\mathfrak{M},\) thus the weight decomposition (3) is immediate. Considering two homogeneous vectors \(v_{1}\in H_{k_{1}},\)\(v_{2}\in H_{k_{2}},\) we have
\[\omega_{\mathbb{C}}(v_{1},v_{2})=t^{-s}\omega_{\mathbb{C}}(t\cdot v_{1},t \cdot v_{2})=t^{-s}\omega_{\mathbb{C}}(t^{k_{1}}v_{1},t^{k_{2}}v_{2})=t^{k_{1 }+k_{2}-s}\omega_{\mathbb{C}}(v_{1},v_{2}),\]
thus, \(\omega_{\mathbb{C}}(v_{1},v_{2})=0\) unless \(k_{1}+k_{2}=s.\) So (4) follows, as \(\omega_{\mathbb{C}}\) is non-degenerate on \(T_{p}\mathfrak{M}=\oplus_{k}H_{k}.\)
When \(s=1,\) this duality forces \(H_{0}\oplus H_{-}=\oplus_{k\leq 0}H_{k}\cong H_{+},\) thus, abbreviating \(|V|:=\dim_{\mathbb{C}}V,\)\(\dim_{\mathbb{C}}\mathfrak{M}=|H_{0}|+|H_{-}|+|H_{+}|=2(|H_{0}|+|H_{-}|)=2\dim_{ \mathbb{C}}\mathfrak{F}_{\alpha}+\mu_{\alpha}.\) Thus, by (28), the number of generators in \(H^{\dim_{\mathbb{C}}\mathfrak{M}}(\mathfrak{M})\) is the number of \(\alpha.\)
**Lemma 7.8**.: _Given a CSR \(\pi:\mathfrak{M}\to\mathfrak{M}_{0},\) if \(\mathfrak{M}_{0}\) is non-singular then \(\mathfrak{M}_{0}\cong\mathbb{C}^{2n}\), for some \(n\in\mathbb{N}.\)_
Proof.: If \(\mathfrak{M}_{0}\) is non-singular, then \(\pi\) is an isomorphism, so the identity \(\mathfrak{M}_{0}\to\mathfrak{M}_{0}\) is a CSR with a single fixed point \(0.\) Thus by the Bialynicki-Birula decomposition theorem for semiprojective varieties and the fact that CSRs are semiprojective [13, Cor.2.7 and Lem.3.15.], we deduce that \(\mathfrak{M}_{0}\) is an affine bundle over a point, hence an affine space, thus isomorphic to \(\mathbb{C}^{2n}\) (with some linear action on it).
### Symplectic structures on a CSR
We now show that any CSR \((\mathfrak{M},\varphi)\) fits into the framework of Section 5. In Corollary 7.15 we construct an explicit \(I\)-compatible \(S^{1}\)-invariant Calabi-Yau85 Kahler structure on \((\mathfrak{M},\varphi),\) with an exhausting moment map. We remark however that such a structure (without the exhausting condition) arises more generally whenever we have a holomorphic embedding \(\iota:\mathfrak{M}\hookrightarrow X\) into a Kahler manifold \((X,\omega_{X}),\) by \(S^{1}\)-averaging:
\[\omega_{I}:=\int_{S^{1}}\varphi_{t}^{*}(\iota^{*}\omega_{X})\,dt. \tag{53}\]
Any CSR \(\mathfrak{M}\) admits such an embedding, being projective over the affine variety \(\mathfrak{M}_{0}.\)86 Abbreviate by \(g(\cdot,\cdot):=\omega_{I}(\cdot,I\cdot)\) the induced Riemannian metric (\(\omega_{I}\) is \(I\)-compatible). Since \(H^{1}(\mathfrak{M})=0\) (Corollary 7.4), it has a moment map \(H.\) We show that \(\mathfrak{M}\) is a symplectic \(\mathbb{C}^{*}\)-manifold over a convex base in Proposition 7.12. Therefore, if \(H\) is not exhausting, we can modify \(\omega_{I}\) so that the new moment map \(H\) is proper using Lemma 5.5, and we know \(H\) is bounded below by Lemma 2.10.
Footnote 85: due to Lemma 7.2.
**Lemma 7.9**.: _Any weight-\(s\) CSR \((\mathfrak{M},\varphi)\) is a symplectic \(\mathbb{C}^{*}\)-manifold with an \(S^{1}\)-invariant \(I\)-compatible Kahler structure \((g,I,\omega_{I}).\) The \(S^{1}\)-action is Hamiltonian with Maslov index \(\mu=\frac{1}{2}s\cdot\dim_{\mathbb{C}}\mathfrak{M}.\)_
Proof.: The \(S^{1}\)-action is symplectic as it preserves \(\omega_{I}\). By Corollary 7.4, \(H^{1}(\mathfrak{M})=0,\) so the action is Hamiltonian. The canonical bundle \(\Lambda_{\mathbb{C}}^{\text{top}}T^{*}\mathfrak{M}\) is trivialised by \(\omega_{\mathbb{C}}^{d}\) for \(d:=\frac{1}{2}\dim_{\mathbb{C}}\mathfrak{M}\). The weight \(s\) condition (\(\varphi_{t}^{*}\omega_{\mathbb{C}}=t^{s}\omega_{\mathbb{C}}\)) implies \(\varphi_{t}^{*}(\omega_{\mathbb{C}}^{d})=(\varphi_{t}^{*}\omega_{\mathbb{C}})^{d}= (t^{s}\omega_{\mathbb{C}})^{d}=t^{sd}\omega_{\mathbb{C}}^{d},\) so \(\mu=sd\) (see Section 4.1).
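For instance, for the weight-\(1\) CSR \(T^{*}\mathbb{P}^{1}\) from the example above: \(d=\frac{1}{2}\dim_{\mathbb{C}}\mathfrak{M}=1\), so

\[\mu=s\cdot d=1\cdot 1=1=\operatorname{codim}_{\mathbb{C}}\mathfrak{F}_{\min},\]

consistent with Lemma 6.14, since the fixed locus is the zero section \(\mathfrak{F}_{\min}=\mathbb{P}^{1}\) and the single non-zero (fibre) weight equals \(+1\).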
The methods of Section 5 are required, as \((\mathfrak{M},\omega_{I})\) is almost never convex at infinity:
**Proposition 7.10**.: _Suppose that \(0\in\mathfrak{M}_{0}\) is a non-isolated singularity. Then any choice of \(I\)-compatible (real) symplectic form \(\omega\) on \(\mathfrak{M}\) is non-exact at infinity._
Proof.: As \(0\in\mathfrak{M}_{0}\) is a symplectic singularity, by [12, Thm.2.3] there is a finite stratification \(\mathfrak{M}_{0}=\sqcup_{a\in\mathfrak{A}}\mathfrak{M}_{0}^{a}\) by locally closed smooth strata, where \(\mathfrak{M}_{0}^{0}=0.\) By assumption, there is at least one other non-generic stratum \(\mathfrak{M}_{0}^{1}.\) As the \(\mathbb{C}^{*}\)-action on \(\mathfrak{M}_{0}\) is algebraic, it leaves the strata invariant. As the points of \(\mathfrak{M}_{0}^{1}\) have finite isotropy subgroups, any \(\mathbb{C}^{*}\)-orbit in \(\mathfrak{M}_{0}^{1}\) is non-compact, so the stratum \(\mathfrak{M}_{0}^{1}\) is non-compact. Thus, there is a sequence of points \((x_{i})_{i\in\mathbb{N}}\in\mathfrak{M}_{0}^{1}\) that goes to infinity. Their fibres \(\pi^{-1}(x_{i})\) are \(I\)-holomorphic, hence \(\omega\)-symplectic, projective subvarieties in \(\mathfrak{M}.\) Thus, integrating their (possibly singular) irreducible components with a suitable power of \(\omega\) gives a positive value (such integration is well-defined, see [10, p.60]). If \(\omega\) were exact outside of a compact set \(K\), then a power of \(\omega\) would also be exact in this region. But any (possibly singular) irreducible component of a fibre \(\pi^{-1}(x_{i})\) in this region would have a well-defined fundamental class [10, p.61]; in particular, by Stokes's theorem [10, p.60], the integral of any exact form over it vanishes. Contradiction.
By Lemma 7.8, when \(0\in\mathfrak{M}_{0}\) is not a singular point, \(\mathfrak{M}\cong\mathbb{C}^{2n}\) and so it is Liouville with vanishing symplectic cohomology (see e.g. [11, Sec.3]). Symplectic resolutions \(\mathfrak{M}\to\mathfrak{M}_{0}\) (not just CSRs) for which \(0\in\mathfrak{M}_{0}\) is an isolated singularity are completely classified. In complex dimension \(2\), they are the minimal resolutions of Du Val singularities (by [1, Prop.1.3] and [12, Thm.7.5.1]); in higher dimensions they are the cotangent bundles \(T^{*}\mathbb{C}P^{n}\)[16, Thm.8.3]. In the former case, they are convex at infinity and have vanishing symplectic cohomology for \(\omega_{I}\)[13, Lem.42]. In the latter case, for \(n\geq 2\) they are not convex at infinity [13, Rmk. in Sec.11.1]. Thus:
**Corollary 7.11**.: _A CSR is convex at infinity only if it is isomorphic to \(\mathbb{C}^{2n}\) for some \(n\), or it is a minimal resolution of a Du Val singularity._
### CSRs are symplectic \(\mathbb{C}^{*}\)-manifolds globally defined over a convex base
**Proposition 7.12**.: _Any CSR \((\mathfrak{M},\varphi)\) is a symplectic \(\mathbb{C}^{*}\)-manifold globally defined over the convex base \(\mathbb{C}^{N}\), in the sense of Definition 5.1. Indeed, there is a proper \(\mathbb{C}^{*}\)-equivariant holomorphic map_
\[\Psi=\Theta\circ j\circ\pi:\mathfrak{M}\to\mathbb{C}^{N},\]
_with \(\Psi^{-1}(0)=\mathfrak{L}\), where \(\mathbb{C}^{*}\) acts diagonally on \(\mathbb{C}^{N}\) by a certain weight \(w>0\). The map \(\Theta\circ j:\mathfrak{M}_{0}\to\mathbb{C}^{N}\) is a \(\mathbb{C}^{*}\)-equivariant holomorphic map, which is a local embedding except at \(0\in\mathfrak{M}_{0}\)._
Proof.: By Definition 7.1, \(\mathfrak{M}\xrightarrow{\pi}\mathfrak{M}_{0}\) is \(\mathbb{C}^{*}\)-equivariant, and the coordinate ring of the affine variety \(\mathfrak{M}_{0}\) is the graded ring \(\mathbb{C}[\mathfrak{M}_{0}]=\bigoplus_{n\geq 0}\mathbb{C}[\mathfrak{M}_{0}]^{n}\), whose grading prescribes the weight of the \(\mathbb{C}^{*}\)-action. Fix a choice of homogeneous polynomials \(f_{1},\ldots,f_{N}\) that generate \(\mathbb{C}[\mathfrak{M}_{0}]\). As \(\mathbb{C}[\mathfrak{M}_{0}]^{0}=\mathbb{C}\), we may assume that all \(f_{i}\) are non-constant with positive weights \(w_{i}\). These determine an embedding
\[j:\mathfrak{M}_{0}\to\mathbb{C}^{N},\qquad p\mapsto(f_{1}(p),\ldots,f_{N}(p)),\]
with \(j(0)=0\). Let \(w=\operatorname{lcm}(w_{1},\ldots,w_{N}).\) The holomorphic map \(j\circ\pi:\mathfrak{M}\to\mathbb{C}^{N}\) is \(\mathbb{C}^{*}\)-equivariant for the \(\mathbb{C}^{*}\)-action \(t\cdot(z_{1},\ldots,z_{N})=(t^{w_{1}}z_{1},\ldots,t^{w_{N}}z_{N})\) on \(\mathbb{C}^{N}\). Let \(\Theta\) be the holomorphic map
\[\mathbb{C}^{N}\xrightarrow{\Theta}\mathbb{C}^{N},\ \ \Theta(z_{1},\ldots,z_{N})=(z_{1}^{w/ w_{1}},\ldots,z_{N}^{w/w_{N}}).\]
Then \(\Psi\) is proper as \(\pi,j,\Theta\) are proper. Finally, \(\mathfrak{M}\) is connected, as it is homotopy equivalent to \(\pi^{-1}(0)=\mathfrak{L}\) (Theorem 7.3(1,4)) and the fibres of \(\pi:\mathfrak{M}\to\mathfrak{M}_{0}\) are connected [11, Prop.3.17].
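To see the rescaling trick concretely, here is a minimal numerical sketch (our illustration, not part of the proof) for the toy case \(\mathfrak{M}_{0}=\mathbb{C}^{2}\) with action \(t\cdot(x,y)=(tx,t^{2}y)\): the generators \(f_{1}=x,f_{2}=y\) have weights \(w_{1}=1,w_{2}=2\), so \(w=\operatorname{lcm}(1,2)=2\), and \(\Theta\circ j\) intertwines the given action with the diagonal weight-\(w\) action.

```python
import math
from itertools import product

# Toy example (our illustration): M_0 = C^2 with t.(x, y) = (t x, t^2 y),
# generators f1 = x, f2 = y of weights w1 = 1, w2 = 2.
w1, w2 = 1, 2
w = math.lcm(w1, w2)                       # common weight w = 2

def j(p):                                  # embedding by the generators
    x, y = p
    return (x, y)

def Theta(z):                              # z_i -> z_i^(w / w_i)
    z1, z2 = z
    return (z1 ** (w // w1), z2 ** (w // w2))

def act(t, p):                             # the given C^*-action
    x, y = p
    return (t ** w1 * x, t ** w2 * y)

# Check equivariance: Theta(j(t.p)) = t^w * Theta(j(p)) diagonally.
for t, p in product([2, 3, 1 + 1j], [(1.0, 2.0), (0.5, -1.5)]):
    lhs = Theta(j(act(t, p)))
    rhs = tuple(t ** w * c for c in Theta(j(p)))
    assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
print("Theta∘j is equivariant for the diagonal weight-w action")
```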
_Remark 7.13_.: Explicitly, \(\Psi=(\pi^{*}(f_{1})^{w/w_{1}},\ldots,\pi^{*}(f_{N})^{w/w_{N}}).\) Denoting by \(\hat{\pi}\) the constant \(3.1415\ldots\) (to avoid a clash with the resolution map \(\pi\)), the \(1\)-periodic \(S^{1}\)-action \(e^{2\hat{\pi}it}\) on \(\mathbb{C}^{N}\) has Hamiltonian \(\hat{\pi}(|z_{1}|^{2}+\cdots+|z_{N}|^{2})\), and we denote its pull-back by
\[\Phi:=\hat{\pi}\,\Psi^{*}(|z_{1}|^{2}+\cdots+|z_{N}|^{2})=\hat{\pi}\sum\pi^{*}(| f_{i}|^{2w/w_{i}}):\mathfrak{M}\to\mathbb{R}, \tag{54}\]
so \(\mathfrak{L}=\Phi^{-1}(0)\). The function \(\Phi\) is typically not related to the moment map \(H\) on \(\mathfrak{M}\) in (1).
### An explicit \(S^{1}\)-invariant Kahler form with exhausting Hamiltonian
**Lemma 7.14**.: _Given a CSR \(\pi:\mathfrak{M}\to\mathfrak{M}_{0},\) for some integer \(M>0\) there is a \(\mathbb{C}^{*}\)-equivariant embedding \(\mathfrak{M}\hookrightarrow\mathfrak{M}_{0}\times\mathbb{P}^{M}\) using the diagonal action on the target, which is linear on the \(\mathbb{P}^{M}\)-factor._
Proof.: The morphism \(\pi\) is projective, thus \(\pi\) factors through a closed immersion and a projection
\[\mathfrak{M}\xrightarrow{\iota}\mathfrak{M}_{0}\times\mathbb{P}^{n} \xrightarrow{\pi_{\mathfrak{M}_{0}}}\mathfrak{M}_{0},\]
for some integer \(n\). Composing that inclusion \(\iota\) with projection to the second factor yields
\[\pi_{\mathbb{P}^{n}}\circ\iota:\mathfrak{M}\hookrightarrow\mathbb{P}^{n} \times\mathfrak{M}_{0}\to\mathbb{P}^{n},\]
and thus a pull-back bundle \(\mathcal{L}=\iota^{*}\pi_{\mathbb{P}^{n}}^{*}(\mathcal{O}(1)).\) By [13, Lem.01VT] that bundle is \(\pi\)-relatively ample, using that \(\mathfrak{M}_{0}\) is affine and \(\pi\) is of finite type (as \(\pi\) is projective, by [13, Lem.01WC] it is proper, and hence of finite type by definition). Recall that a bundle on \(\mathfrak{M}\) is \(\mathbb{C}^{*}\)-linearisable if it admits a \(\mathbb{C}^{*}\)-action linear on the fibres which lifts the action on \(\mathfrak{M}\). As \(\mathfrak{M}\) is normal, [16, Thm.2.14] ensures that \(\mathcal{L}^{\otimes k}\) is \(\mathbb{C}^{*}\)-linearisable for some positive integer \(k.\) The same holds for positive tensor powers of \(\mathcal{L}^{\otimes k}\). As \(\mathcal{L}\) is \(\pi\)-relatively ample, so is \(\mathcal{L}^{\otimes k}\); this follows from [11, Ch.II, Prop.4.5.6(i)]. By [13, Lem.01VU], the quasi-compactness of \(\mathfrak{M}_{0}\) (being an affine variety) and the finite type property of \(\pi\) ensure that some positive power \(L:=(\mathcal{L}^{\otimes k})^{\otimes d}\) is \(\pi\)-relatively very ample. By [13, Lem.02NP], as \(\mathfrak{M}_{0}\) is affine and \(\pi:\mathfrak{M}\to\mathfrak{M}_{0}\) is of finite type, such an \(L\) yields an immersion
\[j:\mathfrak{M}\hookrightarrow\mathbb{P}^{M}\times\mathfrak{M}_{0}\]
for some integer \(M>0,\) with \(L\cong j^{*}\pi_{\mathbb{P}^{M}}^{*}\mathcal{O}(1).\) It remains to prove that this immersion can be constructed \(\mathbb{C}^{*}\)-equivariantly, with the action on \(\mathbb{P}^{M}\) linear. That this holds follows by construction, by the same argument as in [10, Prop.1.7] (using work of Kambayashi and Sumihiro). We remark that the proof above is essentially the same argument as in [10, Prop.1.7] for the linear algebraic group \(G=\mathbb{C}^{*}\), except we are working with schemes that are proper over \(\mathfrak{M}_{0}=\operatorname{Spec}(R)\) (where \(R\) is the coordinate ring of the affine variety \(\mathfrak{M}_{0}\)) rather than working over \(\operatorname{Spec}(k)\).
**Corollary 7.15**.: _Any CSR admits an \(S^{1}\)-invariant Kahler structure with an exhausting moment map._
Proof.: Combining Lemma 7.14 with \(\Theta\circ j\) from Proposition 7.12 we obtain a \(\mathbb{C}^{*}\)-equivariant morphism
\[\Pi:\mathfrak{M}\to\mathfrak{M}_{0}\times\mathbb{P}^{M}\to\mathbb{C}^{N}\times \mathbb{P}^{M}.\]
This morphism is proper, holomorphic, \(\mathbb{C}^{*}\)-equivariant (using the rescaled action on \(\mathbb{C}^{N}\) as in Proposition 7.12), and locally it is a closed topological embedding. Moreover, \(\Pi:\mathfrak{M}\to\mathbb{C}^{N}\times\mathbb{P}^{M}\) is the composite of two closed immersions, so it is a closed immersion. To conclude that \(\Pi\) is locally a holomorphic embedding, it remains to show that the differential \(\Pi_{*}:T\mathfrak{M}\to T\mathbb{C}^{N}\times T\mathbb{P}^{M}\) at any point \(p\) is injective (so that the implicit function theorem applies). This will follow from the surjectivity of the dual map, viewed as the algebro-geometrical map \(\mathfrak{m}/\mathfrak{m}^{2}\to\mathfrak{n}/\mathfrak{n}^{2}\) on cotangent spaces.87 As \(\Pi\) is a closed immersion, we may assume \(\Pi^{\#}:B\to A\) is surjective. By construction, \((\Pi^{\#})^{-1}(\mathfrak{n})=\mathfrak{m}\), therefore \(\Pi^{\#}:\mathfrak{m}\to\mathfrak{n}\) is surjective, and thus the induced map \(\mathfrak{m}/\mathfrak{m}^{2}\to\mathfrak{n}/\mathfrak{n}^{2}\) is surjective, as required.
Footnote 87: \(\mathfrak{n}\) is the maximal ideal of functions vanishing at \(p\) in the coordinate ring for an affine patch \(\operatorname{Spec}(A)\) around \(p\in\mathfrak{M}\); \(\mathfrak{m}\) are functions vanishing at \(\Pi(p)\) in the coordinate ring for an affine patch \(\operatorname{Spec}(B)\) around \(\Pi(p)\in\mathbb{C}^{N}\times\mathbb{P}^{M}\).
The claim follows by pulling back the standard \(S^{1}\)-invariant Kahler structure \(\omega_{Y}\) from \(Y=\mathbb{C}^{N}\times\mathbb{P}^{M}\) via \(\Pi\). If \(H_{Y}:Y\to\mathbb{R}\) is the Hamiltonian generating the \(S^{1}\)-vector field \(X_{S^{1},Y}\) on \(Y\), then \(H=H_{Y}\circ\Pi\) is the Hamiltonian on \(\mathfrak{M}\) generating the \(S^{1}\)-action.88 As \(H_{Y}\) is the sum of the Hamiltonians on the two factors \(\mathbb{C}^{N}\) and \(\mathbb{P}^{M}\), and the Hamiltonian on the \(\mathbb{C}^{N}\) factor grows like a power of the norm on \(\mathbb{C}^{N}\), \(H_{Y}\) is exhausting. The properness of \(\Pi\) and \(H_{Y}\) imply the properness of \(H=H_{Y}\circ\Pi\), and \(H\) is bounded below since \(H_{Y}\) is. Thus \(H\) is exhausting.
### Implications from Sections 5 and 6
By Section 5, for generic slopes \(\lambda>0\), the only \(1\)-periodic orbits of \(\lambda H\) are the constant orbits inside the core \(\mathfrak{L}\) given by the fixed points \(x\in\mathfrak{F}:=\mathfrak{M}^{\varphi}\) of the \(S^{1}\)-action (i.e. the critical locus of \(H\)). As \(c_{1}(\mathfrak{M})=0\), Proposition 5.13 implies that
\[SH^{*}(\mathfrak{M},\varphi,\omega_{I})=0.\]
The fixed locus \(\mathfrak{F}\) decomposes into connected components \(\mathfrak{F}_{\alpha}\) which are the Morse-Bott submanifolds for \(H\). At \(x\in\mathfrak{F}\), the tangent space \(T_{x}\mathfrak{M}=\oplus_{k}H_{k}\) has a unitary decomposition given by the weight \(k\) subspaces \(H_{k}\) for the linearised \(S^{1}\)-action, where \(H_{0}=T_{x}\mathfrak{F}\). We defined an even integer grading
\[\mu_{\lambda}(\mathfrak{F}_{\alpha})=\dim_{\mathbb{C}}\,\mathfrak{M}-\dim_{ \mathbb{C}}\,\mathfrak{F}_{\alpha}-\sum_{k}\dim_{\mathbb{C}}(H_{k})\mathbb{W }(\lambda k)\]
where \(\mathbb{W}(\lambda k)=2\lfloor\lambda k\rfloor+1\) for \(k\neq 0\), and \(\mathbb{W}(0)=0\) (see Lemma 4.4).
By Proposition 7.6, \(QH^{*}(\mathfrak{M})=H^{*}(\mathfrak{M};\mathbb{K})\) as \(\mathbb{K}\)-algebras. Thus, Section 6 and Equation (14) yield:
**Corollary 7.16**.: _For any CSR \((\mathfrak{M},\varphi)\), the \(\varphi\)-filtration ordered by \(p\in\mathbb{R}\),_
\[\mathcal{F}_{p}^{\varphi}:=\bigcap_{\text{generic $\lambda>p$}}\ker(c_{ \lambda}^{*}:H^{*}(\mathfrak{M};\mathbb{K})\to HF^{*}(\lambda H)),\]
_is a filtration on the singular cohomology ring \(H^{*}(\mathfrak{M};\mathbb{K})\) by ideals, with respect to cup product._
We suspect that these filtrations, up to \(\lambda\)-reparametrisation, do not depend on the choice of Kahler form \(\omega_{I}\), although we will not try to prove it.89 The filtrations, however, do depend on the choice of \(\varphi\).
Footnote 89: For small deformations of the Kähler form, this should follow by the methods developed by Benedetti-Ritter [20].
From now on, \(H^{*}\) denotes cohomology with coefficients in \(\mathbb{K}\). The cohomology \(H^{*}(\mathfrak{M})\) is concentrated only in even degrees by Corollary 7.4, therefore Corollary 6.9 yields
\[HF^{*}(F_{\lambda})\cong\bigoplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[-\mu_{ \lambda}(\mathfrak{F}_{\alpha})],\]
where \(H^{*}(\mathfrak{F}_{\alpha})\) lives in even degrees and \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) is even. Thus the map \(c_{\lambda}^{*}\) in Corollary 7.16 is a grading-preserving \(\mathbb{K}\)-linear homomorphism between \(\mathbb{K}\)-modules supported in even degrees:
\[H^{*}(\mathfrak{M})\cong\bigoplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[-\mu_ {\alpha}]\to\bigoplus_{\alpha}H^{*}(\mathfrak{F}_{\alpha})[-\mu_{\lambda}( \mathfrak{F}_{\alpha})],\]
where \(\mu_{\alpha}\) is the (even) Morse-Bott index of \(\mathfrak{F}_{\alpha}\) (and the first isomorphism is (28)).
_Remark 7.17_.: **Comparison with the literature.** There is an interest in filtrations on cohomology of CSRs in the representation-theoretic literature. Bellamy-Schedler [18] construct filtrations on cohomologies of Springer fibres, which are one of the principal examples of cores of CSRs. Their filtrations are also compatible with the cohomological grading, just as ours are, and one might ask how these two filtrations are related in the top-degree cohomology. In the example of the Springer fibres which are cores of resolutions of Du Val singularities of type \(A_{n}\) [18, Ex.1.5], there is a choice of \(\mathbb{C}^{*}\)-action \(\varphi\) that yields (rank-wise) the same filtration as theirs. Explicitly, given the \(A_{n}\) singularity \(XY=Z^{n+1}\), this action is given by lifting the action \(t\cdot(X,Y,Z)=(t^{n}X,tY,tZ).\) For \(n=2\) this corresponds to action \((c)\) in Example 1.24.
_Remark 7.18_.: **Refinement of the McKay correspondence.** For resolutions of Du Val singularities (and possibly of other holomorphic-symplectic quotient singularities \(\mathbb{C}^{2n}/\Gamma\)), our filtration yields a refinement of the McKay correspondence [19] which states that a graded basis for the cohomology of the resolution is in graded bijection with the conjugacy classes of the given finite group using the so-called age grading on conjugacy classes. An example of this for the resolution \(\mathfrak{M}\) of \(\mathfrak{M}_{0}:=\mathbb{C}^{2}/(\mathbb{Z}/5)\) is shown in [17] (the spectral sequence for \(X_{\mathbb{Z}/5}\)). The top cohomology has two filtration levels, which correspond to two pairs of orbits, which all have age grading equal to \(1\) (the orbits are loops in \(\mathfrak{M}\setminus\mathfrak{L}\), and those are labelled naturally by their free homotopy classes, as explained e.g. in [14, Eq.(1.4)] for the case of isolated quotient singularities, and in the subsequent footnote for the general
case). Thus, our filtration makes a distinction between the orbits lying above \([e^{2\pi it/5},0],[0,e^{2\pi it/5}]\) and \([e^{4\pi it/5},0],[0,e^{4\pi it/5}]\), corresponding to the conjugacy classes \([\varepsilon^{1}],[\varepsilon^{-1}]\) and \([\varepsilon^{2}],[\varepsilon^{-2}]\) in \(\mathbb{Z}/5\) (here \(\varepsilon\) is a primitive \(5^{th}\)-root of unity).
The McKay correspondence involves crepant resolutions of quotient singularities \(\mathbb{C}^{n}/\Gamma\) for finite subgroups \(\Gamma\subset SL(n,\mathbb{C})\). For these to arise as CSRs, we need \(\mathfrak{M}_{0}=\mathbb{C}^{2n}/\Gamma\) for \(\Gamma\subset\operatorname{Sp}(2n,\mathbb{C})\) as \(\mathfrak{M}_{0}\) has a Poisson structure. Apart from four exceptional examples, this has a conical symplectic resolution \(\mathfrak{M}\) precisely when \(\Gamma\) is of type \(G^{n}\rtimes S_{n}\), where \(S_{n}\) is the symmetric group and \(G\subset SL(2,\mathbb{C})\) is a finite subgroup, [1]. These \(\mathfrak{M}\) are in fact all quiver varieties of affine ADE type, [10]. These were studied by Kaledin [11] and Bezrukavnikov-Kaledin [1].
## 8. Filtration separating the periods of orbits
In this Section, \((Y,\omega,I,\varphi)\) is a symplectic \(\mathbb{C}^{*}\)-manifold over a convex base \(B\), satisfying (6). Recall that we can always tweak \(\omega\) by Lemma 5.5 to make \(H\) proper in (1), which we assume from now on.
### Construction of a specific Hamiltonian \(H_{\lambda}\)
We have an \(S^{1}\)-equivariant proper holomorphic map \(\Psi:Y^{\mathrm{out}}\to B=\Sigma\times[R_{0},\infty)\). We abusively write \(\Psi:Y\to B\) but it is understood that all constructions involving \(\Psi\) are only defined on \(Y^{\mathrm{out}}\).
The Hamiltonian \(H_{\lambda}\) will be constructed as in Figure 2 in terms of a function \(c\) of \(H\),
\[H_{\lambda}:=c\circ H.\]
This ensures that \(X_{H_{\lambda}}=c^{\prime}(H)\cdot X_{H}\), so \(1\)-periodic Hamiltonian orbits of \(H_{\lambda}\) correspond precisely to orbits of period \(T=c^{\prime}(H)\) of the flow of \(X_{S^{1}}=X_{H}\). We construct \(c:[\min H,+\infty)\to\mathbb{R}\) so that
1. \(c^{\prime}\geq 0\).
2. \(c^{\prime\prime}\geq 0\).
3. \(c^{\prime\prime}(H)>0\) whenever \(c^{\prime}(H)\) is a period of an \(S^{1}\)-orbit (these will be outer \(S^{1}\)-periods).
4. \(c(H)=\lambda_{0}H\) on some interval, say for \(H\in[H_{0}^{\prime\prime},H_{1}^{\prime}]\), for \(0<\lambda_{0}<\min\{\text{positive $S^{1}$-periods}\}\).
5. \(c(H)=\lambda H\) for all sufficiently large \(H\).
We also assume that \(c^{\prime}\) is sufficiently small on \(Y^{\mathrm{in}}\) so that there are no non-constant \(1\)-periodic orbits of \(H_{\lambda}\) in \(Y^{\mathrm{in}}\) since the potential period values \(c^{\prime}\) are smaller than the minimal positive \(S^{1}\)-period. Thus in \(Y^{\mathrm{in}}\) the only \(1\)-orbits are constants at points of the \(\mathfrak{F}_{\alpha}\) submanifolds. More precisely, we may assume
\[Y^{\mathrm{in}}:=\{H\leq m\}\]
is a sublevel set, where we choose \(m\) large enough so that \(Y^{\mathrm{in}}\supset H^{-1}(H(\operatorname{Core}(Y)))\supset\operatorname{Core}(Y).\) We construct \(c^{\prime}\) to be small on \([\min(H),m]\), and ensure that \(c^{\prime}\neq 0\) except possibly at \(\operatorname{Crit}(H)\), so that \(\operatorname{Crit}(H)=\operatorname{Crit}(H_{\lambda})\), and so that \(H_{\lambda}\) has the same (constant) \(1\)-orbits as \(H\) in that region. In particular, (1) and (2) above are really only needed for \(Y^{\mathrm{out}}=\{H\geq m\}\).
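Conditions (1)-(5) determine \(c\) only loosely. As a sanity check, the following is a schematic numerical sketch (our illustration; the period values and slopes are made up) of one admissible slope profile \(c^{\prime}\): constant equal to \(\lambda_{0}\) on the linearity region, then a smooth convex ramp up to the final slope \(\lambda\).

```python
import numpy as np

# Schematic profile for c' (our illustration; sample values are made up).
periods = [1.0, 2.0, 3.0]     # hypothetical positive S^1-periods
lam0 = 0.5                    # 0 < lam0 < min(periods), condition (4)
lam = 3.7                     # generic final slope, condition (5)
H1p, L = 10.0, 5.0            # ramp over H in [H1', H1' + L]

def cprime(H):
    # lam0 up to H1', then a smooth increasing ramp to lam, then lam.
    u = np.clip((H - H1p) / L, 0.0, 1.0)
    return lam0 + (lam - lam0) * u * u * (3.0 - 2.0 * u)  # smoothstep

H = np.linspace(0.0, 25.0, 2001)
cp = cprime(H)
cpp = np.gradient(cp, H)
assert (cp >= 0).all()                        # condition (1): c' >= 0
assert (cpp >= -1e-9).all()                   # condition (2): c'' >= 0
for T in periods:                             # condition (3): c'' > 0
    hit = np.isclose(cp, T, atol=1e-2)        # wherever c' is a period
    assert hit.any() and (cpp[hit] > 1e-6).all()
c = np.cumsum(cp) * (H[1] - H[0])             # c = integral of c', so
# c = lam0*H on the linearity region and c = lam*H + const for large H.
```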
When \(Y=\mathfrak{M}\) is a CSR, \(\Psi\) is globally defined over \(B=\mathbb{C}^{N}\), and \(H_{B}=\hat{\pi}w\|z\|^{2}\) is defined everywhere (the \(S^{1}\)-action has weight \(w\), see Proposition 7.12), and one can pick \(\mathfrak{M}^{\mathrm{in}}=\{H\leq m\}\) to be any sublevel set containing the core \(\mathfrak{L}=\Psi^{-1}(0)\). We recall that by the rescaling trick in Remark 5.2 we can get rid of the factor of \(w\) that would appear in (6) for CSRs.
**Lemma 8.1**.: _The level sets \(\Phi^{-1}(q)=\Psi^{-1}(\{H_{B}=q\})\subset Y\) for \(q\geq R_{0}\) are closed submanifolds of \(Y.\)_
Proof.: \(d\Phi=dH_{B}\circ\Psi_{*}\) is non-zero on \(\nabla H=X_{\mathbb{R}_{+}}\) as \(\Psi_{*}X_{\mathbb{R}_{+}}=\nabla H_{B}\) (by Lemma 2.3).
We choose \(H_{0}^{\prime\prime},H_{1}^{\prime}\) so that we have a nesting
\[\{H\leq H_{0}^{\prime\prime}\}\subset\{\Phi\leq R_{0}^{\prime\prime}\}\subset \{\Phi\leq R_{1}^{\prime}\}\subset\{H\leq H_{1}^{\prime}\}\]
for some values \(0<R_{0}^{\prime\prime}<R_{1}^{\prime}\). The nesting condition ensures that \(H_{\lambda}=c(H)=\lambda_{0}H+\text{constant}\) in a region of \(Y\) that covers (via \(\Psi\)) the region in \(B\) where \(R\in L_{0}:=[R_{0}^{\prime\prime},R_{1}^{\prime}]\), called the **linearity region**. That this nesting can be achieved follows from \(\Phi\) and \(H\) being proper.
### Filtration functional on \(B\)
Choose a smooth cut-off function \(\phi:[0,+\infty)\to\mathbb{R}\) satisfying
1. \(\phi=0\) on \([0,R_{0}^{\prime\prime}]\).
2. \(\phi^{\prime}\geq 0\) everywhere.
3. \(\phi_{0}:=\int_{L_{0}}\phi^{\prime}(R)\,dR>0.\)
The choice of \(\phi\) on \(R\geq R_{1}^{\prime}\) is not so important. Let us choose \(\phi=\phi_{0}\) to be constant there. The cut-off function \(\phi\) defines the exact \(2\)-form \(\eta\) on \(B\),
\[\eta:=d(\phi(R)\alpha)=\phi(R)\,d\alpha+\phi^{\prime}(R)\,dR\wedge\alpha, \tag{55}\]
and an associated \(1\)-form \(\Omega_{\eta}\) on the free loop space \(\mathcal{L}B=C^{\infty}(S^{1},B)\), involving the Reeb field \(\mathcal{R}_{B}\),
\[\Omega_{\eta}:T_{x}\mathcal{L}B=C^{\infty}(S^{1},x^{*}TB)\to\mathbb{R},\ \ \xi\mapsto-\!\int\!\eta(\xi,\partial_{t}x-\lambda_{0}\mathcal{R}_{B})\,dt. \tag{56}\]
Define the **filtration functional**\(F:\mathcal{L}B\to\mathbb{R}\) on the free loop space by
\[F(x):=-\int_{S^{1}}x^{*}(\phi\alpha)+\lambda_{0}\int_{S^{1}}\phi(R(x(t)))\,dt.\]
**Lemma 8.2**.: _[_11_, Thm.6.2(1)]_\(F\) _is a primitive of \(\Omega_{\eta}.\) That is, \(dF(x)(\xi)=\Omega_{\eta}(x)(\xi).\)_
When \(\Psi\) is not globally defined, a loop \(y\in\mathcal{L}Y\) may not have a well-defined projection \(x=\Psi\circ y\). Nevertheless it makes sense to talk about \(F(x)\) and \(\Omega_{\eta}|_{x}\) because the relevant integrands in the base \(B\) will vanish near \(\Sigma\times\{R_{0}\}\) as \(\phi=\phi^{\prime}=0\) near \(R=R_{0}\) (also see [10] for a detailed description of how to define "pull-backs" of the functionals to \(\mathcal{L}Y\)).
### The filtration inequality for \(CF^{*}(H_{\lambda})\)
**Theorem 8.3**.: _The Floer chain complex \(CF^{*}(H_{\lambda})\) has a filtration given by the value of \(F.\) That is, given two \(1\)-periodic orbits \(x_{-},x_{+}\) of \(H_{\lambda}\) and a Floer cylinder for \((H_{\lambda},I)\) from \(x_{-}\) to \(x_{+},\)_
\[F(x_{-})\geq F(x_{+}). \tag{57}\]
Proof.: The Floer cylinder \(u:\mathbb{R}\times S^{1}\to Y\) for \(H_{\lambda}\) satisfies \(\partial_{s}u+I(\partial_{t}u-X_{H_{\lambda}})=0.\) By Equation (6),
\[\Psi_{*}(X_{H_{\lambda}})=\Psi_{*}(c^{\prime}(H)X_{H})=c^{\prime}(H)\Psi_{*}(X _{H})=c^{\prime}(H)\mathcal{R}_{B}, \tag{58}\]
noting that \(c^{\prime}(H)\) depends on the original coordinates in \(Y\). Projecting \(u\) via \(\Psi\) defines a map
\[v:=\Psi\circ u:\mathbb{R}\times S^{1}\to B,\quad\partial_{s}v+I_{B}(\partial_ {t}v-k(s,t)\mathcal{R}_{B})=0 \tag{59}\]
that converges to \(y_{-}=\Psi(x_{-})\), \(y_{+}=\Psi(x_{+})\) at \(s=-\infty\), \(+\infty\), respectively, where
\[k(s,t):=c^{\prime}(H(u(s,t)))\]
is a domain-dependent function. The key observation is that we chose \(\phi^{\prime}(R)=0\) except on the region \(L_{0}\), and over \(L_{0}\) we chose \(k(s,t)=c^{\prime}(H(u(s,t)))=\lambda_{0}\). Thus, using \(d\alpha(\cdot,\mathcal{R}_{B})=0\),
\[\eta(\cdot,k(s,t)\mathcal{R}_{B})=\eta(\cdot,\lambda_{0}\mathcal{R}_{B}) \tag{60}\]
holds everywhere, and it recovers the integrand used in (56). Combining with Lemma 8.2,
\[\begin{split} F(x_{-})-F(x_{+})&=-\int_{-\infty}^{+ \infty}dF(v(s,t))(\partial_{s}v)\ ds\\ &=-\int_{-\infty}^{+\infty}\Omega_{\eta}(v(s,t))(\partial_{s}v) \ ds\\ &=\int_{-\infty}^{+\infty}\int_{S^{1}}\eta(\partial_{s}v, \partial_{t}v-\lambda_{0}\mathcal{R}_{B})\,dt\,ds\\ &=\int_{-\infty}^{+\infty}\int_{S^{1}}\eta(\partial_{s}v, \partial_{t}v-k(s,t)\mathcal{R}_{B})\,dt\,ds\\ &=\int_{-\infty}^{+\infty}\int_{S^{1}}\eta(\partial_{s}v,I_{B} \partial_{s}v)\,dt\,ds,\end{split} \tag{61}\]
where we used (59) in the final equality. Hence, we reduced the problem to the same computation as in the convex setting [12, Lem.6.1]: abbreviating \(\rho=R\circ v\),
\[\begin{split}\eta(\partial_{s}v,I_{B}\partial_{s}v)& =\phi(\rho)\cdot d\alpha(\partial_{s}v,I_{B}\partial_{s}v)+\phi^{ \prime}(\rho)\cdot(dR\wedge\alpha)(\partial_{s}v,I_{B}\partial_{s}v)\\ &=\text{positive}\cdot\text{positive}+\text{positive}\cdot(dR \wedge\alpha)(\partial_{s}v,I_{B}\partial_{s}v),\end{split} \tag{62}\]
where "positive" here means "non-negative". To estimate the last term, we may assume that \(R\geq R_{0}^{\prime\prime}\) since \(\phi^{\prime}=0\) otherwise. Thus, we decompose \(\partial_{s}v\) according to an orthogonal decomposition of \(TB\):
\[\partial_{s}v=C\oplus y\mathcal{R}_{B}\oplus zZ\in\xi\oplus\mathbb{R} \mathcal{R}_{B}\oplus\mathbb{R}Z, \tag{63}\]
where \(Z=-I_{B}\mathcal{R}_{B}=R\partial_{R}\) is the Liouville vector field and \(\xi=\ker\alpha|_{R=1}\). Notice that \(\ker\alpha=\xi\oplus\mathbb{R}Z\). Thus: \(dR(\partial_{s}v)=Rz\) and \(\alpha(I_{B}\partial_{s}v)=\alpha(I_{B}zZ)=\alpha(z\mathcal{R}_{B})=z\). So,
\[(dR\wedge\alpha)(\partial_{s}v,I_{B}\partial_{s}v)=dR(\partial_{s}v)\alpha(I_ {B}\partial_{s}v)-\alpha(\partial_{s}v)dR(I_{B}\partial_{s}v)=Rz^{2}+Ry^{2} \geq 0. \tag{64}\]
This yields the claim.
_Remark 8.4_.: How one deals with the issue of transversality, without ruining the filtration, is a rather tricky matter that will be dealt with in [13].
### The \(F\)-filtration values on \(1\)-orbits
**Corollary 8.5**.: _The \(F\)-filtration values satisfy the following properties:_
1. \(F=0\) _at the constant orbits, so at each point of_ \(\mathfrak{F}=\sqcup_{\alpha}\mathfrak{F}_{\alpha}\)_;_
2. \(F(y)=F(\Psi(x))<0\) _for every non-constant_ \(1\)_-periodic orbit_ \(x\) _of_ \(H_{\lambda}\)_;_
3. \(F(y)\) _only depends on the Reeb period_ \(c^{\prime}(H(x))\) _of the projected orbit_ \(y\)_, see (_65_);_
4. _for non-constant orbits,_ \(F(y)\) _decreases as_ \(H(y)\) _increases;_
5. _on non-constant orbits, the_ \(F\)_-filtration is equivalent to filtering by_ \(-H\)_, or equivalently: filtering by negative_ \(S^{1}\)_-period values_ \(-c^{\prime}(H)\)_._
Proof.: Let us calculate the value of the functional \(F(y)\) explicitly for the projection \(y:=\Psi(x(t))\) of a \(1\)-periodic orbit \(x\) of \(H_{\lambda}.\) If \(x\) is a fixed point, \(y\) lies in the region where \(\phi=0\) so \(F(y)=0.\) Otherwise, \(c^{\prime}(H(x))=:T\) is the period of some \(S^{1}\)-orbit in \(Y\) (in particular \(T>\lambda_{0}\)), and \(\phi(y)=\phi_{0}\). Thus
\[F(y(t))=-\int_{S^{1}}y^{*}(\phi\alpha)+\lambda_{0}\int_{S^{1}}\phi(R(y(t)))\ dt=-\phi(y)T+\lambda_{0}\phi_{0}=(\lambda_{0}-T)\phi_{0}<0. \tag{65}\]
The drop in filtration value for \(1\)-orbits \(y_{1},y_{2}\) arising at successive period values \(T_{1}<T_{2}\) is:
\[F(y_{1})-F(y_{2})=\phi_{0}(T_{2}-T_{1})>0. \tag{66}\]
Thus \(F(y(t))<0\) strictly decreases when \(c^{\prime}(H(x))\), hence \(H(x)\), increases.
### Period-bracketed symplectic cohomology
Our convention is that \(x_{-}\) appears in the output of the chain differential \(\partial(x_{+})\) if a Floer trajectory \(u\) flows from \(x_{-}\) to \(x_{+}\). As \(F(x_{-})\geq F(x_{+})\), this means \(\partial\) "increases the \(F\)-filtration", so it decreases \(H\), and decreases the period \(T=c^{\prime}(H)\). Restricting \(1\)-orbits by the condition \(F\geq A\) defines a subcomplex, \(CF^{*}_{[A,\infty)}(H_{\lambda})\), and a quotient complex
\[CF^{*}_{(A,B]}(H_{\lambda}):=CF^{*}_{[B,\infty)}(H_{\lambda})/CF^{*}_{[A, \infty)}(H_{\lambda}).\]
These fit into a short exact sequence, which induces a long exact sequence on cohomology,
\[\cdots\to HF^{*}_{[A,\infty)}(H_{\lambda})\to HF^{*}_{[B,\infty)}(H_{\lambda})\to HF ^{*}_{(A,B]}(H_{\lambda})\to HF^{*+1}_{[A,\infty)}(H_{\lambda})\to\cdots \tag{67}\]
**Lemma 8.6**.: _A Floer continuation map \(CF^{*}(H_{\lambda})\to CF^{*}(H_{\lambda^{\prime}})\) for \(\lambda\leq\lambda^{\prime}\), for a homotopy \(H_{s}:=c_{s}\circ H\), respects the filtration if \(H_{\lambda}\), \(H_{\lambda^{\prime}}\), \(H_{s}\) are linear in \(H\) over \(L_{0}\), and \(\partial_{s}c^{\prime}_{s}\leq 0\) over \(L_{0}\)._
Proof.: Abbreviate \(\rho=R(v(s,t))\), and \(\lambda_{0,s}:=c^{\prime}_{s}\) on \(L_{0}\). We have \(\phi^{\prime}(\rho)k(s,t)=\phi^{\prime}(\rho)\lambda_{0,s}\) everywhere. We now use an \(s\)-dependent filtration functional and an \(s\)-dependent filtration one-form,
\[F_{s}(x):=-\int_{S^{1}}x^{*}(\phi\alpha)+\lambda_{0,s}\int_{S^{1}}\phi(R(x(t)) )\,dt,\qquad\Omega_{\eta}(\xi):=-\int_{S^{1}}\eta(\xi,\partial_{t}x-\lambda_{0,s}\mathcal{R}_{B})\,dt.\]
Equation (60) becomes \(\eta(\cdot,k(s,t)\mathcal{R}_{B})=\eta(\cdot,\lambda_{0,s}\mathcal{R}_{B})\). However, in (61) a new term appears because \(\partial_{s}(F_{s}\circ u)=d_{u}F_{s}\cdot\partial_{s}u+(\partial_{s}F_{s})\circ u\), but it has the sign needed for the argument to work because:
\[(\partial_{s}F_{s})(u)=(\partial_{s}\lambda_{0,s})\cdot\int_{S^{1}}\phi(R(u)) \,dt\leq 0,\]
using \(\phi\geq 0\) and the assumption \(\partial_{s}\lambda_{0,s}\leq 0\) (see [14, Sec.6.5] for the proof in the convex setting).
Lemma 8.6 implies that continuation maps \(CF^{*}(H_{\lambda})\to CF^{*}(H_{\lambda^{\prime}})\) can be built for \(\lambda\leq\lambda^{\prime}\) in a way that preserves the filtration (see [10] for details). Thus, taking direct limits as \(\lambda\to\infty\) in (67),
\[\cdots\to SH^{*}_{[A,\infty)}(Y,\varphi)\to SH^{*}_{[B,\infty)}(Y,\varphi)\to SH^{*}_{(A,B]}(Y,\varphi)\to SH^{*+1}_{[A,\infty)}(Y,\varphi)\to\cdots\]
### Positive symplectic cohomology
**Definition 8.7**.: Abbreviate \(CF^{*}_{0}(H_{\lambda}):=CF^{*}_{[0,\infty)}(H_{\lambda})\subset CF^{*}(H_{\lambda})\) the subcomplex generated by the fixed locus \(\mathfrak{F}=\sqcup_{\alpha}\mathfrak{F}_{\alpha}\) (the constant orbits have filtration value zero). The **positive Floer cohomology**\(HF^{*}_{+}(H_{\lambda})=H^{*}(CF^{*}_{+}(H_{\lambda}))\) is the cohomology of the quotient complex \(CF^{*}_{+}(H_{\lambda}):=CF^{*}(H_{\lambda})/CF^{*}_{0}(H_{\lambda})\). The direct limit over continuation maps is **positive symplectic cohomology**,
\[SH^{*}_{+}(Y,\varphi,\omega):=\lim_{\lambda\to\infty}HF^{*}_{+}(H_{\lambda}).\]
_Remark 8.8_.: When (7) holds, the construction of the \(F\)-filtration is not possible. However one can still define, somewhat unsatisfactorily, \(SH^{*}_{+}(Y,\varphi):=\operatorname{Cone}(c^{*}:QH^{*}(Y)\to SH^{*}(Y,\varphi))\).
**Proposition 8.9**.: _For any symplectic \(\mathbb{C}^{*}\)-manifold satisfying (5)-(6), there is a long exact sequence_
\[\cdots\to QH^{*}(Y)\to SH^{*}(Y,\varphi)\to SH^{*}_{+}(Y,\varphi)\to QH^{*+1}( Y)\to\cdots\]
Proof.: The condition \(F\geq 0\) imposed on generators of \(CF^{*}_{0}(H_{\lambda})\) means that the generators are precisely the fixed points in \(\mathfrak{F}\), and that no Floer solution \(u\) (with ends on \(\mathfrak{F}\)) can have \(v=\Psi\circ u\) exit the region \(R\leq R^{\prime\prime}_{0}\), otherwise it would enter the region \(\phi^{\prime}>0\) (where also \(\phi>0\)) causing (62) to be strictly positive unless \(v\) is \(s\)-independent. Denote by \(H_{\lambda_{0}}\) the Hamiltonian of slope \(\lambda_{0}\) obtained by modifying \(H_{\lambda}\) to be linear in \(H\) of slope \(\lambda_{0}\) for \(H\geq H^{\prime\prime}_{0}\). Then the complexes \(CF^{*}_{0}(H_{\lambda})=CF^{*}(H_{\lambda_{0}})\) are equal as the Floer differentials count the same solutions, so their cohomologies agree and are isomorphic to \(QH^{*}(Y)\) by Proposition 6.1 (as \(\lambda_{0}\) is smaller than any non-zero \(S^{1}\)-period).
### Compatibility of the \(F\)-filtration with the product
We assume the reader is familiar with the construction of the pair-of-pants product on Floer cohomology (e.g. [13] and [1, Sec.3.2-3.3]). The product on \(SH^{*}\) is obtained by taking a direct limit of certain \(\mathbb{K}\)-linear pair-of-pants product maps \(CF^{*}(H_{\lambda})\otimes CF^{*}(H_{\lambda^{\prime}})\to CF^{*}(H_{\lambda^{\prime\prime}})\) for \(\lambda^{\prime\prime}\geq\lambda+\lambda^{\prime}\), and up to quasi-isomorphism this is independent of the choices made in the construction. One does not need these maps for all such choices of \(\lambda,\lambda^{\prime},\lambda^{\prime\prime}\); it suffices to have them for three cofinal families of such slopes, see [13]. We want to show that choices can be made so that this construction is compatible with the \(F\)-filtration. By [13, Sec.6.5] the \(F\)-filtration can be made consistent with continuation cylinders for monotone homotopies \(H_{s}\). So we can use two continuation cylinders associated to two monotone homotopies to change \(H_{\lambda},H_{\lambda^{\prime}}\) until they both equal some Hamiltonian \(H_{\mu}\), while preserving filtrations. By composing/gluing with
these continuation cylinders, it therefore suffices to build, for a cofinal family of \(H_{\mu}\), a pair-of-pants map \(CF^{*}(H_{\mu})\otimes CF^{*}(H_{\mu})\to CF^{*}(2H_{\mu})\) in a way that is compatible with the \(F\)-filtration.90
Footnote 90: we may also glue/compose with a Floer continuation isomorphism \(CF^{*}(2H_{\mu})\to CF^{*}(H_{2\mu})\) on the output.
We will use the model of the pair-of-pants \(P\) described by Abbondandolo-Schwarz in [1, Sec.3.2], which admits a holomorphic \(2:1\) branched covering \(P\to\mathbb{R}\times S^{1}\) of the cylinder. One can think of \(P=P_{-}\cup P_{+,1}\cup P_{+,2}\) as the union of three half-cylinders such that the positive boundary of \(P_{-}\) is the figure-eight loop consisting of the two negative boundaries of \(P_{+,1}\), \(P_{+,2}\). Under the covering, \(P_{-}\) covers \((-\infty,0]\times S^{1}\) twice, whereas each \(P_{+,i}\) covers \([0,\infty)\times S^{1}\) once. As explained in [1, Sec.3.2], the Floer equation for \(u:P\to Y\) corresponds to the usual equation \(\partial_{s}u+I(\partial_{t}u-X_{H_{\mu}})=0\) on the three half-infinite cylinders except that on \(P_{-}\) the time coordinate is parameterised by \(\mathbb{R}/2\mathbb{Z}\) rather than by \(S^{1}=\mathbb{R}/\mathbb{Z}\), so it can be turned into \(\partial_{s}u+I(\partial_{t}u-X_{2H_{\mu}})=0\) by reparametrising time. We remark that the first term in the integral of the filtration \(1\)-form \(\int\Omega_{\eta}(u)(\partial_{s}u)\,ds=-\int\int\eta(\partial_{s}u,\partial_{t}u-X_{h})\,dt\wedge ds\) over a cylinder is invariant under conformal rescaling of \(z=s+it\). The second term is not, but it will be invariant under simultaneous rescaling of \((s,t)\) if we rescale \(h\). So for \(P_{-}\) we either use \(F=F_{h}\) as defined for \(h\) but using time interval \(t\in[0,2]\), or we use \(F=F_{2h}\) as defined for \(2h\) with \(t\in[0,1]\).
**Lemma 8.10**.: _The \(F\)-filtration is respected by the pair-of-pants Floer solutions._
Proof.: Let \(x_{-},x_{+,1},x_{+,2}\) be the asymptotic orbits at the ends of \(u\), and denote by \(u_{-},u_{+,1},u_{+,2}\) the restrictions of \(u:P\to Y\) to the three half-infinite cylinders \(P_{-}\), \(P_{+,1}\), \(P_{+,2}\). Integrating the filtration \(1\)-form over the projection \(v=\Psi\circ u:P\to B\) reduces to computing three separate integrals for Floer solutions \(u_{-}\), \(u_{+,1}\) and \(u_{+,2}\) for Hamiltonians \(2H_{\mu}\), \(H_{\mu}\) and \(H_{\mu}\) respectively, over three half-infinite cylinders. Those three integrals are non-positive by Equation (62), so
\[F(x_{-})-F(u_{-}(0,\cdot))\geq 0,\quad F(u_{+,1}(0,\cdot))-F(x_{+,1})\geq 0, \quad F(u_{+,2}(0,\cdot))-F(x_{+,2})\geq 0.\]
Considering what happens along the figure-eight in the middle of the pair-of-pants:
\[F(u_{-}(0,\cdot))=F(u_{+,1}(0,\cdot))+F(u_{+,2}(0,\cdot)).\]
Combining the two equations yields the required inequality \(F(x_{-})\geq F(x_{+,1})+F(x_{+,2})\).
### Simplification of the construction when \(H\) is a function of \(R\)
Suppose that on \(Y^{\rm out}\) the moment map \(H\) can be written as a function of \(R\circ\Psi\),
\[H=\rho(R\circ\Psi), \tag{68}\]
for some function \(\rho:[R_{0},\infty)\to\mathbb{R}\). This does not appear to hold often, as \(\|X_{\mathbb{R}_{+}}\|\) is typically not constant on a level set of \(H\). However, it does hold for \(T^{*}X\) for a projective variety \(X\) (Appendix B.1) and for negative vector bundles [14, Sec.11.2]. In this case, one can considerably simplify the construction because level sets of \(H\) map into level sets of \(R.\) Note \(\rho\) is strictly increasing as \(\Psi_{*}\nabla H=\Psi_{*}X_{\mathbb{R}_{+}}=X_{\mathbb{R}_{+},B}=\nabla R\). In this setup, it suffices that \(c(H)\) satisfies conditions (1)-(3) in Section 8.1. Define
\[h(R):=\int_{0}^{R}c^{\prime}(\rho(r))\,dr,\]
so \(h^{\prime}(R)=c^{\prime}(\rho(R))\), and redefine the filtration functional \(F:\mathcal{L}B\to\mathbb{R}\) as
\[F(x):=-\int_{S^{1}}x^{*}(\phi\alpha)+\lambda_{0}\int_{S^{1}}\int_{0}^{R(x(t))} \phi^{\prime}(\tau)h^{\prime}(\tau)\,d\tau\,dt.\]
Also, redefine the \(1\)-form \(\Omega_{\eta}\) on \(\mathcal{L}B=C^{\infty}(S^{1},B)\) using \(X_{h}=h^{\prime}(R)\mathcal{R}_{B}\):
\[\Omega_{\eta}:T_{x}\mathcal{L}B=C^{\infty}(S^{1},x^{*}TB)\to\mathbb{R},\ \ \xi\mapsto-\!\int\!\eta(\xi,\partial_{t}x-X_{h})\,dt. \tag{69}\]
By [14, Thm.6.2(1)], we again have \(dF(x)(\xi)=\Omega_{\eta}(x)(\xi).\) Abbreviating \(v=v(s,t)\), and using \(k(s,t)=c^{\prime}(H\circ u)=c^{\prime}(\rho(R(v)))=h^{\prime}(R(v))\), (60) becomes
\[\eta(\cdot,k(s,t)\mathcal{R}_{B})=\eta(\cdot,X_{h}), \tag{70}\]
irrespective of our choice of \(\phi\). Thus, we can pick \(\phi\) to just satisfy the two conditions: \(\phi=0\) for \(R\leq R_{0}^{\prime\prime}\), and \(\phi^{\prime}>0\) for \(R>R_{0}^{\prime\prime}\) (similarly to [14, Sec.6.3]). This construction (when (68) holds)
also applies in the more complicated filtration setting of [14], where a very complicated \(\phi\)-function would otherwise need to be constructed. Corollary 8.5 still holds, but the proof needs to be slightly modified: in (65) we have \(F(y)=\chi(R(y))\) where
\[\chi(R):=-\phi(R)h^{\prime}(R)+\int_{0}^{R}\phi^{\prime}(\tau)h^{\prime}(\tau)d\tau,\]
and \(\chi^{\prime}(R)=-\phi(R)h^{\prime\prime}(R)\leq 0\), so \(\chi\) strictly decreases when evaluated at \(R\)-values of projected \(1\)-orbits. Also, (62) (and (64)) imply that \(F(x_{-})-F(x_{+})>0\) unless \(\partial_{s}v\equiv 0\), or \(v\) lies in the region \(R\leq R_{0}^{\prime\prime}\).
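The identity \(\chi^{\prime}(R)=-\phi(R)h^{\prime\prime}(R)\) used above is a one-line cancellation; a throwaway symbolic check (our sketch, using sympy):

```python
import sympy as sp

R, tau = sp.symbols('R tau', positive=True)
phi, h = sp.Function('phi'), sp.Function('h')

# chi(R) = -phi(R) h'(R) + \int_0^R phi'(tau) h'(tau) d tau
chi = -phi(R) * h(R).diff(R) \
      + sp.Integral(phi(tau).diff(tau) * h(tau).diff(tau), (tau, 0, R))
dchi = chi.diff(R).doit()
# The phi'(R) h'(R) terms cancel, leaving chi'(R) = -phi(R) h''(R):
print(sp.simplify(dchi + phi(R) * h(R).diff(R, 2)))  # -> 0
```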
## 9. Example: Semiprojective toric manifolds
We now come back to Section 1.10. A **semiprojective toric manifold** is a non-compact toric manifold \(Y\) for which the affinisation map \(\pi:Y\to Y_{0}:=\operatorname{Spec}(H^{0}(Y,\mathcal{O}_{Y}))\) is projective, and \(Y\) has at least one torus-fixed point. By [13, Cor.2.7] these toric varieties can be described in terms of certain GIT-quotients, or in terms of certain types of fans \(\Sigma\). By definition, \(Y\) contains an algebraic torus \(\mathbb{T}=(\mathbb{C}^{*})^{n}\) as a dense open subset, and the action of \(\mathbb{T}\) on itself extends to \(Y\).
Recall a general fact about toric varieties: if \(v\in|\Sigma|:=\) (union of the cones of the fan \(\Sigma\) of \(Y\))\(\subset\mathbb{Z}^{n}\), the \(1\)-parameter subgroup \(\varphi_{v}:\mathbb{C}^{*}\to\mathbb{T}\) associated to \(v\) extends to a map \(\varphi_{v}:\mathbb{C}\to Y\) (otherwise it does not). So for \(v\in|\Sigma|\), the \(\mathbb{C}^{*}\)-action \(\varphi_{v}\) is contracting: \(\varphi_{v}(t)(y)\) converges when \(t\to 0\), for all \(y\in Y\).
For any \(\varphi_{v}\), with \(v\in|\Sigma|\), we can apply (5) to the \(\mathbb{C}^{*}\)-equivariant map \(\pi:Y\to Y_{0}\) using the action that \(\varphi_{v}\) determines on both \(Y\) and \(Y_{0}\). This yields a globally defined \(\Psi\)-map, \(\Psi:Y\to\mathbb{C}^{N}\), which ensures the necessary maximum principle needed to construct \(Q_{\varphi_{v}}\in QH^{*}(Y)\) by Theorem 6.17.
Our moment map \(H:Y\to\mathbb{R}\) in (1) arises as \(H(y)=\langle\mu(y),v\rangle\) for \(v\in|\Sigma|\), where \(\mu:Y\to\mathbb{R}^{n}\) is the moment map of \(\mathbb{T}\), described in [13, Eq.(7)] (in particular, the image of \(\mu\) is the "moment polytope" \(\Delta\), although more accurately we should call it an unbounded rational polyhedron [13, p.498]).
For \(v\in|\Sigma|\), [13, p.503] describes the Bialynicki-Birula decomposition, which corresponds to the stratification of \(Y\) into the stable manifolds \(U_{\alpha}\) of \(\mathfrak{F}_{\alpha}\) from Remark 2.15. There is an explicit description91 of the fixed components \(\mathfrak{F}_{\alpha}\) for the \(\mathbb{C}^{*}\)-action \(\varphi_{v}\) in [13, p.503]; in particular, for generic \(v\in|\Sigma|\) all \(\mathfrak{F}_{\alpha}\) are just points.
Footnote 91: the \(\mathfrak{F}_{\alpha}\) are the orbit closures associated to the cones \(\sigma_{i}\) of \(\Sigma\) that are minimal with respect to the property \(v\in\mathbb{R}\sigma_{i}\). For example, if \(v\) lies in the interior of a ray of \(\Sigma\), then the fixed locus of \(\varphi_{v}\) is the toric divisor \(D_{i}\) associated to that ray.
The core is the inverse image of the bounded faces of \(\Delta\) and it arises as a fibre \(\operatorname{Core}(Y)=\pi^{-1}(0)\) by [13, Thm.3.2]. As \(\pi^{-1}(0)\) is cut out by analytic equations, \(Y\) deformation retracts onto \(\operatorname{Core}(Y)\) by Proposition 2.14. Hausel-Sturmfels show in [13, Prop.2.11] that the presentation of the ordinary cohomology is just like in the case of projective toric varieties (e.g. compare [14, Sec.3A]):
\[\mathbb{Z}[x_{1},\ldots,x_{r}]/(\text{Linear relations},\text{Stanley- Reisner relations})\cong H^{*}(Y;\mathbb{Z}),\;x_{i}\mapsto D_{i},\]
which is determined combinatorially from the moment polytope \(\Delta\), where \(D_{i}\) are the toric divisors (recall these correspond to the rays of \(\Sigma\)). As a consequence of Theorem 1.1, for each \(v\in|\Sigma|\),
\[c^{*}:QH^{*}(Y)\to SH^{*}(Y,\varphi_{v})\]
is surjective and corresponds to localisation at \(Q_{\varphi_{v}}\). Thus \(SH^{*}(Y,\varphi_{v})\) can in general yield different algebras, depending on \(v\in|\Sigma|.\) We now prove the quantum version of that presentation, and compare it to \(SH^{*}(Y,\varphi_{v})\).
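As a toy illustration of the cohomology presentation above (our example, not from [13]): for the blow-up of \(\mathbb{C}^{2}\) at the origin, a semiprojective toric surface with rays \(b_{1}=(1,0)\), \(b_{2}=(0,1)\), \(b_{3}=(1,1)\), a Gröbner-basis computation reduces the ring to \(\mathbb{K}[x_{3}]/(x_{3}^{2})\), matching \(H^{*}(\mathbb{C}P^{1})\), the cohomology of the exceptional curve.

```python
from sympy import symbols, groebner

# Blow-up of C^2 at the origin: rays b1=(1,0), b2=(0,1), b3=(1,1).
x1, x2, x3 = symbols('x1 x2 x3')
linear = [x1 + x3, x2 + x3]   # sum_i <b_i, u> x_i for u = e1*, e2*
sr = [x1 * x2]                # {b1, b2} spans no cone of the fan
G = groebner(linear + sr, x1, x2, x3, order='lex')
print(list(G.exprs))  # [x1 + x3, x2 + x3, x3**2]: the ring is K[x3]/(x3^2)
```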
**Proposition 9.1**.: _If the semiprojective toric manifold \(Y\) is monotone, then after an \(SL(n,\mathbb{Z})\)-transformation applied to the fan \(\Sigma\) we obtain the presentations_
\[\mathbb{K}[x_{1},\ldots,x_{r}]/\mathcal{J}\cong QH^{*}(Y),\;x_{i}\mapsto D_{i},\]
\[\mathbb{K}[x_{1},\ldots,x_{r},x^{\pm v}]/\mathcal{J}\cong SH^{*}(Y,\varphi_{v} ),\;x_{i}\mapsto c^{*}(D_{i}),\]
_where \(r\geq n\) is the number of toric divisors, \(x^{v}:=x_{1}^{v_{1}}x_{2}^{v_{2}}\cdots x_{n}^{v_{n}}\), and \(\mathcal{J}\) is the ideal generated by the linear relations and the quantum Stanley-Reisner relations (combinatorially determined by \(\Delta\))._
_If all \(v_{i}>0\) then \(SH^{*}(Y,\varphi_{v})\cong\operatorname{Jac}(W)\) recovers the Jacobian ring associated to the superpotential \(W\) for \(\Delta\) (compare Example 1.11), in particular it is independent of \(v\)._
Proof.: This will follow as in [11, Sec.3] provided we show that \(Q_{\varphi_{-v}}\) is well-defined in \(SH^{*}(Y,\varphi_{v})\). We showed above that we can build the classes \(Q_{\varphi_{i}}\in QH^{*}(Y)\), where \(\varphi_{i}=\varphi_{b_{i}}\) and \(b_{i}\) are the rays of \(\Sigma\). So we need to show that each class \(Q_{\varphi_{i}}\in QH^{*}(Y)\) is invertible in \(SH^{*}(Y,\varphi_{v})\) (provided \(v\) is sufficiently generic). Observe that the affinisation map \(\pi:Y\to Y_{0}\) of the semiprojective variety \(Y\) is \(\mathbb{T}\)-equivariant, so any \(v\in|\Sigma|\) yields a contracting \(\mathbb{C}^{*}\)-action \(\varphi=\varphi_{v}\) which determines an \(\mathbb{N}\)-grading for the coordinate ring, \(\mathbb{C}[Y_{0}]=\bigoplus_{n\in\mathbb{N}}\mathbb{C}[Y_{0}]_{n}\) with \(\mathbb{C}[Y_{0}]_{0}=\mathbb{C}\) (compare with Definition 7.1). It remains to deal with the issue of constructing inverses of the \(Q_{\varphi_{i}}\) in \(SH^{*}(Y,\varphi_{v})\).
Any other \(\mathbb{C}^{*}\)-action \(\psi=\varphi_{v^{\prime}}\), for \(v^{\prime}\in|\Sigma|\), commutes with \(\varphi\), so92 preserves \(\mathbb{C}[Y_{0}]_{n}\). Thus, we obtain an \(\mathbb{N}\)-bigrading
Footnote 92: if \(f\in\mathbb{C}[Y_{0}]_{n}\), then \(\varphi_{t}^{*}f=t^{n}f\), so \(\varphi_{t}^{*}\psi_{s}^{*}f=\psi_{s}^{*}\varphi_{t}^{*}f=\psi_{s}^{*}(t^{n}f) =t^{n}\psi_{s}^{*}f\), thus \(\psi_{s}^{*}f\in\mathbb{C}[Y_{0}]_{n}\).
\[\mathbb{C}[Y_{0}]=\oplus_{n,m}\mathbb{C}[Y_{0}]_{n,m}\]
such that \(\varphi_{t}^{*}\) acts as \(t^{n}\) on \(\mathbb{C}[Y_{0}]_{n}=\oplus_{m}\mathbb{C}[Y_{0}]_{n,m}\), and \(\psi_{s}^{*}\) acts as \(s^{m}\) on \(\mathbb{C}[Y_{0}]_{n,m}\). We build a map \(\Psi:Y\to\mathbb{C}^{N}\) for \(\varphi\) as in Example 1.3 by choosing homogeneous generators \(f_{i}\) for \(\mathbb{C}[Y_{0}]\), so that each \(f_{i}\) lies in some \(\mathbb{C}[Y_{0}]_{n,m}\). The moment maps for \(\varphi\) and \(\psi\) are
\[H=\langle\mu,v\rangle=\sum v_{i}H_{i}\qquad\text{ and }\qquad K=\langle\mu,v^{ \prime}\rangle=\sum v_{i}^{\prime}H_{i},\]
where \(H_{i}\) is the \(i\)-th coordinate of \(\mu:Y\to\mathbb{R}^{n}\). By construction,
\[\Psi_{*}X_{H}=\oplus\mathcal{R}_{i}\qquad\text{ and }\qquad\Psi_{*}X_{K}= \oplus k_{i}\mathcal{R}_{i},\]
where \(\mathcal{R}_{i}\) is the Reeb field in the \(i\)-th \(\mathbb{C}\) factor of \(\mathbb{C}^{N}\), and \(k_{i}\geq 1\) are constants that arise because the weights of the \(\Psi\)-action may not coincide with those for \(\varphi\).
As in the proof of [11, Lem.A.15], we may apply an \(SL(n,\mathbb{Z})\)-transformation to the fan \(\Sigma\) so that a top-dimensional cone becomes the positive quadrant in \(\mathbb{R}^{n}\). Then choose any \(v\in|\Sigma|\) with \(v_{i}>0\). In that case, for large enough slopes \(\lambda>0\) we have \(\lambda v_{i}\geq v_{i}^{\prime}\) for all \(i\), so93\(\lambda H=\sum\lambda v_{i}H_{i}\geq K=\sum v_{i}^{\prime}H_{i}\).
Footnote 93: we may assume that the \(H_{i}\) are positive: the standard basis of \(\mathbb{R}^{n}\) belongs to that top-dimensional cone, and therefore belongs to \(|\Sigma|\), and thus each basis element gives rise to a contracting \(\mathbb{C}^{*}\)-action, whose Hamiltonian must be bounded below, and by adding a constant we may assume it is positive.
Recall \(SH^{*}(Y,\varphi)=\varinjlim HF^{*}(\lambda H)\) for slopes \(\lambda\to\infty\). By Theorem 6.17, the action of the class \(Q_{\varphi_{v^{\prime}}}\) corresponds in Floer theory to an isomorphism \(\mathcal{S}_{\varphi_{v^{\prime}}}:HF^{*}(\lambda H)\cong HF^{*}(\lambda H-K)\) (omitting grading shifts), see [11, Sec.2B]. As the \(\mathcal{S}_{\varphi_{v^{\prime}}}\) are compatible with continuation maps, we obtain
\[\mathcal{S}_{\varphi_{v^{\prime}}}:\varinjlim HF^{*}(\lambda H)\cong\varinjlim HF^{*}(\lambda H-K). \tag{71}\]
We need the analogue of [11, Thm.2.6]: we want an isomorphism \(\varinjlim HF^{*}(\lambda H-K)\cong SH^{*}(Y,\varphi_{v})=\varinjlim HF^{*}( \lambda H)\) induced by continuation maps. In view of what it means to build an isomorphism between two direct limits, it suffices94 to construct continuation maps
Footnote 94: using that the composite of continuations maps is a continuation map, and that continuation maps already arising in a direct limit construction will induce the identity map on the direct limit.
\[HF^{*}(\lambda_{1}H-K)\to HF^{*}(\lambda_{2}H)\to HF^{*}(\lambda_{3}H-K)\to HF^{ *}(\lambda_{4}H) \tag{72}\]
for suitable95\(\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\leq\lambda_{4}\). More precisely, we want \(\lambda_{1}v_{i}-v_{i}^{\prime}\leq\lambda_{2}v_{i}\leq\lambda_{3}v_{i}-v_{i}^ {\prime}\leq\lambda_{4}v_{i}\) and \(\lambda_{1}-k_{i}\leq\lambda_{2}\leq\lambda_{3}-k_{i}\leq\lambda_{4}\). This will allow us to construct monotone homotopies (i.e. homotopies \(H_{s}\) with \(\partial_{s}H_{s}\leq 0\)) that are also monotone in each \(\mathbb{C}\)-factor of \(\mathbb{C}^{N}\) after projection via \(\Psi\).96 We need to ensure that the monotone homotopy satisfies the maximum principle, i.e. that continuation solutions cannot travel arbitrarily far to infinity without contradicting the \(F\)-filtration. In this setup, it is convenient to consider the \(F\)-filtration computed for each factor \(\mathbb{C}\) separately. This ensures that in each coordinate of \(\mathbb{C}^{N}\) the projected Floer continuation solution \(v=\Psi(u)\) satisfies the maximum principle. Thus, the first two maps in (72) in the direct limit induce the identity map on \(\varinjlim HF^{*}(\lambda H-K)\),
whereas the last two maps induce the identity on \(\varinjlim HF^{*}(\lambda H)\). This ensures that we defined a map \(\varinjlim HF^{*}(\lambda H-K)\to\varinjlim HF^{*}(\lambda H)\) by continuation maps which is both injective and surjective, and thus an isomorphism, as required. The same argument can be run for \(+K\) in place of \(-K\), which gives the inverse of (71). So the map in (71) defines an isomorphism on \(SH^{*}(Y,\varphi_{v})\) called \(\mathcal{R}_{\varphi_{v^{\prime}}}\) (compare Theorem 6.17) which corresponds to pair-of-pants product with \(c^{*}Q_{\varphi_{v^{\prime}}}\), and we showed that (71) admits an inverse, so \(\mathcal{R}_{\varphi_{-v^{\prime}}}=\mathcal{R}_{\varphi_{v^{\prime}}}^{-1}\) exists as required (whereas \(Q_{\varphi_{-v^{\prime}}}\) is typically not defined in \(QH^{*}(Y)\)).
Finally, we explain why \(x^{v}\) is the \(Q_{\varphi_{v}}\) class. This follows from a fact mentioned in Theorem 6.17: the association \(\varphi\mapsto Q_{\varphi}\) is a group homomorphism, so
\[\varphi_{v}=\varphi_{e_{1}}^{v_{1}}\circ\cdots\circ\varphi_{e_{n}}^{v_{n}}\mapsto x_{1}^{v_{1}}\cdots x_{n}^{v_{n}}\in QH^{*}(Y),\]
using that the standard basis \(e_{i}\) are rays, and that \(Q_{\varphi_{e_{i}}}=D_{i}\) are toric divisors [16, Lem.4.7], which in turn correspond to \(x_{i}\) in the presentation for \(QH^{*}(Y)\).
**Proposition 9.2**.: _97 For any semiprojective toric manifold \(Y\), for almost any \(v\in|\Sigma|\), \(SH^{*}(Y,\varphi_{v})\) is the localisation of \(QH^{*}(Y)\) at all toric divisors \(D_{i}\), in particular it is independent of \(v\)._
Footnote 97: Subject to mild technical assumptions on \(Y\) from Remark 5.7 so that Quantum and Floer theory are defined.
Proof.: This follows by the same continuation argument as above, for \(v_{i}>0\) for all \(i\). As we could have picked any full-dimensional cone in that argument, this proves the claim for almost any \(v\in|\Sigma|\).
## 10. Example: the Slodowy variety \(\mathcal{S}_{32}\)
In [10] we will discuss many examples, in particular Slodowy varieties, out of which we will choose one here for succinct illustration. The partition \((3,2)\) of \(n=5\) defines a standard nilpotent \(5\times 5\) Jordan canonical form \(e\), with Jordan block sizes \(3\) and \(2\). This determines a standard \(\mathfrak{sl}_{2}\)-triple \((e,f,h)\) in \(\mathfrak{sl}_{5}\), where \(h\) is a diagonal matrix with eigenvalues \(h_{i}\), in our case \(2,0,-2,1,-1\)[10, Sec.5.2.1].98 This determines a Slodowy slice
Footnote 98: There is a typo in that thesis: the diagonal entries of \(h_{k}\) are \(h_{0}^{k},\ldots,h_{k-1}^{k}\) starting the numbering at \(i=0\).
\[S_{e}=e+\ker(\mathrm{ad}f)\subset\mathfrak{sl}_{5}.\]
The nilpotent cone \(\mathcal{N}=\{5\times 5\) nilpotent matrices\(\}\subset\mathfrak{sl}_{5}\) determines the Slodowy variety \(\mathcal{S}_{e}:=S_{e}\cap\mathcal{N}\). Now consider the Springer resolution \(\nu:\widetilde{\mathcal{N}}\to\mathcal{N}\), whose fibre \(\mathcal{B}^{x}\) over \(x\in\mathcal{N}\) consists of all complete flags \(F=\{0\subset F_{1}\subset\cdots\subset F_{4}\subset\mathbb{C}^{5}\}\) satisfying \(xF_{i}\subset F_{i-1}\). In fact, \(\widetilde{\mathcal{N}}\cong T^{*}\mathcal{B}\) where \(\mathcal{B}\) is the complete flag variety for \(\mathbb{C}^{5}\). The Springer resolution restricts to a resolution
\[Y:=\widetilde{S}_{e}=\nu^{-1}(\mathcal{S}_{e})\overset{\nu}{\to}\mathcal{S}_ {e},\]
with \(\dim_{\mathbb{C}}Y=\dim_{\mathbb{C}}\mathcal{S}_{e}=4\). It admits the Kazhdan \(\mathbb{C}^{*}\)-action: \(t\cdot(x,F)=(t^{2}\mathrm{Ad}(t^{-h})x,t^{-h}F)\), where \(t^{-h}\) is the diagonal matrix with entries \(t^{-h_{i}}\), thus \(t^{2}\mathrm{Ad}(t^{-h})x\) is explicitly \(t^{2+h_{j}-h_{i}}x_{ij}\) on the entries \(x_{ij}\) of \(x\). It turns out that \(Y\) is a weight-2 CSR, so the Maslov index \(\mu=\dim_{\mathbb{C}}Y=4\).
In [10] we show that the fixed components \(\mathfrak{F}_{\alpha}\) are all points in this case (thus the moment map \(H\) of the \(S^{1}\subset\mathbb{C}^{*}\) action is a Morse function) and we compute their weights:
\(\mathcal{F}_{big}\): \((3,3,-1,-1)\); \(\mathcal{F}_{p},\mathcal{F}_{w}\): \((5,3,-3,-1)\); \(\mathcal{F}_{j}^{\prime},\mathcal{F}_{y}^{\prime}\): \((3,3,-1,-1)\); \(\mathcal{F}_{j}^{3},\mathcal{F}_{y}^{3},\mathcal{F}_{j}^{1},\mathcal{F}_{y}^{1}\): \((3,1,1,-1)\); \(\mathcal{F}_{min}\): \((1,1,1,1)\).
Via a simple computer program we calculate the indices \(\mu_{\lambda}(\mathfrak{F}_{\alpha})\) at generic slopes \(\lambda=T^{+}\) slightly above \(T=0\), \(1/5\), \(1/3\), \(2/5\), \(3/5\), \(2/3\), \(4/5\) and \(1\). These are the only periods when the filtration can possibly change, due to Proposition 6.4. The table below shows how these indices vary: each number indicates a copy of \(\mathbb{K}\) placed in the indicated degree. At the start we get the Morse-Bott indices \(\mu_{\alpha}\) at the \(10\) fixed points, confirming (28): \(H^{*}(Y)=\mathbb{K}_{4}^{5}\oplus\mathbb{K}_{2}^{4}\oplus\mathbb{K}_{0}\).
\[\begin{array}{ccccccccccc}&\mathcal{F}_{big}&\mathcal{F}_{p}&\mathcal{F}_{w}&\mathcal{F}^{\prime}_{j}&\mathcal{F}^{\prime}_{y}&\mathcal{F}^{3}_{j}&\mathcal{F}^{3}_{y}&\mathcal{F}^{1}_{j}&\mathcal{F}^{1}_{y}&\mathcal{F}_{min}\\ H^{*}(Y):&4&4&4&4&4&2&2&2&2&0\\ HF^{*}(\frac{1}{5}^{+}H):&4&2&2&4&4&2&2&2&2&0\\ HF^{*}(\frac{1}{3}^{+}H):&0&2&2&0&0&0&0&0&0&0\\ HF^{*}(\frac{2}{5}^{+}H):&0&0&0&0&0&0&0&0&0&0\\ HF^{*}(\frac{3}{5}^{+}H):&0&-2&-2&0&0&0&0&0&0&0\\ HF^{*}(\frac{2}{3}^{+}H):&-4&-2&-2&-4&-4&-2&-2&-2&-2&0\\ HF^{*}(\frac{4}{5}^{+}H):&-4&-4&-4&-4&-4&-2&-2&-2&-2&0\\ HF^{*}(1^{+}H)\cong H^{*}(Y)[8]:&-4&-4&-4&-4&-4&-6&-6&-6&-6&-8\end{array}\]
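A minimal version of that computer program (our sketch; the weight data is the list above, and \(\mathbb{W}(x)=2\lfloor x\rfloor+1\) at the generic, non-integer values that occur) regenerates the rows of the table:

```python
import math
from fractions import Fraction as Fr

# Weights of the linearised C^*-action at the ten fixed points (see above).
weights = {
    "F_big": (3, 3, -1, -1), "F_p": (5, 3, -3, -1), "F_w": (5, 3, -3, -1),
    "F_j'": (3, 3, -1, -1),  "F_y'": (3, 3, -1, -1),
    "F_j3": (3, 1, 1, -1),   "F_y3": (3, 1, 1, -1),
    "F_j1": (3, 1, 1, -1),   "F_y1": (3, 1, 1, -1),
    "F_min": (1, 1, 1, 1),
}

def W(x):                        # W(x) = 2*floor(x) + 1 for non-integer x
    assert x.denominator > 1     # generic slope: lambda*k is never integral
    return 2 * math.floor(x) + 1

def mu(lam, wts):                # mu_lambda = dim_C M - sum_k W(lam * k)
    return 4 - sum(W(lam * k) for k in wts)

eps = Fr(1, 1000)                # "T^+": a generic slope slightly above T
for T in (Fr(0), Fr(1,5), Fr(1,3), Fr(2,5), Fr(3,5), Fr(2,3), Fr(4,5), Fr(1)):
    print(f"{str(T):>4}^+ :", [mu(T + eps, w) for w in weights.values()])
# The T = 0 row gives the Morse-Bott indices, i.e. the H^*(Y) row above.
```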
More rank considerations, using Corollary 1.23, imply the following:
\[\operatorname{rk}(\mathcal{F}^{\varphi}_{1/5})_{4}\geq 2,\quad\mathcal{F}^{ \varphi}_{1/3}\supset H^{4}(Y),\quad\operatorname{rk}(\mathcal{F}^{\varphi}_{ 1/3})_{2}\geq 2,\quad\mathcal{F}^{\varphi}_{2/5}\supset H^{2}(Y)\oplus H^{4}(Y),\quad\mathcal{F}^{\varphi}_{1}=H^{*}(Y).\]
Corollary 6.15 refines this: \(\mathcal{F}^{\varphi}_{1/5}\subset\mathbb{K}\mathcal{F}_{p}\oplus\mathbb{K}\mathcal{F}_{w}\), and the summand \(\mathbb{K}\mathcal{F}_{min}\) survives at least until time \(1^{-}\). Thus:
\[\mathcal{F}^{\varphi}_{1/5}=\mathbb{K}\mathcal{F}_{p}\oplus\mathbb{K} \mathcal{F}_{w},\quad\mathcal{F}^{\varphi}_{1/3}=\mathbb{K}^{r}_{2}\oplus H^{4 }(Y),\quad\mathcal{F}^{\varphi}_{2/5}=\mathcal{F}^{\varphi}_{1^{-}}=H^{2}(Y) \oplus H^{4}(Y),\quad\mathcal{F}^{\varphi}_{1}=H^{*}(Y).\]
The summand \(\mathbb{K}^{r}_{2}\) of rank \(r\in\{2,3,4\}\) is unknown because higher order \(T\) contributions to \(c^{*}_{1/5^{+}}\) and \(c^{*}_{1/3^{+}}\) may allow \(\mathcal{F}^{3}_{j},\mathcal{F}^{3}_{y},\mathcal{F}^{1}_{j},\mathcal{F}^{1}_{y}\) to have non-trivial images in the summand \((H^{*}(\mathcal{F}_{p})\oplus H^{*}(\mathcal{F}_{w}))[-2]\).
_Remark 10.1_.: For the sake of comparison, we also show below the \(E_{1}\)-pages of the Morse-Bott-Floer spectral sequences that converge to symplectic cohomology \(SH^{*}(Y,\varphi)=0\) and to \(S^{1}\)-equivariant symplectic cohomology \(ESH^{*}(Y,\varphi)=0\), respectively. [11] will discuss these in more detail. In the main columns, each dot contributes \(1\) to the rank. In the first picture, the arrows indicate how edge differentials on the \(E_{1}\) and higher pages must kill \(H^{*}(Y)\). In the second picture again all classes must cancel, so arrows go from odd classes to even classes in degree one higher. The \(0\)-th column \(H^{*}(Y;\mathbb{K})\otimes_{\mathbb{K}}\mathbb{F}\) consists of copies of the \(\mathbb{K}[u]\)-module \(\mathbb{F}\) mentioned in Section 1.11, so the smaller dots indicate \(u^{-j}\cdot(\text{generator})\) for \(j\geq 1\). The other columns are substantially different in the equivariant case because the \(S^{1}\)-reparametrisation action on \(1\)-orbits means that instead of the ordinary cohomology \(H^{*}(B_{k/m})[-\mu(B_{p,\beta})]\) of the slices from Equation (13) (the Morse-Bott manifolds of period-\(k/m\)\(S^{1}\)-orbits), we have
\[EH^{*}(B_{p,\beta})[-\mu(B_{p,\beta})]\cong H^{*}\left(B_{p,\beta}/S^{1}\right) [-1-\mu(B_{p,\beta})]. \tag{73}\]
We explain in [11] that in many examples, including the current example, (73) lies in odd degrees and \(EH^{*}(Y)\) lies in even degrees; so the spectral sequence for \(ESH^{*}_{+}(Y,\varphi)\) collapses and we can read off \(H^{*}(Y)\otimes\mathbb{F}\) from it; in fact one can also recover \(SH^{*}_{+}(Y)\cong\ker(u:ESH^{*}_{+}(Y)\to ESH^{*}_{+}(Y))\).
[Figures: the \(E_{1}\)-pages of the Morse-Bott-Floer spectral sequences converging to \(SH^{*}(Y,\varphi)=0\) and to \(ESH^{*}(Y,\varphi)=0\), as described in Remark 10.1.]
## Appendix A Grading for Hamiltonian Floer theory
For gradings, we follow the conventions in [11, App.C], and we refer the reader to [10, 12] for a detailed discussion and references about the Robbin-Salamon index [13].
Gradings of non-degenerate \(1\)-periodic Hamiltonian orbits99 are defined using the Conley-Zehnder index. This is a \(\mathbb{Z}\)-valued index defined for certain non-degenerate paths of symplectic matrices. However, for Hamiltonians that have degenerate orbits (e.g. autonomous Hamiltonians), one uses the more general **Robbin-Salamon index**[13]. The latter is a \(\frac{1}{2}\mathbb{Z}\)-valued index defined for _any_ continuous path \([0,1]\to Sp(\mathbb{C}^{n},\Omega_{0})\) of real symplectic matrices in \(\mathbb{C}^{n}\) with the standard real symplectic structure \(\Omega_{0}\). We recall here the main properties that we will need. Denote by \(\psi_{1}\circ\psi_{2}:\mathbb{C}^{n}\oplus\mathbb{C}^{m}\to\mathbb{C}^{n} \oplus\mathbb{C}^{m}\) the direct sum of two symplectic matrices \(\psi_{1}:\mathbb{C}^{n}\to\mathbb{C}^{n}\) and \(\psi_{2}:\mathbb{C}^{m}\to\mathbb{C}^{m}\).
Footnote 99: I.e. \(1\)-periodic orbits \(x(t)\) of \(X_{H}\) satisfying \(ker((\phi_{1}^{H})_{*}-Id)_{x(0)}=0\), where \(\phi_{1}^{H}\) is the Hamiltonian flow of \(H\).
**Theorem A.1**.: _The Robbin-Salamon index satisfies the following properties:100_
Footnote 100: We follow [11, App.C] but abbreviated \(\mathbb{W}(x)=W(2\pi x)\) compared to the function \(W\) in that paper.
_(1) \(\mu_{RS}\) is invariant under homotopies with fixed endpoints._
_(2) \(\mu_{RS}\) is additive under concatenation of paths._
_(3) \(\mu_{RS}\) is compatible with sums: \(\mu_{RS}(\psi_{1}\circ\psi_{2})=\mu_{RS}(\psi_{1})+\mu_{RS}(\psi_{2})\)._
_(4) Consider two continuous paths of symplectic matrices \(\psi,\phi:[0,1]\to Sp(\mathbb{R}^{2n},\Omega_{0}).\) Then_
\[\mu_{RS}(\phi\psi\phi^{-1})=\mu_{RS}(\psi).\]
_(5) \(\mu_{RS}((e^{2\pi is})_{s\in[0,x]})=\mathbb{W}(x),\) where_
\[\mathbb{W}:\mathbb{R}\to\mathbb{Z},\quad\mathbb{W}(x):=\left\{\begin{array}[] {ll}2\lfloor x\rfloor+1&\text{if }x\notin\mathbb{Z}\\ 2x&\text{if }x\in\mathbb{Z}.\end{array}\right. \tag{74}\]
_(6) The Robbin-Salamon index of the symplectic shear \(\begin{bmatrix}1&0\\ b(t)&1\end{bmatrix}\) is equal to \(\frac{1}{2}(\operatorname{sign}(b(1))-\operatorname{sign}(b(0)))\)._
We note the following basic properties of the function \(\mathbb{W}\):
\[\mathbb{W}(0)=0,\quad\mathbb{W}(-x)=-\mathbb{W}(x),\quad\mathbb{W}(x)\text{ is odd except at }\mathbb{Z},\quad\ 2x\geq\mathbb{W}(x)-1\geq 2x-2. \tag{75}\]
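These properties are elementary from (74); a quick numerical spot-check (our sketch):

```python
import math
from fractions import Fraction

def W(x):
    # Equation (74): 2*floor(x) + 1 off the integers, 2x on them.
    return 2 * x if x == int(x) else 2 * math.floor(x) + 1

xs = [Fraction(n, 7) for n in range(-30, 31)]
assert W(0) == 0
assert all(W(-x) == -W(x) for x in xs)                      # antisymmetry
assert all(2*x >= W(x) - 1 >= 2*x - 2 for x in xs)          # bounds in (75)
assert all(W(x) % 2 == 1 for x in xs if x.denominator > 1)  # odd off Z
```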
For any \(1\)-orbit \(x\) of a Hamiltonian \(H\) in a symplectic manifold \(M\) of dimension \(2n\), let \(\phi_{t}\) be the Hamiltonian flow and consider its linearisation \((\phi_{t})_{*}:T_{x(0)}M\to T_{x(t)}M\). We pick a symplectic
trivialisation \(\Phi:x^{*}TM\to\mathbb{C}^{n}\times S^{1}\) of the tangent bundle above the orbit \(x\), to get a path of symplectic matrices \(\psi(t)=\Phi_{t}\circ(\phi_{t})_{*}\circ\Phi_{0}^{-1}:\mathbb{C}^{n}\to\mathbb{C}^{n}.\) Then define
\[RS(x,H):=\mu_{RS}(\psi). \tag{76}\]
This may depend on the choice of trivialisation \(\Phi\). When \(c_{1}(M)=0\), one can trivialise the canonical bundle \(\Lambda^{n,0}(T^{*}M)\), and then choose a trivialisation \(\Phi\) compatible with it. If in addition \(H^{1}(M)=0\), then all trivialisations of \(\Lambda^{n,0}(T^{*}M)\) are equivalent, so the indices (76) are canonical [10, Sec.(3a)].
We define the **grading** of a \(1\)-orbit \(x\) of a Hamiltonian \(H:M\to\mathbb{R}\) by
\[|x|:=\dim_{\mathbb{C}}M-RS(x,H). \tag{77}\]
**Lemma A.2**.: _Let \((M,\omega)\) be a symplectic manifold with \(c_{1}(TM,I)=0\) for an \(\omega\)-compatible almost complex structure \(I\). Then Floer cohomology \(HF^{*}(H)\) is \(\mathbb{Z}\)-graded, and canonically so if \(H^{1}(M)=0\)._
## Appendix B Cotangent bundles and negative vector bundles
### The moment map is a function of the radial coordinate for \(T^{*}\mathbb{C}P^{n}\)
Recall that \(T^{*}\mathbb{C}P^{n-1}\) can be seen as the hyperkahler reduction of the flat space \(M:=\mathbb{C}^{n}\oplus\mathbb{C}^{n}\) with the action \(G:=U(1)\curvearrowright M\) given by \(g\cdot(z,\xi)=(zg^{-1},g\xi).\) The hyperkahler moment map \(\mu=(\mu_{\mathbb{R}},\mu_{\mathbb{C}})\) has real part \(\mu_{\mathbb{R}}=\xi\xi^{*}-z^{*}z\) and complex part \(\mu_{\mathbb{C}}=\xi z\) (viewing \(\xi\) as a row vector). Taking \(\zeta_{\mathbb{R}}>0\), the hyperkahler reduction gives
\[\mathfrak{M}:=\mu^{-1}(-\zeta_{\mathbb{R}}\mathrm{Id},0)/G\cong T^{*}\mathbb{ C}P^{n-1}.\]
It admits an \(S^{1}\)-action induced by the action \(t\cdot(z,\xi)=(z,t\xi)\) on \(M,\) whose moment map is given by \(H=\mathrm{tr}(\xi\xi^{*}).\) There is a projection
\[\Psi:\mathfrak{M}\to\mathfrak{sl}_{n},\ [(z,\xi)]\mapsto z\xi,\]
which is an example of a Springer resolution. This map, together with the \(S^{1}\)-invariant Kahler structure on \(\mathfrak{M}\) (induced from the Kahler structure on \(M\)) makes \(\mathfrak{M}\) a symplectic \(\mathbb{C}^{*}\)-manifold globally defined over the convex base \(B=\mathfrak{sl}_{n}.\) The pull-back of the radial coordinate on \(B\) is thus equal to
\[\Phi=\mathrm{tr}(z\xi(z\xi)^{*})=\mathrm{tr}(z\xi\xi^{*}z^{*})=\mathrm{tr}(z^ {*}z\xi\xi^{*}). \tag{78}\]
Substituting the moment map equation \(\mu_{\mathbb{R}}=\xi\xi^{*}-z^{*}z=-\zeta_{\mathbb{R}}\mathrm{Id}\) into (78) we get
\[\Phi=\mathrm{tr}((\xi\xi^{*}+\zeta_{\mathbb{R}}\mathrm{Id})\xi\xi^{*})=\mathrm{ tr}(\xi\xi^{*}\xi\xi^{*})+\zeta_{\mathbb{R}}\mathrm{tr}(\xi\xi^{*})=\mathrm{tr}( \xi\xi^{*}\xi\xi^{*})+\zeta_{\mathbb{R}}H. \tag{79}\]
Now, noticing that \(A:=\xi\xi^{*}\) is actually a \(1\times 1\)-matrix, the last term in (79) becomes
\[\Phi=\mathrm{tr}(A^{2})+\zeta_{\mathbb{R}}H=\mathrm{tr}(A)^{2}+\zeta_{\mathbb{R }}H=H^{2}+\zeta_{\mathbb{R}}H, \tag{80}\]
thus101\(H=-\frac{1}{2}\zeta_{\mathbb{R}}+\sqrt{\Phi+\frac{1}{4}\zeta_{\mathbb{R}}^{2}}\) is indeed a function of \(\Phi\).
Footnote 101: Since \(H\geq 0\), the other solution of the quadratic equation is invalid.
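As a quick numerical sanity check of the identity (80) (our own sketch; the values of \(n\) and \(\zeta_{\mathbb{R}}\) below are arbitrary), one can draw random pairs \((z,\xi)\) on the real level set \(\xi\xi^{*}-z^{*}z=-\zeta_{\mathbb{R}}\) and compare \(\Phi\) from (78) with \(H^{2}+\zeta_{\mathbb{R}}H\); note that the complex moment map condition \(\mu_{\mathbb{C}}=0\) is not needed for this particular identity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, zeta = 4, 1.5                       # illustrative dimension and level zeta_R > 0

for _ in range(5):
    xi = rng.normal(size=n) + 1j * rng.normal(size=n)   # row vector
    z = rng.normal(size=n) + 1j * rng.normal(size=n)    # column vector
    # rescale z so that |xi|^2 - |z|^2 = -zeta, i.e. the real moment map equation
    z *= np.sqrt((np.vdot(xi, xi).real + zeta) / np.vdot(z, z).real)
    H = np.vdot(xi, xi).real                            # H = tr(xi xi^*)
    A = np.outer(z, xi)                                 # the n x n matrix z.xi
    Phi = np.trace(A @ A.conj().T).real                 # tr(z xi (z xi)^*), cf. (78)
    assert np.isclose(Phi, H**2 + zeta * H)             # the identity (80)
print("Phi = H^2 + zeta_R * H on all samples")
```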
We remark that this proof also shows why the same conclusion does not hold for cotangent bundles of other flag varieties of type A, which can be constructed via hyperkahler reduction in a similar way, [21, Sec.7]. For example, to get the cotangent bundle of a Grassmannian, \(T^{*}Gr(k,n)\), one has to change \(z\) and \(\xi\) to \((n\times k)\) and \((k\times n)\)-matrices, respectively. Then \(A=\xi\xi^{*}\) becomes a \((k\times k)\)-matrix, thus we no longer have the identity \(\mathrm{tr}(A^{2})=\mathrm{tr}(A)^{2}\) needed in (80).
However, for any projective variety \(X\) if we use the pull-back data from an embedding \(T^{*}X\hookrightarrow T^{*}\mathbb{C}\mathbb{P}^{N}\), then the moment map will be a function of \(\Phi\) for the choice of \(\Psi\) from Example 1.6.
### Viewing cotangent bundles as negative vector bundles
In Example 1.7, we can construct a symplectic form on cotangent bundles over projective varieties \(X\) to get a structure of a negative vector bundle in the weak sense that one can build a (real) symplectic form on \(T^{*}X\) of type
\[\omega=\pi^{*}\omega_{X}+\Omega,\]
determined by a choice of Hermitian connection on \(T^{*}X\), where \(\Omega|_{\mathrm{fibre}}=(\text{area form})/\pi\) in a unitary frame, and \(\Omega\) on horizontal vectors is determined by the curvature form of \(T^{*}X\to X\).102 We only need to construct this for \(T^{*}\mathbb{CP}^{N}\), as we can pull-back forms via an inclusion \(T^{*}X\hookrightarrow T^{*}\mathbb{CP}^{N}\) of subbundles.
Footnote 102: Moreover \(\Omega(TX,\cdot)=0\), and \(\Omega(v,h)=0\) if \(v\) is vertical and \(h\) is horizontal.
We will use the fact that \(F:=T\mathbb{CP}^{N}\) is Griffiths-positive for the Fubini-Study metric (see [10] and [11, Ch.VII, Ex.6.8]). Griffiths-positivity ensures that the line bundle \(L^{*}:=\mathcal{O}_{\mathbb{P}(F^{*})}(+1)\to\mathbb{P}(F^{*})\) is positive [12, (2.37)] where \(\mathbb{P}(F^{*})\) is the projectivisation of the dual vector bundle \(F^{*}\), and \(L^{*}\) is the dual of the tautological line bundle \(L(F^{*})\to\mathbb{P}(F^{*})\). Positivity of \(L^{*}\) means that the Hermitian metric on \(F\) induces a Hermitian metric on \(L^{*}\) whose curvature \(\mathcal{F}^{L^{*}}\) can be used to represent the first Chern class \(c_{1}(L^{*})\) as a positive closed real \((1,1)\)-form, namely \(\frac{i}{2\pi}\mathcal{F}^{L^{*}}\).
Now we take the dual, \(E:=F^{*}=T^{*}\mathbb{CP}^{N}\). Then \(\omega_{\mathbb{P}(E)}:=\frac{1}{2\pi i}\mathcal{F}^{L}=\frac{i}{2\pi}\mathcal{ F}^{L^{*}}\) is a Kahler form on \(\mathbb{P}(E)\). It follows that \(E\) satisfies the "negativity" requirements of [13, Lem.70]: the form \(\omega:=\pi^{*}\omega_{\mathbb{CP}^{N}}+\Omega\) on the total space \(\operatorname{Tot}(E\to\mathbb{CP}^{N})\) is symplectic because of the positivity of the form
\[\tfrac{1}{2\pi i}w^{\dagger}\mathcal{F}^{E}_{(\pi_{*}h,I\pi_{*}h)}w=\tfrac{r^{ 2}}{2\pi i}\mathcal{F}^{L}_{(h,Ih)}>0,\]
for \(w\in E\setminus\{0\}\), for horizontal \(h\neq 0\in T_{w}E\) (which can be viewed as a horizontal vector of the projection \(\mathbb{P}(E)\to\mathbb{CP}^{N}\)), where \(r\) is the radial coordinate for the fibres of \(E\). We remark that the conventions of [13, Sec.11.2] are that the radial coordinate for \(L\) viewed as a convex symplectic manifold is \(R^{L}=(1+r^{2})/2\) (whose moment map generates the flow \(e^{\pi it}\) rather than \(e^{2\pi it}\)), so the Hamiltonians used for Floer theory on \(E\) are functions \(c(2R^{L})\) that become linear in \(R^{L}\) at infinity.
|
2308.15782 | On Card guessing games: limit law for no feedback one-time riffle
shuffle | We consider the following card guessing game with no feedback. An ordered
deck of n cards labeled 1 up to n is riffle-shuffled exactly one time. Then,
the goal of the game is to maximize the number of correct guesses of the cards.
One after another a single card is drawn from the top, the guesser makes a
guess without seeing the card and gets no response whether the guess was correct or
not. Building upon and improving earlier results, we provide a limit law for
the number of correct guesses and also show convergence of the integer moments. | Markus Kuba, Alois Panholzer | 2023-08-30T06:28:01Z | http://arxiv.org/abs/2308.15782v1 | # On card guessing games: limit law for no feedback one-time riffle shuffle
###### Abstract.
We consider the following card guessing game with no feedback. An ordered deck of \(n\) cards labeled \(1\) up to \(n\) is riffle-shuffled exactly one time. Then, the goal of the game is to maximize the number of correct guesses of the cards. One after another a single card is drawn from the top, the guesser makes a guess without seeing the card and gets no response whether the guess was correct or not. Building upon and improving earlier results, we provide a limit law for the number of correct guesses and also show convergence of the integer moments.
Key words and phrases: Card guessing, riffle shuffle, no feedback, limit law, moments. 2000 Mathematics Subject Classification: 05A15, 05A16, 60F05, 60C05
## 1. Introduction
The analysis of card shuffling and card guessing games has a long history. Starting from a mathematical model of shuffling developed in 1956 at Bell Labs by E. Gilbert and C. Shannon, the subject has extended in various directions in a great many articles, amongst others [1, 8, 9, 10, 16, 17, 21, 22, 23, 25, 26, 28, 29]. The mathematical analysis of questions related to card shuffling and card guessing are not only of purely theoretical interest. There are applications to the analysis of clinical trials [5, 11], fraud detection related to extra-sensory perceptions [8], guessing so-called Zener Cards [25], as well as relations to tea tasting and the design of statistical experiments [12, 26].
In this work we consider the following problem. A deck of \(n\) cards labeled consecutively from \(1\) on top to \(n\) on bottom is face down on the table. The deck is riffle shuffled once and placed back on the table, face down. A guesser tries to guess at the cards one at a time, starting from the top. The goal is to maximize the number of correct guesses with the caveat that the identities of the cards guessed are not revealed, nor is the guesser told whether a particular guess was correct or not. Such card guessing games are usually called _no feedback_ games, as no information at all is delivered to the person guessing. In contrast, there are _complete feedback_ games, where the guesser is shown the drawn card and thus knows whether the guess was correct or not. For a similar card guessing game with complete feedback we refer the reader to [20, 24].
The optimal strategy for the no feedback game, as well as extensions to \(k\)-time riffle shuffles, has been given by Ciucu [7]. Therein, he also derived the expected value \(\mathbb{E}(X_{n})\) of the number of correct guesses \(X_{n}\), when the deck is riffle shuffled once; see also Krityakierne and Thanatipanonda [19] for related results. The first few higher moments of \(X_{n}\) were derived by Krityakierne et al. [18]. We build on the earlier work [7, 18] and derive in this article the limit law of the number of correct guesses \(X_{n}\) in the no feedback game and also give asymptotic results for all integer moments, extending the results of [18].
Finally, we also comment on a different kind of card guessing games under the uniform distribution. A deck of a total of \(M\) cards is shuffled, and then the guesser is provided with the total number of cards \(M\), as well as the individual numbers of say hearts, diamonds, clubs and spades. After each guess, the person guessing the cards is shown the drawn card, which is then removed from the deck. This process is continued until no more cards are left. Assuming the guesser tries to maximize the number of correct guesses, one is again
interested in the total number of correct guesses. The card guessing procedure can be generalized to an arbitrary number \(N\geq 2\) of different types of cards. In the simplest setting there are two colors, red (hearts and diamonds) and black (clubs or spades), and their numbers are given by non-negative integers \(m_{1}\), \(m_{2}\), with \(M=m_{1}+m_{2}\). One is then interested in the random variable \(C_{m_{1},m_{2}}\) counting the number of correct guesses. Interestingly, it turned out that the random variable \(C_{m_{1},m_{2}}\) is closely related to card guessing with complete feedback after a single riffle shuffle. For this complete feedback card guessing game, not only the expected value and the distribution of the number of correct guesses is known [9, 17, 23, 28, 29], but also multivariate limit laws and interesting relations to combinatorial objects such as Dyck paths and Polya-Eggenberger urn models have been established [9, 21, 22].
### Notation
As a remark concerning notation used throughout this work, we always write \(X\stackrel{\mathcal{L}}{=}Y\) to express equality in distribution of two random variables (r.v.) \(X\) and \(Y\), and \(X_{n}\xrightarrow{\mathcal{L}}X\) for the weak convergence (i.e., convergence in distribution) of a sequence of random variables \(X_{n}\) to a r.v. \(X\). Furthermore, throughout this work we let \(h:=h(n)=\lceil\frac{n}{2}\rceil\). Moreover, for \(s\in\mathbb{N}\) we denote by \(x^{\underline{s}}=x(x-1)\cdots(x-(s-1))\) the falling factorial.
## 2. Riffle shuffle model and optimal strategy
### Gilbert-Shannon-Reeds model
The riffle shuffle, sometimes also called dovetail shuffle, is a card shuffling technique. In the mathematical modeling of card shuffling, the _Gilbert-Shannon-Reeds_ model [9, 15] is a probability distribution serving as a model of a riffle shuffle. One considers a sorted deck of \(n\) cards labeled consecutively from \(1\) up to \(n\). The deck of cards is cut into two packets, assuming that the probability of selecting \(k\) cards in the first packet, which we call a cut at position \(k\), and \(n-k\) in the second packet is defined as a binomial distribution with parameters \(n\) and \(1/2\):
\[\mathbb{P}\{\mathsf{Cut}=k\}=\frac{\binom{n}{k}}{2^{n}},\quad 0\leq k\leq n.\]
Afterward, the two packets are interleaved back into a single pile: one card at a time is moved from the bottom of one of the packets to the top of the shuffled deck, such that if \(m_{1}\) cards remain in the first and \(m_{2}\) cards remain in the second packet, then the probability of choosing a card from the first packet is \(m_{1}/(m_{1}+m_{2})\) and the probability of choosing a card from the second packet is \(m_{2}/(m_{1}+m_{2})\).
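For illustration, the Gilbert-Shannon-Reeds shuffle is straightforward to simulate. The following Python sketch is our own (it is not taken from the cited references); the helper `gsr_riffle` is reused in the simulation sketch at the end of this section:

```python
import random

def gsr_riffle(n: int) -> list[int]:
    """One Gilbert-Shannon-Reeds riffle shuffle of the sorted deck (1, ..., n)."""
    cut = sum(random.random() < 0.5 for _ in range(n))       # cut ~ Binomial(n, 1/2)
    a, b = list(range(1, cut + 1)), list(range(cut + 1, n + 1))
    deck = []
    while a or b:
        # take the next card from a packet with probability proportional to its size
        src = a if random.random() < len(a) / (len(a) + len(b)) else b
        deck.append(src.pop(0))
    return deck

random.seed(1)
print(gsr_riffle(5))    # one shuffled deck, e.g. two interleaved increasing runs
```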
For a one-time shuffle, the operation of interleaving described above gives rise to an ordered deck (corresponding to the identity permutation) with multiplicity \(n+1\). Every other shuffled deck corresponds to a permutation with exactly two increasing subsequences and has multiplicity 1. In total, there are \(2^{n}-n-1\) different permutations with exactly two increasing subsequences arising from the interleaving.
Figure 1. Example of a one-time riffle shuffle: a deck of five cards is split after 2 with probability \(\binom{5}{2}/2^{5}=5/16\) and then interleaved.
### Optimal strategy
The optimal strategy \(\mathcal{G}^{*}\) for maximizing the number \(X_{n}\) of correctly guessed cards, starting with a deck of \(n\) ordered cards, after a one-time riffle shuffle stems from the following proposition based on work of Ciucu [7] and Krityakierne and Thanatipanonda [19].
**Proposition 1** ([7, 19]).: _In order to maximize the number of correct guesses in a one-time riffle shuffle no feedback card guessing game, for large \(n\), the guesser should follow the optimal strategy \(\mathcal{G}^{*}\): guess the top half of the deck with sequence_
\[1,2,2,3,3,4,4,\ldots\]
_and guess the bottom half of the deck with sequence_
\[\ldots,n-3,n-3,n-2,n-2,n-1,n-1,n.\]
Proof.: For the sake of completeness and to make this work more self-contained, we add the nice and short argument justifying this strategy. From the Gilbert-Shannon-Reeds model one can readily determine the probability \(m_{i,j}=m_{i,j}(n)\) that the card labeled \(i\) ends up at position \(j\) after a riffle shuffle, starting with a deck of \(n\) ordered cards:
\[\begin{split} m_{i,i}&=\frac{1}{2^{n}}\big{(}2^{i- 1}+2^{n-i}\big{)},\\ m_{i,j}&=\frac{1}{2^{n-j+1}}\binom{n-j}{i-j}, \quad j<i,\end{split} \tag{1}\]
and the symmetry
\[m_{i,j}=m_{n-i+1,n-j+1}.\]
This follows directly by considering the different cutting positions \(k\), \(0\leq k\leq n\), and the number of different interleavings, such that card labeled \(i\) ends up at position \(j\). Let \(j<i\). Then, cuts at positions \(k\geq i\) cannot contribute, as all cards labeled \(1\) up to \(i-1\) are still before \(i\) after interleaving and the final position is at least \(i\). Thus, we are left with cuts at \(1\leq k<i\). As the cards \(k+1\) up to \(i-1\), a total of \(i-k-1\), of the second packet are always before \(i\), we require exactly \(j-i+k\) of the first packet out of the cards labeled \(\{1,\ldots,k\}\) to be interleaved before \(j\). There are \(\binom{j-1}{j-i+k}\) ways to do so. The remaining \(i-j\) cards of the first packet have to be interleaved above \(j\), leading to \(\binom{n-j}{i-j}\) different ways. In total,
\[m_{i,j}=\frac{1}{2^{n}}\sum_{k=i-j}^{i-1}\binom{j-1}{j-i+k}\binom{n-j}{i-j}= \frac{1}{2^{n-j+1}}\binom{n-j}{i-j}.\]
In the case \(i=j\) there are additional contributions from the cuts at \(k\geq i\), leading to
\[m_{i,i}=\frac{1}{2^{n-i+1}}+\sum_{k=i}^{n}\frac{\binom{n-i}{k-i}}{2^{n}}= \frac{1}{2^{n-i+1}}+\frac{1}{2^{i}}.\]
The case \(j>i\) can be treated similarly; we opt to recall a nice symmetry argument of [7]: We imagine having a second set of numbers on our cards, in which the cards are labeled consecutively from 1 on bottom through \(n\) on top. We call this the "upward labeling", compared to the original "downward labeling". It is clear that, after a riffle shuffle, card \(i\) ends up in position \(j\) in downward labeling if and only if card \(n-i+1\) goes to position \(n-j+1\) in the upward labeling. Since the probability distributions involved in the riffle shuffle have a vertical symmetry axis, we obtain the stated symmetry. Finally, the best guess \(g_{j}\) at the card in position \(j\) of the optimal strategy \(\mathcal{G}^{*}=g_{1}g_{2}\ldots g_{n}\) is determined by guessing the asymptotically largest probability,
\[g_{j}=\operatorname*{arg\,max}_{i}\,m_{i,j},\quad 1\leq j\leq n,\]
which can be obtained by a close inspection of the binomial coefficients.
**Remark 1**.: As already pointed out in [19], the optimal strategy is not unique for a one-time riffle shuffle. In particular, for the card position \(j\), the player can optimally choose to guess any number from the set \(\mathcal{S}_{j}\), where \(\mathcal{S}=(\mathcal{S}_{j})\):
\[\mathcal{S}=\{1\},\{2\},\{2\},\{2,3\},\{3\},\{3,4\},\{4\},\{4,5\},\ldots\quad \text{top half}\]
and
\[\ldots,\{n-3,n-2\},\{n-2\},\{n-2,n-1\},\{n-1\},\{n-1\},\{n\}\quad\text{bottom half}.\]
However, in our analysis we follow exclusively the strategy \(\mathcal{G}^{*}\), which is the one that can be extended naturally to multiple-time riffle shuffle [19].
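The transition probabilities (1) and the resulting sets of optimal guesses are easy to tabulate. The following sketch (ours, using exact rational arithmetic; the choice \(n=10\) is arbitrary) recovers both the strategy \(\mathcal{G}^{*}\) of Proposition 1 and the tie sets \(\mathcal{S}_{j}\) of Remark 1:

```python
from fractions import Fraction
from math import comb

def transition_matrix(n):
    """m[i][j] = P(card i ends at position j) after one riffle shuffle, cf. (1)."""
    m = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        m[i][i] = Fraction(2**(i - 1) + 2**(n - i), 2**n)
        for j in range(1, i):
            m[i][j] = Fraction(comb(n - j, i - j), 2**(n - j + 1))
    for i in range(1, n + 1):        # j > i via the symmetry m[i][j] = m[n-i+1][n-j+1]
        for j in range(i + 1, n + 1):
            m[i][j] = m[n - i + 1][n - j + 1]
    return m

n = 10
m = transition_matrix(n)
for j in range(1, n + 1):
    best = max(m[i][j] for i in range(1, n + 1))
    print(j, [i for i in range(1, n + 1) if m[i][j] == best])
```

Positions \(4\) and \(7\) print two maximisers each, matching the sets \(\{2,3\}\) and \(\{n-2,n-1\}\) from Remark 1; breaking the ties as in Proposition 1 gives \(\mathcal{G}^{*}=(1,2,2,3,3,8,8,9,9,10)\).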
**Example 1**.: We consider the case \(n=3\) and the \(2^{3}\) possible permutations \(\sigma\). In Table 1 we highlight all \(2^{3}-3=5\) different permutations, colored cyan, as well as the cut positions and number of correct guesses.
We observe that under the optimal strategy we have
\[\mathbb{P}\{X_{3}=3\}=\frac{1}{2},\quad\mathbb{P}\{X_{3}=2\}=0,\quad\mathbb{P} \{X_{3}=1\}=\mathbb{P}\{X_{3}=0\}=\frac{1}{4}.\]
**Example 2**.: We consider the case \(n=4\) and the \(2^{4}\) possible permutations \(\sigma\). Again, in Table 2 we highlight all \(2^{4}-4=12\) different permutations, colored cyan, as well as the cut positions and number of correct guesses.
Under the optimal strategy we obtain
\[\mathbb{P}\{X_{4}=4\}=\frac{5}{16},\,\mathbb{P}\{X_{4}=3\}=0,\,\mathbb{P}\{X_{ 4}=2\}=\frac{3}{16},\,\mathbb{P}\{X_{4}=1\}=\mathbb{P}\{X_{4}=0\}=\frac{1}{4}.\]
We further simulated the probability mass function by looking at the empirical probabilities \(h_{k}\) for \(n=200\), \(1000\) and \(5000\) with samples of size \(N=50000\) for \(n=200\), \(1000\) and sample size \(N=100000\) for \(n=5000\).
\begin{table}
\begin{tabular}{||c||c|c|c|c|c|c|c|c||} \hline & **1** & **1** & **2** & **2** & **1** & **1** & **3** & **1** \\ \(\sigma\) & **2** & **2** & **1** & **3** & **2** & **3** & **1** & **2** \\ & **3** & **3** & **3** & **1** & **3** & **2** & **2** & **3** \\ \hline \(\mathsf{Cut}\) & **0** & **1** & **1** & **1** & **2** & **2** & **2** & **3** \\ \hline \(X\) & **3** & **3** & **1** & **0** & **3** & **1** & **0** & **3** \\ \hline \end{tabular}
\end{table}
Table 1. Case \(n=3\): optimal strategy \(\mathcal{G}^{*}=(1,2,3)\) and the number of correct guesses.
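The empirical probabilities just mentioned can be reproduced with the following self-contained Monte Carlo sketch (ours; the `gsr_riffle` helper from Section 2 is repeated for self-containedness). For \(n=4\) the output is close to the exact values of Example 2:

```python
import random
from collections import Counter

def gsr_riffle(n):
    cut = sum(random.random() < 0.5 for _ in range(n))
    a, b = list(range(1, cut + 1)), list(range(cut + 1, n + 1))
    deck = []
    while a or b:
        src = a if random.random() < len(a) / (len(a) + len(b)) else b
        deck.append(src.pop(0))
    return deck

def optimal_strategy(n):
    """G* from Proposition 1: 1,2,2,3,3,... on top, ...,n-1,n-1,n on the bottom."""
    h = (n + 1) // 2
    return [1 + p // 2 for p in range(1, h + 1)] + \
           [n - (n - p + 1) // 2 for p in range(h + 1, n + 1)]

def empirical_pmf(n, runs):
    g, counts = optimal_strategy(n), Counter()
    for _ in range(runs):
        counts[sum(x == y for x, y in zip(g, gsr_riffle(n)))] += 1
    return {k: v / runs for k, v in sorted(counts.items())}

random.seed(2)
print(empirical_pmf(4, 200_000))   # compare Example 2: 1/4, 1/4, 3/16, 0, 5/16
```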
## 3. Distributional analysis and generating functions
Let \(X_{n}\) denote the random variable counting the number of correct guesses under the optimal strategy \(\mathcal{G}^{*}\) in the no-feedback model after a single riffle shuffle, starting with \(n\) ordered cards. In [18], the distribution of \(X_{n}\) has been determined using the generating function \(f_{n}(q)\):
\[\mathbb{E}(q^{X_{n}})=\frac{f_{n}(q)}{2^{n}}.\]
Our starting point is the following nice result, giving a formula for \(f_{n}(q)\) in terms of an auxiliary generating function \(g_{m_{1},m_{2}}(q)\), which is described itself in a recursive way.
**Lemma 1** (Krityakierne et al. [18]).: _Let \(h:=\lceil\frac{n}{2}\rceil\). The generating function \(f_{n}(q)\) satisfies_
\[f_{n}(q)=4q^{4}-2(q^{2}+q^{3})+\sum_{a=0}^{h}\sum_{b=0}^{n-h}g_{a,h-a}(q)\cdot g _{b,n-h-b}(q).\]
_Here, the generating function \(g_{m_{1},m_{2}}(q)\) is determined by the recurrence relation_
\[g_{m_{1},m_{2}}(q)=q^{\delta(c,m_{1})}g_{m_{1}-1,m_{2}}(q)+g_{m_{1},m_{2}-1}( q),\quad m_{1},m_{2}\geq 0\text{ and }(m_{1},m_{2})\neq(0,0), \tag{2}\]
_where \(c=\lfloor\frac{m_{1}+m_{2}}{2}\rfloor+1\) and \(\delta(x,y)\) denotes the Kronecker delta function, with initial values \(g_{0,0}(q)=1\) and \(g_{m_{1},m_{2}}(q)=0\), for \(m_{1}<0\) or \(m_{2}<0\)._
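As a machine check of Lemma 1 (our own sketch, assuming sympy is available), one can run recurrence (2) verbatim and read off the probability mass function of \(X_{n}\) from the coefficients of \(f_{n}(q)/2^{n}\); we restrict the check to the even case \(n=4\), where it recovers Example 2 below:

```python
import sympy as sp
from functools import lru_cache

q = sp.symbols('q')

@lru_cache(maxsize=None)
def g(m1, m2):
    """The polynomials g_{m1,m2}(q) of recurrence (2)."""
    if m1 < 0 or m2 < 0:
        return sp.Integer(0)
    if (m1, m2) == (0, 0):
        return sp.Integer(1)
    c = (m1 + m2) // 2 + 1
    return sp.expand(q**int(c == m1) * g(m1 - 1, m2) + g(m1, m2 - 1))

def f(n):
    """f_n(q) as in Lemma 1, with h = ceil(n/2)."""
    h = -(-n // 2)
    s = sum(g(a, h - a) for a in range(h + 1)) * sum(g(b, n - h - b) for b in range(n - h + 1))
    return sp.expand(4*q**4 - 2*(q**2 + q**3) + s)

# ascending coefficients of f_4(q)/2^4, i.e. P(X_4 = 0), ..., P(X_4 = 4)
print(sp.Poly(f(4) / 2**4, q).all_coeffs()[::-1])   # [1/4, 1/4, 3/16, 0, 5/16]
```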
In order to study the limit law of \(X_{n}\), we interpret the results of Lemma 1 in a probabilistic way. Let \(Y_{m_{1},m_{2}}\) denote the random variable defined in terms of recurrence relation (2),
\[\mathbb{E}(q^{Y_{m_{1},m_{2}}})=\frac{g_{m_{1},m_{2}}(q)}{g_{m_{1},m_{2}}(1)} =\frac{g_{m_{1},m_{2}}(q)}{\binom{m_{1}+m_{2}}{m_{1}}}, \tag{3}\]
where the latter equality holds, since for \(q=1\) recurrence (2) is equivalent to Pascal's rule for the binomial coefficients. Next we translate the above recurrence relations into a distributional equation for \(X_{n}\).
**Proposition 2** (Distributional equation for \(X_{n}\)).: _The random variable \(X_{n}\) satisfies for \(n\to\infty\):_
\[X_{n}\sim Y_{h-J_{h},J_{h}}+Y_{n-h-J_{n-h}^{*},J_{n-h}^{*}}^{*},\]
_where \(Y\), \(Y^{*}\) are distributed according to (3), \(J\), \(J^{*}\) are binomially distributed with parameter \(p=1/2\) and parameters \(h:=\lceil\frac{n}{2}\rceil\) and \(n-h\), respectively. Moreover, all random variables are independent._
Proof.: By Lemma 1 we have
\[\mathbb{E}(q^{X_{n}})=\frac{4q^{4}-2(q^{2}+q^{3})}{2^{n}}+\frac{1}{2^{n}}\sum_ {a=0}^{h}\sum_{b=0}^{n-h}g_{a,h-a}(q)\cdot g_{b,n-h-b}(q)\]
\[=\frac{4q^{4}-2(q^{2}+q^{3})}{2^{n}}+\sum_{a=0}^{h}\frac{g_{a,h-a}(q)}{ \binom{h}{a}}\cdot\frac{\binom{h}{a}}{2^{h}}\sum_{b=0}^{n-h}\frac{g_{b,n-h-b}(q)}{ \binom{n-h}{b}}\cdot\frac{\binom{n-h}{b}}{2^{n-h}}\] \[=\frac{4q^{4}-2(q^{2}+q^{3})}{2^{n}}\] \[\qquad+\sum_{a=0}^{h}\mathbb{E}(q^{Y_{a,h-a}})\cdot\mathbb{P}\{J_ {h}=a\}\sum_{b=0}^{n-h}\mathbb{E}(q^{Y_{b,n-h-b}^{*}})\mathbb{P}\{J_{n-h}^{*}=b\}.\]
Using that the product of probability generating functions corresponds to a sum of independent random variables, we obtain the distributional equation
\[X_{n}\stackrel{\mathcal{L}}{=}\hat{Y}_{h-J_{h},J_{h}}+\hat{Y}_{n-h-J_{n-h}^{*},J_{n-h}^{*}}^{*},\]
where the "hat" versions differ from their ordinary versions only on the values \(\{2,3,4\}\), with the difference tending to zero exponentially fast for \(n\to\infty\). Thus, we can safely neglect this difference when characterizing the limiting behaviour.
### Limit law
From the properties of the binomial distribution we know that \(J_{h}=\mathrm{B}(h,\frac{1}{2})\) satisfies
\[J_{h}\sim\mu_{J}+\sigma_{J}\cdot\mathcal{N}, \tag{4}\]
with \(\mu_{J}=\frac{h}{2}\sim\frac{n}{4}\), \(\sigma_{J}=\frac{\sqrt{h}}{2}\sim\frac{\sqrt{n}}{2\sqrt{2}}\) and \(\mathcal{N}=\mathcal{N}(0,1)\) the standard normal distribution. In view of Proposition 2 this implies that we need to know the distribution of \(Y_{m_{1},m_{2}}\) with parameters
\[m_{1}\sim h-\mu_{J}-\sigma_{J}\cdot t,\quad m_{2}\sim\mu_{J}+\sigma_{J}\cdot t,\quad t\in\mathbb{R}.\]
Thus we require the limit law of \(Y_{m_{1},m_{2}}\), for \(m_{1},m_{2}\to\infty\) and satisfying the assumptions above. In order to analyze recurrence relation (2) and thus \(Y_{m_{1},m_{2}}\), we proceed similarly to the two-color card guessing game [21], setting up a suitable bijection. This bijection allows us to analyze \(Y_{m_{1},m_{2}}\) in terms of certain Dyck paths by using tools from Analytic Combinatorics [13].
First, according to recurrence (2) we consider the sample paths from \((m_{1},m_{2})\) to \((0,0)\), \(m_{1},m_{2}\geq 0\), with steps \((-1,0)\), "left", and \((0,-1)\), "down", where the leftward steps carry a weight \(q\) if \(m_{1}=\lfloor\frac{m_{1}+m_{2}}{2}\rfloor+1\). Next we reverse the direction of the walks, thus going from \((0,0)\) to \((m_{1},m_{2})\), and then rotate the coordinate system clockwise by 45 degrees. After scaling, the resulting walks are directed walks of length \(m_{1}+m_{2}\) with Dyck steps \((1,1)\), "upward", and \((1,-1)\), "downward", starting at the origin and ending at \((m_{1}+m_{2},m_{2}-m_{1})\), see Figure 2.
It remains to translate the weight \(q^{\delta(c,m_{1})}\), with \(c=\lfloor\frac{m_{1}+m_{2}}{2}\rfloor+1\), to the directed paths. We consider all four different cases resulting from the parity of \(m_{1},m_{2}\), where we obtain the following.
Figure 2. Mapping of a sample path of \(Y_{6,3}\) to a directed lattice path from the origin to \((9,-3)\).
* Both \(m_{1},m_{2}\) are even or both \(m_{1},m_{2}\) are odd: \[\left\lfloor\frac{m_{1}+m_{2}}{2}\right\rfloor+1=\frac{m_{1}+m_{2}}{2}+1=m_{1}, \quad\text{which yields}\quad m_{1}=m_{2}+2.\]
* \(m_{1}\) is even and \(m_{2}\) odd or vice versa: \[\left\lfloor\frac{m_{1}+m_{2}}{2}\right\rfloor+1=\frac{m_{1}+m_{2}-1}{2}+1=m_{ 1},\quad\text{which yields}\quad m_{1}=m_{2}+1.\]
This implies that \(q\) counts the number of contacts with the two lines
\[g_{1}\colon\ y=x-1,\quad g_{2}\colon\ y=x-2.\]
After rotation, i.e., for Dyck paths, this implies that we count contacts with the lines \(y=-1\), corresponding to \(g_{1}\), as well as \(y=-2\), corresponding to \(g_{2}\). However, as only left steps from \((m_{1},m_{2})\to(m_{1}-1,m_{2})\) can carry a weight in the original sample paths, in the Dyck path setting a contact with these two lines is only counted when occurring after a downward step, see Figure 3.
The above findings are summarized as follows.
**Proposition 3** (Sample paths of the card guessing game and Dyck paths).: _Let \(\mathcal{S}_{m_{1},m_{2}}\) denote the set of weighted sample paths from \((m_{1},m_{2})\) to \((0,0)\), \(m_{1},m_{2}\geq 0\), with steps \((-1,0)\), carrying a weight \(q\) if \(m_{1}=\lfloor\frac{m_{1}+m_{2}}{2}\rfloor+1\), and \((0,-1)\). Then, \(\mathcal{S}_{m_{1},m_{2}}\) is in bijection with the set \(\mathcal{D}_{m_{1}+m_{2}}\) of Dyck paths with step sets \((1,1)\) and \((1,-1)\) of length \(m_{1}+m_{2}\), starting at the origin and ending at \((m_{1}+m_{2},m_{2}-m_{1})\), where the contacts with \(y=-1\) and \(y=-2\) are counted after a downward step._
**Proposition 4**.: _Assume that the numbers \(m_{1}\), \(m_{2}\) satisfy \(m_{1}-m_{2}\sim t\cdot\sqrt{m_{1}}\), as \(m_{1}\to\infty\), with \(t>0\). Then, the random variable \(Y_{m_{1},m_{2}}\) is asymptotically linear exponentially distributed,_
\[\frac{Y_{m_{1},m_{2}}}{\sqrt{m_{1}}}\stackrel{\mathcal{L}}{\longrightarrow}\operatorname{LinExp}(\tfrac{t}{2},\tfrac{1}{2}),\]
_or equivalently by stating the cumulative distribution function,_
\[\mathbb{P}\{Y_{m_{1},m_{2}}\leq z\sqrt{m_{1}}\}\to 1-e^{-\frac{z(2t+z)}{4}}, \quad z\geq 0.\]
_An analogous result holds for \(m_{2}-m_{1}\sim t\cdot\sqrt{m_{1}}\), as \(m_{2}\to\infty\), with \(t>0\)._
Before we prove this result, we discuss a motivation or, in other words, a back-of-the-envelope explanation for it. Asymptotically, the number of down-contacts at levels \(-1\) and \(-2\) should be indistinguishable from the number \(W_{m_{1},m_{2}}\) of returns to zero of Dyck paths of length \(m_{1}+m_{2}\), starting at zero and ending at \(m_{2}-m_{1}\). This latter quantity has been already analyzed in the context of two-color card guessing games, leading exactly to the same limit law.
Figure 3. Mapping of a directed lattice path from the origin to \((9,-3)\) to its corresponding sample path of \(Y_{6,3}\).
**Lemma 2** ([21]).: _For \(m_{2}\sim m_{1}\), where the difference \(d=m_{1}-m_{2}\) satisfies \(d\sim t\sqrt{m_{1}}\), with \(t>0\): suitably scaled, \(W_{m_{1},m_{2}}\) weakly converges to a linear exponential distribution, which is characterized via the distribution function_
\[F(x)=1-e^{-\frac{x(2t+x)}{4}},\quad x>0.\]
_Thus, \(W_{m_{1},m_{2}}\) is asymptotically linear exponentially distributed:_
\[\frac{W_{m_{1},m_{2}}}{\sqrt{m_{1}}}\stackrel{{\mathcal{L}}}{{ \longrightarrow}}\mathrm{LinExp}\left(\tfrac{t}{2},\tfrac{1}{2}\right).\]
Next we show that the random variables \(W_{m_{1},m_{2}}\) and \(Y_{m_{1},m_{2}}\) are asymptotically indistinguishable. We show that both generating functions are almost identical and lead to the same asymptotic behavior. We derive the generating function of the weighted Dyck paths of interest, counting the number of down-contacts at levels \(-1\) and \(-2\). We follow the classical analysis of Banderier and Flajolet [2], see also [13]. Assume that \(m_{1}\geq m_{2}+2\), such that the endpoint has a negative \(y\)-coordinate: \(m_{2}-m_{1}\leq-2<0\). We can decompose these Dyck paths into three parts (see Figure 4):
* Part one. An excursion \(\mathcal{E}\) of arbitrary length, starting at the origin and never going below the \(x\)-axis and eventually returning to \(y=0\), then followed by a downward step with additional weight \(q\) leading to the first contact with \(y=-1\).
* Part two. The middle part \(\mathcal{M}\) consists itself of two different subparts. The first part consists of excursions \(\mathcal{E}_{\text{up}}(q)\) initially going upwards from \(y=-1\) to the same level. Here, the last step is always a downward step to \(y=-1\) and is weighted with \(q\). Second, we consider excursions \(\mathcal{E}_{\text{down}}(q)\) going downwards, with an additional weight \(q\) at the first step at height \(y=-2\). The middle part finishes with a final contact at \(y=-1\), followed by a downward step to \(y=-2\) with additional weight \(q\), yielding the formal description (see [13] for basic combinatorial constructions and the symbolic method): \[\mathcal{M}=\textsc{Seq}(\mathcal{E}_{\text{up}}(q)\cup\mathcal{E}_{\text{ down}}(q))\times\{q\}.\]
* Part three. An excursion starting at \(y=-2\) and ending at \(y=m_{2}-m_{1}\leq-2\), never reaching \(y=-1\).
By the classical arch decomposition, the generating function of excursions \(E(z)\), obtained by using the sequence construction from the symbolic method in combinatorics [13], satisfies
\[E(z)=\frac{1}{1-z^{2}E(z)},\quad\text{which implies}\quad E(z)=\frac{1-\sqrt{1-4 z^{2}}}{2z^{2}}.\]
Figure 4. Visualization of the decomposition for \(m_{1}\gg m_{2}\): the first part is the excursion (blue), the center part starts at \(y=-1\) and ends at the same level (green), the final part departs from \(y=-2\) to the ending position at level \(y=m_{2}-m_{1}\leq-2\) (orange).
Thus, the generating function for part one equals \(G_{1}(z):=E(z)\cdot zq\). Furthermore, the excursions \(\mathcal{E}_{\text{up}}(q)\) and \(\mathcal{E}_{\text{down}}(q)\), both with generating function
\[\frac{1}{1-z^{2}qE(z)}=1+\sum_{k\geq 1}\big{(}z^{2}qE(z)\big{)}^{k},\]
are grouped together using the decomposition by arches corresponding to contacts with \(y=-1\), and taking into account the last additional downward step reaching \(y=-2\) yields for part two:
\[G_{2}(z):=M(z)=\frac{1}{1-2z^{2}qE(z)}\cdot zq.\]
Finally, the generating function of the last excursion from \(y=-2\) to \(y=m_{2}-m_{1}\) is obtained using the classical generating function \(F(z,u)=\sum_{p\ \text{meander}}z^{\text{length of }p}\,u^{\text{final altitude of }p}\) of the final altitude of meanders, see Banderier and Flajolet [2]. It is known that for Dyck paths it holds:
\[F(z,u)=\frac{u-zE(z)}{u\big{(}1-z\big{(}u+\frac{1}{u}\big{)}\big{)}},\]
where it is here assumed that the paths are starting at the origin and are never going below the \(x\)-axis. In our case, the paths go from \(y=-2\) to \(y=m_{2}-m_{1}\), so by symmetry we need to extract the coefficient of \(u^{m_{1}-m_{2}-2}\), thus we get for the third part \(G_{3}(z)=[u^{m_{1}-m_{2}-2}]F(z,u)\).
Collecting all parts \(G_{1}(z)\), \(G_{2}(z)\) and \(G_{3}(z)\), we obtain for the generating function
\[G(z,q)=\sum_{p\text{\tiny{Dyck path}}}z^{\text{\tiny{length of }}p}q^{\text{\tiny{\# downward visits at }}y\in\{-1,-2\}\text{\ of }p},\]
where the sum is running over all Dyck paths \(p\) starting at the origin and ending at \(y=m_{2}-m_{1}\leq-2\),
\[G(z,q)=[u^{m_{1}-m_{2}-2}]\frac{z^{2}q^{2}E(z)}{1-2z^{2}qE(z)}\cdot\frac{u-zE( z)}{u\big{(}1-z\big{(}u+\frac{1}{u}\big{)}\big{)}}. \tag{5}\]
The extraction of coefficients can actually be carried out in an explicit manner, as the denominator factors nicely [2],
\[u\big{(}1-z\big{(}u+\frac{1}{u}\big{)}\big{)}=u-zu^{2}-z=-z(u-u_{1}(z))(u-u_{2 }(z)),\]
where
\[u_{1}(z)=\frac{1-\sqrt{1-4z^{2}}}{2z}=zE(z),\quad u_{2}(z)=\frac{1+\sqrt{1-4z^ {2}}}{2z}, \tag{6}\]
such that
\[\frac{u-zE(z)}{u\big{(}1-z\big{(}u+\frac{1}{u}\big{)}\big{)}}=\frac{1}{-z(u-u _{2}(z))}=\frac{1}{zu_{2}(z)\big{(}1-\frac{u}{u_{2}(z)}\big{)}}=\frac{u_{1}(z) }{z\big{(}1-u\cdot u_{1}(z)\big{)}},\]
where we used that \(u_{1}(z)u_{2}(z)=1\). This leads to the following result.
**Proposition 5**.: _Assume that \(m_{1}\geq m_{2}+2\). The generating function \(G(z,q)\) of the number of Dyck paths of length \(n=m_{1}+m_{2}\), starting at level \(0\) and ending at level \(m_{2}-m_{1}\), weighted according to downward visits at levels \(y=-1\) and \(y=-2\), is given by_
\[G(z,q)=q^{2}\frac{1}{1-2z^{2}qE(z)}\cdot\big{(}zE(z)\big{)}^{m_{1}-m_{2}}. \tag{7}\]
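Proposition 5 can be cross-checked against recurrence (2) by series expansion. In the following sketch (ours, assuming sympy is available), the coefficient of \(z^{m_{1}+m_{2}}\) in (7) agrees with \(g_{m_{1},m_{2}}(q)\) for several pairs with \(m_{1}\geq m_{2}+2\):

```python
import sympy as sp

z, q = sp.symbols('z q')
E = (1 - sp.sqrt(1 - 4*z**2)) / (2*z**2)            # excursion generating function

def g(m1, m2):
    """The polynomials g_{m1,m2}(q) of recurrence (2), without memoization."""
    if m1 < 0 or m2 < 0:
        return sp.Integer(0)
    if (m1, m2) == (0, 0):
        return sp.Integer(1)
    c = (m1 + m2) // 2 + 1
    return sp.expand(q**int(c == m1) * g(m1 - 1, m2) + g(m1, m2 - 1))

for m1, m2 in [(2, 0), (3, 1), (4, 2), (5, 1)]:
    G = q**2 * (z*E)**(m1 - m2) / (1 - 2*q*z**2*E)  # eq. (7)
    c = sp.series(G, z, 0, m1 + m2 + 1).removeO().coeff(z, m1 + m2)
    assert sp.expand(c - g(m1, m2)) == 0, (m1, m2)
print("eq. (7) matches recurrence (2) on the tested pairs")
```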
**Remark 2** (Generalized composition schemes).: The structure of the generating function is similar to (extended) composition schemes
\[F(z,q)=\Psi\big{(}qH(z)\big{)}\cdot M(z),\]
considered in [4, 3, 13]. The main novelty here is the dependence on the additional parameter \(u\) in (5), or equivalently, the power \(\big{(}zE(z)\big{)}^{m_{1}-m_{2}-1}\), where \(m_{1}-m_{2}\) is allowed
to depend on \(n\), when analyzing \([z^{n}]F(z,q)\). This leads to new asymptotic regimes. A general study of such augmented schemes is forthcoming, together with the authors of [4].
Next, we compare this generating function to the generating function of the number of zero-contacts starting at \(y=0\) and ending at \(y=m_{2}-m_{1}\neq 0\), where the walks have the length \(n=m_{1}+m_{2}\). We can decompose this generating function into two parts: the first part is a bridge, where the weight \(q\) encodes the zero-contacts. Then, after a single step, we leave the \(x\)-axis and start our approach to the final position at \(d=|m_{2}-m_{1}|\):
\[\begin{split}&[u^{|m_{1}-m_{2}|-1}]\frac{1}{1-2z^{2}qE(z)}\cdot z \cdot\frac{u-zE(z)}{u\big{(}1-z\big{(}u+\frac{1}{u}\big{)}\big{)}}\\ &\quad=\frac{1}{z}\cdot\frac{1}{1-2z^{2}qE(z)}\big{(}zE(z)\big{)} ^{|m_{1}-m_{2}|+1}.\end{split} \tag{8}\]
We observe that both generating functions (5) and (8) are nearly identical, except a shift of length one in the length and in the difference \(d=|m_{1}-m_{2}|\). Symbolically,
\[Y_{m_{1},m_{2}}\sim W_{m_{1}-1,m_{2}}+2,\]
which readily leads to the stated limit law for \(Y_{m_{1},m_{2}}\). Note that one can give a more detailed analysis by extraction of coefficients, very similar to [21]; we leave the details to the interested reader.
Now we are ready to state the main result, namely the limit law for \(X_{n}\).
**Theorem 3**.: _The random variable \(X_{n}/\sqrt{n}\), counting the normalized number of correct guesses in a one-time riffle shuffle with no feedback card guessing game, tends for \(n\to\infty\) to a sum of two independent identically distributed half-normal distributed random variables \(H_{1}\), \(H_{2}\), each one with density function \(f_{H}(x)=\frac{2}{\sqrt{\pi}}e^{-x^{2}}\), \(x\geq 0\),_
\[\frac{X_{n}}{\sqrt{n}}\xrightarrow{\mathcal{L}}H_{1}+H_{2}.\]
_Equivalently, the density \(f(x):=f_{H_{1}+H_{2}}(x)\) of the limit law \(H_{1}+H_{2}\), supported on \([0,\infty)\), is given in terms of the density function \(\varphi(x)=\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}\) and the cumulative distribution function \(\Phi(x)=\int_{-\infty}^{x}\varphi(t)dt\) of the standard normal distribution as follows:_
\[f(x)=4\varphi(x)\cdot\big{(}2\Phi(x)-1\big{)}.\]
The shape of the density matches very well the simulations in [18, Figure 3], as well as our own simulations shown before.
Figure 5. Plot of the density function \(f(x)\) of the limit law. The red vertical line marks the expected value \(\frac{2}{\sqrt{\pi}}\approx 1.12838\) and the corresponding value of the density \(f(\frac{2}{\sqrt{\pi}})\approx 0.62548\).
Proof.: Our starting point is the distributional equation of \(X_{n}\) stated in Proposition 2. In the following we give a derivation of the limit law for the first summand \(Y_{h-J_{h},J_{h}}\), the second one is analyzed in a similar way. Let \(x>0\). We use the de Moivre-Laplace asymptotics of the binomial distribution and get, with \(\mu_{J}=h/2\) and \(\sigma_{J}=\sqrt{h}/2\) as denoted in (4),
\[F_{h}(x) :=\mathbb{P}\big{\{}Y_{h-J_{h},J_{h}}\leq x\sqrt{h/2}\big{\}}\] \[\sim\int_{0}^{h}\mathbb{P}\big{\{}Y_{h-j,j}\leq x\sqrt{h/2}\big{\}} \exp\Big{(}-\frac{(j-\mu_{J})^{2}}{2\sigma_{J}^{2}}\Big{)}\frac{1}{\sigma_{J} \sqrt{2\pi}}dj.\]
By the asymptotics of the binomial random variable we further get by substituting \(j=\mu_{J}+\sigma_{J}t\):
\[F_{h}(x)\sim 2\int_{0}^{\infty}\mathbb{P}\big{\{}Y_{h/2-t\sqrt{h}/2,h/2+t \sqrt{h}/2}\leq x\sqrt{h/2}\big{\}}\exp\Big{(}-\frac{t^{2}}{2}\Big{)}\frac{1}{ \sqrt{2\pi}}dt.\]
Furthermore, by our previous result in Proposition 4 we obtain
\[\mathbb{P}\{Y_{h/2-t\sqrt{h}/2,h/2+t\sqrt{h}/2}\leq x\sqrt{h/2}\}\to 1-e^{- \frac{x(2\sqrt{2}t+x)}{4}},\]
as
\[t\sqrt{h}\sim t\sqrt{2}\cdot\sqrt{h/2}.\]
This implies that
\[F_{h}(x)\sim 1-\frac{2}{\sqrt{2\pi}}\int_{0}^{\infty}e^{-\frac{x(2\sqrt{2}t+x)} {4}-t^{2}/2}dt=1-\frac{2}{\sqrt{2\pi}}\int_{0}^{\infty}e^{-\big{(}\frac{\sqrt {2}t+x}{2}\big{)}^{2}}dt.\]
The integral is readily evaluated by substituting \(\tau=(\sqrt{2}t+x)/2\) and we get
\[F_{h}(x)\sim 1-\frac{2}{\sqrt{\pi}}\int_{x/2}^{\infty}e^{-\tau^{2}}d\tau.\]
This implies that the arising density of the limit law of \(Y_{h-J_{h},J_{h}}/\sqrt{h/2}\) is obtained by taking the derivative w.r.t. \(x\) yielding
\[\frac{e^{-x^{2}/4}}{\sqrt{\pi}},\quad x\geq 0.\]
Actually, this is the density of a half-normal distributed random variable HN\((c)\) with parameter \(c=\sqrt{2}\). Finally, we note that
\[\sqrt{n}\sim\sqrt{4\cdot\frac{h}{2}}=2\sqrt{\frac{h}{2}},\]
which implies that the limit law of \(Y_{h-J_{h},J_{h}}/\sqrt{n}\) has, due to scaling, the density
\[f_{H}(x)=\frac{2}{\sqrt{\pi}}e^{-x^{2}},\quad x\geq 0.\]
Finally, the density of the limit law \(H_{1}+H_{2}\) of \(X_{n}/\sqrt{n}\) can be obtained using standard methods, where \(\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}dt\) denotes the error function:
\[f_{H_{1}+H_{2}}(x) =\int_{0}^{x}f_{H_{1}}(t)f_{H_{2}}(x-t)dt=\frac{4}{\pi}\int_{0}^{ x}e^{-t^{2}-(x-t)^{2}}dt=\frac{4}{\pi}e^{-x^{2}}\int_{0}^{x}e^{-2t(t-x)}dt\] \[=\frac{4}{\pi}e^{-x^{2}/2}\int_{0}^{x}e^{-2(t-\frac{x}{2})^{2}}dt =\frac{2\sqrt{2}}{\sqrt{\pi}}e^{-x^{2}/2}\operatorname{erf}\big{(}\frac{x}{ \sqrt{2}}\big{)}=4\varphi(x)\cdot\big{(}2\Phi(x)-1\big{)}.\]
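As a quick numeric check of the limit density (our own sketch, pure standard library), a crude Riemann sum confirms that \(f\) integrates to one, that its mean equals \(2/\sqrt{\pi}\approx 1.12838\), and the value \(f(2/\sqrt{\pi})\approx 0.62548\) shown in Figure 5:

```python
from math import erf, exp, pi, sqrt

def f(x):
    """Density of H1 + H2 from Theorem 3."""
    return (2 * sqrt(2) / sqrt(pi)) * exp(-x**2 / 2) * erf(x / sqrt(2))

dx = 1e-4
xs = [k * dx for k in range(200_000)]     # Riemann sum over [0, 20)
print(sum(f(x) for x in xs) * dx)         # total mass ~ 1.0
print(sum(x * f(x) for x in xs) * dx)     # mean ~ 2/sqrt(pi) ~ 1.12838
print(f(2 / sqrt(pi)))                    # ~ 0.62548, as in Figure 5
```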
## 4. Convergence of moments
We start by collecting the moment sequence of the limit law. Then, we perform a sanity check by comparing the moments of the limit law \(H_{1}+H_{2}\) of \(X_{n}/\sqrt{n}\) with the precise results for the first five moments with \(n=4L\) tending to infinity, computed in [18]. Afterwards, we extend the results of [18] obtaining moment convergence for arbitrary high integer moments (for arbitrary \(n\to\infty\)).
**Lemma 4** (Moments of the limit law).: _The sum \(H_{1}+H_{2}\) of two independent identically distributed half-normal distributed random variables \(H_{1}\), \(H_{2}\), each one with density function \(f_{H}(x)=\frac{2}{\sqrt{\pi}}e^{-x^{2}}\), \(x\geq 0\), has moment sequence_
\[\tilde{\mu}_{s}=\sum_{k=0}^{s}\binom{s}{k}\frac{\Gamma(\frac{k+1}{2})\Gamma( \frac{s-k+1}{2})}{\Gamma^{2}(\frac{1}{2})},\]
_for integer \(s\geq 1\)._
Proof.: We recall basic properties of the half-normal distribution \(H\) with density \(f_{H}(x)=\frac{2}{\sqrt{\pi}}e^{-x^{2}}\). Its integer moments are given by
\[\mu_{s}=\mathbb{E}(H^{s})=\frac{\Gamma(\frac{s+1}{2})}{\Gamma(\frac{1}{2})}= \frac{\Gamma(\frac{s+1}{2})}{\sqrt{\pi}},\]
with \(s\geq 0\). Consequently, the \(s\)-th integer moments \(\tilde{\mu}_{s}\) of \(H_{1}+H_{2}\) are given by
\[\tilde{\mu}_{s}=\mathbb{E}\Big{(}\big{(}H_{1}+H_{2}\big{)}^{s}\Big{)}=\sum_{k= 0}^{s}\binom{s}{k}\mu_{k}\mu_{s-k}=\sum_{k=0}^{s}\binom{s}{k}\frac{\Gamma( \frac{k+1}{2})\Gamma(\frac{s-k+1}{2})}{\Gamma^{2}(\frac{1}{2})}. \tag{9}\]
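The moment formula (9) is easily checked by Monte Carlo (our sketch; it uses that \(H\) with density \(\frac{2}{\sqrt{\pi}}e^{-x^{2}}\) has the same distribution as \(|N(0,1)|/\sqrt{2}\)):

```python
import numpy as np
from math import gamma, pi, comb

def mu_tilde(s):
    """Moments of H1 + H2 via (9); note Gamma(1/2)^2 = pi."""
    return sum(comb(s, k) * gamma((k + 1) / 2) * gamma((s - k + 1) / 2)
               for k in range(s + 1)) / pi

rng = np.random.default_rng(0)
sample = (np.abs(rng.standard_normal(10**6)) +
          np.abs(rng.standard_normal(10**6))) / np.sqrt(2)
for s in range(1, 6):
    print(s, round(mu_tilde(s), 5), round(float(np.mean(sample**s)), 5))
```

The exact values printed here also match the list (10) in Example 3 below.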
**Example 3** (Asymptotics of the first five moments [18]).: The first five moments of the half normal distributed random variable are given by
\[\mu_{1}=\mu_{3}=\frac{1}{\sqrt{\pi}},\;\mu_{2}=\frac{1}{2},\;\mu_{4}=\frac{3} {4},\;\mu_{5}=\frac{2}{\sqrt{\pi}}.\]
This implies that the first five moments of the limit law are given by
\[\tilde{\mu}_{1}=\frac{2}{\sqrt{\pi}},\;\tilde{\mu}_{2}=\frac{2}{\pi}+1,\;\tilde{\mu}_{3}=\frac{5}{\sqrt{\pi}},\;\tilde{\mu}_{4}=\frac{8}{\pi}+3,\;\tilde{\mu}_{5}=\frac{43}{2\sqrt{\pi}}. \tag{10}\]
We check with the asymptotic expansions of the moments \(\mathbb{E}(X_{n}^{s})\), \(1\leq s\leq 5\) for \(n=4L\), and summarize here for the reader's convenience the dominant terms of the first five integer moments of \(X_{n}\), with \(n=4L\), as stated in [18]:
\[\mathbb{E}(X_{n}) \sim\frac{4L}{4^{L}}\binom{2L}{L},\] \[\mathbb{E}(X_{n}^{2}) \sim\frac{(4L)^{2}}{2\cdot 4^{2L}}\binom{2L}{L}^{2}+4L,\] \[\mathbb{E}(X_{n}^{3}) \sim\frac{40L^{2}}{4^{L}}\binom{2L}{L},\] \[\mathbb{E}(X_{n}^{4}) \sim\frac{256L^{3}}{2\cdot 4^{2L}}\binom{2L}{L}^{2}+48L^{2},\] \[\mathbb{E}(X_{n}^{5}) \sim\frac{688L^{3}}{4^{L}}\binom{2L}{L}.\]
By the standard asymptotics
\[\frac{2L}{4^{L}}\binom{2L}{L}\sim\frac{\sqrt{n}}{\sqrt{\pi}} \tag{11}\]
and our results for the first five moments (10), we observe that indeed, for \(n=4L\), it holds
\[\mathbb{E}\big{(}\big{(}X_{n}/\sqrt{n}\big{)}^{s}\big{)}\to\tilde{\mu}_{s},\quad 1 \leq s\leq 5.\]
The remaining part of the section is devoted to proving that all integer moments of \(X_{n}/\sqrt{n}\) converge to the moments of \(H_{1}+H_{2}\). Again we deal with the generating-function description given in Lemma 1, but we proceed by treating the occurring recurrence for \(\tilde{g}_{m_{1},m_{2}}(q)\) directly, using the so-called kernel method, see [2, 27]. To this aim we first rewrite (2) by setting \(M:=m_{1}+m_{2}\) and \(d:=m_{2}-m_{1}\), and introducing
\[\tilde{g}_{M,d}(q)=g_{m_{1},m_{2}}(q)=g_{(M-d)/2,(M+d)/2}(q).\]
This yields the recurrence
\[\tilde{g}_{M,d}(q)=q^{\delta(d,-1)+\delta(d,-2)}\cdot\tilde{g}_{M -1,d+1}+\tilde{g}_{M-1,d-1},\quad\text{for $M\geq 1$ and $|d|\leq M$},\] \[\tilde{g}_{0,0}(q)=1,\qquad\text{with}\quad\tilde{g}_{M,d}=0,\quad \text{for $M<0$ or $|d|>M$}. \tag{12}\]
Let us introduce the trivariate generating function
\[\tilde{G}(z,u,q)=\sum_{M\geq 0}\sum_{d=-M}^{M}\tilde{g}_{M,d}(q)z^{M}u^{d}= \sum_{m_{1},m_{2}\geq 0}g_{m_{1},m_{2}}(q)z^{m_{1}+m_{2}}u^{m_{2}-m_{1}}, \tag{13}\]
for which we get an explicit solution.
**Proposition 6**.: _The generating function \(\tilde{G}(z,u,q)\) of the sequence \(\tilde{g}_{M,d}(q)\) (and thus also \(g_{m_{1},m_{2}}(q)\), resp.) satisfying recurrence (12) (and (2), resp.) is given by_
\[\tilde{G}(z,u,q)=\frac{\big{(}2qu^{2}z+(q-1)^{2}u-q(q-1)z\big{)}P(z^{2})-zu(u +q(q-1)z)}{zu(zu^{2}-u+z)(1-2qP(z^{2}))}, \tag{14}\]
_where \(P(t)=\frac{1-\sqrt{1-4t}}{2}=\sum_{n\geq 1}\frac{1}{n}\binom{2n-2}{n-1}t^{n}\) denotes the generating function of shifted Catalan-numbers._
Proof.: Introducing the auxiliary functions \(\tilde{G}_{d}(z,q)=\sum_{M\geq 0}\tilde{g}_{M,d}(q)z^{M}\), for \(d\in\mathbb{Z}\), we immediately obtain from (12) the system of equations
\[\begin{split}&\tilde{G}_{d}(z,q)=z\tilde{G}_{d+1}(z,q)+z\tilde{G}_{d- 1}(z,q),\quad d\geq 1\text{ or }d\leq-3,\\ &\tilde{G}_{0}(z,q)=z\tilde{G}_{1}(z,q)+z\tilde{G}_{-1}(z,q)+1, \\ &\tilde{G}_{d}(z,q)=qz\tilde{G}_{d+1}(z,q)+z\tilde{G}_{d-1}(z,q), \quad d=-1\text{ or }d=-2.\end{split} \tag{15}\]
In order to treat (15) by means of the kernel method, we introduce the pair of functions
\[A(z,u,q)=\sum_{d\geq 0}\tilde{G}_{-d}(z,q)u^{d},\qquad B(z,u,q)=\sum_{d\geq 0 }\tilde{G}_{d}(z,q)u^{d}.\]
It is straightforward to get from (15) the following pair of equations for \(A(z,u,q)\) and \(B(z,u,q)\), which involve the unknown functions \(\tilde{G}_{0}(z,q)\) and \(\tilde{G}_{-1}(z,q)\):
\[\begin{split}&(zu^{2}-u+z)A(z,u,q)\\ &\qquad\qquad+((q-1)zu^{2}+u-z)\tilde{G}_{0}(z,q)+((q-1)zu^{2}-zu )\tilde{G}_{-1}(z,q)=0,\\ &(zu^{2}-u+z)B(z,u,q)-z\tilde{G}_{0}(z,q)+zu\tilde{G}_{-1}(z,q) +u=0.\end{split} \tag{16}\]
The roots \(u_{1}=u_{1}(z)\), \(u_{2}=u_{2}(z)\) satisfying \(K(z,u):=zu^{2}-u+z=z(u-u_{1})(u-u_{2})=0\), i.e., making the kernel \(K(z,u)\) vanish, are the ones stated in (6). Plugging the root \(u_{1}\), which admits a power series expansion around \(z=0\), for \(u\) into (16), the kernel is annihilated leading to a linear system of equations for \(\tilde{G}_{0}(z,q)\) and \(\tilde{G}_{-1}(z,q)\), whose solution can be written in the following way (note that \(P(z^{2})=zu_{1}(z)\)):
\[\tilde{G}_{0}(z,q)=\frac{(1-q)P(z^{2})+qz^{2}}{z^{2}(1-2qP(z^{2}))},\qquad\tilde{G}_{-1}(z,q)=\frac{qP(z^{2})}{z(1-2qP(z^{2}))}. \tag{17}\]
According to (16) and (17), we thus also get explicit solutions for the auxiliary trivariate g.f.:
\[A(z,u,q) =\frac{(z-u-(q-1)zu^{2})\tilde{G}_{0}(z,q)+(zu-(q-1)zu^{3})\tilde{G} _{-1}(z,q)}{zu^{2}-u+z},\] \[B(z,u,q) =\frac{z\tilde{G}_{0}(z,q)-zu\tilde{G}_{-1}(z,q)-u}{zu^{2}-u+z}.\]
Finally, according to the definition, we have
\[\tilde{G}(z,u,q)=A(z,u^{-1},q)+B(z,u,q)-\tilde{G}_{0}(z,q),\]
which, after simple manipulations, yields the stated result.
Bearing in mind the representation of \(f_{n}(q)\) given in Lemma 1, we will set \(u=1\), i.e., considering
\[\tilde{G}(z,q):=\tilde{G}(z,1,q)=\sum_{M\geq 0}\sum_{d=-M}^{M}\tilde{g}_{M,d}(q )z^{M}=\sum_{M\geq 0}\sum_{m=0}^{M}g_{m,M-m}(q)z^{M}. \tag{18}\]
Namely, when setting \(R(q)=4q^{4}-2(q^{2}+q^{3})\), this yields
\[f_{n}(q)=R(q)+[z^{h}]\tilde{G}(z,q)\cdot[z^{n-h}]\tilde{G}(z,q). \tag{19}\]
As we are interested in the moments of \(X_{n}\), we will set \(q=1+w\) in (19) and carry out a series expansion around \(w=0\). According to the definition of \(f_{n}(q)\), the coefficients in the corresponding expansion involve the factorial moments of \(X_{n}\),
\[f_{n}(1+w)=\sum_{s\geq 0}\frac{2^{n}\mathbb{E}(X_{n}^{\underline{s}})}{s!}w^{s}.\]
When denoting the respective expansions of the remaining functions via
\[R(1+w)=\sum_{0\leq s\leq 4}r_{s}w^{s},\qquad\tilde{G}(z,1+w)=\sum_{s\geq 0} \tilde{g}_{s}(z)w^{s},\]
we thus get from (19), by extracting the coefficient of \(w^{s}\), the useful representation:
\[\mathbb{E}(X_{n}^{\underline{s}})=\frac{s!\,r_{s}}{2^{n}}+\frac{s!}{2^{n}}\sum_{k=0}^{s}\big{(}[z^{h}]\tilde{g}_{k}(z)\big{)}\cdot\big{(}[z^{n-h}]\tilde{g}_{s-k}(z)\big{)}. \tag{20}\]
Of course, to make use of it, we require suitable expansions of \(\tilde{G}(z,1+w)\). First we use Proposition 6 and set \(u=1\) to get the explicit formula (with \(P(t)\) defined above):
\[\tilde{G}(z,q)=\frac{1}{1-2z}+\frac{(q-1)\big{(}qz^{2}+(1-q+qz)P(z^{2})\big{)} }{z(1-2z)(1-2qP(z^{2}))}. \tag{21}\]
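Before extracting coefficients asymptotically, one can confirm (21) symbolically against the recurrence (our sketch, assuming sympy is available): by (18), the coefficient of \(z^{M}\) must equal \(\sum_{m=0}^{M}g_{m,M-m}(q)\):

```python
import sympy as sp

z, q = sp.symbols('z q')
P = (1 - sp.sqrt(1 - 4*z**2)) / 2                    # P(z^2), shifted Catalan numbers

def g(m1, m2):
    """The polynomials g_{m1,m2}(q) of recurrence (2), without memoization."""
    if m1 < 0 or m2 < 0:
        return sp.Integer(0)
    if (m1, m2) == (0, 0):
        return sp.Integer(1)
    c = (m1 + m2) // 2 + 1
    return sp.expand(q**int(c == m1) * g(m1 - 1, m2) + g(m1, m2 - 1))

Gt = 1/(1 - 2*z) + (q - 1)*(q*z**2 + (1 - q + q*z)*P) / (z*(1 - 2*z)*(1 - 2*q*P))  # eq. (21)

for M in range(1, 7):
    lhs = sp.series(Gt, z, 0, M + 1).removeO().coeff(z, M)
    rhs = sum(g(m, M - m) for m in range(M + 1))
    assert sp.simplify(lhs - rhs) == 0, M
print("eq. (21) matches the recurrence for M <= 6")
```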
Next we consider the coefficients in the series expansion of \(\tilde{G}(z,1+w)\) around \(w=0\), i.e., the functions \(\tilde{g}_{s}(z)=[w^{s}]\tilde{G}(z,1+w)\), determine the dominant singularities and provide local expansions around the singularity yielding the main asymptotic contributions.
**Lemma 5**.: _Let \(\tilde{G}(z,q)\) as defined in (18). The functions \(\tilde{g}_{s}(z)=[w^{s}]\tilde{G}(z,1+w)\) obtained as coefficients in a series expansion of \(\tilde{G}(z,1+w)\) around \(w=0\) have radius of convergence \(\frac{1}{2}\) and, for \(s\geq 1\), have the two dominant singularities \(\rho_{1,2}=\pm\frac{1}{2}\). Moreover, the local behaviour of \(\tilde{g}_{s}(z)\) around \(\rho:=\rho_{1}=\frac{1}{2}\) is given as follows,_
\[\tilde{g}_{s}(z)=\frac{1}{2^{\frac{s}{2}}\left(1-2z\right)^{\frac{s}{2}+1}} \cdot\big{(}1+\mathcal{O}(\sqrt{1-2z})\big{)},\quad s\geq 0.\]
**Remark 3**.: It is not difficult to show that the second dominant singularity \(\rho_{2}=-\frac{1}{2}\) occurring in the functions \(\tilde{g}_{s}(z)\) of Lemma 5 leads to contributions that do not affect the main terms stemming from the contributions of the singularity \(\rho=\rho_{1}=\frac{1}{2}\). Since we are here only interested in the main term contribution, we will restrict ourselves to elaborate the expansion around \(\rho\).
Proof.: The explicit formula for \(\tilde{G}(z,q)\) given in (21) can be rewritten as
\[\tilde{G}(z,q)=\frac{1}{1-2z}+\frac{(q-1)\big{(}z^{2}+zP(z^{2})+(q-1)(z^{2}+(z- 1)P(z^{2}))\big{)}}{z(1-2z)(1-2P(z^{2}))\big{(}1-\frac{2(q-1)P(z^{2})}{1-2P(z^{ 2})}\big{)}}.\]
Setting \(q=1+w\), we obtain the series expansion
\[\tilde{G}(z,1+w)=\frac{1}{1-2z}+\frac{w\big{(}z^{2}+zP(z^{2})+w(z ^{2}+(z-1)P(z^{2}))\big{)}}{z(1-2z)(1-2P(z^{2}))\big{(}1-\frac{2wP(z^{2})}{1-2 P(z^{2})}\big{)}}\] \[\quad=\frac{1}{1-2z}+w\frac{P(z^{2})+z}{(1-2z)(1-2P(z^{2}))}\] \[\quad\quad+\sum_{s\geq 2}w^{s}\frac{(z+P(z^{2}))(2P(z^{2}))^{s-1}} {(1-2z)(1-2P(z^{2}))^{s}}+\sum_{s\geq 2}w^{s}\frac{(z^{2}+(z-1)P(z^{2}))(2P(z^{2 }))^{s-2}}{z(1-2z)(1-2P(z^{2}))^{s-1}}\] \[\quad=\frac{1}{1-2z}+w\frac{P(z^{2})+z}{(1-2z)(1-2P(z^{2}))}+ \sum_{s\geq 2}w^{s}\frac{(2P(z^{2}))^{s-2}((z+1)P(z^{2})-z^{2})}{z(1-2z)(1-2P(z ^{2}))^{s}},\]
where we used in the last step the relation \((P(z^{2}))^{2}=P(z^{2})-z^{2}\). Thus, the functions \(\tilde{g}_{s}(z)\) are given as follows:
\[\tilde{g}_{0}(z)=\frac{1}{1-2z},\qquad\tilde{g}_{1}(z)=\frac{P(z^ {2})+z}{(1-2z)(1-2P(z^{2}))},\] \[\tilde{g}_{s}(z)=\frac{\big{(}(z+1)P(z^{2})-z^{2})(2P(z^{2}))^{s- 2}}{z(1-2z)(1-2P(z^{2}))^{s}},\quad s\geq 2. \tag{22}\]
Since \(1-2P(z^{2})=\sqrt{1-4z^{2}}=\sqrt{1-2z}\cdot\sqrt{1+2z}\), it is immediate from the explicit formulae, that the functions \(\tilde{g}_{s}(z)\) have radius of convergence \(\frac{1}{2}\) with dominant singularities \(\rho_{1}=\frac{1}{2}\) and, for \(s\geq 1\), \(\rho_{2}=-\frac{1}{2}\). To describe the local behaviour of \(\tilde{g}_{s}(z)\) around \(\rho:=\rho_{1}\) we use the notation \(\mathcal{Z}=\frac{1}{1-2z}\) and \(\tilde{\mathcal{Z}}=\frac{1}{1-4z^{2}}=\frac{1}{(1-2z)(1+2z)}\). We collect a few local expansions around \(\rho\) used thereafter:
\[z=\frac{1}{2}\cdot\big{(}1+\mathcal{O}(\mathcal{Z}^{-1})\big{)},\qquad\tilde{\mathcal{Z}}=\frac{\mathcal{Z}}{2}\cdot\big{(}1+\mathcal{O}(\mathcal{Z}^{-1})\big{)},\]
\[P(z^{2})=\frac{1}{2}\cdot\big{(}1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2}}) \big{)},\qquad\frac{1}{1-2P(z^{2})}=\frac{\mathcal{Z}^{\frac{1}{2}}}{2^{\frac{ 1}{2}}}\cdot\big{(}1+\mathcal{O}(\mathcal{Z}^{-1})\big{)}.\]
We then immediately get from (22):
\[\tilde{g}_{0}(z)=\mathcal{Z},\qquad\tilde{g}_{1}(z)=\frac{\mathcal{Z}^{\frac{ 3}{2}}}{2^{\frac{1}{2}}}\cdot\big{(}1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2}}) \big{)}.\]
Furthermore, with these expansions we easily obtain, for \(s\geq 2\) arbitrary but fixed,
\[\tilde{g}_{s}(z)=\big{(}1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2}})\big{)}\cdot\big{(}\frac{1}{2}+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2}})\big{)}\cdot 2\big{(}1+\mathcal{O}(\mathcal{Z}^{-1})\big{)}\cdot\mathcal{Z}\cdot\frac{\mathcal{Z}^{\frac{s}{2}}}{2^{\frac{s}{2}}}\cdot\big{(}1+\mathcal{O}(\mathcal{Z}^{-1})\big{)}\] \[\quad=\frac{\mathcal{Z}^{\frac{s}{2}+1}}{2^{\frac{s}{2}}}\cdot\big{(}1+\mathcal{O}(\mathcal{Z}^{-\frac{1}{2}})\big{)},\]
which completes the proof.
Now we have all ingredients at hand to show convergence of the moments of \(X_{n}\).
**Theorem 6**.: _The \(s\)-th integer moments \(\mathbb{E}\big{(}\big{(}\frac{X_{n}}{\sqrt{n}}\big{)}^{s}\big{)}\) of the suitably scaled r.v. \(X_{n}\) converge, for arbitrary but fixed \(s\geq 0\) and \(n\to\infty\), to the moments of the limit law \(H_{1}+H_{2}\):_
\[\mathbb{E}\Big{(}\big{(}\frac{X_{n}}{\sqrt{n}}\big{)}^{s}\Big{)}\to\tilde{\mu} _{s}=\mathbb{E}\big{(}\big{(}H_{1}+H_{2}\big{)}^{s}\big{)}=\sum_{k=0}^{s} \binom{s}{k}\cdot\frac{\Gamma\big{(}\frac{k+1}{2}\big{)}\Gamma\big{(}\frac{s-k +1}{2}\big{)}}{\pi}.\]
Proof of Theorem 6.: We consider the representation (20) of the \(s\)-th factorial moments of \(X_{n}\) and first observe that \(r_{s}\), the contributions of \(R(q)\), are bounded, \(|r_{s}|\leq 24\), thus turn out to be exponentially small compared to the remaining contributions. Consequently, they can be neglected, yielding
\[\mathbb{E}(X_{n}^{\underline{s}})\sim\frac{s!}{2^{n}}\sum_{k=0}^{s}\big{(}[z^{h}]\tilde{g}_{k}(z)\big{)}\cdot\big{(}[z^{n-h}]\tilde{g}_{s-k}(z)\big{)}. \tag{23}\]
In order to extract coefficients from the functions \(\tilde{g}_{s}(z)\) asymptotically, we use Lemma 5 and apply transfer lemmata [13] that "translate" the local behaviour of the generating function near the dominant singularity to the asymptotic behaviour of their coefficients. The local expansion around \(\rho=\frac{1}{2}\) given there (see also Remark 3) immediately leads to the following asymptotic behaviour, for \(n\to\infty\) and arbitrary but fixed \(s\geq 0\):
\[[z^{n}]\tilde{g}_{s}(z)=\frac{2^{n}n^{\frac{s}{2}}}{2^{\frac{s}{2}}\Gamma(\frac{s}{2}+1)}\cdot\big{(}1+\mathcal{O}(n^{-\frac{1}{2}})\big{)}=\frac{2^{n}2^{\frac{s}{2}}\Gamma(\frac{s+1}{2})n^{\frac{s}{2}}}{s!\sqrt{\pi}}\cdot\big{(}1+\mathcal{O}(n^{-\frac{1}{2}})\big{)},\]
where we used for the latter equation the duplication formula of the Gamma function, \(\Gamma(\frac{s}{2}+1)\Gamma(\frac{s+3}{2})=\sqrt{\pi}\,2^{-s-1}\Gamma(s+2)\), and \(\Gamma(\frac{1}{2})=\sqrt{\pi}\).
Plugging this asymptotic result into (23) and using \(h=\frac{n}{2}\cdot\big{(}1+\mathcal{O}(n^{-1})\big{)}\), we get
\[\begin{split}\mathbb{E}(X_{n}^{\underline{s}})&=\sum_{k=0}^{s}\frac{s!}{2^{n}}\cdot\frac{2^{h}2^{\frac{k}{2}}\Gamma(\frac{k+1}{2})h^{\frac{k}{2}}}{\sqrt{\pi}\,k!}\cdot\big{(}1+\mathcal{O}(h^{-\frac{1}{2}})\big{)}\\ &\qquad\cdot\frac{2^{n-h}2^{\frac{s-k}{2}}\Gamma(\frac{s-k+1}{2})(n-h)^{\frac{s-k}{2}}}{\sqrt{\pi}\,(s-k)!}\cdot\big{(}1+\mathcal{O}((n-h)^{-\frac{1}{2}})\big{)}\\ &=\sum_{k=0}^{s}\binom{s}{k}\cdot\frac{\Gamma(\frac{k+1}{2})\Gamma(\frac{s-k+1}{2})}{\pi}\cdot n^{\frac{s}{2}}\cdot\big{(}1+\mathcal{O}(n^{-\frac{1}{2}})\big{)}.\end{split} \tag{24}\]
The raw moments can be expressed in terms of the factorial moments by
\[\mathbb{E}(X_{n}^{s})=\sum_{k=0}^{s}\left\{{s\atop k}\right\}\mathbb{E}(X_{n}^{\underline{k}}),\]
where \(\left\{{s\atop k}\right\}\) denote the Stirling numbers of the second kind, counting the number of ways to partition a set of \(s\) objects into \(k\) non-empty subsets. From (24) we thus obtain
\[\mathbb{E}(X_{n}^{s})=\mathbb{E}(X_{n}^{\underline{s}})+\mathcal{O}\big{(} \mathbb{E}(X_{n}^{\underline{s-1}})\big{)}\]
for \(n\to\infty\) and arbitrary but fixed \(s\geq 1\). This leads to the expansion
\[\mathbb{E}(X_{n}^{s})=\sum_{k=0}^{s}\binom{s}{k}\cdot\frac{\Gamma(\frac{k+1}{2} )\Gamma(\frac{s-k+1}{2})}{\pi}\cdot n^{\frac{s}{2}}\cdot\big{(}1+\mathcal{O}(n^ {-\frac{1}{2}})\big{)}=\tilde{\mu}_{s}\cdot n^{\frac{s}{2}}\cdot\big{(}1+ \mathcal{O}(n^{-\frac{1}{2}})\big{)}.\]
Scaling of \(X_{n}\) by \(\sqrt{n}\) immediately yields the stated result.
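To see the moment convergence "in action", the following sketch (ours) computes the exact first two moments from Lemma 1 for deck sizes \(n=4L\) and compares them with \(\tilde{\mu}_{s}\,n^{s/2}\); the approach is visibly slow, consistent with the \(\mathcal{O}(n^{-1/2})\) error term above:

```python
from fractions import Fraction
from functools import lru_cache
from math import gamma, pi, comb

def mu_tilde(s):
    return sum(comb(s, k) * gamma((k + 1) / 2) * gamma((s - k + 1) / 2)
               for k in range(s + 1)) / pi

def padd(a, b):
    L = max(len(a), len(b))
    return tuple((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(L))

@lru_cache(maxsize=None)
def g(m1, m2):
    """g_{m1,m2}(q) as a tuple of coefficients in q, from recurrence (2)."""
    if m1 < 0 or m2 < 0:
        return ()
    if (m1, m2) == (0, 0):
        return (1,)
    a = g(m1 - 1, m2)
    if (m1 + m2) // 2 + 1 == m1:           # weight q: shift the coefficients
        a = (0,) + a
    return padd(a, g(m1, m2 - 1))

def moments(n):
    """Exact first two moments of X_n via Lemma 1 (n a multiple of 4)."""
    h = n // 2
    A = B = ()
    for a in range(h + 1):
        A = padd(A, g(a, h - a))
    for b in range(n - h + 1):
        B = padd(B, g(b, n - h - b))
    f = [0] * (len(A) + len(B) - 1)
    for i, x in enumerate(A):
        for j, y in enumerate(B):
            f[i + j] += x * y
    f = padd(tuple(f), (0, 0, -2, -2, 4))  # add the correction 4q^4 - 2(q^2+q^3)
    p = [Fraction(c, 2**n) for c in f]     # pmf of X_n
    return (float(sum(k * pk for k, pk in enumerate(p))),
            float(sum(k * k * pk for k, pk in enumerate(p))))

for n in (8, 16, 32, 64):
    m1, m2 = moments(n)
    print(n, round(m1 / n**0.5, 4), round(mu_tilde(1), 4),
          "|", round(m2 / n, 4), round(mu_tilde(2), 4))
```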
**Remark 4** (Proof of Theorem 3 by the method of moments).: Finally, we note that Theorem 6 also strengthens Theorem 3. Carleman's criterion [6, pp. 189-220] for the Stieltjes moment problem, support \([0,\infty)\), states that if
\[\sum_{s=1}^{\infty}\mu_{s}^{-1/(2s)}=+\infty, \tag{25}\]
then the moment sequence \((\mu_{s})_{s\geq 1}\) determines a unique distribution. Furthermore, this implies that if there exists a constant \(C>0\) such that
\[\mu_{s}\leq C^{s}(2s)!\quad\text{for }s\in\mathbb{N},\]
then Carleman's criterion is satisfied. We note that for the sum of independent half-normals \(H_{1}+H_{2}\) with moment sequence \((\tilde{\mu}_{s})_{s\in\mathbb{N}}\), using \(\Gamma(\frac{k+1}{2})\leq\sqrt{\pi}\,\Gamma(k+1)=\Gamma(\frac{1}{2})\Gamma(k+1)\) for integers \(k\geq 0\), it holds
\[\tilde{\mu}_{s}=\sum_{k=0}^{s}\binom{s}{k}\frac{\Gamma(\frac{k+1}{2})\Gamma(\frac{s-k+1}{2})}{\Gamma^{2}(\frac{1}{2})}\leq\sum_{k=0}^{s}\binom{s}{k}\Gamma(k+1)\Gamma(s-k+1)=(s+1)\cdot s!\leq(2s)!,\]
such that Carleman's criterion is satisfied. Thus, by the Frechet-Shohat theorem [14], we obtain the weak convergence of the normalized random variable \(\frac{X_{n}}{\sqrt{n}}\) to \(H_{1}+H_{2}\) with moment sequence \((\tilde{\mu}_{s})_{s\in\mathbb{N}}\).
## 5. Conclusion
We studied the number of correct guesses when an ordered deck of \(n\) cards labeled \(1\) up to \(n\) is riffle-shuffled exactly one time. Assuming that no feedback is given to the person guessing, the limit law was determined. Additionally, we have shown convergence of all positive integer moments, providing a second proof of the limit law. We note that the approach of this work also allows one to analyze different questions, like waiting times for correct guesses, etc.
## Declarations of interest
The authors declare that they have no competing financial or personal interests that influenced the work reported in this paper.
|
2308.04857 | Emotion-Conditioned Text Generation through Automatic Prompt
Optimization | Conditional natural language generation methods often require either
expensive fine-tuning or training a large language model from scratch. Both are
unlikely to lead to good results without a substantial amount of data and
computational resources. Prompt learning without changing the parameters of a
large language model presents a promising alternative. It is a cost-effective
approach, while still achieving competitive results. While this procedure is
now established for zero- and few-shot text classification and structured
prediction, it has received limited attention in conditional text generation.
We present the first automatic prompt optimization approach for
emotion-conditioned text generation with instruction-fine-tuned models. Our
method uses an iterative optimization procedure that changes the prompt by
adding, removing, or replacing tokens. As objective function, we only require a
text classifier that measures the realization of the conditional variable in
the generated text. We evaluate the method on emotion-conditioned text
generation with a focus on event reports and compare it to manually designed
prompts that also act as the seed for the optimization procedure. The optimized
prompts achieve 0.75 macro-average F1 to fulfill the emotion condition in
contrast to manually designed seed prompts with only 0.22 macro-average F1. | Yarik Menchaca Resendiz, Roman Klinger | 2023-08-09T10:42:38Z | http://arxiv.org/abs/2308.04857v1 | # Emotion-Conditioned Text Generation through Automatic Prompt Optimization
###### Abstract
Conditional natural language generation methods often require either expensive fine-tuning or training a large language model from scratch. Both are unlikely to lead to good results without a substantial amount of data and computational resources. Prompt learning without changing the parameters of a large language model presents a promising alternative. It is a cost-effective approach, while still achieving competitive results. While this procedure is now established for zero- and few-shot text classification and structured prediction, it has received limited attention in conditional text generation. We present the first automatic prompt optimization approach for emotion-conditioned text generation with instruction-fine-tuned models. Our method uses an iterative optimization procedure that changes the prompt by adding, removing, or replacing tokens. As objective function, we only require a text classifier that measures the realization of the conditional variable in the generated text. We evaluate the method on emotion-conditioned text generation with a focus on event reports and compare it to manually designed prompts that also act as the seed for the optimization procedure. The optimized prompts achieve 0.75 macro-average F\({}_{1}\) to fulfill the emotion condition in contrast to manually designed seed prompts with only 0.22 macro-average F\({}_{1}\).
## 1 Introduction
Emotions are fundamental in communication, where they play an important role in transferring meaning and intent Ekman (1992). Emotion-conditioned natural language generation models aim at improving human-computer interaction, by generating text that is not limited to conveying propositional information. However, state-of-the-art conditional generation models require a large amount of data and computational power to achieve models that allow for a fine-grained control over the generated texts Pascual et al. (2021); Ghosh et al. (2017); Song et al. (2019); Zhou et al. (2018); Menchaca Resendiz and Klinger (2023).
In areas like text classification or structured prediction, prompt optimization has established itself as a zero- or few-shot learning paradigm Ding et al. (2022); Zhang et al. (2022); Wang et al. (2022), also in emotion analysis Plaza-del Arco et al. (2022); Zheng et al. (2022); Yin et al. (2019). Here, only parameters that are concatenated to the input are optimized and the large language model's parameters are frozen. Such models, therefore, exploit encoded knowledge in models such as Flan Tay et al. (2023), GPT-3 Brown et al. (2020) and Alpaca Taori et al. (2023) more explicitly than fine-tuning them for the task at hand. The optimization method learns "how to use" a model, not "how to change" it.
In recent instruction-based models, the prompt is an instruction to elicit a desired response. The instruction serves as a starting point for generating text that aligns with the intended task. Prompting in text classification Hu et al. (2022); Gu et al. (2022) usually includes the instruction (e.g., "classify the text...") and the label representation (e.g., "positive", "negative"). Summarization has been represented as an instruction by appending "TL;DR" or "summarize" Radford et al. (2019); Narayan et al. (2021). For text generation that translates tables
\begin{table}
\begin{tabular}{l l l} \hline \hline I. & Input prompt & Generated text \\ \hline
0 & Text with disgust & Disgust is a character in Inside Out \\ \hline
1 & Text expressing disgust & Disgusting \\ \hline
2 & Write a text to express disgust & A look of disgust came over his face. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Hypothetical example for a prompt optimization process. The seed prompt is given in Iteration (I.) 0 and misinterpreted to mention the character “Disgust”. This issue is fixed through iterative optimization.
to text, Li and Liang (2021) proposed to tune a prefix prompt to accomplish the task. In machine translation, prompts typically mention the source and target language, such as "translate English to German" (Raffel et al., 2020).
The task of prompt optimization can be formulated in various directions. The goal is to find the optimal sequence of tokens to represent the prompt for a specific model (e.g., Flan) and task (e.g., summarization), while keeping the model weights unchanged. AutoPrompt (Shin et al., 2020) defines the prompt optimization as "fill-in-the-blanks" based on a gradient-guided search. OpenPrompt (Ding et al., 2022) provides a toolkit for training prompts using a template dataset, along with corresponding verbalizers for different classes. Deng et al. (2022) use reinforcement learning to infer a successful prompt variation strategy. A different approach for optimization is fine-tuning the model to improve its performance with a specific prompt, while keeping the prompt unchanged (Jian et al., 2022; Gu et al., 2022).
In contrast to most previous work, we use models that have been fine-tuned to solve instruction-based tasks; in our case to generate emotion-conditioned texts. This comes with distinct challenges because the loss function cannot be determined by a single expected label (e.g., positive or negative). In our work, we use a classifier that measures the fulfillment of the condition as a source to calculate the value of an objective function. The optimization procedure that we propose is an evolutionary optimization method (Simon, 2013). Next to the objective function, an important component are actions that allow changes to a prompt to explore the search space.
## 2 Methods
We propose a method (summarized in pseudocode in Algorithm 1) for text generation conditioned on emotions using prompt optimization. It involves an iterative optimization procedure with three modules, namely _prompt modification_, _text generation_, and _prompt evaluation_. We describe the modules in Section 2.1 and the iterative optimization in Section 2.2.
### Modules
Prompt modification.In each optimization iteration, we apply the three operations, one at a time, to all the tokens in the prompt. Therefore, based on one "parent" prompt, we create \(\lambda>1\) "children".
_Addition_ adds the most probable token at any position within the prompt, including both the beginning and end of the prompt. We use the pre-trained RoBERTa model (Liu et al., 2019) to retrieve probable tokens for each of these positions. _Removal_ deletes a token from the prompt. The _Replacement_ operation replaces a token with the most probable token, again as predicted by RoBERTa.
The _Addition_ and _Replacement_ operations use the <mask> special token to predict the word. We exemplify these operations in Table 2.
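For illustration, a minimal sketch of the three operations using the Hugging Face fill-mask pipeline follows. The checkpoint `roberta-base`, whitespace tokenization, and keeping only the top-1 prediction are assumptions of this sketch; the paper only specifies that a pre-trained RoBERTa model proposes the tokens.

```python
# Sketch of the three prompt operations, assuming the Hugging Face
# fill-mask pipeline with "roberta-base" and top-1 predictions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="roberta-base")

def most_probable(tokens, position):
    """Insert <mask> at `position` and return RoBERTa's top prediction."""
    masked = tokens[:position] + ["<mask>"] + tokens[position:]
    return unmasker(" ".join(masked))[0]["token_str"].strip()

def additions(tokens):
    # Add the most probable token at every position, incl. start and end.
    return [tokens[:i] + [most_probable(tokens, i)] + tokens[i:]
            for i in range(len(tokens) + 1)]

def replacements(tokens):
    # Replace each token by the most probable token for its position.
    return [tokens[:i] + [most_probable(tokens[:i] + tokens[i+1:], i)] + tokens[i+1:]
            for i in range(len(tokens))]

def removals(tokens):
    # Delete each token in turn.
    return [tokens[:i] + tokens[i+1:] for i in range(len(tokens))]

seed = "Text that expresses".split()
children = additions(seed) + replacements(seed) + removals(seed)
```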
Text generation.We then use each of the \(\lambda\) prompt variations to create text using a large pre-trained language model (e.g., Flan). To do so, we instantiate it with the emotion category. We refer to this instantiation as the _Conditional-Prompts_. Each of them consists of the modified prompt and the specified emotion (e.g., "Text that expresses \(\langle\texttt{em}\rangle\)"). Here, \(\langle\texttt{em}\rangle\) is replaced by each of the emotion categories under consideration.
Evaluation.Each prompt is then evaluated through the texts that are generated with its instantiated _Conditional-Prompts_. In the evaluation, we do not further consider texts that are a paraphrase of the Conditional-Prompt. We calculate the BLEU score (Papineni et al., 2002) between the prompt and the generated text and filter out all texts with a score greater than 0.2. For example, a language model could generate "The text expresses joy." for a Conditional-Prompt "Text that expresses joy".
The actual evaluation is performed by comparing the emotion condition to the judgment of an emotion classifier, applied to the generated texts. We use the \(\mathrm{F}_{1}\) measure both as an objective function during optimization and for final evaluation. Note that these two scores are based on two separate classifiers, trained on independent data.
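A minimal sketch of this evaluation step might look as follows; NLTK's BLEU implementation and the classifier interface (any callable mapping a text to a predicted emotion label) are assumptions, neither is prescribed by the paper.

```python
# Sketch of prompt evaluation: drop near-paraphrases of the
# Conditional-Prompt (BLEU > 0.2) and score the rest with macro-F1.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from sklearn.metrics import f1_score

smooth = SmoothingFunction().method1

def is_paraphrase(prompt: str, text: str, threshold: float = 0.2) -> bool:
    bleu = sentence_bleu([prompt.split()], text.split(), smoothing_function=smooth)
    return bleu > threshold

def evaluate_prompt(generations, classify):
    """generations: list of (conditional_prompt, target_emotion, generated_text)."""
    gold, pred = [], []
    for prompt, emotion, text in generations:
        if is_paraphrase(prompt, text):
            continue  # e.g., "The text expresses joy." for "Text that expresses joy"
        gold.append(emotion)
        pred.append(classify(text))
    return f1_score(gold, pred, average="macro")
```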
\begin{table}
\begin{tabular}{l l l} \hline \hline Original Prompt & Oper. & Modified Prompt \\ \hline Text that expresses & Add. & Text string that expresses \\ Text that expresses & Repl. & Text \# expresses \\ Text that expresses & Rem. & Text expresses \\ \hline \hline \end{tabular}
\end{table}
Table 2: The prompt operations (Oper.) are performed on the same prompt. The Addition (Add.) adds RoBERTa’s special mask token (<mask>) between _Text_ and _that_. The Replacement (Repl.) masks the target word (that). The unmasked/predicted tokens by RoBERTa are underlined, and the replaced or removed tokens from the original are in **bold**. Removal (Rem.) deletes one token from the prompt.
### Iterative Optimization
Algorithm 1 shows the iterative prompt optimization for a given seed prompt \(P\) (e.g., "Text that expresses"). The optimization is based on a \((\mu,\lambda)\) evolutionary algorithm [1], more concretely \((1,\lambda)\), because we keep only the one best-performing prompt for the next optimization iteration. In contrast to a \((\mu+\lambda)\), the respective parent is not further considered in the next iteration. This makes the algorithm less likely to get stuck in a local optimum.
Initially, \(P_{\textit{opt}}\) (the optimized prompt) is initialized with the seed prompt \(P\). Next, each token in \(P_{\textit{opt}}\) is modified using the Addition, Replacement, and Removal. Each operation is performed one at a time, and the results are stored in \(\mathbf{P}_{\textit{mod}}\) (Section 2.1). The _Generate_ method produces a text for each _Conditional-Prompt_-combination of the input prompt and the emotion class (e.g., "Text that expresses joy", "Text that expresses anger"; Section 2.1). We compare the generated text from \(P_{\textit{opt}}\) (namely \(T_{\textit{opt}}\)) against the generated text from each modified prompt (\(\mathbf{P}_{\textit{mod}}\)), denoted as \(\mathbf{T}_{\textit{mod}}\). If the F\({}_{1}\) of \(\mathbf{T}_{\textit{mod}}\) is higher than that of \(T_{\textit{opt}}\), the prompt \(\textit{prompt}_{\textit{mod}}\) is assigned as the new optimized prompt (\(P_{\textit{opt}}\)) and added to the best-performing candidates (\(\mathbf{P}_{\textit{candds}}\)). Finally, this process is repeated for a total of \(N\) times and \(P_{\textit{opt}}\) is updated with the best-performing prompt from \(\mathbf{P}_{\textit{cands}}\).
```
Input : Seed Prompt P, Maximum Iterations N
Output: Optimized Prompt P_opt

P_opt ← P;  i ← 0;  P_cands ← {};
while i < N do
    P_mod ← {};
    for token ∈ P_opt do
        P_mod += Add(P_opt, token);
        P_mod += Replace(P_opt, token);
        P_mod += Remove(P_opt, token);
    T_opt ← {};
    for prompt_mod ∈ P_mod do
        T_mod ← Generate(prompt_mod);
        if Eval(T_mod) > Eval(T_opt) then
            P_opt ← prompt_mod;
            T_opt ← T_mod;
            P_cands += P_opt;
    i ← i + 1;
P_opt ← select-one-best(P_cands);
return P_opt;
```
**Algorithm 1**Automatic Prompt Optimization. _Eval_ involves an emotion classifier and the BLEU score.
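For concreteness, the following is a minimal Python rendering of this \((1,\lambda)\) loop; the names `modify`, `generate`, and `evaluate` are placeholders for the modules of Section 2.1 and are not part of the paper's code.

```python
# Minimal sketch of Algorithm 1's (1, lambda) loop, assuming helpers
# modify(prompt) -> list of children, generate(prompt) -> generated texts
# for all Conditional-Prompts, and evaluate(texts) -> macro-F1 score.
def optimize(seed_prompt, N, modify, generate, evaluate):
    p_opt = seed_prompt
    candidates = []
    for _ in range(N):
        best_score = float("-inf")  # parent is NOT kept: comma selection
        best_child = None
        for child in modify(p_opt):
            score = evaluate(generate(child))
            if score > best_score:
                best_score, best_child = score, child
        p_opt = best_child
        candidates.append((best_score, p_opt))
    # return the single best prompt seen across all iterations
    return max(candidates)[1]
```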
## 3 Experiments
Section 3.1 explains the experimental settings used to optimize an initial prompt that we assume to be provided by a user. Section 3.3 validates the proposed method by showing that emotion-conditioned text generation improves when using the optimized prompt compared to the seed prompt.
### Experimental Settings
To validate the feasibility of our method for emotion-conditioned text generation, and its cost-effectiveness in terms of data and computational resources, we utilized available pre-trained models and datasets. Specifically, we used Flan [11], an open-source model trained on instruction-based datasets, as a generative model. We trained two classifiers using (1) the ISEAR dataset [12] for prompt optimization in each iteration, and (2) the crowd-enVent dataset [13] for final evaluation, utilizing the same subset of emotions as the ISEAR dataset.1 Both classifiers are built on top of RoBERTa using default parameters for 10 epochs.2
Footnote 1: The emotion labels are: Anger, Disgust, Fear, Guilt, Joy, Sadness, and Shame.
Footnote 2: The crowd-enVent and ISEAR-based classifiers have macro-F\({}_{1}\) of.78 and.77, respectively.
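For illustration, a minimal sketch of one such classifier follows, assuming the ISEAR data is already loaded as parallel lists `texts` and `labels`; everything beyond "RoBERTa, default parameters, 10 epochs" (label order, output directory, dataset wrapper) is an assumption.

```python
# Sketch of fine-tuning a 7-class emotion classifier on top of RoBERTa.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
import torch

EMOTIONS = ["anger", "disgust", "fear", "guilt", "joy", "sadness", "shame"]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(EMOTIONS))

class EmotionDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = [EMOTIONS.index(l) for l in labels]
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# `texts` and `labels` are assumed to hold the ISEAR event reports.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="isear-clf", num_train_epochs=10),
    train_dataset=EmotionDataset(texts, labels),
)
trainer.train()
```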
These data sets are independent of each other, and therefore the objective signal is independent of the final evaluation. Both sets, however, are comparable: they contain texts in which people were asked to report on an emotion-triggering event, given a predefined emotion. In the original ISEAR corpus, these texts were acquired in an in-lab setting in the 1990s, while the crowd-enVENT corpus has recently been collected in 2022 in a crowd-sourcing setup. An example from the ISEAR corpus is "When I was involved in a traffic accident." - an example from crowd-enVENT is "When my son was poorly with covid".
Prompt Modification.We selected a straightforward seed prompt--"Write a text that expresses \(\langle\texttt{em}\rangle\)"--for ten iterations and all operations.
Text Generation.For each _Conditional-Prompt_, we generate the three most probable sentences using a beam search with a beam size of 30, a next-token temperature of 0.7, and a top-p (nucleus)
sample of 0.7. We ensure that our output excludes sentences with repeated instances of the same bigram.
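A sketch of this generation step with the stated decoding settings is given below; the checkpoint `google/flan-t5-base` and the output-length cap are assumptions (the paper only names Flan).

```python
# Sketch of the generation step: beam size 30, temperature 0.7, top-p 0.7,
# no repeated bigrams, three returned sequences per Conditional-Prompt.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

def generate(conditional_prompt: str):
    inputs = tokenizer(conditional_prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=30,
        do_sample=True,
        temperature=0.7,
        top_p=0.7,
        no_repeat_ngram_size=2,
        num_return_sequences=3,
        max_new_tokens=40,   # length cap is an assumption of this sketch
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

print(generate("Write a text that expresses joy"))
```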
Prompt Evaluation.We filter out all prompts where the average BLEU score is higher than 0.2 across all the conditional prompts. Next, we select the prompt with the best F\({}_{1}\) score using the ISEAR classifier.
### State-of-the-art Baseline
We compare our method against the plug-and-play method proposed by Pascual et al. (2021)--a state-of-the-art model for affective text generation. To do so, we train the emotion discriminators that are part of that method on top of GPT-2 with the ISEAR dataset. The comparison is not straightforward since this method uses the prompt as a starting point to generate the sentence, whereas our approach treats the prompt as an instruction. Therefore, we select the most frequent n-grams from the ISEAR dataset as prompts: "When I was", "When a", and "When someone". For each prompt-discriminator combination, we generate the 5 most probable sentences.
### Results
We begin the discussion of the results with Table 3, which shows the prompt optimization and performance across iterations. It reveals two notable findings: First, already the first iteration, compared to the seed prompt in Iteration 0, shows an increase of 52 pp in F\({}_{1}\). This change consists only of replacing "that" with "to". Since our selection criterion does not retain the parent prompt, performance can also decrease, as can be observed in Iteration 2. Second, all prompts in Table 3--the best-performing prompts at each iteration--are human-readable. This is in contrast to prompt optimization in other NLP tasks, where the resulting prompts often become less human-readable. For example, in the fact retrieval task "[X] limestone depositedati boroughDepending [Y]" performs better than "[X] is the capital of [Y]" (Ding et al., 2022).
Table 4 showcases examples of generated texts from various prompt candidates. The prompt candidates at the same iteration are a few examples of the resulting prompt modifications as described in Section 2. The provided F\({}_{1}\) scores refer to the performance of the prompt across the 7 emotions, not the performance of the specific examples shown. Comparing the generated text from the seed prompt (Row 1) and the first optimization (Row 2), we observe a better fulfillment of the emotion _disgust_ for the optimized prompt--the uncertainty expressed in Row 1 indicates _fear_. Prompt modifications at the same iteration have different performances. For example, in Iteration 2 (Rows 4/5), there is a difference of 33 pp in F\({}_{1}\). It is important to note that the best F\({}_{1}\) score does not always indicate an improvement in fulfilling the condition of the generated text. Sometimes, the best-scoring text can be a paraphrase of the prompt, which may be falsely classified as correct due to the presence of the emotion class name (e.g., Row 6/Iteration 5, Row 3/Iteration 2).
Finally, Table 5 shows an independent evaluation of the method along with the results achieved with the method by Pascual et al. (2021). We report F\({}_{1}\) scores for the ISEAR-based classifier used during the optimization process and the independent crowd-enVENT-based classifier. The latter numbers therefore constitute an independent evaluation result. We observe that the numbers of both classifiers are comparable to each other. The comparison to the baseline shows that our seed prompt performs on par with Pascual's method (.18,.12, and.17 vs..22, respectively). Our optimized prompt, however, shows a higher performance (.75 F\({}_{1}\)).
## 4 Conclusion and Future Work
In this study, we introduced the first automatic prompt optimization method for text generation conditioned on emotions. Our approach involved three token operations: addition, replacement, and removal. We utilized a BLEU score and an automatic classifier to filter and rank the modified prompts. We demonstrated that the optimized prompts led to a higher fulfillment of the intended
\begin{table}
\begin{tabular}{l l l r} \hline \hline
I. & Oper. & Optimized Prompt (\(P_{opt}\)) & F\({}_{1}\) \\ \hline
0 & — & Write a text that expresses \(\langle\)em\(\rangle\) &.28 \\
1 & Repl. & Write a text to expresses \(\langle\)em\(\rangle\) &.80 \\
2 & Add. & Write in a text to expresses \(\langle\)em\(\rangle\) &.91 \\
3 & Add. & Write in a text string to expresses \(\langle\)em\(\rangle\) &.88 \\
4 & Add. & Write in a long text string to expresses \(\langle\)em\(\rangle\) & **.94** \\
5 & Rem. & Write in long text string to expresses \(\langle\)em\(\rangle\) & **.94** \\
6 & Repl. & Write in long text strings to expresses \(\langle\)em\(\rangle\) &.91 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Prompt optimization at different iterations (I.), with Iteration 0 representing the seed prompt. The \(\langle\)em\(\rangle\) token represents any of the seven emotions in the ISEAR dataset. The macro F\({}_{1}\) score is calculated using the ISEAR classifier, across all the emotions.
emotions compared to the seed prompt, with a 53 pp improvement in the F\({}_{1}\) score. It is a cost-effective method in terms of both data and resource requirements, while still achieving good results.
This leads to important future work. While our approach improves emotion-conditioned text generation, there are several areas that need to be explored further. First, we need to explore different search techniques for prompt optimization (e.g., Beam search). Second, it is essential to compare the performance of the optimized prompts across different domains to assess the generalizability of our method. Our evaluation is arguably comparably narrow, with only one seed prompt and one domain in which emotions are expressed. Finally, it is crucial to analyze our approach by comparing it against a fine-tuned or trained model from scratch to evaluate its effectiveness and efficiency.
Another interesting direction of research would be to study in more detail how the expected domain of the generated texts (here: emotion self-reports) might be in conflict with the emotion condition and how that can be encoded in either the optimization process, the seed prompt selection or the objective functions, or in combinations of these parameters.
## 5 Ethical Considerations & Limitations
The proposed method aims at optimizing prompts for conditional text generation, particularly when conditioned on emotions. The generated affective texts do not only serve as a source to study the capabilities of large language models from a computational perspective. We believe that they can also be of value to better understand the representation of psychological concepts in automatically generated text. However, there are some risks associated with the method if not used with care, primarily inherited from the underlying language model. Optimized prompts could potentially result in generating text that reinforces stereotypes or marginalize certain groups. When dealing with the expression of emotions, it is essential to exercise caution when employing these models due to their potential impact on individuals.
A limitation in our evaluation and method is that we rely heavily on the seed prompts. This can lead to fast convergence--if the seed prompt is adequate for the task, the optimization process is more likely to be successful. The optimization is based on a \((\mu,\lambda)\) approach, which can be seen as a brute-force search. However, alternative search algorithms may provide a more efficient optimization of the prompt in terms of iterations.
Overall, the method has proven to be useful for text generation conditioned on emotions. We invite people to keep the above limitations in mind when considering the capabilities and applications of our method.
## Acknowledgements
This work has been supported by a CONACYT scholarship (2020-000009-01EXTF-00195) and by the German Research Council (DFG), project "Computational Event Analysis based on Appraisal Theories for Emotion Analysis" (CEAT, project number KL 2869/1-2).
\begin{table}
\begin{tabular}{l l l l} \hline \hline Iter. & Prompt & Generated text & F\({}_{1}\) \\ \hline
0 & Write a text that expresses disgust & i don’t know what to do about it. &.28 \\
1 & Write a text to expresses disgust & i’m disgusted. &.80 \\
1 & a text that expresses fear & a text that expresses fear &.90 \\
2 & Write in a text to expresses shame & i don’t know how to express my shame. &.91 \\
2 & Write a text that to expresses shame & i’m sorry to hear that. &.58 \\
5 & Write in a long enough string to expresses joy & a long enough string to express joy. & 1.0 \\
5 & Write a long text string to expresses joy & i love you so much & **.94** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Generated text examples from the same seed prompt (1st row) at different optimization steps. The macro F\({}_{1}\) score is computed for the prompt across all the emotions using the ISEAR classifier.
\begin{table}
\begin{tabular}{l l r r} \hline \hline
Method & Prompt & ISEAR & crowd-enVent \\ \hline
\multirow{3}{*}{Pascual (2021)} & When I was &.18 &.18 \\
 & When a &.43 &.12 \\
 & When someone &.21 &.17 \\ \hline
\multirow{2}{*}{\(P_{opt}\)} & Write a text that expresses \(\langle\)em\(\rangle\) &.28 &.22 \\
 & Write in long text string to express \(\langle\)em\(\rangle\) &.94 & **.75** \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Comparison between our method (\(P_{opt}\)) and Pascual (2021). Rows 1–3 are the most frequent n-grams for the ISEAR dataset. The 4th row corresponds to the seed prompt, and the 5th row represents the optimized prompt. The macro-average F\({}_{1}\)-score for both ISEAR and crowd-enVent datasets is computed across all emotions. |
2301.06593 | Distance-regular graphs admitting a perfect $1$-code | In this paper, we study the problem that which of distance-regular graphs
admit a perfect $1$-code. Among other results, we characterize distance-regular
line graphs which admit a perfect $1$-code. Moreover, we characterize all known
distance-regular graphs with small valency at most $4$, the distance-regular
graphs with known putative intersection arrays for valency $5$, and all
distance-regular graphs with girth $3$ and valency $6$ or $7$ which admit a
perfect $1$-code. | Mojtaba Jazaeri | 2023-01-16T20:20:00Z | http://arxiv.org/abs/2301.06593v1 | # Distance-regular graphs admitting a perfect \(1\)-code
###### Abstract.
In this paper, we study the problem that which of distance-regular graphs admit a perfect \(1\)-code. Among other results, we characterize distance-regular line graphs which admit a perfect \(1\)-code. Moreover, we characterize all known distance-regular graphs with small valency at most \(4\), the distance-regular graphs with known putative intersection arrays for valency \(5\), and all distance-regular graphs with girth \(3\) and valency \(6\) or \(7\) which admit a perfect \(1\)-code.
Key words and phrases:Distance-regular graph; Perfect \(1\)-code 2020 Mathematics Subject Classification: 05C69 and 05E30
## 1. Introduction
It is well known that the classical coding theory studies perfect codes in Hamming graphs and these graphs are distance-transitive. In 1973, Biggs [3] initiated an investigation of perfect codes in distance-transitive graphs. Since distance-transitive graphs are a family of distance-regular graphs, it is reasonable to study perfect codes in distance-regular graphs. Neumaier [16] introduced the notion of a completely regular code and proved that a perfect code in a distance-regular graph is indeed a completely regular code. We refer to the monograph [6, Chap. 11] for more background on perfect codes in distance-regular graphs. In this paper, we study the problem of which distance-regular graphs admit a perfect \(1\)-code. In some literature, the term _efficient dominating set_ is used instead of a perfect \(1\)-code (see for example [7] and [8]). For an overview on recent progress of this topic, we refer to [8]. We first state some observations and equations on perfect \(1\)-codes and then we characterize distance-regular line graphs which admit a perfect \(1\)-code. Furthermore, we state some facts about perfect codes in antipodal distance-regular graphs and give an overview on perfect \(1\)-codes in distance-regular graphs with small diameter at most \(4\). Moreover, we characterize all known distance-regular graphs with small valency
at most 4, the distance-regular graphs with known putative intersection arrays for valency 5, and all distance-regular graphs with girth 3 and valency 6 or 7 which admit a perfect 1-code.
## 2. Preliminaries
In this paper, all graphs are undirected and simple, i.e., there are no loops or multiple edges. Moreover, we consider the eigenvalues of the adjacency matrix of a graph. A connected graph \(\Gamma\) is called distance-regular with diameter \(d\) and intersection array
\[\{b_{0},b_{1},\ldots,b_{d-1};c_{1},c_{2},\ldots,c_{d}\}\]
whenever for each pair of vertices \(x\) and \(y\) at distance \(i\), where \(0\leq i\leq d\), the number of neighbours of \(x\) at distance \(i+1\) and \(i-1\) from \(y\) are constant numbers \(b_{i}\) and \(c_{i}\), respectively. This implies that a distance-regular graph is regular with valency \(b_{0}=k\) and the number of neighbours of \(x\) at distance \(i\) from \(y\) is a constant number \(k-b_{i}-c_{i}\) which is denoted by \(a_{i}\). A \(k\)-regular graph with \(n\) vertices is called strongly regular with parameters \((n,k,\lambda,\mu)\) whenever the number of common neighbours of two adjacent vertices is \(\lambda\) and the number of common neighbours of two non-adjacent vertices is \(\mu\). Note that for a strongly regular graph to be of diameter 2 and thus a distance-regular graph, it needs to be connected and non-complete. Moreover, for a distance-regular graph with diameter \(d\), the number of vertices at distance \(i\) from an arbitrary given vertex is constant and denoted by \(K_{i}\). Furthermore,
\[K_{i+1}=\frac{K_{i}b_{i}}{c_{i+1}},\]
where \(i=0,1,\ldots,d-1\) and \(K_{0}=1\).
Recall that a projective plane of order \(q\) is a point-line incidence structure such that each line has \(q+1\) points, each point is on \(q+1\) lines, and every pair of points is on a unique line. Furthermore, the incidence graph of a projective plane is a bipartite distance-regular graph with diameter three and intersection array \(\{q+1,q,q;1,1,q+1\}\). Moreover, the distinct eigenvalues (of the adjacency matrix) of this graph are \(\{\pm(q+1),\pm\sqrt{q}\}\).
Let \(\Gamma\) be a graph with vertex set \(V\). Then any subset \(C\) of \(V\) is called a code in \(\Gamma\). Let \(\overline{\Gamma_{t}(c)}\) denote the set of vertices at distance at most \(t\) from \(c\), where \(c\in C\). Then the code \(C\) in \(\Gamma\) is called a perfect \(t\)-code whenever \(\{\overline{\Gamma_{t}(c)}\mid c\in C\}\) is a partition of the vertex set \(V\) (cf. [15]). This implies that a code \(C\) is a perfect \(1\)-code whenever \(C\) is an independent set and every vertex outside \(C\) has a unique neighbour in
\(C\). Furthermore, it is trivial to see that if \(C_{1}\) and \(C_{2}\) are two perfect \(1\)-codes in a graph, then \(|C_{1}|=|C_{2}|\) since there exists a bijection between \(C_{1}\) and \(C_{2}\) by the definition of a perfect \(1\)-code. Moreover, if \(C\) is a perfect \(1\)-code in a \(k\)-regular graph \(\Gamma\), then
\[|C|=\frac{|V|}{k+1}, \tag{2.1}\]
because \(\{\overline{\Gamma_{1}(c)}\mid c\in C\}\) is a partition of the vertex set \(V\) and each part has size \(k+1\).
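The two defining conditions can be checked mechanically; the following sketch illustrates them on the \(6\)-cycle, which admits the perfect \(1\)-code \(\{0,3\}\). The use of networkx is an assumption of this sketch (the computations in this paper are done with GAP).

```python
# Sketch: check the defining conditions of a perfect 1-code -- C is an
# independent set and every vertex outside C has exactly one neighbour in C.
import networkx as nx

def is_perfect_1_code(G: nx.Graph, C: set) -> bool:
    if any(G.has_edge(u, v) for u in C for v in C if u != v):
        return False  # C must be an independent set
    return all(
        sum(1 for w in G.neighbors(v) if w in C) == 1
        for v in G.nodes if v not in C
    )

G = nx.cycle_graph(6)                    # 2-regular with 6 vertices
print(is_perfect_1_code(G, {0, 3}))      # True: |C| = 6/(2+1) = 2, Eq. (2.1)

# For the Petersen graph no perfect 1-code can exist at all, since
# |V|/(k+1) = 10/4 is not an integer; any candidate set fails the check.
print(is_perfect_1_code(nx.petersen_graph(), {0, 7}))  # False
```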
**Remark 2.1**.: _Let \(\Gamma\) be a regular graph with vertex set \(V\). Then \(\Gamma\) admits a one-element subset of \(V(\Gamma)\) as a perfect \(1\)-code if and only if \(\Gamma\) is a complete graph._
The following two observations are trivial by the definition of a perfect \(1\)-code but are useful.
**Observation 2.2**.: _Let \(C\) be a perfect \(1\)-code with at least two elements in a connected graph and \(x,y\in C\). Then \(d(x,y)\geq 3\). Moreover, there exist at least two elements at distance \(3\) in \(C\). To see this let \(x\in C\) and \(y\) be a vertex at distance \(2\) from \(x\) which is indeed outside \(C\). Then there exists a unique element \(z\in C\) which is adjacent to \(y\) and therefore the distance between \(x\) and \(z\) is \(3\)._
**Observation 2.3**.: _Let \(C\) be a perfect \(1\)-code in a connected regular graph with vertex set \(V\) and valency \(k\). Then \(\{C,V\backslash C\}\) is an equitable partition with the quotient matrix_
\[\begin{bmatrix}0&k\\ 1&k-1\end{bmatrix}.\]
_Therefore \(-1\) must be an eigenvalue of (the adjacency matrix of) this graph._
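For a concrete check (illustration only), the quotient matrix has eigenvalues \(k\) and \(-1\) for every \(k\), and for the \(6\)-cycle with the perfect \(1\)-code \(\{0,3\}\) one can confirm numerically that \(-1\) indeed occurs in the spectrum of the adjacency matrix:

```python
# Sketch: the quotient matrix [[0, k], [1, k-1]] has eigenvalues k and -1;
# for an equitable partition these are eigenvalues of the graph itself.
# Illustrated for the 6-cycle (k = 2) with perfect 1-code C = {0, 3}.
import numpy as np

k = 2
Q = np.array([[0, k], [1, k - 1]])
print(sorted(np.linalg.eigvals(Q)))    # [-1.0, 2.0]

# adjacency matrix of C_6; its spectrum contains -1 (with multiplicity 2)
A = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
print(sorted(np.round(np.linalg.eigvalsh(A), 6)))
```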
### Completely regular codes
Let \(\Gamma\) be a connected regular graph with vertex set \(V\) and a code \(C\), where \(|C|\geq 2\). Then the number
\[d(C):=\min\{d(x,y)\mid x,y\in C,x\neq y\}\]
is called the minimum distance of \(C\). The distance \(v\in V\) from \(C\) is defined by
\[d(v,C):=\min\{d(v,w)\mid w\in C\}\]
and the number
\[t(C):=\max\{d(v,C)\mid v\in V\}\]
is called the covering radius of \(C\). Let
\[C_{\ell}:=\{v\in V\mid d(v,C)=\ell\},\]
where \(\ell=0,1,\ldots,t(C)\). Then the code \(C\) is called completely regular whenever for all \(\ell\), every vertex in \(C_{\ell}\) has the same number \(c_{\ell}\) of neighbours in \(C_{\ell-1}\), the same number \(b_{\ell}\) of neighbours in \(C_{\ell+1}\) and the same number \(a_{\ell}\) of neighbours in \(C_{\ell}\). It is trivial to see that every one-element code is completely regular in a distance-regular graph. This definition was first introduced by Neumaier [16]. He proved that a code \(C\) in a distance-regular graph is a perfect code if and only if it is a completely regular code with \(d(C)=2t(C)+1\) (cf. [16, Thm. 4.3]). It follows that if \(C\) is a perfect \(1\)-code in a distance-regular graph, then the code \(C\) is a completely regular code with \(d(C)=3\) since \(t(C)=1\).
## 3. Antipodal distance-regular graphs
Let \(\Gamma\) be an antipodal distance-regular graph with diameter \(d\geq 3\). Then the folded graph of \(\Gamma\) which is denoted by \(\overline{\Gamma}\) is a graph whose vertex set is the fibers and two fibers are adjacent whenever there exists an edge between them in the graph \(\Gamma\). Recall that two fibers have the same size and if two are adjacent in \(\overline{\Gamma}\), then there exists a perfect matching between them in the graph \(\Gamma\). Moreover, each vertex in one fiber is adjacent to at most one vertex in another fiber since each pair of vertices in a fiber is at distance \(d\geq 3\). Let \(C\) be a perfect code in \(\Gamma\). Then \(C\) is a disjoint union of some fibers (cf. [6, last Remark on p. 349]). It follows that if \(\overline{C}\) is the corresponding code in the folded graph \(\overline{\Gamma}\), then \(\overline{C}\) is a perfect code in \(\overline{\Gamma}\). Therefore we have the following proposition.
**Proposition 3.1**.: _Let \(\Gamma\) be an antipodal distance-regular graph with diameter \(d\geq 3\). Then the graph \(\Gamma\) admits a perfect code \(C\) if and only if \(C\) is a disjoint union of some fibers such that \(\overline{C}\) is a perfect code in the folded graph \(\overline{\Gamma}\)._
Recall that the folded graph of an antipodal distance-regular graph with diameter \(3\) is a complete graph. Therefore we can conclude the following corollary about a perfect \(1\)-code in such a graph.
**Corollary 3.2**.: _Let \(\Gamma\) be an antipodal distance-regular graph with diameter \(3\). Then a code \(C\) is \(1\)-perfect if and only if \(C\) is a fiber._
Furthermore, the folded graph of an antipodal distance-regular graph with diameter \(d=4\) or \(5\) is a strongly regular graph and therefore we can conclude the following corollary about a perfect \(1\)-code in these graphs since there is no perfect \(1\)-code in a strongly regular graph.
**Corollary 3.3**.: _There is no perfect \(1\)-code in an antipodal distance-regular graph with diameter \(d=4\) or \(5\)._
We note that the folded graph of the Doubled odd graph \(\mathrm{DO_{n}}\) with diameter \(2n-1\) is the Odd graph \(\mathrm{O_{n}}\) with diameter \(n-1\) and therefore we can conclude the following corollary.
**Corollary 3.4**.: _The Doubled odd graph \(\mathrm{DO_{n}}\) admits a perfect \(1\)-code if and only if the Odd graph \(\mathrm{O_{n}}\) admits a perfect \(1\)-code._
## 4. Distance-regular line graphs
In this section, we characterize distance-regular line graphs which admit a perfect \(1\)-code. We denote the line graph of a graph \(\Gamma\) by \(\mathrm{L}(\Gamma)\). The main theorem of this section is as follows.
**Theorem 4.1**.: _Let \(\Gamma\) be a distance-regular graph with least eigenvalue \(-2\). Then \(\Gamma\) admits a perfect \(1\)-code if and only if \(\Gamma\) is isomorphic to one of the following graphs._
* _The cycle graph_ \(C_{6n}\)_,_
* _the line graph of the Petersen graph,_
* _the line graph of the Tutte-Coxeter graph._
We first observe that if \(\Gamma\) is a \(k\)-regular graph with \(n\) vertices and the line graph \(\mathrm{L}(\Gamma)\) admits a perfect \(1\)-code \(C\), then by Equation 2.1, we have
\[|C|=\frac{\frac{nk}{2}}{2k-1}.\]
Furthermore, every vertex in \(C\) corresponds to an edge of the graph \(\Gamma\) and therefore the perfect \(1\)-code \(C\) can be considered as a \(1\)-regular induced subgraph, say \(\overline{C}\), which is also a vertex cover of the graph \(\Gamma\). In other words, the line graph \(\mathrm{L}(\Gamma)\) contains \(|\overline{C}|\) edges with mutually disjoint closed edge neighborhoods. Recall that the closed edge neighborhood of an edge \(e\) consists of the neighborhood of \(e\) together with the edge \(e\) itself. Moreover, \(-(k-1)\) must be an eigenvalue of the graph \(\Gamma\) because \(-1\) must be an eigenvalue of the line graph \(\mathrm{L}(\Gamma)\) (cf. Obs. 2.3). Therefore we can conclude the following proposition.
**Proposition 4.2**.: _Let \(\Gamma\) be a \(k\)-regular graph such that its line graph \(\mathrm{L}(\Gamma)\) admits a perfect \(1\)-code. Then \(-(k-1)\) is an eigenvalue of the graph \(\Gamma\). Moreover, if the graph \(\Gamma\) is bipartite, then \(\pm k\) and \(\pm(k-1)\) must be the eigenvalues of the graph \(\Gamma\)._
Distance-regular graphs with least eigenvalue \(-2\) have been classified as follows.
**Theorem 4.3**.: _[_6_, Thm. 3.12.4 and 4.2.16]_ _Let \(\Gamma\) be a distance-regular graph with least eigenvalue \(-2\). Then \(\Gamma\) is a cycle of even length, or its diameter \(d\) equals \(2,3,4,\) or \(6\). Moreover,
_._
* _If_ \(d=2\)_, then_ \(\Gamma\) _is a cocktail party graph, a triangular graph, a lattice graph, the Petersen graph, the Clebsch graph, the Shrikhande graph, the Schlafli graph, or one of the three Chang graphs,_
* _If_ \(d=3\)_, then_ \(\Gamma\) _is the line graph of the Petersen graph, the line graph of the Hoffman-Singleton graph, the line graph of a strongly regular graph with parameters_ \((3250,57,0,1)\)_, or the line graph of the incidence graph of a projective plane,_
* _If_ \(d=4\)_, then_ \(\Gamma\) _is the line graph of the incidence graph of a generalized quadrangle of order_ \((q,q)\)_,_
* _If_ \(d=6\)_, then_ \(\Gamma\) _is the line graph of the incidence graph of a generalized hexagon of order_ \((q,q)\)_._
We note that the distance-regular graphs with least eigenvalue _larger_ than \(-2\) are also known. Besides the complete graphs (with least eigenvalue \(-1\)), there are the cycles of odd length. Recall that for a complete graph, perfect \(1\)-codes are only one-element subsets of the vertex set (see also Rem. 2.1). Furthermore, a cycle graph \(C_{n}\) of length \(n\) has eigenvalues \(2\cos(\frac{2\pi j}{n})\), where \(j=0,1,\ldots,n-1\). If this graph admits a perfect \(1\)-code, then \(-1\) must be an eigenvalue of this graph by Observation 2.3. Therefore we can conclude the following straightforward proposition.
**Proposition 4.4**.: _A cycle graph \(C_{n}\) admits a perfect \(1\)-code if and only if \(3\) divides \(n\)._
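Proposition 4.4 can also be confirmed mechanically for small \(n\) by brute force over all vertex subsets; the following self-contained sketch is an illustration, not part of the proof.

```python
# Exhaustive confirmation of Proposition 4.4 for small cycles C_n:
# a perfect 1-code exists if and only if 3 divides n.
from itertools import combinations

def cycle_has_perfect_1_code(n: int) -> bool:
    nbrs = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
    for size in range(1, n + 1):
        for C in combinations(range(n), size):
            Cs = set(C)
            independent = all(nbrs[u].isdisjoint(Cs - {u}) for u in Cs)
            dominated = all(len(nbrs[v] & Cs) == 1
                            for v in range(n) if v not in Cs)
            if independent and dominated:
                return True
    return False

for n in range(3, 13):
    assert cycle_has_perfect_1_code(n) == (n % 3 == 0)
print("Proposition 4.4 confirmed for n = 3, ..., 12")
```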
Now we have to investigate four cases \(d=2\), \(d=3\), \(d=4\) and \(d=6\) in Theorem 4.3. The case \(d=2\) can be ruled out by Observation 2.2. For \(d=3\), the line graph of the Petersen graph is an antipodal distance-regular graph and every fiber of this graph is a perfect \(1\)-code (cf. Cor. 3.2). Furthermore, the line graph of the Hoffman-Singleton graph, the line graph of a strongly regular graph with parameters \((3250,57,0,1)\), and the line graph of the incidence graph of a projective plane don't admit a perfect \(1\)-code because \(-1\) is not an eigenvalue of these graphs.
### The line graph of the incidence graph of a generalized quadrangle
Let \(\Gamma\) be the incidence graph of a generalized quadrangle of order \((q,q)\). Then it has intersection array \(\{q+1,q,q,q;1,1,1,q+1\}\) and five distinct eigenvalues \(\{\pm(q+1),\pm\sqrt{2q},0\}\) (cf. [6, Sec. 6.5]). Therefore, by Proposition 4.2, the only possible case is \(q=2\). If \(q=2\), then the graph \(\Gamma\) is indeed the Tutte-Coxeter graph. If the line graph of this graph admits a perfect \(1\)-code \(C\), then \(|C|=9\) by Equation 2.1, because it is \(4\)-regular with \(45\) vertices. Now, in Figure 1, we consider the Doily representation of a generalized quadrangle of order \((2,2)\) with nine marked flags (i.e., incident point-line pairs) with distinct non-black
colours such that no two of them share a point or a line. Then the collection of these nine marked flags represents a perfect \(1\)-code in the line graph of the Tutte-Coxeter graph since each of its vertices corresponds to a flag of \(\operatorname{GQ}(2,2)\). We note that the points of these nine marked flags together with the black lines form a \(\operatorname{GQ}(2,1)\) subquadrangle, and their lines together with the black points form a complementary \(\operatorname{GQ}(1,2)\) subquadrangle. Now we can conclude the following proposition.

Figure 1. The Doily representation of \(\operatorname{GQ}(2,2)\)
**Proposition 4.5**.: _The line graph of the incidence graph of a generalized quadrangle of order \((q,q)\) admits a perfect \(1\)-code if and only if \(q=2\)._
### The line graph of the incidence graph of a generalized hexagon
Let \(\Gamma\) be the incidence graph of a generalized hexagon of order \((q,q)\), where \(q\geq 2\). Then it has intersection array
\[\{q+1,q,q,q,q,q;1,1,1,1,1,q+1\}\]
and seven distinct eigenvalues \(\{\pm(q+1),\pm\sqrt{3q},\pm\sqrt{q},0\}\). If the line graph of this graph admits a perfect \(1\)-code, then the only possible case is \(q=3\) by Proposition 4.2. Let \(\Gamma\) be the incidence graph of a generalized hexagon of order \((3,3)\). If the line graph \(\operatorname{L}(\Gamma)\) admits a perfect \(1\)-code \(C\), then \(|C|=208\) by Equation 2.1 since this graph is \(6\)-regular with \(1456\) vertices. On the other hand, the code \(C\) in the line graph \(\operatorname{L}(\Gamma)\) can be related to a code \(\overline{C}\) in the graph \(\Gamma\) in such a way that \(\overline{C}\) is a \(1\)-regular induced subgraph of the graph \(\Gamma\) consisting of \(208\) edges with mutually disjoint closed edge neighbourhoods. Additionally, \(\overline{C}\) is a vertex cover of the graph \(\Gamma\).
Recall that the only known generalized hexagon of order \((3,3)\) is called the _split Cayley hexagon of order \(3\)_. Let \(\Gamma\) be the incidence graph of the split Cayley hexagon of order \(3\). Then it has a lot of substructures which are generalized hexagons of order \((1,3)\) and \((3,1)\) (cf. [12]). Every generalized hexagon of order \((1,q)\) is isomorphic to the double of a projective plane of order \(q\) for which the point set of the double is the set of points and lines and the line set is exactly the set of flags of the projective plane.
**Lemma 4.6**.: _The incidence graph of the double of a projective plane of order \(q\) contains at most \(q^{2}+q+1\) edges with mutually disjoint closed edge neighbourhoods._
Proof.: Let \(\overline{C_{1}}\) be a collection of edges with mutually disjoint closed edge neighbourhoods of the incidence graph of this structure. If an edge \((p,(p,\ell))\) is in the set \(\overline{C_{1}}\), for a point \(p\) and line \(\ell\), then \((\ell,(p^{\prime},\ell))\) is not in \(\overline{C_{1}}\) for every point \(p^{\prime}\) in \(\ell\). Similarly, if an edge \((\ell,(p,\ell))\) is in the set \(\overline{C_{1}}\), for a point \(p\) and line \(\ell\), then \((p,(p,\ell^{\prime}))\) is not in \(\overline{C_{1}}\) for every line \(\ell^{\prime}\) which contains the point \(p\). On the other hand, a projective plane of order \(q\) contains \(q^{2}+q+1\) points and \(q^{2}+q+1\) lines which implies that the set \(\overline{C_{1}}\) contains at most \(q^{2}+q+1\) edges with mutually disjoint closed edge neighbourhoods and the result follows.
By using the FinInG package [2] for GAP[19], the graph \(\Gamma\) can be constructed as the incidence graph of a block design1 with the point set \(\{1,2,\ldots,364\}\) by the following commands.
Footnote 1: It has \(364\) points and \(364\) blocks such that each block consists of \(4\) points and each point belongs to \(4\) blocks.
gh:=SplitCayleyHexagon(3);
des:=BlockDesignOfGeneralisedPolygon(gh);

Without loss of generality, let the point \(1\) be outside the code \(\overline{C}\). Then this vertex belongs to a substructure with the following point set \(A\) of size \(26\) and block set \(B\) of size \(52\) consisting of all blocks of this design with the property that each point in \(A\) belongs to \(4\) blocks in \(B\).
A:={1,2,3,4,5,6,7,8,9,12,15,18,23,28,33,44,65,68,69,88,91,92,129,130,178,179}; B:={[1,2,111,112],[1,28,55,56],[1,65,127,128],[1,88,176,177],[2,3,224,227], [2,4,225,228],[2,5,223,226],[3,23,45,46],[3,129,245,247],[3,178,311,313], [4,18,34,35],[4,130,246,248],[4,179,292,312],[5,6,10,11],[5,7,13,14], [5,8,16,17],[6,44,93,94],[6,91,189,194],[6,92,190,195],[7,33,70,71],
[7,68,140,145],[7,69,141,146],[8,9,21,22],[8,12,26,27],[8,15,31,32],
[9,88,186,191],[9,178,321,327],[9,179,281,293],[12,65,137,142],
[12,129,259,265],[12,130,260,266],[15,18,38,41],[15,23,49,52],
[15,28,59,62],[18,69,156,167],[18,92,203,214],[23,68,155,166],
[23,91,202,213],[28,33,78,83],[28,44,101,106],[33,130,222,279],
[33,163,178,331],[44,129,171,300],[44,179,208,338],[65,68,175,344],
[65,92,218,297],[68,179,241,355],[69,88,210,334],[69,129,244,357],
[88,91,161,306],[91,130,242,361],[92,178,239,356]}
In this substructure, if we consider each block as a line, then it is isomorphic to the generalized hexagon of order \((1,3)^{2}\). Moreover, each block in \(B\) consists of two points in \(A\) and two points in the following set \(P\) of size \(104\).
P:={10,11,13,14,16,17,21,22,26,27,31,32,34,35,38,41,45,46,49,52,55,56,59,62,
70,71,78,83,93,94,101,106,111,112,127,128,137,140,141,142,145,146,155,156,
161,163,166,167,171,175,176,177,186,189,190,191,194,195,202,203,208,210,213,
214,218,222,223,224,225,226,227,228,239,241,242,244,245,246,247,248,259,260,
265,266,279,281,292,293,297,300,306,311,312,313,321,327,331,334,338,344,355,
356,357,361}
Now consider another substructure with point set \(M\) consisting of the points not in \(A\cup P\) and block set \(N\) consisting of the blocks not in \(B\). In this new substructure, if we consider each block as a line, then it is indeed isomorphic to the interesting subgeometry with \(234\) points and \(312\) lines which is illustrated in [12, Sec. 3]. By these structures and the properties of the code \(\overline{C}\), we obtain a contradiction as follows. Let \(\Gamma_{i}(u)\) denote the set of vertices at distance \(i\) from the vertex \(u\) in the graph \(\Gamma\), where \(1\leq i\leq 6\). Recall that without loss of generality, the point \(1\) is outside the code \(\overline{C}\). We use _inner_ and _outer_ for elements in \(\overline{C}\) and outside \(\overline{C}\), respectively. Therefore the vertex \(1\) is adjacent to \(4\) inner blocks. Moreover, these \(4\) blocks are adjacent to \(4\) inner points and \(8\) outer points in \(\Gamma_{2}(1)\) forming the sets \(\Gamma_{2}^{in}(1)\) and \(\Gamma_{2}^{out}(1)\), respectively. We proceed in this way to find \(\Gamma_{6}^{in}(1)\) and \(\Gamma_{6}^{out}(1)\). The \(4\) inner points of \(\Gamma_{2}^{in}(1)\) are adjacent to \(12\) outer blocks in \(\Gamma_{3}(1)\) forming the set \(\Gamma_{3}^{out}(1)\), and the \(8\) outer points of \(\Gamma_{2}^{out}(1)\) are adjacent to \(24\) inner blocks in \(\Gamma_{3}(1)\) forming the set \(\Gamma_{3}^{in}(1)\). Moreover, the \(12\) outer blocks of \(\Gamma_{3}^{out}(1)\) are adjacent to \(36\) inner points in \(\Gamma_{4}(1)\) forming the set \(\Gamma_{4}^{in}(1)\), and the \(24\) inner blocks of \(\Gamma_{3}^{in}(1)\) are adjacent to \(24\) inner points and \(48\) outer points in \(\Gamma_{4}(1)\) forming the sets \(\Gamma_{4}^{in^{\prime}}(1)\) and \(\Gamma_{4}^{out}(1)\), respectively. Furthermore, the \(36\) inner points of \(\Gamma_{4}^{in}(1)\) are adjacent to \(36\) inner blocks and \(72\) outer blocks in \(\Gamma_{5}(1)\) forming the
sets \(\Gamma_{5}^{in}(1)\) and \(\Gamma_{5}^{out}(1)\), respectively, and the 24 inner points of \(\Gamma_{4}^{in^{\prime}}(1)\) are adjacent to 72 outer blocks in \(\Gamma_{5}(1)\) forming the set \(\Gamma_{5}^{out^{\prime}}(1)\), and the 48 outer points of \(\Gamma_{4}^{out}(1)\) are adjacent to 144 inner blocks in \(\Gamma_{5}(1)\) forming the set \(\Gamma_{5}^{in^{\prime}}(1)\). Finally, the 144 inner blocks of \(\Gamma_{5}^{in^{\prime}}(1)\) are adjacent to 144 inner points in \(\Gamma_{6}(1)\) forming the set \(\Gamma_{6}^{in}(1)\) and the 99 remaining points in \(\Gamma_{6}(1)\) are outer points forming the set \(\Gamma_{6}^{out}(1)\) since the code \(\overline{C}\) consists of \(4+24+36+144=208\) points.
By using GAP[19], it turns out that there are 4 points in \(A\cap\Gamma_{2}(1)\) and 8 points in \(P\cap\Gamma_{2}(1)\). Moreover, there are 12 blocks in \(B\cap\Gamma_{3}(1)\) and 24 blocks in \(N\cap\Gamma_{3}(1)\). Furthermore, there are 12 points in \(A\cap\Gamma_{4}(1)\), 24 points in \(P\cap\Gamma_{4}(1)\) and 72 points in \(M\cap\Gamma_{4}(1)\). Moreover, there are 36 blocks in \(B\cap\Gamma_{5}(1)\) and 288 blocks in \(N\cap\Gamma_{5}(1)\). Finally, there are 9 points in \(A\cap\Gamma_{6}(1)\), 72 points in \(P\cap\Gamma_{6}(1)\) and 162 points in \(M\cap\Gamma_{6}(1)\).
Moreover, there exist five cases based on the number of inner points \(i\) in \(A\cap\Gamma_{2}(1)\), where \(0\leq i\leq 4\). If \(|A\cap\Gamma_{2}^{in}(1)|=i\), then \(|A\cap\Gamma_{2}^{out}(1)|=|P\cap\Gamma_{2}^{in}(1)|=4-i\), \(|P\cap\Gamma_{2}^{out}(1)|=4+i\), \(|B\cap\Gamma_{3}^{in}(1)|=|A\cap(\Gamma_{4}^{in^{\prime}}(1)\cup\Gamma_{4}^{ out}(1))|=|N\cap\Gamma_{3}^{out}(1)|=12-3i\), \(|N\cap\Gamma_{3}^{in}(1)|=|M\cap\Gamma_{4}^{in^{\prime}}(1)|=12+3i\), \(|B\cap\Gamma_{3}^{out}(1)|=|A\cap\Gamma_{4}^{in}(1)|=|B\cap\Gamma_{5}^{in}(1)|=3i\), \(|P\cap\Gamma_{4}^{in}(1)|=|B\cap\Gamma_{5}^{out}(1)|=6i\), \(|P\cap(\Gamma_{4}^{in^{\prime}}(1)\cup\Gamma_{4}^{out}(1))|=24-6i\), \(|M\cap\Gamma_{4}^{in}(1)|=36-9i\), and \(|M\cap\Gamma_{4}^{out}(1)|=24+6i\). Moreover, as each point of \(P\cap\Gamma_{4}(1)\) has the neighbor from \(B\) in \(\Gamma_{3}(1)\), it follows that each block of \(B\cap\Gamma_{5}(1)\) has one neighbor in \(A\cap\Gamma_{4}(1)\) and one in \(A\cap\Gamma_{6}(1)\).
Now suppose that \(i\geq 1\). Then \(|A\cap\Gamma_{6}^{in}(1)|=6\) and \(|A\cap\Gamma_{6}^{out}(1)|=3\). Since each point of \(A\cap\Gamma_{6}^{out}(1)\) has precisely \(i\) neighbors in \(B\cap\Gamma_{5}^{in}(1)\), it follows that \(|B\cap\Gamma_{5}^{in^{\prime}}(1)|=6+3(4-i)=18-3i\), and then \(|A\cap\Gamma_{4}^{out}(1)|=6-i\) and \(|A\cap\Gamma_{4}^{in^{\prime}}(1)|=6-2i\). Therefore, there are \(i+3i+(6-2i)+6=12+2i>13\) inner points in \(A\), each of which is adjacent to an inner block in \(B\), contradicting Lemma 4.6 (see Figure 2). This then leaves us with the case \(i=0\).
Let there exist \(a\) inner and \(9-a\) outer points in \(A\cap\Gamma_{6}(1)\), where \(0\leq a\leq 9\). Then there are \(3a\) blocks in \(B\cap\Gamma_{5}^{out^{\prime}}(1)\). Moreover, \(|A\cap\Gamma_{4}^{in^{\prime}}(1)|=a\) since each block of \(B\cap\Gamma_{5}^{out^{\prime}}(1)\) has one neighbor in \(A\cap\Gamma_{4}^{in^{\prime}}(1)\), and each point of \(A\cap\Gamma_{4}^{in^{\prime}}(1)\) has three neighbors in \(B\cap\Gamma_{5}^{out^{\prime}}(1)\). If \(a\geq 1\), then consider the \((3,2)\)-biregular bipartite incidence graph with the point set consisting of the \(2a\) inner points in the union of \(A\cap\Gamma_{4}^{in^{\prime}}(1)\) and \(A\cap\Gamma_{6}(1)\), and the block set consisting of the \(3a\) blocks in \(B\cap\Gamma_{5}^{out^{\prime}}(1)\). This graph contains at least 35 vertices since the girth of this graph is at least 12. This implies that \(a\geq 7\) and therefore there are at least 14 inner points in \(A\), contradicting Lemma 4.6. It follows that the only possible case for \(i=0\) is as in Figure 3
and we prove that it is impossible. On one side, each of the \(12\) points in \(P\cap\Gamma_{4}^{in^{\prime}}(1)\) has three neighbors in \(N\cap\Gamma_{5}^{out^{\prime}}(1)\) and two points in the set \(P\) can not share the same neighbor in \(N\). Therefore each block in the set \(R\), consisting of the \(36\) blocks in \(N\cap\Gamma_{5}^{out^{\prime}}(1)\) which do not have a neighbor in \(P\cap\Gamma_{4}^{in^{\prime}}(1)\), must be adjacent to a unique point in \(P\cap\Gamma_{6}^{in}(1)\). Moreover, each of the \(36\) points in \(P\cap\Gamma_{6}^{in}(1)\) has exactly one neighbor in \(N\cap\Gamma_{5}^{out^{\prime}}(1)\). To see this, suppose in contrary that there exists a point in \(P\cap\Gamma_{6}^{in}(1)\) which has at least two neighbors in \(N\cap\Gamma_{5}^{out^{\prime}}(1)\). Then there exists a point \(u\in P\cap\Gamma_{6}^{in}(1)\) which has no neighbor in \(R\). Furthermore, there are two blocks in \(B\cap\Gamma_{5}^{in^{\prime}}(1)\) at distance \(3\) and therefore two points in \(P\cap\Gamma_{6}^{in}(1)\) at distance \(4\) form \(u\) which are adjacent to at most six blocks in \(R\). Moreover, there are at most three blocks in \(N\cap\Gamma_{5}^{out}(1)\) at distance \(3\) and therefore at most three points in \(P\cap\Gamma_{6}^{in}(1)\) at distance \(4\) form \(u\) which are adjacent to at most six blocks in \(R\). Additionally, there are no blocks in \(N\cap\Gamma_{3}^{in}(1)\) at distance \(3\) from \(u\). It follows that there is no path of length at most \(6\) from \(u\) to some blocks of \(R\), a contradiction. This implies that each of the \(36\) points in \(P\cap\Gamma_{6}^{in}(1)\) has exactly two neighbors in \(N\cap\Gamma_{5}^{out}(1)\). On the other side, by using GAP[19], it turns out that there are exactly \(27\) points in \(P\cap\Gamma_{6}^{in}(1)\) which have two neighbors in \(\Gamma_{5}^{in}(1)\cup\Gamma_{5}^{out}(1)\)3, a contradiction, and this completes the proof.
Footnote 3: There are \(2^{4}=16\) cases for these sets depending on the choice of the four inner points in \(P\cap\Gamma_{2}^{in}(1)\). All of these cases have been checked with GAP[19].
Therefore we can conclude the following proposition.
**Proposition 4.7**.: _If the line graph of the incidence graph of a generalized hexagon of order \((q,q)\), where \(q\geq 2\), admits a perfect \(1\)-code, then \(q=3\). Moreover, the line graph of the incidence graph of the split Cayley hexagon of order \(3\) does not admit a perfect \(1\)-code.4_
Footnote 4: The anonymous referee double-checked this with GAP[19].
Figure 2. The incidence graph of \(\operatorname{GH}(3,3)\) for \(i>0\)
Figure 3. The incidence graph of \(\operatorname{GH}(3,3)\) for \(i=0\)
## 5. Distance-regular graphs with small diameter
As far as we know, the general problem of characterizing distance-regular graphs with small diameter greater than \(2\) which admit a perfect \(1\)-code is hard. Therefore we give an overview up to diameter \(4\). For a complete graph, the only perfect \(1\)-codes are the one-element subsets (cf. Prop. 2.1). Moreover, by Observation 2.2, there is no perfect \(1\)-code in a strongly regular graph.
Now suppose that \(C\) is a perfect \(1\)-code in a distance-regular graph with diameter \(3\). Then the distance between two vertices in \(C\) must be \(3\). Let \(C\) be a perfect \(1\)-code in a bipartite distance-regular graph with diameter \(3\). Then \(C\) contains exactly two elements from different parts since the distance between two vertices in \(C\) must be \(3\). Therefore this graph must be a complete bipartite graph minus a perfect matching. Hence we can conclude the following proposition.
**Proposition 5.1**.: _Let \(\Gamma\) be a bipartite distance-regular graph with diameter \(3\). Then \(\Gamma\) admits a perfect \(1\)-code \(C\) if and only if \(\Gamma\) is a complete bipartite graph minus a perfect matching and \(C\) contains exactly two elements at distance \(3\) from different parts._
Note that every antipodal distance-regular graph of diameter \(3\) (including the bipartite ones) admits a perfect \(1\)-code - in fact, each fiber is such a code. Now we deal with primitive distance-regular graphs with diameter \(3\).
**Observation 5.2**.: _If a primitive distance-regular graph \(\Gamma\) with diameter \(3\) admits a perfect \(1\)-code, then it has eigenvalue \(-1\) by Observation 2.3 and therefore its distance-\(3\) graph is strongly regular (cf. [6, Prop. 4.2.17]). It follows that if this strongly regular graph has parameters \((n,k,\lambda,\mu)\), then \(\lambda\geq|C|-2\) because the perfect code \(C\) is a clique in the distance-\(3\) graph by Observation 2.2._
Among primitive distance-regular graphs with diameter \(3\) and a small number of vertices, the first putative example is the Odd graph with \(35\) vertices and intersection array \(\{4,3,3;1,1,2\}\), since it has eigenvalue \(-1\) (cf. [6, Chap. 14] and Obs. 2.3). This graph admits a perfect \(1\)-code with \(7\) vertices (cf. [13, Fig. 1]). The second example is the Sylvester graph with \(36\) vertices and intersection array \(\{5,4,2;1,1,4\}\). Indeed, every \(6\)-clique in the distance-\(3\) graph of the Sylvester graph corresponds to a perfect \(1\)-code in this graph (see also [14, Sec. 3]).
**Proposition 5.3**.: _The Sylvester graph admits a perfect \(1\)-code._
Let \(\Gamma\) be a distance-regular graph with diameter \(4\). If \(\Gamma\) is antipodal, then it doesn't admit a perfect \(1\)-code (cf. Cor. 3.3). Now suppose
that \(\Gamma\) is bipartite with degree \(k\). Then it has eigenvalues \(\{\pm k,0,\pm 1\}\) because it has eigenvalue \(-1\) by Observation 2.3. The following lemma shows there is no such graph.
**Lemma 5.4**.: _There is no bipartite distance-regular graph with diameter \(4\) and eigenvalues \(\{\pm k,0,\pm 1\}\)._
Proof.: Let \(\Gamma\) be a bipartite distance-regular graph with diameter \(4\) and eigenvalues \(\{\pm k,0,\pm 1\}\). We note that this graph cannot be the cycle graph on \(8\) vertices, and therefore \(k>2\). If this graph has intersection array \(\{k,b_{1},b_{2},b_{3};1,c_{2},c_{3},c_{4}\}\), then by considering the intersection matrix of this graph, the second largest eigenvalue must be \((c_{2}+1)k-c_{2}(c_{3}+1)\), and therefore \((c_{2}+1)k-c_{2}(c_{3}+1)=1\). Reducing this equation modulo \(c_{2}\) gives \(k\equiv 1\pmod{c_{2}}\); on the other hand, \(c_{2}\) must divide \(k\) (cf. [6, Lem. 1.7.2]), which implies that \(c_{2}=1\), and then \(c_{3}=2k-2\). Therefore \(k\leq 2\) since \(c_{3}\leq k\), a contradiction, and this completes the proof.
Therefore we can conclude the following proposition.
**Proposition 5.5**.: _A bipartite distance-regular graph with diameter \(4\) doesn't admit a perfect \(1\)-code._
Among primitive distance-regular graphs with diameter \(4\) and a small number of vertices, the first putative example is the Coxeter graph with \(28\) vertices and intersection array \(\{3,2,2,1;1,1,1,2\}\), since it has eigenvalue \(-1\) (cf. [6, Chap. 14] and Obs. 2.3). The vertex set of this graph partitions into three \(7\)-gons and a \(7\)-coclique (cf. [5]). It turns out that the \(7\)-coclique is indeed a perfect \(1\)-code. Therefore we can conclude the following proposition.
**Proposition 5.6**.: _The Coxeter graph admits a perfect \(1\)-code._
## 6. Distance-regular graphs with small valency
All known distance-regular graphs with valency at most \(4\), the distance-regular graphs with known putative intersection arrays for valency \(5\), and all distance-regular graphs with girth \(3\) and valency \(6\) or \(7\) are listed in [10]. We give an overview of all possible intersection arrays and the corresponding graphs, and indicate which of these admit a perfect \(1\)-code. Note that for each intersection array in Table 1 there is a unique distance-regular graph. Moreover, for each intersection array in Table 2 there is a unique distance-regular graph, except possibly for the last array, which corresponds to the incidence graph of a generalized hexagon of order \((3,3)\). Furthermore, in Table 3, all known putative intersection arrays for distance-regular graphs with valency \(5\) are listed. All of the graphs in the table are unique, given their intersection arrays,
except possibly the incidence graph of a generalized hexagon of order \((4,4)\) (the last case). All distance-regular graphs with valency at most \(7\) and girth \(3\) (i.e., with triangles) are listed in Table 4 besides the ones with valency at most \(5\) that we have encountered in the previous tables. For each of the intersection arrays \(\{6,3;1,2\}\) and \(\{6,4,4;1,1,3\}\), there are exactly two distance-regular graphs (as mentioned in the table). By \(n\), \(d\), and \(g\), we denote the number of vertices, diameter, and girth, respectively. We note that in the reference column, only one reason is stated.
### The point graphs of the generalized hexagons of order \((2,2)\)
Up to isomorphism there are exactly two generalized hexagons of order \((2,2)\). Each of them is the dual of the other (cf. [9, Theorem 1]). Their point graphs (collinearity graphs) give rise to two distance-regular graphs with intersection array \(\{6,4,4;1,1,3\}\) and \(63\) vertices. We can distinguish the two graphs by whether the graph induced on the vertices at distance \(3\) from a fixed vertex is connected or not. These two non-isomorphic distance-regular graphs have been constructed in GAP[19] with the Grape package [18] as Graph \(1\) and Graph \(2\) in [1]. Indeed, the graph induced on the vertices at distance \(3\) from a fixed vertex is connected in Graph \(2\), whereas it is disconnected in Graph \(1\). Let \(\Gamma\) be one of these distance-regular graphs and suppose that it admits a perfect \(1\)-code \(C\). Then \(|C|=9\) by Equation 2.1. Moreover, each pair of vertices in \(C\) is at distance \(3\) by Observation 2.2, since \(\Gamma\) has diameter \(3\). Therefore a perfect \(1\)-code \(C\) can be viewed as a clique in the distance-\(3\) graph of \(\Gamma\). By using GAP[19], it turns out that Graph \(1\) of [1] admits perfect \(1\)-codes (see Footnote 5), but Graph \(2\) of [1] doesn't admit a perfect \(1\)-code, since there is no complete subgraph of size \(9\) in its distance-\(3\) graph. This shows that we cannot deduce from its spectrum whether a distance-regular graph admits a perfect \(1\)-code.
Footnote 5: The Graph \(1\) of [1] with vertex set \(\{1,2,\ldots,63\}\) admits \(\{3,4,5,7,35,37,42,50,63\}\) and \(\{1,2,22,24,30,33,41,61,63\}\) as perfect \(1\)-codes.
\begin{table}
\begin{tabular}{l c c c c c c} \hline Intersection array & \(n\) & \(d\) & \(g\) & Name & Perfect 1-code & Reference \\ \hline \{3;1\} & 4 & 1 & 3 & K\({}_{4}\) & Yes & Rem. 2.1 \\ \{3,2;1,3\} & 6 & 2 & 4 & K\({}_{3,3}\) & No & Eq. 2.1 \\ \{3,2,1;1,2,3\} & 8 & 3 & 4 & K\({}_{3,3}^{*}\) & Yes & Prop. 5.1 \\ \{3,2;1,1\} & 10 & 2 & 5 & Petersen & No & Eq. 2.1 \\ \{3,2,2;1,1,3\} & 14 & 3 & 6 & Heawood & No & Eq. 2.1 \\ \{3,2,2,1;1,1,2,3\} & 18 & 4 & 6 & Pappus & No & Eq. 2.1 \\ \{3,2,2,1,1;1,1,2,2,3\} & 20 & 5 & 6 & Desargues & No & Cor. 3.3 \\ \{3,2,1,1,1;1,1,1,2,3\} & 20 & 5 & 5 & Dodecahedron & No & Cor. 3.3 \\ \{3,2,2,1;1,1,1,2\} & 28 & 4 & 7 & Coxeter & Yes & Prop. 5.6 \\ \{3,2,2,2;1,1,1,3\} & 30 & 4 & 8 & Tutte’s 8-cage & No & Eq. 2.1 \\ \{3,2,2,2,2,1,1,1; & 90 & 8 & 10 & Foster & No & Eq. 2.1 \\ 1,1,1,1,2,2,2,3\} & & & & & & \\ \{3,2,2,2,1,1,1; & 102 & 7 & 9 & Biggs-Smith & No & Eq. 2.1 \\ 1,1,1,1,1,1,3\} & & & & & & \\ \{3,2,2,2,2,2; & 126 & 6 & 12 & Tutte’s 12-cage & No & Eq. 2.1 \\ 1,1,1,1,1,3\} & & & & & & \\ \hline \end{tabular}
\end{table}
Table 1. Distance-regular graphs with valency 3
\begin{table}
\begin{tabular}{l c c c c c c} \hline Intersection array & \(n\) & \(d\) & \(g\) & Name & Perfect 1-code & Reference \\ \hline \{4;1\} & 5 & 1 & 3 & K\({}_{5}\) & Yes & Rem. 2.1 \\ \{4,1;1,4\} & 6 & 2 & 3 & K\({}_{2,2,2}\) & No & Eq. 2.1 \\ \{4,3;1,4\} & 8 & 2 & 4 & K\({}_{4,4}\) & No & Eq. 2.1 \\ \{4,2;1,2\} & 9 & 2 & 3 & Paley graph P(9) & No & Eq. 2.1 \\ \{4,3,1;1,3,4\} & 10 & 3 & 4 & K\({}_{5,5}^{*}\) & Yes & Prop. 5.1 \\ \{4,3,2;1,2,4\} & 14 & 3 & 4 & IG\((7,4,2)\) & No & Eq. 2.1 \\ \{4,2,1;1,1,4\} & 15 & 3 & 3 & L(Petersen) & Yes & Cor. 3.2 \\ \{4,3,2,1;1,2,3,4\} & 16 & 4 & 4 & Q\({}_{4}\) & No & Eq. 2.1 \\ \{4,2,2;1,1,2\} & 21 & 3 & 3 & L(Heawood) & No & Eq. 2.1 \\ \{4,3,3;1,1,4\} & 26 & 3 & 6 & IG\((13,4,1)\) & No & Eq. 2.1 \\ \{4,3,3,1;1,1,3,4\} & 32 & 4 & 6 & IG\((\text{A}(2,4)\setminus\text{pc})\) & No & Eq. 2.1 \\ \{4,3,3;1,1,2\} & 35 & 3 & 6 & O\({}_{4}\) & Yes & [13, Fig. 1] \\ \{4,2,2,2;1,1,1,2\} & 45 & 4 & 3 & L(Tutte’s 8-cage) & Yes & Prop. 4.5 \\ \{4,3,3,2,2,1,1; & 70 & 7 & 6 & DO\({}_{4}\) & Yes & Cor. 3.4 \\ 1,1,2,2,3,3,4\} & & & & & & \\ \{4,3,3,3;1,1,1,4\} & 80 & 4 & 8 & IG(GQ\((3,3)\)) & No & Obs. 2.3 \\ \{4,2,2,2,2,2; & 189 & 6 & 3 & L(Tutte’s 12-cage) & No & Eq. 2.1 \\ 1,1,1,1,1,2\} & & & & & \\ \{4,3,3,3,3,3; & 728 & 6 & 12 & IG(GH\((3,3)\)) & No & Eq. 2.1 \\ 1,1,1,1,1,4\} & & & & & \\ \hline \end{tabular}
\end{table}
Table 2. Distance-regular graphs with valency 4
\begin{table}
\begin{tabular}{l r r r r r r r} \hline Intersection array & \(n\) & \(d\) & \(g\) & Name & Perfect 1-code & Reference \\ \hline \{6;1\} & 7 & 1 & 3 & K\({}_{7}\) & Yes & Rem. 2.1 \\ \{6,1;1,6\} & 8 & 2 & 3 & K\({}_{2,2,2,2}\) & No & Eq. 2.1 \\ \{6,2;1,6\} & 9 & 2 & 3 & K\({}_{3,3,3}\) & No & Eq. 2.1 \\ \{6,2;1,4\} & 10 & 2 & 3 & T(5) & No & Eq. 2.1 \\ \{6,3;1,3\} & 13 & 2 & 3 & P(13) & No & Eq. 2.1 \\ \{6,4;1,3\} & 15 & 2 & 3 & \(\overline{\rm T(6)}\sim\) GQ\((2,2)\) & No & Eq. 2.1 \\ \{6,3;1,2\} & 16 & 2 & 3 & L\({}_{2}\)(4), Shrikhande & No & Eq. 2.1 \\ \{6,4,2;1,2,3\} & 27 & 3 & 3 & H\((3,3)\) & No & Eq. 2.1 \\ \{6,4,2,1;1,4,6\} & 45 & 4 & 3 & halved Foster & No & Eq. 2.1 \\ \{6,3,3;1,1,2\} & 52 & 3 & 3 & L(IG\((13,4,1)\) & No & Eq. 2.1 \\ \{6,4,4;1,1,3\} & 63 & 4 & 3 & GH\((2,2)\) (Graph 1) & Yes & Sec. 6.1 \\ \{6,4,4;1,1,3\} & 63 & 4 & 3 & GH\((2,2)\) (Graph 2) & No & Sec. 6.1 \\ \{6,3,3,3;1,1,1,2\} & 160 & 4 & 3 & L(IG(GQ\((3,3)))) & No & Prop. 4.5 \\ \{6,3,3,3,3;1,1,1,1,2\} & 1456 & 6 & 3 & L(IG(GH\((3,3)))) & No & Prop. 4.7 \\ \{7;1\} & 8 & 1 & 3 & K\({}_{8}\) & Yes & Rem. 2.1 \\ \{7,4,1;1,2,7\} & 24 & 3 & 3 & Klein & Yes & Cor. 3.2 \\ \hline \end{tabular}
\end{table}
Table 4. Distance-regular graphs with girth 3 and valency 6 or 7
\begin{table}
\begin{tabular}{l r r r r r r} \hline Intersection array & \(n\) & \(d\) & \(g\) & Name & Perfect 1-code & Reference \\ \hline \{5;1\} & 6 & 1 & 3 & K\({}_{6}\) & Yes & Rem. 2.1 \\ \{5,4;1,5\} & 10 & 2 & 4 & K\({}_{5,5}\) & No & Eq. 2.1 \\ \{5,2,1;1,2,5\} & 12 & 3 & 3 & Icosahedron & Yes & Cor. 3.2 \\ \{5,4,1;1,4,5\} & 12 & 3 & 4 & K\({}_{6,6}^{*}\) & Yes & Prop. 5.1 \\ \{5,4;1,2\} & 16 & 2 & 4 & Folded 5-cube & No & Eq. 2.1 \\ \{5,4,3;1,2,5\} & 22 & 3 & 4 & IG\((11,5,2)\) & No & Eq. 2.1 \\ \{5,4,3,2,1;1,2,3,4,5\} & 32 & 5 & 4 & Q\({}_{5}\) & No & Eq. 2.1 \\ \{5,4,1,1;1,1,4,5\} & 32 & 4 & 5 & Armanios-Wells & No & Eq. 2.1 \\ \{5,4,2;1,1,4\} & 36 & 3 & 5 & Sylvester & Yes & Prop. 5.3 \\ \{5,4,4;1,1,5\} & 42 & 3 & 6 & IG\((21,5,1)\) & No & Obs. 2.3 \\ \{5,4,4,1;1,1,4,5\} & 50 & 4 & 6 & IG(A\((2,5)\setminus\) pc) & No & Eq. 2.1 \\ \{5,4,4,3;1,1,2,2\} & 126 & 4 & 6 & O\({}_{5}\) & No & Obs. 2.3 \\ \{5,4,4,4;1,1,1,5\} & 170 & 4 & 8 & IG(GQ\((4,4)\)) & No & Eq. 2.1 \\ \{5,4,4,3,3,2,2,1,1; & 252 & 9 & 6 & DO\({}_{5}\) & No & Cor. 3.4 \\ 1,1,2,2,3,3,4,4,5\} & & & & & \\ \{5,4,4,4,4,4; & 2730 & 6 & 12 & IG(GH\((4,4)\)) & No & Obs. 2.3 \\ 1,1,1,1,1,5\} & & & & & \\ \hline \end{tabular}
\end{table}
Table 3. Distance-regular graphs with valency 5
## Acknowledgements
The author would like to thank the anonymous referee for his/her invaluable comments which led to fixing some errors in the proof of Proposition 4.7 and the statement of Proposition 5.3, and improving the presentation of this paper. The author is grateful to the Research Council of Shahid Chamran University of Ahvaz for financial support (SCU.MM99.29248).
|
2303.10588 | Orbit Equivalence of actions on Cartan pairs | We introduce and study the notion of continuous orbit equivalence of actions
of countable discrete groups on Cartan pairs in (twisted) groupoid context. We
characterize orbit equivalence of actions in terms of the corresponding
C$^*$-algebraic crossed products using Kumjian-Renault theory. We relate our
notion to the classical notion of orbit equivalence of actions on topological
spaces by showing that, under certain conditions, orbit equivalence of actions
on (twisted) groupoids follows from orbit equivalence of restricted actions on
unit spaces. We illustrate our results with concrete examples of continuous
orbit equivalent actions on groupoids coming from odometer transformations. | Massoud Amini, Mahdi Moosazadeh | 2023-03-19T06:36:28Z | http://arxiv.org/abs/2303.10588v3 | # Orbit equivalence of actions on Cartan pairs
###### Abstract.
We introduce and study the notion of continuous orbit equivalence of actions of countable discrete groups on Cartan pairs in the (twisted) groupoid context of Renault. We characterize orbit equivalence of actions in terms of the corresponding \(\mathrm{C}^{*}\)-algebraic crossed products using Kumjian-Renault theory. We relate our notion to the classical notion of orbit equivalence of actions on topological spaces by showing that, under certain conditions, orbit equivalence of actions on (twisted) groupoids follows from orbit equivalence of the restricted actions on unit spaces.
2020 Mathematics Subject Classification: Primary 46L55; Secondary 37A20
## 1. Introduction
Dynamical systems have long been a fruitful source of examples for operator algebras. The very early examples in the type theory of von Neumann algebras, given by Murray and von Neumann, used group actions on measure spaces. Later, W. Krieger showed that for two ergodic non-singular systems, the associated von Neumann crossed product factors are isomorphic iff the systems are orbit equivalent [10, 11], building upon an earlier result of H. Dye that any two ergodic p.m.p. actions of \(\mathbb{Z}\) are orbit equivalent. Since then, orbit equivalence has been extensively studied in both the measurable [7] and topological [9], [3] settings.
The p.m.p. actions \(\Gamma_{1}\curvearrowright X_{1}\) and \(\Gamma_{2}\curvearrowright X_{2}\) are called orbit equivalent if there exists an isomorphism \(\phi:X_{1}\to X_{2}\) of measure spaces sending orbits to orbits (a.e.). For discrete countable groups acting essentially freely, this is known to be equivalent to the corresponding measure groupoids being isomorphic, and to the existence of an isomorphism between the corresponding von Neumann algebraic crossed products preserving the \(L^{\infty}\) masas [18], [20].
The notion of orbit equivalence in topological setting is studied for \(\mathbb{Z}^{d}\) actions on Cantor set in [8], [9], and in general in [13]. The actions \(\Gamma_{1}\curvearrowright X_{1}\) and \(\Gamma_{2}\curvearrowright X_{2}\) of discrete groups on topological spaces are continuous orbit equivalent if there exist a homeomorphism \(\phi:X_{1}\to X_{2}\) and continuous maps \(a:\Gamma_{1}\times X_{1}\to\Gamma_{2}\) and \(b:\Gamma_{2}\times X_{2}\to\Gamma_{1}\) with \(\phi(\gamma_{1}x_{1})=a(\gamma_{1},x_{1})\phi(x_{1})\) and \(\phi^{-1}(\gamma_{2}x_{2})=b(\gamma_{2},x_{2})\phi^{-1}(x_{2})\), for \(\gamma_{i}\in\Gamma_{i}\) and \(x_{i}\in X_{i}\), \(i=1,2\). The topological version of the above equivalence is known to hold with \(L^{\infty}\) masas replaced by \(C_{0}\) masas [13, Theorem 1.2].
It is natural to ask whether there is a notion of orbit equivalence for general \(C^{*}\)-dynamics, and this is the main objective of the present paper. As already seen in both the measurable and continuous cases, the masas (or, more precisely, the Cartan subalgebras) play a central role in orbit equivalence, and we rather define orbit equivalence of actions on Cartan pairs, that is, actions of countable discrete groups on \(C^{*}\)-algebras preserving given Cartan subalgebras. This makes our approach inevitably related to twisted groupoids, as any Cartan pair comes from a Weyl
twisted groupoid [16], and a Cartan invariant action on a separable \(C^{*}\)-algebra induces an action on the Weyl groupoid [1, Proposition 3.4].
In the next section we define (continuous) orbit equivalence of actions on groupoids (Definition 2.7), and for topologically principally free actions (Definition 2.12), we prove the following characterization of continuous orbit equivalence. All groupoids in this paper are locally compact, Hausdorff, second countable, and etale, and all groups are countable discrete.
**Theorem A**.: For topologically free actions \(\Gamma_{i}\curvearrowright(G_{i},\Sigma_{i})\) on twisted groupoids, \(i=1,2\), consider the following statements:
1. \(\Gamma_{1}\curvearrowright(G_{1},\Sigma_{1})\sim_{coe}\Gamma_{2}\curvearrowright( G_{2},\Sigma_{2})\),
2. there exists an isomorphism \(\psi^{\prime}:\Gamma_{1}\ltimes\Sigma_{1}\to\Gamma_{2}\ltimes\Sigma_{2}\) such that \(\psi^{\prime}(\Gamma_{1},\mathbb{T}\times\Sigma_{1}^{(0)})=(\Gamma_{2},\mathbb{ T}\times\Sigma_{2}^{(0)})\) and \(\psi^{\prime}(id_{1},\sigma_{1})=(id_{2},\phi(\sigma_{1}))\), where \((\phi,\phi^{\prime})\) is a twisted groupoid isomorphism between \((G_{1},\Sigma_{1})\) and \((G_{2},\Sigma_{2})\),
3. there exists isomorphism \(\theta:C_{r}^{*}(\Gamma_{1}\ltimes G_{1},\Gamma_{1}\ltimes\Sigma_{1})\cong C _{r}^{*}(\Gamma_{2}\ltimes G_{2},\Gamma_{2}\ltimes\Sigma_{2})\) such that, * \(\theta(C_{0}(\Sigma_{1}^{(0)}))=C_{0}(\Sigma_{2}^{(0)})\), * \(\theta(\Gamma_{1}\ltimes_{r}C_{0}(G_{1}^{(0)}))=\Gamma_{2}\ltimes_{r}C_{0}(G_{ 2}^{(0)})\), * \(\theta(C_{r}^{*}(G_{1},\Sigma_{1}))=C_{r}^{*}(G_{2},\Sigma_{2})\).
Then \((1)\Leftrightarrow(2)\Rightarrow(3)\). These are equivalent for topologically principally free actions.
For the case of trivial twist, we get following corollary.
**Corollary B**.: For topologically free actions \(\Gamma_{i}\curvearrowright G_{i}\) on groupoids, \(i=1,2\), consider the following statements:
1. \(\Gamma_{1}\curvearrowright G_{1}\sim_{coe}\Gamma_{2}\curvearrowright G_{2}\),
2. there exists a groupoid isomorphism \(\psi:\Gamma_{1}\ltimes G_{1}\to\Gamma_{2}\ltimes G_{2}\) such that \(\psi(\Gamma_{1},G_{1}^{(0)})=(\Gamma_{2},G_{2}^{(0)})\) and \(\psi(id_{\Gamma_{1}},G_{1})=(id_{\Gamma_{2}},G_{2})\),
3. there exists isomorphism \(\theta:C_{r}^{*}(\Gamma_{1}\ltimes G_{1})\cong C_{r}^{*}(\Gamma_{2}\ltimes G_ {2})\) such that, * \(\theta(C_{0}(G_{1}^{(0)}))=C_{0}(G_{2}^{(0)})\), * \(\theta(\Gamma_{1}\ltimes_{r}C_{0}(G_{1}^{(0)}))=\Gamma_{2}\ltimes_{r}C_{0}(G_ {2}^{(0)})\), * \(\theta(C_{r}^{*}(G_{1}))=C_{r}^{*}(G_{2})\).
Then \((1)\Leftrightarrow(2)\Rightarrow(3)\). These are equivalent for topologically principally free actions.
In particular, for actions on Cartan pairs, we have the following characterization of continuous orbit equivalence.
**Corollary C**.: Given Cartan pairs \((A_{1},B_{1})\) and \((A_{2},B_{2})\) and Cartan invariant actions \(\Gamma_{1}\curvearrowright A_{1}\) and \(\Gamma_{2}\curvearrowright A_{2}\), consider the following statements:
1. \(\Gamma_{1}\curvearrowright A_{1}\sim_{coe}\Gamma_{2}\curvearrowright A_{2}\),
2. there exists an isomorphism \(\theta:\Gamma_{1}\ltimes_{r}A_{1}\cong\Gamma_{2}\ltimes_{r}A_{2}\) such that, * \(\theta(B_{1})=B_{2}\), * \(\theta(\Gamma_{1}\ltimes_{r}B_{1})=\Gamma_{2}\ltimes_{r}B_{2}\), * \(\theta(A_{1})=A_{2}\).
Then \((1)\Rightarrow(2)\), and these are equivalent for topologically principally free actions.
We find conditions for rigidity of orbit equivalence of actions on twisted groupoids, in the sense that it could follow from orbit equivalence of the restricted actions on the corresponding unit spaces.
**Theorem D**.: For Cartan invariant actions \(\Gamma_{1}\curvearrowright C^{*}_{r}(G_{1},\Sigma_{1})\) and \(\Gamma_{2}\curvearrowright C^{*}_{r}(G_{2},\Sigma_{2})\) with isomorphism \(\psi:C^{*}_{r}(G_{1},\Sigma_{1})\to C^{*}_{r}(G_{2},\Sigma_{2})\) satisfying \(\psi(C_{0}(G^{(0)}_{1}))=C_{0}(G^{(0)}_{2})\), and induced actions \(\Gamma_{1}\curvearrowright G^{(0)}_{1}\sim_{coe}\Gamma_{2} \curvearrowright G^{(0)}_{2}\) with corresponding maps \(a\) and \(b\), if,
\[a(\gamma_{1},s(g_{1}))=a(\gamma_{1},r(g_{1})),\ \ b(\gamma_{2},s(g_{2}))=b( \gamma_{2},r(g_{2})), \tag{1.1}\]
for \(\gamma_{i}\in\Gamma_{i},g_{i}\in G_{i},i=1,2\), then \(\Gamma_{1}\curvearrowright(G_{1},\Sigma_{1})\sim_{coe}\Gamma_{2}\curvearrowright( G_{2},\Sigma_{2})\).
**Corollary E**.: For topologically free actions \(\Gamma_{i}\curvearrowright G^{(0)}_{i}\), \(i=1,2\), consider the following statements:
1. \(a(\gamma_{1},s(g_{1}))=a(\gamma_{1},r(g_{1}))\) and \(b(\gamma_{2},s(g_{2}))=b(\gamma_{2},r(g_{2}))\), for \(\gamma_{i}\in\Gamma_{i}\) and \(g_{i}\in G_{i}\); \(i=1,2\),
2. \(\Gamma_{1}\curvearrowright(G_{1},\Sigma_{1})\sim_{coe}\Gamma_{2}\curvearrowright( G_{2},\Sigma_{2})\),
3. there exists isomorphism \(\theta:\Gamma_{1}\ltimes_{r}C^{*}_{r}(G_{1})\to\Gamma_{2}\ltimes_{r}C^{*}_{r}(G_{2})\) such that, * \(\theta(C_{0}(G^{(0)}_{1}))=C_{0}(G^{(0)}_{2})\), * \(\theta(\Gamma_{1}\ltimes_{r}C_{0}(G^{(0)}_{1}))=\Gamma_{2}\ltimes_{r}C_{0}(G^ {(0)}_{2})\), * \(\theta(C^{*}_{r}(G_{1},\Sigma_{1}))=C^{*}_{r}(G_{2},\Sigma_{2})\).
Then \((1)\Leftrightarrow(2)\Rightarrow(3)\). These are equivalent for topologically principally free actions.
## 2. Continuous Orbit Equivalence
An action \(\Gamma\curvearrowright X\) on a topological space is called topologically free if for each \(id_{\Gamma}\neq\gamma\in\Gamma\), the set \(\{x\in X\,|\,\gamma.x\neq x\}\) is dense in \(X\). For a groupoid \(G\), \(\operatorname{Aut}(G)\) is the group of all groupoid isomorphisms \(\phi:G\to G\). By an action of a group \(\Gamma\) on a groupoid \(G\) we mean a group homomorphism \(\Gamma\to\operatorname{Aut}(G)\). We say that \(\Gamma\curvearrowright G\) is (topologically) free if the induced action \(\Gamma\curvearrowright G^{(0)}\) is (topologically) free. The semidirect product \(\Gamma\ltimes G\) is the groupoid \(\Gamma\times G\) with the following operations \((\gamma,g)(\gamma^{\prime},g^{\prime}):=(\gamma\gamma^{\prime},(\gamma^{\prime -1}g)g^{\prime}),\ (\gamma,g)^{-1}=(\gamma^{-1},\gamma g^{-1})\), for \(\gamma,\gamma^{\prime}\in\Gamma\) and \(g,g^{\prime}\in G\).
**Definition 2.1**.: Two actions \(\Gamma_{1}\curvearrowright G_{1}\) and \(\Gamma_{2}\curvearrowright G_{2}\) on groupoids are said to be continuous \(r\)-orbit equivalent if there exist a groupoid isomorphism \(\phi:G_{1}\to G_{2}\), and continuous maps \(a:\Gamma_{1}\times G^{(0)}_{1}\to\Gamma_{2}\) and \(b:\Gamma_{2}\times G^{(0)}_{2}\to\Gamma_{1}\), with \(\phi(\gamma_{1}g_{1})=a(\gamma_{1},r(g_{1}))\phi(g_{1})\) and \(\phi^{-1}(\gamma_{2}g_{2})=b(\gamma_{2},r(g_{2}))\phi^{-1}(g_{2})\), for \(\gamma_{i}\in\Gamma_{i},g_{i}\in G_{i},i=1,2\). The continuous \(s\)-orbit equivalence is defined similarly. Two actions are called continuous orbit equivalent if they are both continuous \(r\)- and \(s\)-orbit equivalent with the same cocycle maps \(a\) and \(b\).
A topological space \(X\) can be regarded as a (co-trivial) groupoid with \(G^{(2)}=\operatorname{diag}(X)\) and trivial inverse and multiplication; in this case, the above notion of orbit equivalence is the same as the classical one.
**Lemma 2.2**.: _Let \(\Gamma_{1}\curvearrowright G_{1}\) and \(\Gamma_{2}\curvearrowright G_{2}\) be two continuous \(r\)-orbit equivalent topologically free actions. Let the maps \(\phi,a,b\) be as they defined in Definition 2.1. For \(\gamma_{i},\gamma^{\prime}_{i}\in\Gamma_{i}\) and \(g_{i},g^{\prime}_{i}\in G_{i}\), \(i=1,2\),_
1. \(a(\gamma_{1},s(g_{1}))=a(\gamma_{1},r(g_{1}))\) _and_ \(b(\gamma_{2},s(g_{2}))=b(\gamma_{2},r(g_{2}))\)_,_
2. _when products are defined,_
* \(a(\gamma_{1}^{\prime},r(g_{1}^{\prime}))a(\gamma_{1},r(g_{1}))=a(\gamma_{1}^{ \prime}\gamma_{1},r((\gamma_{1}^{-1}g_{1}^{\prime})g_{1}))\),
* \(b(\gamma_{2}^{\prime},r(g_{2}^{\prime}))b(\gamma_{2},r(g_{2}))=b(\gamma_{2}^{ \prime}\gamma_{2},r((\gamma_{2}^{-1}g_{2}^{\prime})g_{2}))\),
* \(a(\gamma_{1},r(g_{1}))^{-1}=a(\gamma_{1}^{-1},r(\gamma_{1}g_{1}^{-1}))\), \(b(\gamma_{2},r(g_{2}))^{-1}=b(\gamma_{2}^{-1},r(\gamma_{2}g_{2}^{-1}))\),
* \(b(a(\gamma_{1},r(g_{1})),r(\phi(g_{1})))=\gamma_{1}\), \(a(b(\gamma_{2},r(g_{2})),r(\phi^{-1}(g_{2})))=\gamma_{2}\).
Proof.: By topological freeness, there is a dense subset \(D_{i}\subset G_{i}^{(0)}\) such that \(\gamma.d=d\) implies \(\gamma=id_{\Gamma_{i}}\), for \(\gamma\in\Gamma_{i}\), \(d\in D_{i}\), \(i=1,2\). Since groupoids are etale, for an arbitrary open subset \(U_{i}\subset G_{i}\), \(r(U_{i})\) is open and \(r(U_{i})\cap D_{i}\neq\emptyset\). Thus, \(r^{-1}(D_{i})\) is dense in \(G_{i}\), \(i=1,2\). For \(\gamma_{i},\gamma_{i}^{\prime}\in\Gamma_{i}\) and \(g_{i},g_{i}^{\prime}\in G_{i}\), \(i=1,2\), observe that \(\phi(\gamma_{1}g_{1})^{-1}=(a(\gamma_{1},r(g_{1}))\phi(g_{1}))^{-1}\) is equivalent to \(a(\gamma_{1},r(g_{1}^{-1}))\phi(g_{1}^{-1})=a(\gamma_{1},r(g_{1}))\phi(g_{1} ^{-1})\), and so by topological freeness and continuity of \(a\), we get (1). By a similar argument, we also have \(b(\gamma_{2},s(g_{2}))=b(\gamma_{2},r(g_{2}))\). For (2), observe that,
\[\phi((\gamma_{1}^{\prime}\gamma_{1})((\gamma_{1}^{-1}g_{1}^{ \prime})g_{1})) =\phi(\gamma_{1}^{\prime}(g_{1}^{\prime}(\gamma_{1}g_{1})))=a( \gamma_{1}^{\prime},r(g_{1}^{\prime}(\gamma_{1}g_{1})))\phi(\gamma_{1}((\gamma _{1}^{-1}g_{1}^{\prime})g_{1}))\] \[=a(\gamma_{1}^{\prime},r(g_{1}^{\prime}(\gamma_{1}g_{1})))a( \gamma_{1},r((\gamma_{1}^{-1}g_{1}^{\prime})g_{1}))\phi((\gamma_{1}^{-1}g_{1} ^{\prime})g_{1}),\]
and \(\phi((\gamma_{1}^{\prime}\gamma_{1})((\gamma_{1}^{-1}g_{1}^{\prime})g_{1}))=a (\gamma_{1}^{\prime}\gamma_{1},r((\gamma_{1}^{-1}g_{1}^{\prime})g_{1}))\phi(( \gamma_{1}^{-1}g_{1}^{\prime})g_{1})\), and again by topological freeness,
\[a(\gamma_{1}^{\prime}\gamma_{1},r((\gamma_{1}^{-1}g_{1}^{\prime })g_{1})) =a(\gamma_{1}^{\prime},r(g_{1}^{\prime}(\gamma_{1}g_{1})))a(\gamma _{1},r((\gamma_{1}^{-1}g_{1}^{\prime})g_{1}))\] \[= a(\gamma_{1}^{\prime},r(g_{1}^{\prime}))a(\gamma_{1},s((\gamma_{1 }^{-1}g_{1}^{\prime})g_{1}))=a(\gamma_{1}^{\prime},r(g_{1}^{\prime}))a(\gamma _{1},s(g_{1}))\] \[= a(\gamma_{1}^{\prime},r(g_{1}^{\prime}))a(\gamma_{1},r(g_{1})),\]
and the same for \(b\). Now (3) follows from (2) and (4) is proved similarly.
**Theorem 2.3**.: _Let \(\Gamma_{1}\curvearrowright G_{1}\), \(\Gamma_{2}\curvearrowright G_{2}\) be two topologically free actions. The following are equivalent:_
1. \(\Gamma_{1}\curvearrowright G_{1}\sim_{coe}\Gamma_{2}\curvearrowright G_{2}\)_,_
2. _there exists a groupoid isomorphism_ \(\psi:\Gamma_{1}\ltimes G_{1}\to\Gamma_{2}\ltimes G_{2}\) _such that_ \(\psi(\Gamma_{1},G_{1}^{(0)})=(\Gamma_{2},G_{2}^{(0)})\) _and_ \(\psi(id_{1},G_{1})=(id_{2},G_{2})\)_._
Proof.: \((1)\Rightarrow(2)\): Let \(\phi:G_{1}\to G_{2}\) be the isomorphism given by orbit equivalence. We claim that the maps \(\psi:\Gamma_{1}\ltimes G_{1}\to\Gamma_{2}\ltimes G_{2}\); \((\gamma_{1},g_{1})\mapsto(a(\gamma_{1},r(g_{1})),\phi(g_{1}))\), and \(\chi:\Gamma_{2}\ltimes G_{2}\to\Gamma_{1}\ltimes G_{1}\); \((\eta_{2},h_{2})\mapsto(b(\eta_{2},r(h_{2})),\phi^{-1}(h_{2}))\), are groupoid homomorphisms. Let \(((\gamma_{1},g_{1}),(\gamma_{2},g_{2}))\in(\Gamma_{1}\ltimes G_{1})^{(2)}\), then \((\gamma_{2}^{-1}g_{1},g_{2})\in G_{1}^{(2)}\), \(r(g_{2})=s(\gamma_{2}^{-1}g_{1})=\gamma_{2}^{-1}s(g_{1})\), and \(\gamma_{2}r(g_{2})=s(g_{1})\). For \(\gamma\in\Gamma_{1}\),
\[a(\gamma,\gamma_{2}r(g_{2}^{-1}))=a(\gamma,\gamma_{2}s(g_{2}))=a(\gamma,\gamma_{ 2}r(g_{2}))=a(\gamma,s(g_{1}))=a(\gamma,r(g_{1})).\]
Using this and Lemma 2.2, we get,
\[\psi(\gamma_{1},g_{1})\psi(\gamma_{2},g_{2}) =(a(\gamma_{1},r(g_{1})),\phi(g_{1}))(a(\gamma_{2},r(g_{2})),\phi(g _{2}))\] \[=(a(\gamma_{1},r(g_{1}))a(\gamma_{2},r(g_{2})),(a(\gamma_{2},r(g_{ 2}))^{-1}\phi(g_{1}))\phi(g_{2}))\] \[=(a(\gamma_{1}\gamma_{2},r((\gamma_{2}^{-1}g_{1})g_{2})),(a(\gamma _{2}^{-1},r(\gamma_{2}g_{2}^{-1}))\phi(g_{1}))\phi(g_{2}))\] \[=(a(\gamma_{1}\gamma_{2},r((\gamma_{2}^{-1}g_{1})g_{2})),(\phi( \gamma_{2}^{-1}g_{1}))\phi(g_{2}))\] \[=\psi(\gamma_{1}\gamma_{2},(\gamma_{2}^{-1}g_{1})g_{2})=\psi(( \gamma_{1},g_{1})(\gamma_{2},g_{2})),\]
for \(\gamma_{1},\gamma_{2}\in\Gamma_{1},g_{1},g_{2}\in G_{1}\), thus, \(\psi\) is a groupoid homomorphism. The proof for \(\chi\) is similar. By Lemma 2.2, these maps are inverse of each other, and \(\psi\) satisfies the required conditions.
(2) \(\Rightarrow\) (1): Let \(\psi:\Gamma_{1}\ltimes G_{1}\to\Gamma_{2}\ltimes G_{2}\) and \(\phi:G_{1}\to G_{2}\) be isomorphisms such that \(\psi(\Gamma_{1},G_{1}^{(0)})=(\Gamma_{2},G_{2}^{(0)})\) and \(\psi(id_{1},g_{1})=(id_{2},\phi(g_{1}))\), for \(g_{1}\in G_{1}\). Let \(a:\Gamma_{1}\times G_{1}^{(0)}\to\Gamma_{2}\) map \((\gamma_{1},r(g_{1}))\) to the first component of \(\psi(\gamma_{1},r(g_{1}))\) and \(b:\Gamma_{2}\times G_{2}^{(0)}\to\Gamma_{1}\) map \((\gamma_{2},r(g_{2}))\) to the first component of \(\psi^{-1}(\gamma_{2},r(g_{2}))\). By assumption, if \(\psi(\gamma_{1},r(g_{1}))=(a(\gamma_{1},r(g_{1})),g_{2})\), then \(g_{2}\in G_{2}^{(0)}\). Also, \(\psi(s(\gamma_{1},r(g_{1})))=s(a(\gamma_{1},r(g_{1})),g_{2})\) implies \(\psi(id_{1},r(g_{1}))=(id_{2},g_{2})\), and \(g_{2}=\phi(r(g_{1}))\). This means that, \(\psi(\gamma_{1},r(g_{1}))=(a(\gamma_{1},r(g_{1})),\phi(r(g_{1}))).\) Furthermore,
\[\psi(\gamma_{1},g_{1}) =\psi(\gamma_{1},r(g_{1}))\psi(id_{\Gamma_{1}},g_{1})=(a(\gamma_{ 1},r(g_{1})),\phi(r(g_{1})))(id_{\Gamma_{1}},\phi(g_{1}))\] \[=(a(\gamma_{1},r(g_{1})),\phi(g_{1})),\]
and
\[(a(\gamma_{1}^{-1},\gamma_{1}r(g_{1})),\phi(\gamma_{1}g_{1})) =\psi(\gamma_{1},g_{1}^{-1})^{-1}=(a(\gamma_{1},r(g_{1}^{-1})), \phi(g_{1}^{-1}))^{-1}\] \[=(a(\gamma_{1},r(g_{1}^{-1}))^{-1},a(\gamma_{1},r(g_{1}^{-1})) \phi(g_{1})).\]
Thus, \(a(\gamma_{1},r(g_{1}))\phi(g_{1})=a(\gamma_{1},s(g_{1}))\phi(g_{1})=\phi( \gamma_{1}g_{1})\), for \(\gamma_{1}\in\Gamma_{1}\) and \(g_{1}\in G_{1}\). Similarly, \(b(\gamma_{2},r(g_{2}))\phi^{-1}(g_{2})=\phi^{-1}(\gamma_{2}g_{2})\), for \(\gamma_{2}\in\Gamma_{2}\) and \(g_{2}\in G_{2}\).
**Corollary 2.4**.: Let \(\Gamma_{1}\curvearrowright G_{1}\sim_{coe}\Gamma_{2}\curvearrowright G_{2}\) be topologically free actions then \(C_{r}^{*}(\Gamma_{1}\ltimes G_{1})\cong C_{r}^{*}(\Gamma_{2}\ltimes G_{2})\).
Next, let us recall the notion of twisted groupoids. Let \(G\) be a groupoid and for \(u,v\in G^{(0)}\), put \(G_{u}:=s^{-1}(u)\), \(G^{u}:=r^{-1}(u)\) and \(G_{u}^{v}:=G_{u}\cap G^{v}\). Then \(G\) is called principal if for each \(u\in G^{(0)}\), \(G_{u}^{u}=\{u\}\), and topologically principal if the set of units \(u\in G^{(0)}\) with \(G_{u}^{u}=\{u\}\) is dense in \(G^{(0)}\). By a bisection we mean a subset \(S\) of the groupoid \(G\) such that the restrictions \(r|_{S}\) and \(s|_{S}\) are injective.
A twisted groupoid was introduced by Kumjian [12] (see also [17]) as a pair of groupoids \((G,\Sigma)\) yielding a groupoid extension,
\[\mathbb{T}\times G^{(0)}\stackrel{{\iota}}{{\longrightarrow}} \Sigma\stackrel{{\pi}}{{\twoheadrightarrow}}G.\]
The particular case, \(\mathbb{T}\times G^{(0)}\rightarrowtail\mathbb{T}\times G\twoheadrightarrow G\), gives the trivial twist.
A twisted groupoid is called topologically principal if \(G\) is so. By the definition of a twisted groupoid \((G,\Sigma)\), there exists an action of the unit circle \(\mathbb{T}\) on \(\Sigma\) such that whenever \(\pi(\sigma_{1})=\pi(\sigma_{2})\), there is a unique \(t\in\mathbb{T}\) with \(t.\sigma_{1}=\sigma_{2}\) [17, 11.1.2 and 11.1.3].
Let \(G\) be an etale groupoid with Haar system of counting measures and \((G,\Sigma)\) be a twisted groupoid with corresponding maps \(\iota\) and \(\pi\). We have the set,
\[C_{c}(G,\Sigma):=\{f\in C_{c}(\Sigma,\mathbb{C}):f(t\sigma)=\bar{t}f(\sigma), \text{ for all }\sigma\in\Sigma,t\in\mathbb{T}\}\]
with the natural \(*\)-algebraic structure, whose completion is the reduced \(\mathrm{C}^{*}\)-algebra \(C_{r}^{*}(G,\Sigma)\). By an isomorphism \((\phi_{0},\phi_{1},\phi_{2})\) between twisted groupoids \((G_{1},\Sigma_{1})\) and \((G_{2},\Sigma_{2})\) we mean a commutative diagram,
\[\begin{CD}\mathbb{T}\times G_{1}^{(0)}@>>>\Sigma_{1}@>>>G_{1}\\ @V{\phi_{0}}VV@V{\phi_{1}}VV@V{\phi_{2}}VV\\ \mathbb{T}\times G_{2}^{(0)}@>>>\Sigma_{2}@>>>G_{2}\end{CD}\]
where \(\phi_{0}\), \(\phi_{1}\), and \(\phi_{2}\) are isomorphisms. We identify \(\mathbb{T}\times G_{1}^{(0)}\) with \(\mathbb{T}\times G_{2}^{(0)}\) and write \((\phi_{2},\phi_{1}):(G_{1},\Sigma_{1})\to(G_{2},\Sigma_{2})\) for this isomorphism.
By an action of a group \(\Gamma\) on a twisted groupoid \((G,\Sigma)\) we mean two actions \(\Gamma\curvearrowright\Sigma\) and \(\Gamma\curvearrowright G\) with the following equivariance conditions,
\[\iota(t,\gamma.u)=\gamma.\iota(t,u),\ \ \pi(\gamma.\sigma)=\gamma.\pi(\sigma),\ \ ( \gamma\in\Gamma,t\in\mathbb{T},\sigma\in\Sigma,u\in G^{(0)}). \tag{2.1}\]
**Proposition 2.5**.: Given actions \(\Gamma_{i}\curvearrowright(G_{i},\Sigma_{i})\), \(i=1,2\), such that there exists an isomorphism \((\phi^{\prime},\phi):(G_{1},\Sigma_{1})\to(G_{2},\Sigma_{2})\), consider following statements:
1. there exists an isomorphism \(\psi:\Gamma_{1}\ltimes\Sigma_{1}\to\Gamma_{2}\ltimes\Sigma_{2}\) with \(\psi(\Gamma_{1},\mathbb{T}\times\Sigma_{1}^{(0)})=(\Gamma_{2},\mathbb{T} \times\Sigma_{2}^{(0)})\) and \(\psi(id_{1},\sigma_{1})=(id_{2},\phi(\sigma_{1}))\),
2. \(\Gamma_{1}\curvearrowright G_{1}\sim_{coe}\Gamma_{2}\curvearrowright G_{2}\).
Then (1) implies (2).
Proof.: For \(i=1,2\), let \((\iota_{i},\pi_{i})\) be the twist maps of \((G_{i},\Sigma_{i})\), and consider the twist \(\mathbb{T}\times G_{i}^{(0)}\stackrel{{(c_{\Gamma_{i}},\iota_{i})}}{{\longrightarrow}}\Gamma_{i}\ltimes\Sigma_{i}\stackrel{{(id, \pi_{i})}}{{\longrightarrow}}\Gamma_{i}\ltimes G_{i}\), where \(c_{\Gamma_{i}}\) is the constant map to the identity element of \(\Gamma_{i}\). By the definition of the twisted isomorphism \((\phi^{\prime},\phi)\), and the equalities \(\psi(id_{1},t.\sigma_{1})=t.\psi(id_{1},\sigma_{1})\) and \((\gamma_{1},\sigma_{1})=(\gamma_{1},r(\sigma_{1}))(id_{1},\sigma_{1})\), we get the equivariance condition \(\psi(\gamma_{1},t.\sigma_{1})=t.\psi(\gamma_{1},\sigma_{1})\). Thus, \(\psi\) induces \(\psi^{\prime}:\Gamma_{1}\ltimes G_{1}\to\Gamma_{2}\ltimes G_{2}\) with \((\psi^{\prime},\psi)\) an isomorphism of twisted groupoids. For \(g_{1}\in G_{1}\) and \(\sigma_{1}\in\Sigma_{1}\) with \(\pi_{1}(\sigma_{1})=g_{1}\), we have \(\psi^{\prime}(id_{1},g_{1})=(id_{2},\pi_{2}(\phi(\sigma_{1})))\), that is, \(\psi^{\prime}(id_{1},G_{1})\subseteq(id_{2},G_{2})\), and by the same argument, \(\psi^{\prime}(id_{1},G_{1})=(id_{2},G_{2})\). Also, for \(u\in G_{1}^{(0)}\),
\[\psi^{\prime}(\gamma_{1},u) \in\psi^{\prime}\circ(id,\pi_{1})(\gamma_{1},\pi_{1}^{-1}(u))=(id,\pi_{2})\circ\psi(\gamma_{1},\pi_{1}^{-1}(u))\] \[\subseteq(id,\pi_{2})(\Gamma_{2},\mathbb{T}\times\Sigma_{2}^{(0 )})=(\Gamma_{2},\Sigma_{2}^{(0)}),\]
thus, \(\psi^{\prime}(\Gamma_{1},\Sigma_{1}^{(0)})\subset(\Gamma_{2},\Sigma_{2}^{(0)})\), and by the same argument, \(\psi^{\prime}(\Gamma_{1},\Sigma_{1}^{(0)})=(\Gamma_{2},\Sigma_{2}^{(0)})\), which completes the proof by Theorem 2.3
**Corollary 2.6**.: In the notation of the above proposition, \(\Gamma_{1}\curvearrowright\Sigma_{1}\sim_{coe}\Gamma_{2}\curvearrowright\Sigma_ {2}\) with respect to \(\phi\) implies \(\Gamma_{1}\curvearrowright G_{1}\sim_{coe}\Gamma_{2}\curvearrowright G_{2}\) with respect to \(\phi^{\prime}\).
**Definition 2.7**.: Two topologically free actions \(\Gamma_{1}\curvearrowright(G_{1},\Sigma_{1})\) and \(\Gamma_{2}\curvearrowright(G_{2},\Sigma_{2})\) are continuous orbit equivalent if there exists an isomorphism
\[(\phi,\phi^{\prime}):(G_{1},\Sigma_{1})\to(G_{2},\Sigma_{2})\]
satisfying condition (1) of Proposition 2.5.
Theorem 2.3 justifies the above definition (see Definition 3.4 in the Appendix). By the last proposition, the defined notion of continuous orbit equivalence for free actions \(\Gamma_{1}\curvearrowright(G_{1},\Sigma_{1})\) and \(\Gamma_{2}\curvearrowright(G_{2},\Sigma_{2})\) is stronger than \(\Gamma_{1}\curvearrowright G_{1}\sim_{coe}\Gamma_{2}\curvearrowright G_{2}\), but weaker than \(\Gamma_{1}\curvearrowright\Sigma_{1}\sim_{coe}\Gamma_{2}\curvearrowright\Sigma_ {2}\).
A sub-C\({}^{*}\)-algebra \(B\) of C\({}^{*}\)-algebra \(A\) is called a Cartan subalgebra if,
1. \(B\) is a maximal abelian subalgebra of \(A\),
2. \(B\) is regular, i.e., \(N_{A}(B):=\{n\in A:nBn^{*}\subseteq B\text{ and }n^{*}Bn\subseteq B\}\) generates \(A\) as a C\({}^{*}\)-algebra,
3. there exists a faithful conditional expectation \(P:A\to B\).
The pair \((A,B)\) is called a Cartan pair. In this case, the existence of an approximate identity of \(A\) in \(B\) is automatic [14, Theorem 2.6].
There is a close relationship between separable C\({}^{*}\)-algebras containing a Cartan subalgebra and twisted groupoid C\({}^{*}\)-algebras. Renault proved the following result [16, Theorem 5.9].
**Proposition 2.8**.: For a Cartan pair \((A,B)\) where \(A\) is a separable C\({}^{*}\)-algebra, there exists a twisted Hausdorff locally compact second countable topologically principal etale groupoid \((G,\Sigma)\) with \((A,B)\cong(C_{r}^{*}(G,\Sigma),C_{0}(G^{(0)}))\).
The twisted groupoid \((G,\Sigma)\) in the above proposition is called the Weyl twisted groupoid associated to the Cartan pair \((A,B)\). Let us recall the construction of the Weyl twisted groupoid (for details, see [16] and [1, Remark 2.8]). Let \(X\) be the spectrum of the abelian C\({}^{*}\)-algebra \(B\). For each normalizer \(n\in N_{A}(B)\), by [16, Lemma 4.6], \(n^{*}n\) and \(nn^{*}\) are in \(B\). Put \(\operatorname{dom}(n):=\{x\in X;n^{*}n(x)>0\}\) and \(\operatorname{ran}(n):=\{x\in X;nn^{*}(x)>0\}\). By [16, Proposition 4.7], there is a partial homeomorphism \(\alpha_{n}:\operatorname{dom}(n)\to\operatorname{ran}(n)\) such that for all \(x\in\operatorname{dom}(n)\) and \(b\in B\), \(n^{*}bn(x)=b(\alpha_{n}(x))n^{*}n(x)\). Now the groupoid \(G=G(B)\) is defined by,
\[G:=\{[x,\alpha_{n},y]:n\in N_{A}(B),y\in\operatorname{dom}(n),\alpha_{n}(y)=x \}/\sim,\]
where \([x,\alpha_{n},y]\sim[x^{\prime},\alpha_{n^{\prime}},y^{\prime}]\) iff \(y=y^{\prime}\) and there exists a neighbourhood \(U\subseteq X\) of \(y\) such that \(\alpha_{n}|_{U}=\alpha_{n^{\prime}}|_{U}\). The unit space \(G^{(0)}\) can be identified with \(\{[x,\alpha_{b},x];b\in B,\ x\in\operatorname{supp}(b)\}\). The twist \(\Sigma=\Sigma(B)\) is defined by,
\[\Sigma:=\{(x,n,y)\in\operatorname{ran}(n)\times N_{A}(B)\times\operatorname{ dom}(n):\alpha_{n}(y)=x\}/\approx,\]
where \((x,n,y)\approx(x^{\prime},n^{\prime},y^{\prime})\) iff \(y=y^{\prime}\) and there exist \(b,b^{\prime}\in B\) with \(b(y),b^{\prime}(y)>0\) and \(nb=n^{\prime}b^{\prime}\). The unit space could be identified with \(\{(x,b,x):b\in B,x\in\operatorname{dom}(b)\}\). The map \((x,n,y)\mapsto[x,\alpha_{n},y]\) yields a central extension, making \((G,\Sigma)\) a twisted groupoid with \((A,B)\cong(C_{r}^{*}(G,\Sigma),C_{0}(G^{(0)}))\). This extension uses the identification of \(\mathbb{T}\times X\) with \(\mathscr{B}:=\{[x,b,x]:b\in B,\ b(x)\neq 0\}\subseteq\Sigma(B)\), given by,
\[[x,b,x]\mapsto(b(x)/|b(x)|,x) \tag{2.2}\]
The next result ensures the uniqueness of the twisted groupoid in Proposition 2.8.
**Proposition 2.9** (Proposition 4.15 of [16]).: Let \((G,\Sigma)\) be a twisted Hausdorff locally compact second countable topologically principal etale groupoid. Let \(A:=C_{r}^{*}(G,\Sigma)\) and \(B:=C_{0}(G^{(0)})\). Then there is an isomorphism of twisted groupoids \((G(B),\Sigma(B))\cong(G,\Sigma)\).
**Proposition 2.10** (Proposition 3.4 of [1]).: Let \(A\) be a separable C\({}^{*}\)-algebra admitting a Cartan subalgebra \(B\subseteq A\). If a countable group \(\Gamma\) acts on \(A\) such that \(\gamma.B=B\), for all \(\gamma\in\Gamma\), then there is an action of \(\Gamma\) on \((G(B),\Sigma(B))\) such that,
\[\Gamma\ltimes_{r}A\cong\Gamma\ltimes_{r}C_{r}^{*}(G(B),\Sigma(B))\cong C_{r} ^{*}(\Gamma\ltimes G(B),\Gamma\ltimes\Sigma(B)).\]
Since \(\alpha:\Gamma\curvearrowright A\) is Cartan invariant, there is a restricted action of \(\Gamma\) on \(B\cong C_{0}(G^{(0)})\), or equivalently, on \(G^{(0)}\). This guarantees that \(N_{A}(B)\) is invariant under the action. A simple calculation shows that,
\[\alpha_{\gamma.n}(\gamma.y)=\gamma\alpha_{n}(y). \tag{2.3}\]
We have the actions, \(\Gamma\curvearrowright G\) and \(\Gamma\curvearrowright\Sigma\), given by,
\[\gamma[x,\alpha_{n},y]=[\gamma.x,\alpha_{\gamma.n},\gamma.y],\ \ \gamma(x,n,y)=(\gamma.x,\gamma.n,\gamma.y). \tag{2.4}\]
By identification (2.2), these actions yields an action of \(\Gamma\) on the twisted groupoid \((G(B),\Sigma(B))\).
Let \((A_{1},B_{1})\) and \((A_{2},B_{2})\) be Cartan pairs, and \(\psi:A_{1}\to A_{2}\) be a Cartan invariant isomorphism. By Proposition 2.8, \((A_{i},B_{i})\cong(C_{r}^{*}(G_{i},\Sigma_{i}),C_{0}(G_{i}^{(0)}))\), for \(i=1,2\). The Cartan invariance gives an isomorphism \(G_{1}^{(0)}\cong G_{2}^{(0)}\), again denoted by \(\psi\). For \(n\in N_{A_{1}}(B_{1})\), \(x\in G_{1}^{(0)}\), and \(b^{\prime}\in B_{2}\), say, \(b^{\prime}=\psi(b)\), for some \(b\in B_{1}\), we have,
\[\psi(n^{*})\psi(b)\psi(n)(\psi(x)) =\psi(b)(\alpha_{\psi(n)}(\psi(x)))\psi(n^{*}n)(\psi(x)),\] \[n^{*}bn(x) =b(\psi^{-1}(\alpha_{\psi(n)}(\psi(x))))n^{*}n(x),\] \[b(\alpha_{n}(x))n^{*}n(x) =b(\psi^{-1}(\alpha_{\psi(n)}(\psi(x))))n^{*}n(x).\]
Since \(x\in\mathrm{dom}(n)\), \(n^{*}n(x)>0\), and the above equalities hold for all \(b\in B_{1}\), we get, \(\alpha_{\psi(n)}(\psi(x))=\psi(\alpha_{n}(x))\). Thus, there exists a twisted groupoid isomorphism \((\theta,\theta^{\prime}):(G_{1},\Sigma_{1})\to(G_{2},\Sigma_{2})\), given by,
\[\theta([\alpha_{n_{1}}(x_{1}),\alpha_{n_{1}},x_{1}])=[\alpha_{ \psi(n_{1})}(\psi(x_{1})),\alpha_{\psi(n_{1})},\psi(x_{1})],\] \[\theta^{\prime}(\alpha_{n_{1}}(x_{1}),n_{1},x_{1})=(\alpha_{\psi (n_{1})}(\psi(x_{1})),\psi(n_{1}),\psi(x_{1})). \tag{2.5}\]
**Definition 2.11**.: Let \((A_{1},B_{1})\) and \((A_{2},B_{2})\) be Cartan pairs. We say that Cartan invariant actions \(\Gamma_{1}\curvearrowright A_{1}\) and \(\Gamma_{2}\curvearrowright A_{2}\) are continuous orbit equivalent if the induced action on the corresponding twisted groupoids are continuous orbit equivalent.
As mentioned in [16], using the canonical action \(\mathbb{T}\curvearrowright\Sigma\) in the twisted groupoid \((G,\Sigma)\), one can define a complex line bundle \(L:=\frac{\mathbb{C}\times\Sigma}{\sim}\), where \((c,\sigma)\sim(c^{\prime},\sigma^{\prime})\) iff there exists \(t\in\mathbb{T}\) such that \((c^{\prime},\sigma^{\prime})=(\bar{t}c,t.\sigma)\). Let \([c,\sigma]\) be the corresponding equivalence class, and let \(\omega:L\to\mathbb{C};[c,\sigma]\mapsto c\). Then each section of \(L\) could be viewed as a map \(f:G\to\mathbb{C}\), after composing with \(\omega\). For each section \(f\), the open support of \(f\) is defined to be,
\[\mathrm{supp}^{\prime}(f)=\{g\in G;f(g)\neq 0\}.\]
By [15, Proposition 2.4.2], each element of \(C_{r}^{*}(G,\Sigma)\) can be viewed as a continuous section of \(L\); for \(f\in C_{c}(G,\Sigma)\), the open support of the corresponding section is the image under the twist map \(\pi\) of the open support of \(f\) (see Section 2.2 in [4]). With this convention, we have the identification,
\[C_{0}(G^{(0)})=\{f\in C_{r}^{*}(G,\Sigma):\mathrm{supp}^{\prime}(f)\subset G^ {(0)}\}, \tag{2.6}\]
where \(h\in C_{0}(G^{(0)})\) is identified with the function \(f\) given by \(f(\iota(t,x))=\bar{t}h(x)\), for \((t,x)\in\mathbb{T}\times G^{(0)}\), and \(f(\sigma)=0\) otherwise. This could be used to explicitly write the twisted groupoid isomorphism of Proposition 2.9 (compare with [16, Proposition 4.7]): Let \(n\in N_{A}(B)\); since \(G\) is topologically principal, \(S:=\mathrm{supp}^{\prime}(n)\) is a bisection of \(G\) [16, Proposition 4.7]. This gives groupoid isomorphisms \(\phi_{1}:\Sigma(B)\to\Sigma;\ (y,n,x)\mapsto(n(Sx)/\sqrt{n^{*}n(x)},Sx)\) and \(\phi_{2}:G(B)\to G;\ [y,\alpha_{n},x]\mapsto Sx\).
Let \((G,\Sigma)\) and \((H,E)\) be twisted groupoids. By an open embedding of \((H,E)\) in \((G,\Sigma)\), written as \((H,E)\subset^{\circ}(G,\Sigma)\), we mean open inclusions \(G^{(0)}\subseteq H\subseteq G\) and \(\Sigma^{(0)}\subset E\subset\Sigma\), such that the following diagram commutes,
\[\begin{CD}\mathbb{T}\times G^{(0)}@>>>E@>>>H\\ @|@VVV@VVV\\ \mathbb{T}\times G^{(0)}@>>>\Sigma@>>>G\end{CD}\tag{2.7}\]
where the vertical maps are the inclusions.
Let \((G,\Sigma)\) be a twisted groupoid with twist maps \(\iota_{0}\) and \(\pi_{0}\), and let \(\Gamma\curvearrowright(G,\Sigma)\). Then
\[\mathbb{T}\times G^{(0)}\stackrel{{\iota}}{{\rightarrowtail}}\Gamma\ltimes\Sigma \stackrel{{\pi}}{{\twoheadrightarrow}}\Gamma\ltimes G\]
is a twist, where \(\iota(t,v):=(id_{\Gamma},\iota_{0}(t,v))\) and \(\pi(\gamma,\sigma):=(\gamma,\pi_{0}(\sigma))\).
1. There exists an open embedding of \((G,\Sigma)\) in \((\Gamma\ltimes G,\Gamma\ltimes\Sigma)\), \[\begin{CD}\mathbb{T}\times G^{(0)}@>{\iota_{0}}>>\Sigma@>{\pi_{0}}>>G\\ @|@V{\eta}VV@V{\eta^{\prime}}VV\\ \mathbb{T}\times G^{(0)}@>{\iota}>>\Gamma\ltimes\Sigma@>{\pi}>>\Gamma\ltimes G\end{CD}\] where \(\eta(\sigma):=(id_{\Gamma},\sigma)\) and \(\eta^{\prime}(g):=(id_{\Gamma},g)\), for \(\sigma\in\Sigma,g\in G\). By [2, Lemma 3.4], there is a canonical embedding \(C_{r}^{*}(G,\Sigma)\hookrightarrow C_{r}^{*}(\Gamma\ltimes G,\Gamma\ltimes\Sigma)\).
2. There is an open embedding of \(\Gamma\ltimes G^{(0)}\) with the trivial twist in \((\Gamma\ltimes G,\Gamma\ltimes\Sigma)\), \[\begin{CD}\mathbb{T}\times G^{(0)}@>>>\mathbb{T}\times(\Gamma\ltimes G^{(0)})@>>>\Gamma\ltimes G^{(0)}\\ @|@V{\theta}VV@VVV\\ \mathbb{T}\times G^{(0)}@>{\iota}>>\Gamma\ltimes\Sigma@>{\pi}>>\Gamma\ltimes G\end{CD}\] where \(\theta(t,(\gamma,v))=(\gamma,\iota_{0}(t,v))\). By the definition of a twisted groupoid, \(\pi_{0}^{-1}(u)=\iota_{0}(\mathbb{T}\times\{u\})\), for \(u\in G^{(0)}\), and \((\Gamma,\iota_{0}(\mathbb{T}\times G^{(0)}))=(\Gamma,\pi_{0}^{-1}(G^{(0)}))\) is open, since \(\Gamma\) is discrete and \(G^{(0)}\) is open. We use the notation \(\Gamma\ltimes G^{(0)}\subset^{\circ}(\Gamma\ltimes G,\Gamma\ltimes\Sigma)\) for this open inclusion. There is a natural isomorphism between \(\Gamma\ltimes_{r}C_{0}(G^{(0)})\) and \(C_{r}^{*}(\Gamma\ltimes G^{(0)})\), and again by [2, Lemma 3.4], there is a canonical embedding \(\Gamma\ltimes_{r}C_{0}(G^{(0)})\hookrightarrow C_{r}^{*}(\Gamma\ltimes G, \Gamma\ltimes\Sigma)\).
One of the main steps in proving our first main result is to use Proposition 2.9 for \(C_{r}^{*}(\Gamma\ltimes G,\Gamma\ltimes\Sigma)\). In order to do this, we need a condition to ensure topological principality of \((\Gamma\ltimes G,\Gamma\ltimes\Sigma)\).
**Definition 2.12**.: An action \(\Gamma\curvearrowright G\) is called _principally free_ if
\[\gamma.r(g)=s(g)\text{ implies }\gamma=id\text{ and }g\in G^{(0)}. \tag{2.8}\]
It is called _topologically principally free_ if there is a dense subset \(U\subseteq G^{(0)}\) such that (2.8) holds for each \(g\in G_{u}\) and each \(u\in U\).
The (topologically) principally free actions are (topologically) free. An action \(\Gamma\curvearrowright G\) is (topologically) principally free iff \(\Gamma\ltimes G\) is (topologically) principal. Note that this notion is defined only for actions on (topologically) principal groupoids.
## 3. Proof of The Main Results
In this section we prove the main results of the paper.
**Lemma 3.1**.: _Let \((G,\Sigma)\) and \((H,E)\) be twisted groupoids with \((G,\Sigma)\subset^{\circ}(H,E)\). Then,_
1. \(C_{r}^{*}(H,E)\overset{\iota}{\hookrightarrow}C_{r}^{*}(G,\Sigma)\) _and the restriction of_ \(\iota\) _to_ \(C_{0}(H^{(0)})\) _is an isomorphism onto_ \(C_{0}(G^{(0)})\)_._
_If in addition the twisted groupoids are topologically principal, we also have,_
_._
2. \(H(C_{0}(H^{(0)}))\overset{\iota_{2}}{\hookrightarrow}G(C_{0}(G^{(0)}))\) _and_ \(E(C_{0}(H^{(0)}))\overset{\iota_{1}}{\hookrightarrow}\Sigma(C_{0}(G^{(0)}))\)_, where_ \(\iota_{1}(\alpha_{n}(x),n,x):=(\alpha_{\iota(n)}(x),\iota(n),x)\) _and_ \(\iota_{2}[\alpha_{n}(x),\alpha_{n},x]:=[\alpha_{\iota(n)}(x),\alpha_{\iota(n)},x]\)_,_
3. _for the isomorphisms_ \((\phi_{2},\phi_{1}):(G(C_{0}(G^{(0)})),\Sigma(C_{0}(G^{(0)})))\to(G,\Sigma)\) _and_ \((\phi^{\prime}_{2},\phi^{\prime}_{1}):(H(C_{0}(H^{(0)})),E(C_{0}(H^{(0)}))) \to(H,E)\)_, defined in Proposition_ 2.9_, the following diagrams commute,_ \[\begin{CD}H(C_{0}(H^{(0)}))@>{\iota_{2}}>>G(C_{0}(G^{(0)}))\\ @V{\phi^{\prime}_{2}}VV@VV{\phi_{2}}V\\ H@>>>G\end{CD}\qquad\begin{CD}E(C_{0}(H^{(0)}))@>{\iota_{1}}>>\Sigma(C_{0}(G^{(0)}))\\ @V{\phi^{\prime}_{1}}VV@VV{\phi_{1}}V\\ E@>>>\Sigma\end{CD}\tag{3.1}\]
Proof.: Since \((H,E)\subset^{\circ}(G,\Sigma)\), the natural map \(C_{c}(H,E)\to C_{c}(G,\Sigma)\) extends to an isometric homomorphism \(\iota:C_{r}^{*}(H,E)\hookrightarrow C_{r}^{*}(G,\Sigma)\), by [2, Lemma 3.4]. Since the open support of an element in \(C_{c}(H,E)\) is a subset of the open support of that element in \(C_{c}(G,\Sigma)\), by the identification (2.6), we have, \(\iota|_{C_{0}(H^{(0)})}:C_{0}(H^{(0)})\cong C_{0}(G^{(0)})\), which proves (1). Put \(A:=C_{r}^{*}(G,\Sigma)\), \(B:=C_{r}^{*}(H,E)\), and \(C:=C_{0}(G^{(0)})\cong C_{0}(H^{(0)})\). For (2), let \([\alpha_{n}(x),\alpha_{n},x]\in H(C_{0}(H^{(0)}))\), i.e., \(n\in N_{B}(C)\), \(x\in\operatorname{dom}(n)\). By the definition of normalizers, \(\iota(N_{B}(C))=N_{A}(C)\cap\iota(B)\). On the other hand, since \(\iota\) maps \(C_{0}(H^{(0)})\) into \(C_{0}(G^{(0)})\), \(\operatorname{dom}(\iota(n))=\operatorname{dom}(n)\), thus \([\alpha_{\iota(n)}(x),\alpha_{\iota(n)},x]\in G(C_{0}(G^{(0)}))\). It is easy to check that \(\iota_{2}\) is a groupoid embedding, and the same for \(\iota_{1}\). To prove (3), let \(n\in N_{B}(C)\) and put \(S_{1}:=\operatorname{supp}^{\prime}(\iota(n))\subseteq G\) and \(S_{2}:=\operatorname{supp}^{\prime}(n)\subseteq H\); then \(\phi^{\prime}_{2}([\alpha_{n}(x),\alpha_{n},x])=S_{2}x\) and \(\phi_{2}([\alpha_{\iota(n)}(x),\alpha_{\iota(n)},x])=S_{1}x\). By the proof of part (1), \(S_{2}\subseteq S_{1}\), and [16, Proposition 4.7] implies that \(S_{1}\) and \(S_{2}\) are bisections and \(S_{1}x=S_{2}x\), for each \(x\). Finally, \(\phi^{\prime}_{1}(y,n,x)=(n(S_{2}x)/\sqrt{n^{*}n(x)},S_{2}x)=(\iota(n)(S_{1}x)/ \sqrt{\iota(n^{*}n)(x)},S_{1}x)=\phi_{1}(y,\iota(n),x)\), as required.
**Lemma 3.2**.: _Let \(B_{i}:=C_{r}^{*}(H_{i},E_{i})\subset C_{r}^{*}(G_{i},\Sigma_{i}):=A_{i}\), \(i=1,2\), where \((G_{i},\Sigma_{i})\) and \((H_{i},E_{i})\) are twisted topologically principal groupoids with \((H_{i},E_{i})\subset^{\circ}(G_{i},\Sigma_{i})\). Assume that there exists a Cartan invariant isomorphism \(\psi:A_{1}\to A_{2}\), mapping \(B_{1}\) onto \(B_{2}\). Then the isomorphism \((\phi^{\prime},\phi):(G_{1},\Sigma_{1})\to(G_{2},\Sigma_{2})\) induced by Renault's canonical isomorphism (2.5) maps \(E_{1}\) and \(H_{1}\) to \(E_{2}\) and \(H_{2}\), respectively, yielding a twisted groupoid isomorphism \((H_{1},E_{1})\cong(H_{2},E_{2})\)._
Proof.: Let us identify \(C_{i}:=C_{0}(G_{i}^{(0)})\) with \(C_{0}(H_{i}^{(0)})\), for \(i=1,2\). Let \((\phi_{i},\phi^{\prime}_{i}):(G_{i}(C_{i}),\Sigma_{i}(C_{i}))\to(G_{i},\Sigma_{i})\) be the isomorphism defined in Proposition 2.9, and \((\omega^{\prime},\omega):(G_{1}(C_{1}),\Sigma_{1}(C_{1}))\to(G_{2}(C_{2}), \Sigma_{2}(C_{2}))\) be as in (2.5). We have commutative diagrams,
By Lemma 3.1, we also have the following commutative diagrams,
It is enough to observe that \(\omega(E_{1}(C_{1}))=E_{2}(C_{2})\) and \(\omega^{\prime}(H_{1}(C_{1}))=H_{2}(C_{2})\). Let \(n\in N_{B_{1}}(C_{1})=N_{A_{1}}(C_{1})\cap B_{1}\) and \(x\in H_{1}^{(0)}=G_{1}^{(0)}\) with \(x\in\operatorname{dom}(n)\), then \((\alpha_{n}(x),n,x)\in E_{1}(C_{1})\), and by the paragraph before Definition 2.11, we have, \(\omega((\alpha_{n}(x),n,x))=(\psi(\alpha_{n}(x)),\psi(n),\psi(x))=(\alpha_{ \psi(n)}(\psi(x)),\psi(n),\psi(x)).\) Since \(\psi\) is Cartan invariant, \(\psi(n)\in N_{B_{2}}(C_{2})\) and \(\psi(x)\in G_{2}^{(0)}\), and in particular, \((\alpha_{\psi(n)}(\psi(x)),\psi(n),\psi(x))\in E_{2}(C_{2})\). Similarly, \(\omega^{\prime}(H_{1}(C_{1}))=H_{2}(C_{2})\). As the following diagram commutes,
the restrictions \(\phi|_{E_{1}}\) and \(\phi^{\prime}|_{H_{1}}\) induce an isomorphism: \((H_{1},E_{1})\to(H_{2},E_{2})\).
**Lemma 3.3**.: _Let \((A,B)\) be a Cartan pair, \(\Gamma\curvearrowright A\) be a Cartan invariant action, and \(\Gamma\curvearrowright(G(B),\Sigma(B))\) be as in (2.4). Then, for \(X:=\hat{B}\), the following are equivalent:_
1. \(\Gamma\ltimes G(B)\) _is topologically principal,_
2. \(\beta:\Gamma\curvearrowright G(B)\) _is topologically principally free,_
3. _the elements_ \(x\in X\) _with the following property form a dense subset of_ \(X\)_:_ \(\alpha_{n}(x)=\gamma.x\) _with_ \(n\in N_{A}(B)\) _and_ \(x\in\operatorname{dom}(n)\) _holds only when_ \(\gamma=id_{\Gamma}\) _and_ \(n\in B\)_,_
4. _The elements_ \(x\in X\) _with the following property form a dense subset of_ \(X\)_:_ \(n^{*}(\gamma.b)n(x)=b(x)n^{*}n(x)\) _with_ \(n\in N_{A}(B)\) _and_ \(x\in\operatorname{dom}(n)\)_, for all_ \(b\in B\)_, holds only when_ \(\gamma=id_{\Gamma}\) _and_ \(n\in B\)_._
Proof.: (1) and (2) are equivalent by the observation after Definition 2.12. For \(g=[\alpha_{n}(x),\alpha_{n},x]\in G(B)\), \(\gamma.r(g)=s(g)\) is equivalent to \(\gamma.\alpha_{n}(x)=x\). By the definition of \(G(B)\), \(g\) is in the unit space if \(n\in B\). Thus \(x\in U_{\beta}\) simply means that \(\alpha_{n}(x)=\gamma^{-1}.x\) holds only when \(\gamma=id_{\Gamma}\) and \(n\in B\), for \(n\in N_{A}(B)\) with \(x\in\operatorname{dom}(n)\). Thus (2) and (3) are equivalent. To show the equivalence of (3) and (4), it is enough to observe that \(n^{*}(\gamma.b)n(x)=b(x)n^{*}n(x)\), for \(b\in B\), if and only if \(\alpha_{n}(x)=\gamma^{-1}.x\), for \(n\in N_{A}(B)\) with \(x\in\operatorname{dom}(n)\). For this, let \(n\in N_{A}(B)\), \(x\in\operatorname{dom}(n)\) and \(\gamma\in\Gamma\) such that \(n^{*}(\gamma.b)n(x)=b(x)n^{*}n(x)\), for \(b\in B\), then,
\[b(\gamma^{-1}.\alpha_{n}(x))n^{*}n(x)=(\gamma.b)(\alpha_{n}(x))n^{*}n(x)=n^{*} (\gamma.b)n(x)=b(x)n^{*}n(x),\]
for \(b\in B\). Since \(x\in\operatorname{dom}(n)\) and \(b\) is arbitrary, \(\gamma^{-1}.\alpha_{n}(x)=x\). Conversely, if \(\gamma^{-1}.\alpha_{n}(x)=x\), then \(b(x)n^{*}n(x)=b(\gamma^{-1}.\alpha_{n}(x))n^{*}n(x)=n^{*}(\gamma.b)n(x)\)
A Cartan invariant action \(\Gamma\curvearrowright A\) is called topologically principally free if the condition (4) above holds.
Proof of Theorem A.: \((1)\Leftrightarrow(2)\): This follows directly from Definition 2.7.
\((2)\Rightarrow(3)\): The map \(\psi^{\prime}\) lifts to \(\theta^{\prime}:C_{c}(\Gamma_{1}\ltimes G_{1},\Gamma_{1}\ltimes\Sigma_{1}) \to C_{c}(\Gamma_{2}\ltimes G_{2},\Gamma_{2}\ltimes\Sigma_{2})\), mapping \(C_{c}(G_{1}^{(0)})=C_{c}(\Sigma_{1}^{(0)})\) and \(C_{c}(G_{1},\Sigma_{1})\) onto \(C_{c}(G_{2}^{(0)})\) and \(C_{c}(G_{2},\Sigma_{2})\), respectively, and it maps \(C_{c}(\Gamma_{1}\ltimes G_{1}^{(0)},\mathbb{T}\times(\Gamma_{1}\ltimes G_{1}^ {(0)}))\) onto \(C_{c}(\Gamma_{2}\ltimes G_{2}^{(0)},\mathbb{T}\times(\Gamma_{2}\ltimes G_{2}^ {(0)}))\), by part (2) of the paragraph before Definition 2.12. This induces an isomorphism \(\theta:C_{r}^{*}(\Gamma_{1}\ltimes G_{1},\Gamma_{1}\ltimes\Sigma_{1})\to C_{r }^{*}(\Gamma_{2}\ltimes G_{2},\Gamma_{2}\ltimes\Sigma_{2})\). The embedding of the trivial twist of \(\Gamma_{i}\ltimes G_{i}^{(0)}\) in \((\Gamma_{i}\ltimes G_{i},\Gamma_{i}\ltimes\Sigma_{i})\) is an open embedding, \((G_{i},\Sigma_{i})\subset^{\circ}(\Gamma_{i}\ltimes G_{i},\Gamma_{i}\ltimes \Sigma_{i})\), and \((id_{i},G_{i}^{(0)})\subset^{\circ}(\Gamma_{i}\ltimes G_{i},\Gamma_{i}\ltimes \Sigma_{i})\), for \(i=1,2\). By the proof of [2, Lemma 3.4], given \(f_{i}\in C_{c}(G_{i}^{(0)})\), \(f_{i}^{\prime}\in C_{c}(\Gamma_{i}\ltimes G_{i}^{(0)})\), and \(f_{i}^{\prime\prime}\in C_{c}(G_{i},\Sigma_{i})\), we have, \(||f_{i}||_{C_{0}(G_{i}^{(0)})}=||f_{i}||_{C_{r}^{*}(\Gamma_{i}\ltimes G_{i}, \Gamma_{i}\ltimes\Sigma_{i})}\), \(||f_{i}^{\prime}||_{C_{r}^{*}(\Gamma_{i}\ltimes G_{i}^{(0)})}=||f_{i}^{\prime}|| _{C_{r}^{*}(\Gamma_{i}\ltimes G_{i},\Gamma_{i}\ltimes\Sigma_{i})}\), and \(||f_{i}^{\prime\prime}||_{C_{r}^{*}(G_{i},\Sigma_{i})}=||f_{i}^{\prime\prime}|| _{C_{r}^{*}(\Gamma_{i}\ltimes G_{i},\Gamma_{i}\ltimes\Sigma_{i})}\). Therefore, \(\theta\) maps \(C_{0}(G_{1}^{(0)})\), \(\Gamma_{1}\ltimes_{r}C_{0}(G_{1}^{(0)})=C_{r}^{*}(\Gamma_{1}\ltimes G_{1}^ {(0)})\), and \(C_{r}^{*}(G_{1},\Sigma_{1})\) onto \(C_{0}(G_{2}^{(0)})\), \(\Gamma_{2}\ltimes_{r}C_{0}(G_{2}^{(0)})\), and \(C_{r}^{*}(G_{2},\Sigma_{2})\), respectively.
\((3)\Rightarrow(2)\): Put \(A_{i}:=C_{r}^{*}(\Gamma_{i}\ltimes G_{i},\Gamma_{i}\ltimes\Sigma_{i})\), \(B_{i}:=C^{*}(G_{i},\Sigma_{i})\), \(C_{i}:=C_{0}(G_{i}^{(0)})\), and \(\Gamma_{i}\ltimes_{r}C_{0}(G_{i}^{(0)})\cong C_{r}^{*}(\Gamma_{i}\ltimes G_{i} ^{(0)})=:D_{i}\), for \(i=1,2\). Then, there exists an isomorphism \(A_{1}\cong A_{2}\), mapping \(B_{1},C_{1}\), and \(D_{1}\) onto \(B_{2},C_{2}\), and \(D_{2}\), respectively. The additional condition ensures that \(\Gamma_{i}\ltimes G_{i}\) is topologically principal, and by Proposition 2.9, there exists a twisted groupoid isomorphism \((\psi,\psi^{\prime}):(\Gamma_{1}\ltimes G_{1},\Gamma_{1}\ltimes\Sigma_{1}) \to(\Gamma_{2}\ltimes G_{2},\Gamma_{2}\ltimes\Sigma_{2})\). Since \((G_{i},\Sigma_{i})\subset^{\circ}(\Gamma_{i}\ltimes G_{i},\Gamma_{i}\ltimes \Sigma_{i})\), by Lemma 3.2, \(\psi(id_{\Gamma_{1}},G_{1})=(id_{\Gamma_{2}},G_{2})\) and \(\psi^{\prime}(id_{\Gamma_{1}},\Sigma_{1})=(id_{\Gamma_{2}},\Sigma_{2})\), yielding a twisted groupoid isomorphism \((G_{1},\Sigma_{1})\cong(G_{2},\Sigma_{2})\). Finally, since \(\Gamma_{i}\ltimes G_{i}^{(0)}\subset^{\circ}(\Gamma_{i}\ltimes G_{i},\Gamma_{i }\ltimes\Sigma_{i})\) and \(\psi(\Gamma_{1},G_{1}^{(0)})=\psi(\Gamma_{1}\ltimes G_{1}^{(0)})=\Gamma_{2} \ltimes G_{2}^{(0)}=(\Gamma_{2},G_{2}^{(0)})\), \(\psi^{\prime}\) maps \((\Gamma_{1},\mathbb{T}\times\Sigma_{1}^{(0)})\) onto \((\Gamma_{2},\mathbb{T}\times\Sigma_{2}^{(0)})\).
By Proposition 2.10, statement (3) above could be replaced with the requirement that there exists an isomorphism \(\theta:\Gamma_{1}\ltimes_{r}C_{r}^{*}(G_{1},\Sigma_{1})\cong\Gamma_{2}\ltimes_{r }C_{r}^{*}(G_{2},\Sigma_{2})\) with,
* \(\theta(C_{0}(G_{1}^{(0)}))=C_{0}(G_{2}^{(0)})\),
* \(\theta(\Gamma_{1}\ltimes_{r}C_{0}(G_{1}^{(0)}))=\Gamma_{2}\ltimes_{r}C_{0}(G_{2 }^{(0)})\),
* \(\theta(C_{r}^{*}(G_{1},\Sigma_{1}))=C_{r}^{*}(G_{2},\Sigma_{2})\).
This in particular shows that Corollary C follows from Proposition 2.8.
Proof of Theorem D.: Put \(C_{i}:=C_{0}(G_{i}^{(0)})\), \(i=1,2\), and let,
\[(\theta,\theta^{\prime}):(G_{1}(C_{1}),\Sigma(C_{1}))\to(G_{2}(C_{2}),\Sigma(C_{2}))\]
be as in (2.5). Since the actions are Cartan invariant, \(\Gamma_{i}\) acts on \((G_{i}(C_{i}),\Sigma_{i})\) by (2.4). Let \(a:\Gamma_{1}\times G_{1}^{(0)}\to\Gamma_{2}\) and \(b:\Gamma_{2}\times G_{2}^{(0)}\to\Gamma_{1}\) be continuous maps yielding \(\Gamma_{1}\curvearrowright G_{1}^{(0)}\sim_{coe}\Gamma_{2}\curvearrowright G_{2}^{(0)}\). We will show that \(\theta(\gamma.[y,\alpha_{n},x])=a(\gamma,x).\theta([y,\alpha_{n},x])\), for \(\gamma\in\Gamma_{1}\), \(\gamma^{\prime}\in\Gamma_{2}\), \([y,\alpha_{n},x]\in G_{1}(C_{1})\), and \([y^{\prime},\alpha_{n^{\prime}},x^{\prime}]\in G_{2}(C_{2})\), and similarly for \(\theta^{-1}\) and \(b\). By continuity of \(a\), there exists an open neighbourhood \(U\subset G_{1}^{(0)}\) of \(x\) such that \(a(\gamma,u)=a(\gamma,x)\), for \(u\in U\). By assumption,
\[a(\gamma,x)=a(\gamma,u)=a(\gamma,s([\alpha_{n}(u),\alpha_{n},u]))=a(\gamma,r([ \alpha_{n}(u),\alpha_{n},u]))=a(\gamma,\alpha_{n}(u)),\]
and since the actions preserve Cartan subalgebras, \(\gamma_{1}.n_{1}\) and \(\gamma_{2}.n_{2}\) are normalizers in \(C_{r}^{*}(G_{1},\Sigma_{1})\) and \(C_{r}^{*}(G_{2},\Sigma_{2})\), respectively, for \(\gamma_{i}\in\Gamma_{i}\) and normalizers \(n_{i}\) in
\(C_{r}^{*}(G_{i},\Sigma_{i})\), \(i=1,2\). For \(x\in G_{1}^{(0)}\),
\[(a(\gamma,x)\psi(n^{*}n))(\psi(\gamma.x))=\psi(n^{*}n)(\psi(x))=n^{*}n(x).\]
Thus, \(\mathrm{dom}(a(\gamma,x)\psi(n))=\psi(\mathrm{dom}(n))\). Therefore,
\[\alpha_{a(\gamma,x)\psi(n)}(\psi(\gamma u)) =\alpha_{a(\gamma,x)\psi(n)}(a(\gamma,x)\psi(u))=a(\gamma,x) \alpha_{\psi(n)}(\psi(u))\] \[=a(\gamma,\alpha_{n}(u))\psi(\alpha_{n}(u))=\psi(\gamma\alpha_{n} (u))\] \[=\psi(\alpha_{\gamma n}(\gamma u))=\alpha_{\psi(\gamma n)}(\psi( \gamma u)),\]
i.e., \(\alpha_{a(\gamma,x)\psi(n)}|_{\psi(\gamma.U)}=\alpha_{\psi(\gamma n)}|_{\psi( \gamma.U)}\). Finally, by the paragraph after Proposition 2.8, \(\theta(\gamma.[y,\alpha_{n},x])=[\psi(\gamma y),\alpha_{\psi(\gamma n)},a( \gamma,x)\psi(x)]=a(\gamma,x).\theta([y,\alpha_{n},x]).\) Similar equality holds for \(\theta^{-1}\) and \(b\).
Now by Lemma 2.2, we get Corollary E. As a final remark, note that in the last theorem, condition (1.1) implies \(\Gamma_{1}\curvearrowright G_{1}\sim_{coe}\Gamma_{2}\curvearrowright G_{2}\) with respect to \(\theta\), which is weaker than \(\Gamma_{1}\curvearrowright\Sigma_{1}\sim_{coe}\Gamma_{2}\curvearrowright\Sigma_ {2}\) with respect to \(\theta^{\prime}\), by Proposition 2.5. Similarly, the following conditions imply \(\Gamma_{1}\curvearrowright\Sigma_{1}\sim_{coe}\Gamma_{2}\curvearrowright\Sigma_ {2}\) with respect to \(\theta^{\prime}\),
1. for normalizer \(n\) of \(C_{r}^{*}(G_{1},\Sigma_{1})\), \(x\in\mathrm{dom}(n)\), \(\gamma\in\Gamma_{1}\), there exist \(b,b^{\prime}\in C_{0}(G_{1}^{(0)})\) such that \(b(\gamma x),b^{\prime}(x)>0\) and \(\psi((\gamma.n)b)=a(\gamma,x)\psi(nb^{\prime})\),
2. for normalizer \(n\) of \(C_{r}^{*}(G_{2},\Sigma_{2})\), \(\gamma\in\Gamma_{2}\) and \(x\in\mathrm{dom}(n)\), there exist \(b,b^{\prime}\in C_{0}(G_{2}^{(0)})\) such that \(b(\gamma x),b^{\prime}(x)>0\) and \(\psi^{-1}((\gamma.n)b)=b(\gamma,x)\psi^{-1}(nb^{\prime})\).
## Appendix
As seen right after Definition 2.7, continuous orbit equivalence of actions on twisted groupoids could be defined similarly to Definition 2.1. In this definition, as expected, the natural \(\mathbb{T}\) action on the twist plays an important role. This appendix justifies such a definition and gives an analog of Theorem 2.3 in this general setting.
Following [5], by a cocycle we mean a map \(c\) from a groupoid \(G\) to a group \(\Gamma\), which is multiplicative, in the sense that, \(c(g_{1}g_{2})=c(g_{1})c(g_{2})\), for \((g_{1},g_{2})\in G^{(2)}\).
**Definition 3.4**.: Two actions \(\Gamma_{1}\curvearrowright(G_{1},\Sigma_{1})\) and \(\Gamma_{2}\curvearrowright(G_{2},\Sigma_{2})\) are continuous \(r\)-orbit (resp., \(s\)-orbit) equivalent if there exist an isomorphism \(\phi:(G_{1},\Sigma_{1})\to(G_{2},\Sigma_{2})\), continuous maps \(a:\Gamma_{1}\times G_{1}^{(0)}\to\Gamma_{2}\), \(b:\Gamma_{2}\times G_{2}^{(0)}\to\Gamma_{1}\), and cocycles \(t:\Gamma_{1}\ltimes\Sigma_{1}^{(0)}\to\mathbb{T}\), \(t^{\prime}:\Gamma_{2}\ltimes\Sigma_{2}^{(0)}\to\mathbb{T}\), such that \(\phi(\gamma_{1}g_{1})=\frac{t_{\gamma_{1},r(g_{1})}}{t_{\gamma_{1},s(g_{1})}}a(\gamma_{1},r(g_{1}))\phi(g_{1})\) and \(\phi^{-1}(\gamma_{2}g_{2})=\frac{t^{\prime}_{\gamma_{2},r(g_{2})}}{t^{\prime}_{\gamma_{2},s(g_{2})}}b(\gamma_{2},r(g_{2}))\phi^{-1}(g_{2})\) \(\big{(}\)resp., \(\phi(\gamma_{1}g_{1})=\frac{t_{\gamma_{1},r(g_{1})}}{t_{\gamma_{1},s(g_{1})}}a(\gamma_{1},s(g_{1}))\phi(g_{1})\) and \(\phi^{-1}(\gamma_{2}g_{2})=\frac{t^{\prime}_{\gamma_{2},r(g_{2})}}{t^{\prime}_{\gamma_{2},s(g_{2})}}b(\gamma_{2},s(g_{2}))\phi^{-1}(g_{2})\big{)}\), for \(\gamma_{i}\in\Gamma_{i}\) and \(g_{i}\in G_{i}\), \(i=1,2\). They are continuous orbit equivalent if both conditions hold for the same maps.
Note that Lemma 2.2 also holds for actions on twisted groupoids. Using this, by a similar argument to the one in the proof of Theorem 2.3, one can show that the above notion of orbit equivalence coincides with Definition 2.7 for topologically free actions.
**Theorem 3.5**.: _Let \(\Gamma_{1}\curvearrowright(G_{1},\Sigma_{1})\), \(\Gamma_{2}\curvearrowright(G_{2},\Sigma_{2})\) be two topologically free actions. The following are equivalent:_
1. \(\Gamma_{1}\curvearrowright(G_{1},\Sigma_{1})\sim_{coe}\Gamma_{2}\curvearrowright( G_{2},\Sigma_{2})\)_,_
2. _there exists a groupoid isomorphism_ \((\psi^{\prime},\psi):(\Gamma_{1}\ltimes G_{1},\Gamma_{1}\ltimes\Sigma_{1})\to( \Gamma_{2}\ltimes G_{2},\Gamma_{2}\ltimes\Sigma_{2})\) _such that_ \(\psi(\Gamma_{1},\mathbb{T}\times\Sigma_{1}^{(0)})=(\Gamma_{2},\mathbb{T}\times \Sigma_{2}^{(0)})\) _and_ \(\psi(id_{1},\Sigma_{1})=(id_{2},\Sigma_{2}).\)__
Proof.: \((1)\Rightarrow(2)\): Let \(\phi:\Sigma_{1}\to\Sigma_{2}\) be the isomorphism given by the orbit equivalence. Let \(\psi:\Gamma_{1}\ltimes\Sigma_{1}\to\Gamma_{2}\ltimes\Sigma_{2}\) and \(\chi:\Gamma_{2}\ltimes\Sigma_{2}\to\Gamma_{1}\ltimes\Sigma_{1}\) map \((\gamma_{1},\sigma_{1})\) to \((a(\gamma_{1},r(\sigma_{1})),t_{\gamma_{1},r(\sigma_{1})}\phi(\sigma_{1}))\) and \((\eta_{2},h_{2})\) to \((b(\eta_{2},r(h_{2})),t^{\prime}_{\eta_{2},r(h_{2})}\phi^{-1}(h_{2}))\), respectively. To see that \(\psi\) and \(\chi\) are groupoid homomorphisms, let \(((\gamma_{1},\sigma_{1}),(\gamma_{2},\sigma_{2}))\in(\Gamma_{1}\ltimes\Sigma_{1})^{(2)}\), that is, \((\gamma_{2}^{-1}\sigma_{1},\sigma_{2})\in\Sigma_{1}^{(2)}\), \(r(\sigma_{2})=s(\gamma_{2}^{-1}\sigma_{1})=\gamma_{2}^{-1}s(\sigma_{1})\), and \(\gamma_{2}r(\sigma_{2})=s(\sigma_{1})\), then for \(\gamma\in\Gamma_{1}\), \(a(\gamma,\gamma_{2}r(\sigma_{2}^{-1}))=a(\gamma,\gamma_{2}s(\sigma_{2}))=a(\gamma,\gamma_{2}r(\sigma_{2}))=a(\gamma,s(\sigma_{1}))=a(\gamma,r(\sigma_{1}))\), and by Lemma 2.2, \[\psi(\gamma_{1},\sigma_{1})\psi(\gamma_{2},\sigma_{2})=(a(\gamma_{1},r(\sigma_{1})),t_{\gamma_{1},r(\sigma_{1})}\phi(\sigma_{1}))(a(\gamma_{2},r(\sigma_{2})),t_{\gamma_{2},r(\sigma_{2})}\phi(\sigma_{2}))\] \[=(a(\gamma_{1},r(\sigma_{1}))a(\gamma_{2},r(\sigma_{2})),t_{\gamma_{1},r(\sigma_{1})}t_{\gamma_{2},r(\sigma_{2})}(a(\gamma_{2},r(\sigma_{2}))^{-1}\phi(\sigma_{1}))\phi(\sigma_{2}))\] \[=(a(\gamma_{1}\gamma_{2},r((\gamma_{2}^{-1}\sigma_{1})\sigma_{2})),t_{\gamma_{1},r(\sigma_{1})}t_{\gamma_{2},r(\sigma_{2})}(a(\gamma_{2}^{-1},r(\gamma_{2}\sigma_{2}^{-1}))\phi(\sigma_{1}))\phi(\sigma_{2}))\] \[=(a(\gamma_{1}\gamma_{2},r((\gamma_{2}^{-1}\sigma_{1})\sigma_{2})),t_{\gamma_{1},r(\sigma_{1})}t_{\gamma_{2},r(\sigma_{2})}t_{\gamma_{2}^{-1},s(\sigma_{1})}t_{\gamma_{2}^{-1},r(\sigma_{1})}^{-1}(\phi(\gamma_{2}^{-1}\sigma_{1}))\phi(\sigma_{2}))\] \[=(a(\gamma_{1}\gamma_{2},r((\gamma_{2}^{-1}\sigma_{1})\sigma_{2})),t_{\gamma_{1}\gamma_{2},\gamma_{2}^{-1}r(\sigma_{1})}(\phi(\gamma_{2}^{-1}\sigma_{1}))\phi(\sigma_{2}))\] \[=\psi(\gamma_{1}\gamma_{2},(\gamma_{2}^{-1}\sigma_{1})\sigma_{2})=\psi((\gamma_{1},\sigma_{1})(\gamma_{2},\sigma_{2})).\]
Similarly, \(\chi\) is a groupoid homomorphism and the inverse of \(\psi\). The isomorphism \(\psi\) now induces an isomorphism \(\psi^{\prime}:\Gamma_{1}\ltimes G_{1}\to\Gamma_{2}\ltimes G_{2}\).
\((2)\Rightarrow(1):\) Let \(\psi:\Gamma_{1}\ltimes\Sigma_{1}\to\Gamma_{2}\ltimes\Sigma_{2}\) and \(\phi:\Sigma_{1}\to\Sigma_{2}\) be isomorphisms with \(\psi(\Gamma_{1},\mathbb{T}\times\Sigma_{1}^{(0)})=(\Gamma_{2},\mathbb{T}\times \Sigma_{2}^{(0)})\) and \(\psi(id_{1},\sigma_{1})=(id_{2},\phi(\sigma_{1}))\). Let \(a:\Gamma_{1}\times\Sigma_{1}^{(0)}\to\Gamma_{2}\) map \((\gamma_{1},r(\sigma_{1}))\) to the first component of \(\psi(\gamma_{1},r(\sigma_{1}))\) and \(b:\Gamma_{2}\times\Sigma_{2}^{(0)}\to\Gamma_{1}\) map \((\gamma_{2},r(\sigma_{2}))\) to the first component of \(\psi^{-1}(\gamma_{2},r(\sigma_{2}))\). By assumption, there exists \(t_{\gamma_{1},r(\sigma_{1})}\in\mathbb{T}\) such that \(\psi(\gamma_{1},r(\sigma_{1}))=(a(\gamma_{1},r(\sigma_{1})),t_{\gamma_{1},r( \sigma_{1})}.\sigma_{2})\), where \(\sigma_{2}\in\Sigma_{2}^{(0)}\), then \(\psi(s(\gamma_{1},r(\sigma_{1})))=s(a(\gamma_{1},r(\sigma_{1})),t_{\gamma_{1},r( \sigma_{1})}\sigma_{2})\) implies \(\psi(id_{1},r(\sigma_{1}))=(id_{2},\sigma_{2})\), and \(\sigma_{2}=\phi(r(\sigma_{1}))\), that is, \(\psi(\gamma_{1},r(\sigma_{1}))=(a(\gamma_{1},r(\sigma_{1})),t_{\gamma_{1},r( \sigma_{1})}\phi(r(\sigma_{1}))).\) Also,
\[\psi(\gamma_{1},\sigma_{1}) =\psi(\gamma_{1},r(\sigma_{1}))\psi(id_{\Gamma_{1}},\sigma_{1})=(a( \gamma_{1},r(\sigma_{1})),t_{\gamma_{1},r(\sigma_{1})}\phi(r(\sigma_{1})))(id_{ \Gamma_{1}},\phi(\sigma_{1}))\] \[=(a(\gamma_{1},r(\sigma_{1})),t_{\gamma_{1},r(\sigma_{1})}\phi( \sigma_{1})).\]
The map \(t:\Gamma_{1}\ltimes\Sigma_{1}^{(0)}\to\mathbb{T}\); \((\gamma_{1},u_{1})\mapsto t_{\gamma_{1},u_{1}}\) is multiplicative, i.e. a cocycle. Similarly, a multiplicative map \(t^{\prime}:\Gamma_{2}\ltimes\Sigma_{2}^{(0)}\to\mathbb{T}\) can be defined with respect to \(\psi^{-1}\), and \(t_{\gamma_{1},s(\sigma_{1})}^{-1}a(\gamma_{1},r(\sigma_{1}^{-1}))\phi(\sigma_{1})=t_{\gamma_{1}^{-1},\gamma_{1}r(\sigma_{1})}\phi(\gamma_{1}\sigma_{1})\). By Lemma 2.2, we get \(\phi(\gamma_{1}\sigma_{1})=\frac{t_{\gamma_{1},r(\sigma_{1})}}{t_{\gamma_{1},s(\sigma_{1})}}a(\gamma_{1},r(\sigma_{1}))\phi(\sigma_{1})\), for \(\gamma_{1}\in\Gamma_{1}\) and \(\sigma_{1}\in\Sigma_{1}\). The equality for \(\phi^{-1}\) is shown similarly.
|
2302.01974 | Conic Sparsity: Estimation of Regression Parameters in Closed Convex
Polyhedral Cones | Statistical problems often involve linear equality and inequality constraints
on model parameters. Direct estimation of parameters restricted to general
polyhedral cones, particularly when one is interested in estimating low
dimensional features, may be challenging. We use a dual form parameterization
to characterize parameter vectors restricted to lower dimensional faces of
polyhedral cones and use the characterization to define a notion of 'sparsity'
on such cones. We show that the proposed notion agrees with the usual notion of
sparsity in the unrestricted case and prove the validity of the proposed
definition as a measure of sparsity. The identifiable parameterization of the
lower dimensional faces allows a generalization of popular spike-and-slab
priors to a closed convex polyhedral cone. The prior measure utilizes the
geometry of the cone by defining a Markov random field over the adjacency graph
of the extreme rays of the cone. We describe an efficient way of computing the
posterior of the parameters in the restricted case. We illustrate the
usefulness of the proposed methodology for imposing linear equality and
inequality constraints by using wearables data from the National Health and
Nutrition Examination Survey (NHANES) actigraph study where the daily average
activity profiles of participants exhibit patterns that seem to obey such
constraints. | Neha Agarwala, Arkaprava Roy, Anindya Roy | 2023-02-03T19:47:56Z | http://arxiv.org/abs/2302.01974v2 | Characterization and estimation of high dimensional sparse regression parameters under linear inequality constraints
###### Abstract
Modern statistical problems often involve linear inequality constraints on model parameters. Ignoring natural parameter constraints usually results in less efficient statistical procedures. To this end, we define a notion of 'sparsity' for such restricted sets using lower-dimensional features. We allow our framework to be flexible, so that the number of restrictions may be higher than the number of parameters. One such situation arises in the estimation of a monotone curve using a non-parametric approach, e.g. splines. We show that the proposed notion of sparsity agrees with the usual notion of sparsity in the unrestricted case and prove the validity of the proposed definition as a measure of sparsity. The proposed sparsity measure also allows us to generalize popular priors for sparse vector estimation to the constrained case.
_Key words_: sparsity, convex polyhedral cone, high dimension, adjacency graph, spike-and-slab prior, continuous shrinkage prior.
## 1 Introduction
In this chapter, we consider Bayesian estimation of possibly high dimensional parameters that are known to be restricted to a pointed closed convex polyhedral cone. We develop everything
in the backdrop of the normal mean estimation problem where the mean vector is constrained to a convex polyhedral cone, but the concepts and the prior probability distributions developed here generalize easily to other models. Often, in constrained problems, the restricted models have to be embedded in higher dimensional models where the parameter space is unrestricted or at least more amenable to standard estimation methods. Thus, model complexity can be high in constrained problems even if the dimension of the observations is not. In such situations, some form of low dimensional formulation of the problem is required to make statistical inference possible without demanding a large sample size. The embedding into a higher dimensional space provides a parameterization of the model. For successful inference over a 'low dimensional' set of parameters, the embedding needs to be an identifiable parameterization over that set. This property of the embedding is not guaranteed. We look at the restriction of parameters to a pointed full-dimensional closed convex cone defined by a set of linear inequalities
\[\mathbb{C}=\{\boldsymbol{\mu}\in\mathbb{R}^{n}:\boldsymbol{A}\boldsymbol{\mu} \geq 0\} \tag{1}\]
where \(\boldsymbol{A}\) is some fixed \(m\times n\) matrix. Since the cone is the intersection of finitely many half-spaces, it is a polyhedral cone. We consider the natural embedding of the cone using its minimal set of generators and consider its restriction to lower dimensional faces of the cone. We show that ascribing sparsity to the parameters of the embedding is not sufficient to have an identifiable representation of the lower dimensional parameter vectors.
The main contribution of this chapter is an identifiable parameterization of vectors lying in lower dimensional subsets of the cone, described in terms of the minimal generator representation. We define such vectors lying on the lower dimensional faces as 'sparse' vectors because this notion of sparsity agrees with the usual notion of sparsity when the cone is an orthant. Then, using the proposed definition of sparsity, we define flexible prior distributions that are either fully or nearly fully supported on the set of 'sparse' vectors and allow one to carry out Bayesian inference under sparsity and conic constraints.
There are many motivating applications where 'sparse' signals for constrained parameters arise, and thus estimation of these parameters under such restrictions is desired. Some examples that are popular in economics are estimation of the cumulative distribution function (CDF),
demand curve estimation, portfolio optimization and trend detection in econometrics. However, constrained parameter inference is not limited to business and economics; other examples include dose determination in treatments, signal detection in radar processing, and shape-constrained inference in non-parametric statistics.
Let \(\mathbf{y}=(y_{1},\ldots,y_{n})^{\prime}\sim N(\mathbf{\mu},\sigma^{2}\mathbf{I})\) where the parameter of interest \(\mathbf{\mu}=(\mu_{1},\ldots,\mu_{n})^{\prime}\in\mathbb{C}\). We assume that \(\mathbb{C}\) has non-zero interior volume with respect to the \(n\) dimensional Lebesgue measure. We consider a general framework where \(\mathbf{\mu}\) is constrained to a proper polyhedral cone \(\mathbf{A}\mathbf{\mu}\geq\mathbf{0}\). A proper polyhedral cone is a closed convex full polyhedral cone that is pointed. A pointed cone is one that does not contain any non-trivial subspace, and it is full or full-dimensional if the dual cone is pointed. We assume the cone is pointed (acute) and irreducible, i.e. the \(m\times n\) (\(m\geq n\)) matrix describing the linear inequalities, \(\mathbf{A}\), is full column rank, and the rows are conically independent in the sense that there are no non-negative linear combinations, other than the trivial combination, of the rows that give the zero vector.
The importance of linear inequality constraints in the practice of statistics is twofold. First, linear constraints arise extensively in shape restricted inference including, but not limited to, monotonicity, concavity or convexity. Such restrictions can be imposed directly on the mean function parameter or they can be modelled non-parametrically to obtain a flexible and smooth estimate. For example, suppose our goal is to fit a function \(f\) to the data \((x_{1},y_{1}),\ldots,(x_{n},y_{n})\), so that
\[y_{i}=f(x_{i})+\epsilon_{i}\]
where \(f\) is assumed to have some restrictions, \(E(\mathbf{\epsilon})=\mathbf{0}\), and \(\text{cov}(\mathbf{\epsilon})=\sigma^{2}\mathbf{I}\). Assuming a parametric approach, the vector of mean function values \(f(x_{i})\) is the parameter \(\mathbf{\mu}\) with \(A\mathbf{\mu}\geq\mathbf{0}\).
Second, the linear inequality constraint framework can be used to extend estimation of \(\mathbf{\mu}\) in the non-negative orthant when the covariance matrix is a general positive definite matrix \(\mathbf{\Sigma}\). Consider the model \(\mathbf{y}|\mathbf{\mu}\sim N(\mathbf{\mu},\sigma^{2}\mathbf{\Sigma})\) where \(\mathbf{\Sigma}\) is completely known. A standard approach to dealing with a general \(\mathbf{\Sigma}\) matrix is to transform the observations to \(\mathbf{z}=\mathbf{\Sigma}^{-1/2}\mathbf{y}\) so that \(\mathbf{z}|\mathbf{\theta}\sim N(\mathbf{\theta},\sigma^{2}\mathbf{I})\) where \(\mathbf{\theta}=\Sigma^{-1/2}\mathbf{\mu}\). However, the transformed mean \(\mathbf{\Sigma}^{-1/2}\mathbf{\mu}\) need not remain in the positive orthant unless \(\mathbf{\Sigma}\) is such that the square root \(\mathbf{\Sigma}^{-1/2}\) is a _positive operator_, i.e. a matrix that leaves the cone unchanged. In the case of the positive orthant that would mean
\(\mathbf{\Sigma}^{-1/2}\) is a non-negative matrix, e.g. when \(\mathbf{\Sigma}\) is an M-matrix whose inverse admits a positive square root. Hence one could reduce the problem of estimating \(\mathbf{\mu}\) where \(\mathbf{\mu}\geq\mathbf{0}\) to estimating \(\mathbf{\theta}\) where \(\mathbf{\Sigma}^{1/2}\mathbf{\theta}\geq\mathbf{0}\).
Of course, one could combine these two problems and consider the bigger problem of linear inequality constraints for a general \(\mathbf{\Sigma}\): consider \(\mathbf{y}|\mathbf{\mu}\sim N(\mathbf{\mu},\sigma^{2}\mathbf{\Sigma})\) with \(\mathbf{A}\mathbf{\mu}\geq\mathbf{0}\). The problem can be transformed by taking \(\mathbf{z}=\Sigma^{-1/2}\mathbf{y}\) so that \(\mathbf{z}|\mathbf{\theta}\sim N(\mathbf{\theta},\sigma^{2}\mathbf{I})\) where \(\mathbf{\theta}=\Sigma^{-1/2}\mathbf{\mu}\). Hence, the estimation of \(\mathbf{\mu}\) where \(A\mathbf{\mu}\geq\mathbf{0}\) reduces to estimating \(\mathbf{\theta}\) where \(\mathbf{A}\Sigma^{1/2}\mathbf{\theta}\geq\mathbf{0}\).
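For concreteness, this reduction is just a whitening transformation. The following Python sketch (purely illustrative; the matrices \(\boldsymbol{A}\) and \(\boldsymbol{\Sigma}\) below are made up) forms the symmetric square root of \(\boldsymbol{\Sigma}\) and maps the data and the constraint set accordingly.

```python
import numpy as np

# Hypothetical inputs: a constraint matrix A and a known positive
# definite covariance Sigma (both purely illustrative).
A = np.array([[1.0, 0.0, 0.0],
              [-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.2],
                  [0.0, 0.2, 1.5]])

# Symmetric square root of Sigma via its eigendecomposition.
eigval, eigvec = np.linalg.eigh(Sigma)
Sigma_half = eigvec @ np.diag(np.sqrt(eigval)) @ eigvec.T
Sigma_half_inv = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T

rng = np.random.default_rng(0)
mu = np.array([0.5, 1.0, 1.5])            # satisfies A @ mu >= 0
y = rng.multivariate_normal(mu, Sigma)    # observation with general Sigma

# Whitened data: z | theta ~ N(theta, I) with theta = Sigma^{-1/2} mu,
# and the constraint A mu >= 0 becomes (A Sigma^{1/2}) theta >= 0.
z = Sigma_half_inv @ y
A_tilde = A @ Sigma_half
theta = Sigma_half_inv @ mu
assert np.all(A_tilde @ theta >= -1e-10)
```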
One way of estimating such a parameter is to first obtain an unrestricted estimate of the parameter and then truncate it so that the estimate lies in the constrained parameter space. Intuitively, the performance of the estimator is expected to be much better if such constraint conditions are incorporated in the estimation process. Hence the idea here is to incorporate the linear inequality restrictions into the model and into the inferential procedures.
From a frequentist estimation point of view this is a standard 'cone projection' problem of finding \(\mathbf{\mu}\in\mathbb{C}\) such that it minimizes \(||\mathbf{y}-\mathbf{\mu}||^{2}\). The cone projection problem is a special case of quadratic programming which involves finding \(\mathbf{\theta}\) such that it minimizes \(\mathbf{\theta}^{T}\mathbf{Q}\mathbf{\theta}-2c^{T}\mathbf{\theta}\) over \(\mathbb{C}\). When \(\mathbf{Q}\) is positive definite, the objective function has a unique minimum and the solution reduces to finding the projection of a general Euclidean vector to the convex cone [16; 18]. Several algorithms have been studied in the literature to address the cone projection problem by Dykstra (1983), Karmarkar (1984), Fraser and Massam (1989) among others [7; 8; 9; 10; 13; 14; 15; 20]. A detailed account of the numerical stability and computational cost of the projection algorithms has been studied by Dimiccoli (2016) [6]. Constrained estimation of normal mean restricted to convex cones has been discussed in detail in Sen and Silvapulle (2001) [21]. Polyhedral cone constraints or equivalently linear inequalities arise extensively in shape restricted inference. There are many papers on estimation of regression function under shape restrictions which are special cases of the conic restriction problem. In the Bayesian set up, Danaher _et al._ (2012) provided an example of Bayesian estimation of normal mean when the mean is constrained to a convex polytope [4].
As mentioned, one of the most interesting questions that naturally arises in the context of closed convex polyhedral cone restrictions is how to specify sparsity in constrained spaces such as \(\mathbb{C}\). We provide a novel characterization of "sparse" parameters restricted to a polyhedral
cone in Section 2. The notion of 'sparsity' defined here conforms with the general definition in the unrestricted case or in the case of the orthant. In Section 3 we define priors where the bulk of the support is on the sparse vectors. Such priors would facilitate sparse signal extraction under general convex polyhedral cone restrictions. Finally, results from some of the examples are discussed in Section 5.
## 2 Sparsity on Closed Convex Polyhedral Cones
To begin with, we provide some background on the geometry of cones and present examples of three dimensional cones to fix ideas. For any cone \(\mathbb{C}\), let us denote its dimension by \(\dim(\mathbb{C})\). A polyhedral cone is formed by the intersection of finitely many half spaces that contain the origin, i.e. for a matrix \(\boldsymbol{A}\in\mathbb{R}^{m\times n}\), we define
\[\mathbb{C}=\{\boldsymbol{\mu}\in\mathbb{R}^{n}:\boldsymbol{A} \boldsymbol{\mu}\geq 0\} \tag{2}\]
to be a polyhedral cone with \(\dim(\mathbb{C})=n\). The halfspace representation of the cone containing the origin is called the facet representation or H-representation and the matrix A forming the set of linear inequalities is called the representation matrix. The face of a cone is a lower dimensional feature formed by the intersection of the cone with a supporting hyperplane. In particular, we focus on vertex, extreme ray and facet that are faces of a cone, each lying in different dimension. A vertex is a face of dimension 0, an extreme ray is a face of dimension 1 and a facet is a face of dimension \(\dim(\mathbb{C})-1\).
We use the primal-dual representation of the cone to define 'sparsity'. Using Minkowski's theorem, a polyhedral cone (2) can also be represented using a finite set of vectors called generators or extreme rays. That is, for any \(\boldsymbol{A}_{m\times n}\), there exists a generating matrix \(\boldsymbol{\Delta}_{n\times d}\) such that
\[\mathbb{C} = \{\boldsymbol{\mu}\in\mathbb{R}^{n}:\boldsymbol{\mu}=\boldsymbol {\Delta}\boldsymbol{b}=\sum_{j=1}^{d}b_{j}\boldsymbol{\delta}_{j},\ b_{j}\geq 0\} \tag{3}\]
where the columns \(\boldsymbol{\delta}_{j}\) are the generators of the cone. This representation of a polyhedral cone is called the vertex representation or V-representation. The converse of Minkowski's theorem
is Weyl's theorem for a polyhedral cone, which states the existence of a representation matrix given a generating matrix. The generators are called _minimal_ if they are conically independent, i.e. there is no positive linear combination of the generators that equals the origin vector. For the rest of this chapter, we assume \(\mathbf{A}\) to be an _irreducible_ matrix, meaning that the rows of \(\mathbf{A}\) are conically independent. If \(\mathbf{A}\) is full row rank, then it is irreducible. We also assume that \(rank(A)=n\). The resulting cone is called an _acute_ cone and the set of extreme rays is its minimal generating system. In that case, \(d\) is the minimal number of extreme rays forming the skeleton of the cone.
**Remark 1**.: _The parameterization of a cone \(\mathbb{C}\) in terms of \(\mathbf{b}\) in its vertex representation is not a proper parameterization, in the sense that for each vector \(\mathbf{\mu}\in\mathbb{C}\) there could be multiple \(\mathbf{b}\) such that \(\mathbf{\Delta}\mathbf{b}=\mathbf{\mu}\), even when the cone is irreducible and acute. Thus, the vector \(\mathbf{b}\) is not generally identifiable from the vector \(\mathbf{\mu}.\) The exception is when \(m=n=d\) and the cone is irreducible and acute, in which case \(\Delta=\mathbf{A}^{-1}\) is non-singular and the parameterization is a bijection between the cone and the non-negative orthant._
Figure 1 shows an example of a polyhedral cone in \(\mathbb{R}^{3}\), i.e. \(n=3\), formed by \(m=6\) homogeneous linear inequalities. There are \(m=6\) hyperplanes intersecting with the cone, and
Figure 1: An example of polyhedral cone in \(\mathbb{R}^{3}\) with \(m=6\) homogeneous linear inequalities and \(6\) extreme rays.
hence the number of facets is 6. Also, it turns out that in \(\mathbb{R}^{3}\) the number of extreme rays is equal to the number of facets. However, this is not true in general, and \(d\) can be substantially larger than \(m\), which leads us to the next part.
Since there are two descriptions of a polyhedral cone, the pair \((\mathbf{A},\mathbf{\Delta})\) is said to be a Double Description (DD) pair [19]. Switching between the two descriptions is called the representation conversion problem. Given the facet representation, the problem of finding the set of minimal extreme rays is called the extreme ray enumeration problem. Similarly, finding the irreducible representation from the vertex representation is called the facet enumeration problem. When \(A\) is full row rank, the extreme rays \(\mathbf{\delta}_{j}\) are given by the columns of \(\mathbf{\Delta}=\mathbf{A}^{T}(\mathbf{A}\mathbf{A}^{T})^{-1}\) and \(d=m\) [18]. When \(\mathbf{A}\) is not full row rank, the number of extreme rays may be substantially larger than \(m\). In that case, the extreme rays of the cone can be obtained using Proposition 1 from Meyer (1999) [17].
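In the full row rank case the representation conversion is a one-line computation. The sketch below (Python with numpy, as a stand-in for the dedicated software discussed next; the matrix \(A\) is made up for illustration) computes \(\boldsymbol{\Delta}=\boldsymbol{A}^{T}(\boldsymbol{A}\boldsymbol{A}^{T})^{-1}\) and checks that conic combinations of its columns satisfy the defining inequalities.

```python
import numpy as np

# Illustrative full row rank representation matrix (m = n = 3 here).
A = np.array([[1.0, 0.0, 0.0],
              [-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])

# When A has full row rank, the minimal generators are the columns of
# Delta = A^T (A A^T)^{-1}, and d = m.
Delta = A.T @ np.linalg.inv(A @ A.T)

# Each generator satisfies A delta_j >= 0; in fact A @ Delta is the
# identity matrix here, so the check is immediate.
assert np.all(A @ Delta >= -1e-12)

# Any conic combination b >= 0 yields a point of the cone.
b = np.array([1.0, 2.0, 0.5])
mu = Delta @ b
assert np.all(A @ mu >= -1e-12)
```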
There have been many variations and modifications of the Double Description (DD) method to move back and forth between the two representations, right from the primitive DD method to the standard DD method [3, 12, 19, 23]. We use the R package "rcdd", an R interface for cddlib, which is a C implementation of the DD method of Motzkin et al. by K. Fukuda [11, 12].
The methodology proposed here depends on the idea of describing points on the boundary of the cone, or describing points in proximity to the boundary when a point is in the interior. To this end, we need to use the adjacency graph with respect to the
Figure 2: An illustration of the H-representation (left) and V-representation (center) for a irreducible polyhedral cone (right) in \(\mathbb{R}^{3}\) with \(n=3,m=8,d=8\).
extreme rays of the cone.
**Definition 1**.: _For an acute cone \(\mathbb{C}=\{\boldsymbol{\mu}:\boldsymbol{\mu}=\boldsymbol{\Delta b}\}\), two extreme rays \(\boldsymbol{\delta}_{i}\) and \(\boldsymbol{\delta}_{j}\) are adjacent if the minimal face containing both rays does not contain any other extreme rays of the cone._
Two well-known tests for verifying the adjacency of extreme rays of a cone are the algebraic test and the combinatorial test [19]. Given the adjacency relation, one can define the _adjacency graph_ of the cone. Let \(\{\boldsymbol{\delta}_{1},\ldots,\boldsymbol{\delta}_{d}\}\) correspond to a set of nodes in \(V=\{1,\ldots,d\}\). Then the edge set \(E\) is defined through adjacency, i.e. each pair of adjacent extreme rays \(i\) and \(j\) corresponds to an edge in the graph. The edge set \(E\) can be written as the union of the edge set for each node. Suppose \(E=\{E_{1},E_{2},\ldots,E_{d}\}\) where \(E_{i}\) denotes the set of adjacent extreme rays corresponding to \(\boldsymbol{\delta}_{i}\), including \(\boldsymbol{\delta}_{i}\) itself. Then \(G=(V,E)\) forms an undirected graph. The degree of a node of a graph is the number of edges that are incident to the node. We denote the degree of the \(i^{th}\) node by \(deg(\boldsymbol{\delta}_{i})\). Then \(|E_{i}|=deg(\boldsymbol{\delta}_{i})+1\).
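As a hedged illustration of the algebraic test, the sketch below declares two extreme rays adjacent when the rows of \(\boldsymbol{A}\) active at both rays have rank \(n-2\), so that the minimal face containing both rays is two dimensional; the cone (a cone over a square) and its rays are made up for the example, and this rank criterion is only one standard formulation of the test.

```python
import numpy as np
from itertools import combinations

# Illustrative cone in R^3 with 4 facets and 4 extreme rays.
A = np.array([[-1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, -1.0, 1.0],
              [0.0, 1.0, 1.0]])
rays = np.array([[1.0, 1.0, 1.0],
                 [1.0, -1.0, 1.0],
                 [-1.0, -1.0, 1.0],
                 [-1.0, 1.0, 1.0]])
n = A.shape[1]

def active_rows(A, ray, tol=1e-9):
    """Indices of the inequalities satisfied with equality at the ray."""
    return np.where(np.abs(A @ ray) < tol)[0]

def adjacent(A, r1, r2, n):
    """Algebraic test: extreme rays r1 and r2 are adjacent iff the rows
    of A active at both rays have rank n - 2."""
    common = np.intersect1d(active_rows(A, r1), active_rows(A, r2))
    if len(common) == 0:
        return n == 2
    return np.linalg.matrix_rank(A[common]) == n - 2

# Edge set of the adjacency graph G = (V, E).
edges = [(i, j) for i, j in combinations(range(len(rays)), 2)
         if adjacent(A, rays[i], rays[j], n)]
print(edges)   # consecutive rays are adjacent; opposite rays are not
```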
To illustrate the geometry of polyhedral cones in 3D, consider the following example with \(n=3,m=8,d=8\) from Figure 2. The corresponding adjacency graph is shown in Figure 3. For instance, \(\boldsymbol{\delta}_{1}\) is an extreme ray which is adjacent to \(\boldsymbol{\delta}_{2}\) and \(\boldsymbol{\delta}_{8}\). Hence in the corresponding adjacency graph, node 1 is connected to node 2 and node 8. In this case, each extreme ray is connected to two other extreme rays. So \(deg(\boldsymbol{\delta}_{i})=2\) and \(|E_{i}|=3\ \ \forall i\).
For high dimension with \(n>3\), the adjacency graph can become quite complicated with varying degree. A simple example for a polyhedral cone in \(\mathbb{R}^{4}\) with \(m=7,d=8\) is illustrated below with degree varying between 3 and 4.
When \(m=n\), the number of minimal generators is the same as the dimension, and
Figure 3: The graph network for the cone from Figure 2.
the adjacency graph is a _complete_ graph. We will use the adjacency network to describe the notion of sparsity as well as the proposed priors.
When there are no restrictions, a sparse vector is a vector that has a large number of zeros (or, for a weaker notion of sparsity, a large number of entries that are negligible). For the non-negative orthant, the same definition applies, except that the non-zero entries are required to be positive. Thus, the sparse vectors are the ones which lie on (or close to) one of the lower dimensional faces of the orthant. Following the description of sparsity in the orthant, we define a sparse vector to be any vector lying on or near a lower dimensional face. Since any \(\mathbf{x}\in\mathbb{C}\) can be represented as \(\mathbf{x}=\Delta\mathbf{b}\), extrinsic 'sparsity' can be defined as \(\mathbf{x}\) being specified by a smaller number of lower dimensional features. In other words, \(\mathbf{x}\) is sparse when \(\mathbf{b}\) is a sparse vector. The idea is to map the vector \(\mathbf{x}\) in \(\mathbb{R}^{n}\) to the non-negative orthant in \(\mathbb{R}^{d}\), use the definition of sparsity in the orthant, and then use the inverse map to lift the notion of sparsity back to the polyhedral cone. The dimension \(d\) in which the vector \(\mathbf{x}\) is embedded is either equal to or larger than the original dimension \(n\).
For the non-negative orthant, the canonical vectors are the minimal generators, and the usual definition of sparsity is that the vector can be written as a conic combination of a few of the full set of generators. Such vectors lie on the boundary of the orthant, on a lower dimensional face of the orthant to be precise. It seems natural to use a similar definition of sparsity in the general case, i.e. vectors that lie on lower dimensional faces of the cone. The minimal two dimensional faces are the conic hulls of pairs of adjacent generators. Thus,
Figure 4: An illustration of the H-representation (left) and V-representation (center) for a irreducible polyhedral cone in \(\mathbb{R}^{4}\) with its adjacency graph (right).
to restrict the vector to the lower dimensional faces one can work with adjacent generators. However, simply generating a vector as a conic combination of a set of adjacent rays is not enough to guarantee that the vector lies on a lower dimensional face. The notion of sparsity is more nuanced. For the vector to occupy a lower dimensional face, the set of generators must form a _clique_, or in other words, the sub-adjacency graph corresponding to the set of generators used to define a sparse vector must be complete. This ensures that the notion of sparsity is an identifiable notion in the sense that a sparse vector cannot have a non-sparse representation.
Recall that a clique, \(W\), of an undirected graph \(G=(V,E)\) is a subset of vertices, \(W\subseteq V\), such that every two distinct nodes are connected by an edge. That is, a clique of a graph is an induced subgraph that is complete. A maximal clique of a graph \(G\) is a clique \(w\) such that \(w\bigcup\{v\}\) is not a clique for any \(v\in V\backslash w\). Then we have the following definition of a 'sparse' vector in a closed convex polyhedral cone.
**Definition 2**.: _Let \(\mathbb{C}=\{\boldsymbol{\mu}\in\mathbb{R}^{n}:\boldsymbol{\mu}=\boldsymbol{\Delta b},\ \boldsymbol{b}\geq\boldsymbol{0}\}\) be the vertex representation of a closed convex polyhedral cone \(\mathbb{C}\) where the columns of \(\boldsymbol{\Delta}\) form a set of \(d\) minimal generators of \(\mathbb{C}\). Let \(G=(V,E)\) be the adjacency graph of \(\mathbb{C}\) where \(V=\{1,\ldots,d\}\) and \(E=\{E_{1},\ldots,E_{d}\}.\) Then \(\boldsymbol{\mu}=\boldsymbol{\Delta b}\in\mathbb{C}\) is sparse iff the subgraph induced by \(\{i:b_{i}>0\}\) is a clique._
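Definition 2 is easy to operationalize once the adjacency graph is in hand; the small sketch below (a hypothetical helper, not taken from the text) checks whether the support of a given coefficient vector induces a clique.

```python
import numpy as np
from itertools import combinations

def is_sparse_representation(b, edges, tol=1e-9):
    """Check Definition 2: a vertex representation mu = Delta b is sparse
    iff the support {i : b_i > 0} induces a complete subgraph (a clique)
    of the adjacency graph with edge set `edges`."""
    support = [i for i, bi in enumerate(b) if bi > tol]
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((i, j)) in edge_set
               for i, j in combinations(support, 2))

# Cycle adjacency graph of the 3D cone with d = 8 extreme rays from
# Figure 3 (node i is adjacent to its two neighbours on the cycle).
edges = [(i, (i + 1) % 8) for i in range(8)]

print(is_sparse_representation([0, 1, 2, 0, 0, 0, 0, 0], edges))  # True
print(is_sparse_representation([0, 0, 0, 1, 0, 0, 0, 2], edges))  # False
```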
The following result proves that the above definition is 'proper' in the sense that for a sparse vector there cannot be a non-sparse representation.
**Theorem 1**.: _Suppose \(\boldsymbol{\mu}\in\mathbb{C}\) has a vertex representation \(\boldsymbol{\mu}=\boldsymbol{\Delta b}\) such that the set of nodes \(\mathcal{I}=\{i:b_{i}>0\}\) forms a clique. Then in any vertex representation of \(\boldsymbol{\mu}=\boldsymbol{\Delta\beta}\) we have \(\beta_{i}=0\) for all \(i\in\{1,\ldots,d\}\backslash\mathcal{I}.\)_
Proof.: We will use the method of induction to prove the result. From the definition of adjacency, the result is obviously true when the size of the clique is \(k=2.\) Now suppose it is true for some positive integer \(k\geq 2\). Let \(\boldsymbol{\mu}=\sum_{i=1}^{k+1}b_{i}\boldsymbol{\delta}_{i}\) be a vertex representation of a vector \(\boldsymbol{\mu}\), where without loss of generality we assume that the nodes \(\{1,\ldots,k+1\}\) form a clique. Suppose there is another representation of \(\boldsymbol{\mu}\) as
\[\boldsymbol{\mu}=\sum_{i=1}^{k+1}\beta_{i}\boldsymbol{\delta}_{i}+\sum_{i=k+2}^{d}\beta_{i}\boldsymbol{\delta}_{i}.\]
Then \(\mathbf{0}=\sum_{i=1}^{k+1}(\beta_{i}-b_{i})\mathbf{\delta}_{i}+\sum_{i=k+2}^{d}\beta_{i}\mathbf{\delta}_{i}.\) Consider the following two cases.
**case1:**\((\beta_{i}-b_{i})\geq 0,\ \forall i\). In this case a nonnegative linear combination of the columns of \(\mathbf{\Delta}\) is zero, which contradicts the conic independence of the minimal generators unless all the coefficients are zero; in that case \(\beta_{i}=b_{i}\) for \(i=1,\ldots,k+1\) and \(\beta_{i}=0\) for \(i=k+2,\ldots,d\), as claimed.
**case2:**\((\beta_{i}-b_{i})<0\), for some \(i\). Let \(\mathcal{J}=\{i:(\beta_{i}-b_{i})<0\}.\) Then
\[\mathbf{x}=\sum_{i\in\mathcal{J}}(b_{i}-\beta_{i})\mathbf{\delta}_{i}=\sum_{i\in\{1,\ldots,k+1\}\setminus\mathcal{J}}(\beta_{i}-b_{i})\mathbf{\delta}_{i}+\sum_{i=k+2}^{d}\beta_{i}\mathbf{\delta}_{i}.\]
Thus, the vector \(\mathbf{x}\) has two representations, one of which is based on a clique, since any sub-clique of a clique is also a clique. Since \(|\mathcal{J}|\leq k\), this contradicts the induction hypothesis unless \(\beta_{i}=b_{i}\) for \(i=1,\ldots,k+1\) and \(\beta_{i}=0\) for \(i=k+2,\ldots,d.\) This completes the proof.
## 3 Sparse Priors for Closed Convex Polyhedral Cones
To define probability measures on the cone that are supported mostly on lower dimensional sets, one could simply specify any sparse prior that is used in the unrestricted case as a prior on \(\mathbf{b}\) in the vertex representation and thereby invoke a prior on \(\boldsymbol{\mu}.\) Such a prior indeed works as a sparse prior on the cone provided the adjacency graph is a complete graph, as in the case of the positive orthant.
Thus, for the case when \(d=n\), and hence the adjacency graph is a complete graph, one could use popular sparse priors such as continuous shrinkage priors like the Horseshoe prior [2] or spike-and-slab priors like the Strawderman-Berger prior [1], where the continuous part is taken to be a density on the first orthant such as a product of normal densities truncated to the positive half line. Specifically, one could define the prior on \(\mathbf{b}\) as the Horseshoe prior
\[b_{i}|\tau,\lambda_{i} \sim N(0,\tau^{2}\lambda_{i}^{2})_{+},\] \[\lambda_{i} \sim C(0,1)_{+},\] \[\tau|\sigma \sim C(0,\sigma)_{+},\] \[\pi(\sigma) \propto\tfrac{1}{\sigma} \tag{4}\]
or as the Strawderman-Berger prior
\[\pi(b_{i}) = (p\delta_{0}+(1-p)\ N(0,\tau^{2}\lambda_{i}^{2})_{+})\] \[\pi(\lambda_{i}) \propto \lambda_{i}(1+\lambda_{i}^{2})^{-\frac{3}{2}},\] \[p \sim \mbox{Unif}(0,1),\] \[\tau|\sigma \sim C(\sigma,\sigma)\ 1(\tau\geq\sigma),\] \[\pi(\sigma) \propto \frac{1}{\sigma} \tag{5}\]
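As an illustration, prior draws under the half-normal horseshoe construction (4) can be pushed through the vertex representation; the sketch below fixes \(\sigma=1\) in place of the improper hyperprior \(\pi(\sigma)\propto 1/\sigma\), and the generating matrix is made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_horseshoe_cone(Delta, size, sigma=1.0):
    """Draw mu = Delta b where b follows the half-normal horseshoe in
    (4): half-Cauchy local scales lambda_i, a half-Cauchy global scale
    tau, and b_i | tau, lambda_i = |N(0, tau^2 lambda_i^2)|."""
    d = Delta.shape[1]
    out = np.empty((size, Delta.shape[0]))
    for s in range(size):
        tau = np.abs(sigma * rng.standard_cauchy())
        lam = np.abs(rng.standard_cauchy(d))
        b = np.abs(rng.normal(0.0, tau * lam))
        out[s] = Delta @ b
    return out

# Illustrative generating matrix of a cone in R^3 (made up).
Delta = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])
draws = sample_horseshoe_cone(Delta, size=5)
```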
One could also specify other priors, such as a Bayesian lasso [22] type prior, on \(b\). However, when the adjacency graph is not complete, simply demanding that the vector \(b\) is sparse does not ensure that the resulting \(\mu\) vector is near a lower dimensional face. To guarantee sparsity, it is important to specify which of the components of \(b\) are zero. For instance, consider the above example of the 3D cone with eight extreme rays and suppose \(b_{4}\) and \(b_{8}\) are the only positive entries in \(b\). The resulting vector will lie on the 2D cone generated by the vectors \(\mathbf{\delta}_{4}\) and \(\mathbf{\delta}_{8}\). Points on this set can be far away from any of the faces and can have many equivalent dense representations (Remark 1) in which none of the entries in \(b\) is zero or small. Hence, the vector will not be sparse according to the notion described above. Thus, a general sparse prior on \(b\) may still put substantial mass in the dense interior of the cone.
It is evident from the definition of sparsity that one could simply restrict to the clique lattice of the adjacency graph and work with the maximal cliques to define priors that will be supported only on sparse vectors. To define a probability measure that is supported on the sparse vectors, and hence on the maximal cliques of the adjacency graph, an obvious choice would be to define a Markov Random Field (MRF), specifically the Gibbs distribution describing the clique probabilities, and then, conditional on the clique, to define a prior on the entries of \(b\) within the clique. We briefly review the Markov-Gibbs equivalence in the context of an undirected graph. Let \(\{X_{v}:v\in V\}\) be a stochastic process with \(X_{v}\) taking values in \(S_{v}\). Suppose further that the joint distribution of the variables is \(Q\{\mathbf{x}\}=P\{X_{v}=x_{v}\mbox{ for }v\in V\}\) where \(\mathbf{x}=(x_{1},\ldots,x_{d})\) and \(x_{i}\in S_{i}\).
**Definition 3**.: _The probability distribution Q is called a Gibbs distribution for the graph if it can be written in the form_
\[Q\{\mathbf{x}\}=\prod_{S\in W}\phi_{S}(\mathbf{x})\]
_where \(W\) is the set of cliques for \(G\) and \(\phi_{S}\) is a positive function (also referred to as clique potential function) that depends on \(\mathbf{x}\) only through \(\{x_{v}:v\in S\}\). The definition is equivalent if maximal cliques are used instead of just cliques._
An MRF is characterized by its local property (the Markovianity), whereas a Gibbs Random Field (GRF) is characterized by its global property (the Gibbs distribution). The Hammersley-Clifford theorem establishes the equivalence of these two types of properties. The theorem asserts that the process \(\{X_{v}:v\in V\}\) is a Markov Random Field if and only if the corresponding \(Q\) is a Gibbs distribution. The practical value of the theorem is that it provides a simple way to parametrize the joint probability by specifying the clique potential functions. In other words, the theorem tells us that it suffices to search over Gibbs distributions.
Given a particular maximal clique, we then define the sparsity of a vector in the usual sense by generating the vector using possibly sparse coefficients on the generators belonging to the clique. This procedure agrees with the usual method of selecting sparse vectors on the orthant or \(\mathbb{R}^{n}\), where the generators are the canonical vectors and all the extreme rays together form the unique maximal clique.
Thus, specifically we recommend the following class of sparse prior on \(\mathbb{C}\). Let \(\mathcal{W}\) be the set of maximal cliques of the adjacency graph of \(\mathbb{C}\).
\[\mathbf{b}|w \sim\pi(\mathbf{b}_{w})\] \[w \sim\pi_{\mathcal{W}}(w) \tag{6}\]
where, given a clique \(w\in\mathcal{W}\), \(\mathbf{b}_{w}\) is the subvector of \(\mathbf{b}\) constructed with the entries of \(\mathbf{b}\) whose indices belong to \(w\), \(\pi(\mathbf{b}_{w})\) is a 'sparse' prior, such as the Horseshoe prior or the Strawderman-Berger prior, on \(\mathbf{b}_{w}\) in the appropriate dimension, and \(\pi_{\mathcal{W}}(w)\) is an MRF on \(\mathcal{W}\). The priors \(\pi(\cdot)\) and \(\pi_{\mathcal{W}}(\cdot)\) can have their own hyper-parameters, and hyperpriors can be specified accordingly.
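A minimal sketch of one draw from (6) follows; the MRF \(\pi_{\mathcal{W}}\) is replaced by a generic categorical distribution over the maximal cliques, and the within-clique prior is the half-horseshoe from (4), so the sketch shows the mechanics rather than any particular Gibbs specification.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_clique_prior(Delta, cliques, clique_probs):
    """One draw from (6): pick a maximal clique w (categorical stand-in
    for pi_W), draw the sub-vector b_w from a sparse positive prior
    (half-horseshoe here), and set the remaining entries of b to zero;
    by Theorem 1 the resulting mu lies on a lower dimensional face."""
    w = cliques[rng.choice(len(cliques), p=clique_probs)]
    b = np.zeros(Delta.shape[1])
    tau = np.abs(rng.standard_cauchy())
    lam = np.abs(rng.standard_cauchy(len(w)))
    b[list(w)] = np.abs(rng.normal(0.0, tau * lam))
    return Delta @ b

# For the cycle adjacency graph with d = 8 (Figure 3), the maximal
# cliques are just the edges; the weights below are illustrative.
cliques = [(i, (i + 1) % 8) for i in range(8)]
probs = np.full(8, 1.0 / 8)
Delta = rng.random((3, 8))        # placeholder generating matrix
mu = sample_clique_prior(Delta, cliques, probs)
```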
In order to have a prior that is fully supported but has most of the support on the sparse
vectors, one could add a mixture component including the full set of extreme rays
\[\mathbf{b}|\delta,w\sim\delta\pi^{0}(\mathbf{b})+(1-\delta)\pi(\mathbf{b}_{w})\] \[w|\delta\sim\delta I(w=V)+(1-\delta)\pi_{\mathcal{W}}(w)\] \[\delta\sim Bernoulli(\phi) \tag{7}\]
where \(\pi^{0}(\cdot)\) is a sparse prior on the interior of the positive orthant, \(\mathbb{R}^{n}_{+}\), and the Bernoulli parameter \(\phi\) is either a pre-specified small probability or a prior can be specified on \(\phi\).
### Prior with adjacency on \(\mathbf{b}\)
A 'weaker' notion of sparsity would be to allow mass to be spread along the boundary of the cone instead of being supported only on the boundary. In high dimensions, the probability mass of most fully supported measures on the entire cone will concentrate on or near the boundary, and hence so will the posterior. However, how the prior is specified will have an impact on the recovery rate of the sparse sets.
Instead of restricting to cliques, one could choose priors that are supported on a cone generated by a single adjacency set. While not guaranteed, such priors would emphasize vectors for which most of the coefficients in \(\mathbf{b}\) are small in any representation of the vector. Of course, the idea of small or negligible coefficients has to be formalized, but in general this would mean \(b_{j}<\epsilon\) for \(j\notin E_{i}\), for a given adjacency set \(E_{i}\) and some pre-specified small value \(\epsilon>0\). Unfortunately, even when only a few coefficients within an adjacency set are given positive values, the resulting vector may still have equivalent representations that are very dense. If the prior specified on the elements of \(\mathbf{b}\) within an adjacency set is sufficiently sparse, then with high prior probability the generated vectors will be near one of the boundary sets, i.e. the minimum distance of the point to the boundary will be small.
To this end we define 'weakly sparse' priors that are fully supported on a closed convex polyhedral cone \(\mathbb{C}\), with most or all of their mass supported on or near the boundary. To formally define this, let
\[S(\mathbf{\mu})=\{\mathbf{b}\in\mathbb{R}^{d}:\mathbf{\mu}=\Delta\mathbf{b},\mathbf{b}\geq 0\}. \tag{8}\]
Then we have the following definition for a weakly sparse vector.
**Definition 4**.: _Let \(\mathbf{\mu}\in\mathbb{C}\) where \(\mathbb{C}\) is a closed convex polyhedral cone with vertex representation given by \(\mathbb{C}=\{\mathbf{x}\in\mathbb{R}^{n}:\mathbf{x}=\mathbf{\Delta}\mathbf{b}\) for some \(\mathbf{b}\in\mathbb{R}_{+}^{d}\}.\) Then \(\mathbf{\mu}\) is weakly sparse if \(\exists\mathbf{b}\in S(\mathbf{\mu})\) such that \(\{i:b_{i}>0\}\) corresponds to an adjacent set of an extreme ray in the adjacency graph of \(\mathbb{C}\) where \(S(\mathbf{\mu})\) is defined in (8)._
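Definition 4 can be verified by linear programming: for a fixed adjacency set, the condition is feasibility of a linear system with non-negativity bounds. A sketch (illustrative, using scipy's LP solver) follows.

```python
import numpy as np
from scipy.optimize import linprog

def is_weakly_sparse(mu, Delta, adjacency_sets):
    """Check Definition 4: mu is weakly sparse if for some adjacency
    set E there exists b >= 0 supported on E with Delta b = mu. Each
    feasibility problem is solved as an LP with a zero objective."""
    for E in adjacency_sets:
        sub = Delta[:, list(E)]
        res = linprog(c=np.zeros(sub.shape[1]),
                      A_eq=sub, b_eq=mu,
                      bounds=[(0, None)] * sub.shape[1],
                      method="highs")
        if res.status == 0:      # a feasible (optimal) point was found
            return True
    return False

# Trivial illustration on the positive orthant, whose adjacency graph
# is complete, so a single adjacent set contains all the rays.
Delta = np.eye(3)
print(is_weakly_sparse(np.array([1.0, 0.0, 2.0]), Delta, [(0, 1, 2)]))
```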
We propose an adjacency prior based generalization of the Horseshoe or Strawderman-Berger priors as
\[\pi_{1},\ldots,\pi_{d} \sim \text{Dirichlet}(\alpha_{1},\ldots,\alpha_{d})\] \[u \sim \text{Multinomial}(1,\pi_{1},\ldots,\pi_{d})\] \[\mathbf{b}|u \sim \pi(\mathbf{b}_{E_{u}}) \tag{9}\]
where \(\pi(\mathbf{b}_{E_{u}})\) can be \(\pi_{HS}(\mathbf{b}_{E_{u}})\) or \(\pi_{SB}(\mathbf{b}_{E_{u}})\). This is different from using a modified lasso prior, such as fused lasso (Tibshirani and Saunders, 2005) type selection, since we select only one adjacency set to stay on the surface, whereas in fused lasso several clusters may be selected and hence the resulting vectors may have dense representations.
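One draw from the adjacency prior (9) can be sketched as follows; as before, the half-horseshoe stands in for \(\pi(\mathbf{b}_{E_{u}})\) and the generating matrix is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_adjacency_prior(Delta, adjacency_sets, alpha):
    """One draw from (9): Dirichlet weights pi over the d extreme rays,
    a single ray u ~ Multinomial(1, pi), then a half-horseshoe draw for
    the entries of b in the adjacent set E_u (all other entries zero)."""
    pi = rng.dirichlet(alpha)
    u = rng.choice(len(adjacency_sets), p=pi)
    E_u = list(adjacency_sets[u])
    b = np.zeros(Delta.shape[1])
    tau = np.abs(rng.standard_cauchy())
    lam = np.abs(rng.standard_cauchy(len(E_u)))
    b[E_u] = np.abs(rng.normal(0.0, tau * lam))
    return Delta @ b

# Cycle graph with d = 8: E_i contains ray i and its two neighbours.
adj_sets = [((i - 1) % 8, i, (i + 1) % 8) for i in range(8)]
Delta = rng.random((3, 8))        # placeholder generating matrix
mu = sample_adjacency_prior(Delta, adj_sets, alpha=np.ones(8))
```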
## 4 Numerical Results
### Distribution of points in a 3D cone
Figure 5 shows the distribution of 10000 points drawn from the Horseshoe type prior on the polyhedral cone. While most points lie near the faces of the cone, including the vertex, there are still many points in the interior of the cone. The 2D contour has been plotted by considering circular cones of equal volume inside the polyhedral cone and then calculating the relative frequency of the 10000 points. The points very close to the vertex are included in the outermost region since they are in any case sparse for being close to the vertex. From the 2D contour, it is clear that there is substantial positive mass in the innermost circle.
Figures 6 and 7 present the points inside the 3D polyhedral cone and the respective 2D
contours for the Horseshoe prior on an adjacent set and on a maximal clique, respectively. The figures show some positive mass and no positive mass, respectively, in the interior of the cone for the two cases. All points either lie close to, or exactly on, the lower dimensional features, be it the vertex, the extreme rays, or the facets.
### Max-min distance of points from facets
In this numerical study, we consider polyhedral cones in different dimensions and simulate \(R=100000\) points using the three different priors discussed in the previous section. For a fair comparison, for each adjacent set \(E_{j}\) chosen, we also select \(|E_{j}|\) rays at random, so that \(\pi(u)\sim\frac{1}{\binom{d}{|E_{j}|}}\), and the rest of the prior specification is the same as in the horseshoe prior with
Figure 5: Plot showing points inside a 3D polyhedral cone by invoking a Horseshoe prior on \(\mathbf{b}\) (left) and 2D contour of the cone showing the concentration of 10000 such points (right).
Figure 6: Plot showing points inside a 3D polyhedral cone by incorporating adjacency of extreme rays (left) and 2D contour of the cone showing the concentration of 10000 such points (right).
adjacency (9). We report the scaled max-min distance, where the maximum is over the \(R\) repetitions and the minimum is taken with respect to the point's distance from the \(m\) facets. That is, we determine which facet hyperplane of the cone each point is closest to.
Let \(d_{ij}=\max\limits_{i=1:R}\,\min\limits_{j=1:m}\,\,\text{distance}_{ij}\). Table 1 reports this max-min distance for the Horseshoe prior (\(d_{ij}\)), the Horseshoe prior on a randomly selected set (\(d_{ij}^{r}\)), the Horseshoe prior with adjacency (\(d_{ij}^{a}\)), and the Horseshoe prior on cliques (\(d_{ij}^{c}\)).
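One plausible reading of this summary statistic is sketched below: for a point \(\mathbf{x}\) in the cone, its distance to the facet hyperplane \(\{\mathbf{x}:\mathbf{a}_{j}^{T}\mathbf{x}=0\}\) is \(\mathbf{a}_{j}^{T}\mathbf{x}/\|\mathbf{a}_{j}\|\); the statistic minimizes over the \(m\) facets and maximizes over the \(R\) sampled points (the exact scaling convention is an assumption of the sketch).

```python
import numpy as np

def max_min_facet_distance(points, A):
    """For each point, the minimum Euclidean distance to the m facet
    hyperplanes a_j^T x = 0 (equal to a_j^T x / ||a_j|| inside the
    cone), maximized over all R points."""
    row_norms = np.linalg.norm(A, axis=1)
    dists = (points @ A.T) / row_norms      # R x m matrix of distances
    return np.max(np.min(dists, axis=1))

# Usage sketch with the illustrative square-based cone used earlier.
A = np.array([[-1.0, 0.0, 1.0], [1.0, 0.0, 1.0],
              [0.0, -1.0, 1.0], [0.0, 1.0, 1.0]])
rays = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, 1.0],
                 [-1.0, -1.0, 1.0], [-1.0, 1.0, 1.0]])
rng = np.random.default_rng(4)
points = np.abs(rng.standard_cauchy((1000, 4))) @ rays
print(max_min_facet_distance(points, A))
```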
## 5 Application
We discuss two examples in detail. For the positive isotonic function estimation, we explain both the parametric and non-parametric approaches. For the bell-shaped function, we show results for the parametric approach and additionally discuss how the non-parametric fit can be obtained.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline \(n\) & \(m\) & \(d\) & \(d_{ij}\) & \(d_{ij}^{r}\) & \(d_{ij}^{a}\) & \(d_{ij}^{c}\) \\ \hline
3 & 6 & 6 & 3.081 & 3.081 & 0.891 & 0 \\ \hline
8 & 11 & 16 & 1.096 & 0.924 & 0.309 & 0 \\ \hline
10 & 13 & 20 & 0.818 & 0.376 & 0.159 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Max-min distance of points from different priors
Figure 7: Plot showing points inside a 3D polyhedral cone by incorporating maximal cliques of extreme rays (left) and 2D contour of the cone showing the concentration of 10000 such points (right).
### Positive Isotonic Function
We consider the mean function \(f(x)=\exp(x)\) over the interval \([-2,2]\), a positive isotonic function, so that \(A\) is an \(n\times n\) matrix with \(A_{1,1}=1\), \(A_{i,i-1}=-1\), \(A_{i,i}=1\) for \(i=2,\ldots,n\). Hence the Bayes estimator \(\hat{\boldsymbol{\mu}}\) is obtained using MCMC by invoking a Horseshoe type prior and a Strawderman-Berger type prior on \(\boldsymbol{b}\) based on the model
\[\boldsymbol{\mu}=\Delta\boldsymbol{b}.\]
Figure 8 shows the plot of the estimators for the priors along with the MLE.
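The parametric construction admits a short sketch (assuming, as above, that the constraints are \(\mu_{1}\geq 0\) and \(\mu_{i}-\mu_{i-1}\geq 0\)); since this \(A\) is square and invertible, the generators are simply the columns of \(A^{-1}\).

```python
import numpy as np

def isotonic_constraints(n):
    """Representation matrix for a positive increasing mean: mu_1 >= 0
    and mu_i - mu_{i-1} >= 0 for i = 2, ..., n."""
    A = np.zeros((n, n))
    A[0, 0] = 1.0
    for i in range(1, n):
        A[i, i - 1] = -1.0
        A[i, i] = 1.0
    return A

n = 50
A = isotonic_constraints(n)
Delta = A.T @ np.linalg.inv(A @ A.T)   # full row rank, so d = n
x = np.linspace(-2.0, 2.0, n)
mu_true = np.exp(x)
assert np.all(A @ mu_true >= 0)        # exp(x) is positive and increasing
```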
In the non-parametric approach, we model \(f(x)=\boldsymbol{\Psi}(x)\boldsymbol{\beta}\) where \(\boldsymbol{\Psi}(x)\) is the \(p\) dimensional vector of basis functions at \(x\). This produces a flexible and smooth estimate depending on the choice of \(p\). To enforce the monotonicity of \(f(x)\), we consider a set of fine grid points \(t_{1}<\cdots<t_{m}\) over the range of \(x\) and construct \(A\) such that the \(i^{th}\) row of \(A\) is the derivative of the basis functions \(\boldsymbol{\Psi}^{\prime}(x)\) evaluated at \(t_{i}\). These constraints are then applied to the coefficient parameter such that \(A\boldsymbol{\beta}\geq\boldsymbol{0}\) where \(A\) is an \(m\times p\) matrix. Specifically, we consider cubic B-splines with no intercept and \(k=3\) equidistant internal knots so that \(p=6\) [5]. We consider
Figure 8: Bayes estimates for Horseshoe prior (HS), Strawderman-Berger prior (SB) and MLE for \(n=50\) points from \(f(x)=\exp(x)\).
\(m=8\) equidistant grid points, and since the number of constraints is greater than the number of parameters, the number of extreme rays, \(d=18\), is greater than \(p\). Similar to the parametric approach, \(\hat{\mathbf{f}}\) is obtained using MCMC by invoking priors on \(\mathbf{b}\) through the model
\[\mathbf{f}=\mathbf{\Psi}\mathbf{\beta}=\mathbf{\Psi}\Delta\mathbf{b}=\tilde{\Delta}\mathbf{b}.\]
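A sketch of the constraint construction for the spline model is given below; the knot placement, the \(m=8\) grid, and the reading of 'no intercept' as dropping the first basis function are assumptions of the sketch rather than details taken from the text.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline basis on [-2, 2] with 3 equidistant internal knots.
k = 3                                        # cubic
internal = np.linspace(-2.0, 2.0, 5)[1:-1]   # 3 internal knots
t = np.r_[[-2.0] * (k + 1), internal, [2.0] * (k + 1)]
p_full = len(t) - k - 1                      # 7 basis functions in total

def basis_derivative(x, j):
    """First derivative of the j-th B-spline basis function at x."""
    coef = np.zeros(p_full)
    coef[j] = 1.0
    return BSpline(t, coef, k).derivative()(x)

# Rows of A are Psi'(t_i) at the m grid points, so A beta >= 0 enforces
# f'(t_i) >= 0; dropping the first basis function leaves p = 6 columns.
grid = np.linspace(-2.0, 2.0, 8)             # m = 8 grid points
A = np.column_stack([basis_derivative(grid, j) for j in range(1, p_full)])
print(A.shape)                               # (8, 6): m > p, hence d > p
```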
Figure 9 presents the results from all four priors: the Horseshoe type estimator and the Strawderman-Berger type estimator, as well as the corresponding priors incorporating adjacency. As expected, all four estimators are smoother compared to the ones obtained by the parametric approach. For the priors incorporating adjacency, Figure 10 demonstrates the \(d\) estimates, each based solely on one of the \(d\) adjacency sets, for the Horseshoe type prior (left) and for the Strawderman-Berger type prior (right). The final estimates for the priors invoking adjacency, presented in the right panel of Figure 9, are an average of these \(d\) estimators, since all the adjacency sets appear with almost equal frequency in the MCMC chains.
Figure 9: Bayes estimates using cubic B-spline with 3 internal knots for \(n=50\) points from \(f(x)=\exp(x)\). Horseshoe prior (HS) and Strawderman-Berger prior (SB) (left) and Horseshoe prior (HS adjacency) and Strawderman-Berger prior with adjacency (SB adjacency) (right).
### Bell-shaped Function
In this example, we consider estimation of a symmetric bell-shaped curve. Given the inflection points \(k_{1}\) and \(k_{2}\), \(A\) is an \((n+2)\times n\) matrix based on the constraints that the function is positive, increasing on the left, convex, then concave, then convex again, and decreasing on the right. We consider the true mean function \(f(x)\) to be a normal density scaled to have large values, i.e. \(f(x)=50\ \frac{1}{\sigma}\phi(\frac{x}{\sigma})\), for \(n=40\) points over \(x\) in \([-2,2]\). The estimated mean functions are obtained by invoking priors on \(\mathbf{b}\) using the model \(\mathbf{f}=\Delta\mathbf{b}\), where the number of extreme rays \(d\) becomes very large and is equal to \(2551\) for \(n=40\). The results are shown in Figure 11. Similar to the MLE, the estimates under both the simple Strawderman-Berger prior and the simple Horseshoe prior are piecewise functions. Figure 12 provides 18 of the \(d=2551\) estimates, one for each of the corresponding adjacency sets, for the priors incorporating adjacency. Since each of these sets appears almost equally often in the MCMC, we take an average of the \(2551\) estimates to obtain the final estimate for both priors using adjacency.
Figure 10: 18 estimates from the each of the \(d=18\) adjacency set for Horseshoe prior with adjacency (left) and Strawderman-Berger prior with adjacency (right) for \(n=50\) points from \(f(x)=\exp(x)\).
## 6 Discussion
In this chapter, we have introduced new priors on high-dimensional closed convex cones where most of the mass is on lower dimensional sets on the boundary. The priors facilitate Bayesian estimation of constrained parameters. While the motivating example is estimation of a constrained normal mean vector, the application to non-parametric estimation of shape-restricted functions shows that the priors can easily be applied to a regression model. In fact, they can be used for inference on any parameter vector with linear inequality constraints. For now, we have shown applications with inequality restrictions on the parameters, but the notion of sparsity is related to having several of the inequalities reduce to equalities at the true value of the parameter. While in the present set up these equality constraints are not necessarily binding, many examples where equality constraints are present as hard constraints in addition to inequality constraints can also be incorporated in the proposed method. Another interesting application of our work is testing \(H_{0}:\mathbf{A}\mathbf{\mu}=\mathbf{0}\) versus \(H_{1}:\mathbf{A}\mathbf{\mu}\geq\mathbf{0}\) using Bayesian model comparison. When \(\mathbf{A}=I\), the problem reduces to testing the origin against the non-negative orthant, and the Likelihood Ratio Test is much easier to compute than for a general \(\mathbf{A}\). The projection of the data vector to the cone will also lie on one of the lower dimensional faces and is the maximum
Figure 11: Bayes estimates for Horseshoe prior and Strawderman-Berger prior with adjacency for \(n=40\) points from \(f(x)=50\ \frac{1}{\sigma}\phi\big{(}\frac{x}{\sigma}\big{)}\).
likelihood estimator. In general the projection may be hard to compute, but in principle the Bayesian posterior should concentrate around the Euclidean projection. Bayesian recovery results for the true clique and posterior concentration results need to be investigated.
|
2306.02752 | Accelerated particle beams in a 3D simulation of the quiet Sun. Lower
atmospheric spectral diagnostics | Nanoflare heating through small-scale magnetic reconnection events is one of
the prime candidates to explain heating of the solar corona. However, direct
signatures of nanoflares are difficult to determine, and unambiguous
observational evidence is still lacking. Numerical models that include
accelerated electrons and can reproduce flaring conditions are essential in
understanding how low-energetic events act as a heating mechanism of the
corona, and how such events are able to produce signatures in the spectral
lines that can be detected through observations. We investigate the effects of
accelerated electrons in synthetic spectra from a 3D radiative
magnetohydrodynamics simulation to better understand small-scale heating events
and their impact on the solar atmosphere. We synthesised the chromospheric Ca
II and Mg II lines and the transition region Si IV resonance lines from a quiet
Sun numerical simulation that includes accelerated electrons. We calculated the
contribution function to the intensity to better understand how the lines are
formed, and what factors are contributing to the detailed shape of the spectral
profiles. The synthetic spectra are highly affected by variations in
temperature and vertical velocity. Beam heating exceeds conductive heating at
the heights where the spectral lines form, indicating that the electrons should
contribute to the heating of the lower atmosphere and hence affect the line
profiles. However, we find that it is difficult to determine specific
signatures from the non-thermal electrons due to the complexity of the
atmospheric response to the heating in combination with the relatively low
energy output (~1e21 erg/s). Even so, our results contribute to understanding
small-scale heating events in the solar atmosphere, and give further guidance
to future observations. | H. Bakke, L. Frogner, L. Rouppe van der Voort, B. V. Gudiksen, M. Carlsson | 2023-06-05T10:15:21Z | http://arxiv.org/abs/2306.02752v3 | # Accelerated particle beams in a 3D simulation of the quiet Sun
###### Abstract
Context: Nanoflare heating through small-scale magnetic reconnection events is one of the prime candidates to explain heating of the solar corona. However, direct signatures of nanoflares are difficult to determine, and unambiguous observational evidence is still lacking. Numerical models that include accelerated electrons and can reproduce flaring conditions are essential in understanding how low-energetic events act as a heating mechanism of the corona, and how such events are able to produce signatures in the spectral lines that can be detected through observations.
Aims: We investigate the effects of accelerated electrons in synthetic spectra from a 3D radiative magnetohydrodynamics simulation to better understand small-scale heating events and their impact on the solar atmosphere.
Methods: We synthesised the chromospheric Ca ii and Mg ii lines and the transition region Si iv resonance lines from a quiet Sun numerical simulation that includes accelerated electrons. We calculated the contribution function to the intensity to better understand how the lines are formed, and what factors are contributing to the detailed shape of the spectral profiles.
Results: The synthetic spectra are highly affected by variations in temperature and vertical velocity. Beam heating exceeds conductive heating at the heights where the spectral lines form, indicating that the electrons should contribute to the heating of the lower atmosphere and hence affect the line profiles. However, we find that it is difficult to determine specific signatures from the non-thermal electrons due to the complexity of the atmospheric response to the heating in combination with the relatively low energy output (\(\sim 10^{21}\) erg s\({}^{-1}\)). Even so, our results contribute to understanding small-scale heating events in the solar atmosphere, and give further guidance to future observations.
## 1 Introduction
Nanoflares are heating events associated with small-scale magnetic reconnection in the solar atmosphere. They release energy in the range \(10^{24}\)-\(10^{25}\) erg, and they are believed to occur frequently throughout the atmosphere. The nanoflare heating mechanism is one of the prime candidates in understanding why the corona is heated to millions of Kelvin (Parker, 1988). It is generally accepted that flare energy is transported by electrons accelerated to non-thermal energies as magnetic field lines reconnect. The accelerated electrons transfer energy to the ambient plasma through Coulomb collisions as they travel along the magnetic field (Brown, 1971; Emslie, 1978; Holman et al., 2011), leaving observable signatures in the spectral lines that form in the sites where the energy is deposited. Signatures of non-thermal electrons are found in observed hard X-ray spectra from active region flares. However, X-ray observations of small-scale events with nanoflare energies are rare because the signatures are typically below the detection threshold (although, see e.g. Wright et al., 2017; Glesener et al., 2020; Cooper et al., 2021). As a result, the presence and properties of nanoflares in the solar atmosphere remain poorly known.
Heating signatures from energetic events in the corona are difficult to observe directly, as the high conductivity of coronal plasma has a tendency to smear the signatures out. It is therefore beneficial to look for signatures of heating release in the atmospheric layers that are responsive to heating, such as the transition region (TR) and chromosphere. Non-thermal electrons accelerated by magnetic reconnection in the corona collide with the dense TR and chromospheric plasma, giving rise to changes in temperature and density. However, looking for specific signatures is problematic as nanoflares are difficult to observe. Through numerical simulations, Testa et al. (2014) have found that non-thermal electrons are necessary to reproduce blueshifts in the Si iv 140.3 nm line observed with the Interface Region Imaging Spectrograph (IRIS; De Pontieu et al., 2014) in small heating events at the footpoints of transient hot loops. By exploring a wide range of parameters, Polito et al. (2018) have carried out an extensive numerical investigation to better understand and interpret TR observations, and Bakke et al. (2022) have extended the analysis to include spectral lines that form deeper in the atmosphere and are readily accessible by ground-based telescopes. In the latter, the analysis of chromospheric spectra in 1D simulations of nanoflares showed that the lines forming deeper in the chromosphere experience similar effects as the lines forming higher up. Testa et al. (2020) have further demonstrated that observations of high variability (\(\lesssim 60\) s) at the footpoints of hot coronal loops (\(\sim 8\)-\(10\) MK) in active region (AR) cores provide powerful diagnostics of the properties of coronal heating and energy transport when combined with numerical simulations.
In this work, we investigate the effect of accelerated electrons in a 3D radiative magnetohydrodynamics (MHD) simulation by analysing synthetic chromospheric Ca ii 854.2 nm, Ca ii H and K, and Mg ii h and k spectral lines as well as the TR Si iv resonance lines. The simulation is based on the 3D MHD Bifrost
model introduced in Bakke et al. (2018), which has been further developed in Frogner et al. (2020) and Frogner & Gudiksen (2022) to include a more accurate method for calculating the electron beam heating. We explore the impact of non-thermal electrons by comparing the spectral line analysis from different regions that are both subject and not subject to beam heating. In the analysis we investigate the Doppler shifts of the spectra and the formation of line intensity.
## 2 Method
### Bifrost simulation
The numerical simulation was performed using the Bifrost code (Gudiksen et al., 2011), which solves the resistive MHD partial differential equations in 3D, with radiative transfer and field-aligned thermal conduction accounted for in the energy equation. In the photosphere and lower chromosphere, Bifrost solves the optically thick radiative transfer equation with scattering (Hayek et al., 2010). In the upper chromosphere and TR, it approximates non-LTE radiative losses based on parameterised results from 1D radiative hydrodynamic simulations (Carlsson & Leenaarts, 2012). Finally, it calculates optically thin radiative losses in the corona.
Bifrost was developed with a high degree of modularity, allowing users to extend the code with additional physics. In Bakke et al. (2018), we presented a method for treating energy transport by accelerated electrons in Bifrost simulations, which we expanded upon and discussed in depth in Frogner et al. (2020) and Frogner & Gudiksen (2022). The first step of the method is to detect locations where the magnetic field reconnects using a criterion for reconnection in MHD theory (Biskamp, 2005). The second step is to estimate the energy distributions of the non-thermal electrons expected to be accelerated at each reconnection site. We assume that the distribution is a power-law, with a lower cut-off energy \(E_{\rm c}\) corresponding to the intersection of the power-law with the local thermal distribution. The lower cut-off energy is not fixed, but is roughly proportional to temperature (for example, \(10^{6}\) K corresponds to a lower cut-off energy of the order of 1 keV). We determine the total non-thermal energy flux assuming that a fixed fraction \(p\) of the released magnetic energy, which otherwise would be converted entirely into resistive heating, goes into accelerating electrons. The value of \(p=0.2\) was chosen based on flare observations suggesting that typical values of \(p\) range from 10% (Emslie et al., 2004, 2012) up to 50% (Lin & Hudson, 1971). Finally, we leave the power-law index \(\delta\) as a free global parameter. This parameter largely affects the resulting distribution of deposited electron beam energy. We used a value of \(\delta=6\), which has a faster rate of deposited energy compared to smaller values. We note that a larger \(\delta\) leads to a smaller penetration depth of the beam (Allred et al., 2015). The value of \(\delta\) is supported by observational evidence showing an increase in power-law index with decreasing flare energy (Hannah et al., 2011). A higher value of \(\delta\) is also motivated by the 1D flare simulations analysed in Bakke et al. (2022), where \(\delta=7\) was used for the non-thermal electron energy distribution. The final step is to trace the trajectory of each non-thermal electron beam along the magnetic field while computing the heating of the local plasma due to Coulomb collisions along the way. For this, we use an analytical expression accounting for the systematic velocity change of beam particles due to collisions with ambient hydrogen atoms and free electrons (Emslie, 1978; Hawley & Fisher, 1994). During the simulation, we continually computed the transfer of energy by the beams in this way and included it as a term in the MHD energy equation.
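As an illustration of how such a beam spectrum can be normalised, the sketch below fixes the constant of a power-law number-flux spectrum so that its integrated energy flux equals the fraction \(p\) of the released magnetic energy. This is a schematic of the recipe described above, not the Bifrost implementation; the released energy flux and the upper integration cut-off are assumed values for the example.

```python
import numpy as np

def powerlaw_norm(energy_flux, E_c, delta):
    """Normalisation A of a number-flux spectrum F(E) = A * E**(-delta) for
    E >= E_c (delta > 2), such that int_{E_c}^{inf} E * F(E) dE = energy_flux."""
    return energy_flux * (delta - 2.0) * E_c**(delta - 2.0)

keV = 1.602e-9                    # erg per keV
p, Q_released = 0.2, 1.0e9        # fraction to electrons; erg cm^-2 s^-1 (assumed)
E_c, delta = 1.0 * keV, 6.0       # ~1 keV cut-off and power-law index from the text

A = powerlaw_norm(p * Q_released, E_c, delta)

# Sanity check: numerically integrate the energy flux up to 10 MeV.
E = np.geomspace(E_c, 1e4 * keV, 200_000)
y = A * E**(-delta) * E
print(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E)))   # ~ p * Q_released
```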
The particular atmospheric simulation considered in this paper encompasses a horizontal area of \(24\times 24\) Mm and a vertical span from 2.5 Mm below the photosphere to 14.3 Mm above it, in the corona. The simulation has a resolution of \(768\times 768\times 768\) grid cells, with a uniform grid cell extent of 31 km in the horizontal directions and uneven vertical grid cell extents that vary with height in the atmosphere. Due to the need to resolve sudden local variations near the transition region, the grid cells are about 12 km tall between the photosphere and the height of 4 Mm. From this region, the vertical grid cell extent increases evenly to 21 km at the bottom of the simulation box and to 80 km at the top. In the simulation, heating at the bottom boundary in combination with radiative cooling in the photosphere produces convective motions. The chromosphere and corona are heated by magnetic reconnection and acoustic shocks resulting from these motions. At this point, the Bifrost simulation qualifies as rather quiet, and electron beams from this level of reconnection can be regarded as weak. In order to perturb the system with more magnetic energy and produce more energetic reconnection events, we introduced a large scale magnetic flux emergence. To emulate flux emergence, a sheet with magnetic field strength of 2000 G oriented in the \(y\)-direction was injected at the bottom boundary. As it rose up through the convection zone and coalesced in the convective downflow regions, the injected field organised into a largely bipolar loop system pushing up on the ambient \(x\)-directed magnetic field that was originally present in the corona. Reconnection between the ambient and injected field then resulted in minor energy release and particle acceleration events throughout the corona. This setup made the simulation more active, but still relatively quiet as compared to solar active regions with high flaring activity. We note that the original setup of this Bifrost simulation was developed and used by Hansteen et al. (2019), with the aim of studying the generation of Ellerman bombs and UV bursts through flux emergence.
For this paper, we consider a series of 36 snapshots with 1 s intervals, where the simulation time step is \(10^{-3}\) s. This simulation starts 8220 s after the magnetic flux sheet has been injected
Figure 1: Vertical magnetic field \(B_{z}\) in the photosphere at \(t^{\prime}=0\) s (8220 s after the magnetic flux sheet has been injected).
at the bottom boundary. The vertical component of the magnetic field in the photosphere at this time can be seen in Fig. 1, where \(t^{\prime}=0\) s is the first time step where the electrons are injected. The total power of accelerated electrons in a single simulation snapshot is roughly \(10^{24}\) erg s\({}^{-1}\), and individual beams along the magnetic field produce approximately \(3\cdot 10^{21}\) erg s\({}^{-1}\) of non-thermal power (Frogner et al., 2020). A typical small-scale beam heating event is estimated to release \(10^{20}\)-\(10^{24}\) erg of non-thermal energy in the lower atmosphere assuming that the event lasts around 100 s. With 36 s of simulation time, we can assume that the energy released by heating events is on the lower end of this range, and hence a few orders of magnitude less than the typical nanoflare energy (which is about \(10^{24}\)-\(10^{25}\) erg). But even though the events are weak, they are highly abundant, and a significant number of small beam heating events are likely to occur in the chromosphere at any given time. We note that it is difficult to provide a meaningful number of events as they have a tendency to lose their identity in the simulation due to the relatively low energy released. However, see Kanella and Gudiksen (2017) for a method of identifying coronal heating events in a Bifrost simulation by detecting 3D volumes of high Joule heating to find locations with current sheets.
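A back-of-the-envelope version of this energy budget, using the per-beam power and event lifetime quoted above:

```python
beam_power = 3e21  # erg/s, non-thermal power of an individual beam (from the text)
print(f"36 s of simulated time:  {beam_power * 36:.1e} erg")    # ~1.1e23 erg
print(f"100 s event lifetime:    {beam_power * 100:.1e} erg")   # ~3.0e23 erg
```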
### Spectral synthesis with RH
The spectral lines were synthesised using the RH1.5D radiative transfer code (Uitenbroek, 2001; Pereira and Uitenbroek, 2015), which calculates spectra from 1D, 2D or 3D numerical simulations on a column-by-column basis. RH1.5D solves the non-LTE radiative transfer for spectral lines in partial redistribution (PRD), which is important in the synthesis of chromospheric lines where a more accurate treatment of photon scattering is required. While PRD is not strictly necessary in the synthesis of all our chosen spectra, RH1.5D can still be employed as a general non-LTE code. In general, PRD is assumed in the synthesis of Mg ii h and k (Milkey and Mihalas, 1974; Leenaarts et al., 2013a,b) and Ca ii H and K (Vardavas and Cram, 1974; Shine et al., 1975; Bjorgen et al., 2018), but is less important for Ca ii 854.2 nm and the Si iv resonance lines.
Each atmosphere from the Bifrost snapshots was used as input to the RH code. We did not include a micro-turbulence term in the spectral synthesis as we wanted to focus on the effect from velocities in the Bifrost simulation. The \(z\)-axis of the input atmospheres includes the heights from the surface to the corona, excluding the convection zone as it is not relevant for the line synthesis. We selected all columns in \(x\)- and \(y\)-direction when synthesising the spectra from the main snapshot at \(t^{\prime}=28\) s analysed in this study, covering a domain of \(768\times 768\times 670\) pixels. The synthetic spectra for the entire time series were calculated using a coarser grid (\(384\times 384\times 670\) pixels) for the model atmospheres in order to reduce the computation time. The coarser grid does not affect the spectral analysis as the level of change from one grid cell to its neighbouring ones is almost negligible. A similar coarser sampling of a Bifrost simulation domain was performed in Leenaarts et al. (2013). We used the default 5 level-plus-continuum H i and Ca ii atoms, the 10 level-plus-continuum Mg ii atom from Leenaarts et al. (2013), and the 30 level-plus-continuum Si iv atom from Kerr et al. (2019). The latter was used to allow the Si iv resonance lines to form under optically thick conditions, as the model silicon atom includes potential opacity effects. It is common to assume that the Si iv emission is formed under optically thin conditions, and hence compute the emissivity without calculating the full radiative transfer. Through 1D flare modelling, Kerr et al. (2019) found that optical depth effects are considerable in the produced Si iv emission, and that the lines can form under optically thick conditions even for weaker flares with electron energy flux down to \(F\approx 5\cdot 10^{9}\) erg cm\({}^{-2}\) s\({}^{-1}\). We note that the model atom was constructed for use on simulated flares in RADYN (Carlsson and Stein, 1992, 1995, 1997; Allred et al., 2015), a 1D radiative transfer code that allows for flare investigation in an isolated system. The model atom employs a photospheric value \(A_{\rm Si}=7.51\) for the silicon abundance (Asplund et al., 2009), even though other works (e.g. Olluri et al., 2015; Martinez-Sykora et al., 2016) have argued in favour of using coronal abundances for silicon and other low first ionisation potential (low-FIP) elements. Using coronal abundances is based on the findings that low-FIP elements tend to be overabundant in the TR and corona (Laming, 2004). However, Warren et al. (2016) have shown that low-FIP elements have a composition that is close to that of the photosphere during impulsive heating events. In Bifrost, it is possible that the model silicon atom is more accurate in regions that are subject to electron acceleration, but also at the sites where the electron energy is deposited. However, we keep in mind that the silicon abundance might not be accurate in areas that are not subject to heating.
The synthetic spectra were calculated for 36 s, corresponding to 36 Bifrost snapshots with a 1 s time interval. IRIS observations of TR moss show that the lifetime of short-lived brightenings resulting from coronal nanoflare heating at the footpoints of hot transient coronal loops varies between 10-30 s (Testa et al., 2013, 2014, 2020). With the Bifrost time series, we should be able to study the effects of non-thermal electrons on a similar timescale. However, we note that with the current energy distribution for the non-thermal electrons, it is difficult to obtain a strong signal in the synthetic spectra.
### Optically thin calculation of Si iv emission
While the RH spectral line synthesis allows for the Si iv resonance lines to form under both optically thick and thin conditions, we also include a more straightforward approach to calculate the Si iv emission under optically thin conditions in order to compare potential differences and put further constraints on the interpretation of observations. The approach we used is similar to that of Olluri et al. (2015), where we calculated emissivities for the relevant Si iv energy transitions using atomic data from the CHIANTI database (Dere et al., 1997; Del Zanna et al., 2021). We did this for a range of temperatures and electron densities representative of the conditions in the corona and upper TR of the simulated atmosphere to create a lookup table enabling us to efficiently obtain the emissivity at every location in the simulation domain. We note that emissivities for temperatures lower than 10 000 K are set to zero in CHIANTI. In this approach, we used the coronal abundance \(A_{\rm Si}=8.10\) (Feldman, 1992). To determine the intensities formed in the optically thin regime as they emerge from the atmosphere, we integrated the emissivities in each vertical column of the atmosphere. We also computed the Doppler shift and width of the synthetic spectral line by evaluating the first and second moment of the locally emitted line profile with respect to Doppler shift from the line centre, and integrated this over height. We assume that the locally emitted radiation has a Gaussian spectral profile with thermal broadening and a Doppler shift depending on the local plasma velocity. The optically thin calculation is also significantly cheaper in computational terms than the RH spectral synthesis.
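The moment computation can be sketched as follows. This is a schematic of the procedure just described, not the code used for the paper; the silicon ion mass is an assumed constant, and CGS units are assumed throughout.

```python
import numpy as np

def thin_line_diagnostics(z, eps, v_z, T, m_ion=28.09 * 1.6726e-24):
    """Optically thin diagnostics from emissivities eps(z) along a vertical
    column: emergent intensity, emissivity-weighted Doppler shift, and line
    width. Locally emitted profiles are Gaussians with thermal broadening,
    centred at the local plasma velocity v_z."""
    dz = np.abs(np.gradient(z))
    I = np.sum(eps * dz)                 # integrate emissivity over height
    w = eps * dz / I                     # emission weight of each height
    v_shift = np.sum(w * v_z)            # first moment: net Doppler shift
    kB = 1.3807e-16
    sigma2_th = kB * T / m_ion           # velocity variance of a thermal Gaussian
    var = np.sum(w * (sigma2_th + (v_z - v_shift) ** 2))
    return I, v_shift, np.sqrt(var)      # second moment gives the line width
```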
### Contribution function to the line intensity
The spectral diagnostics consisted of analysing the contribution function to the emergent intensity. The contribution function can be used to explore which parts of the atmosphere contributes to the line formation. Following Carlsson & Stein (1997), the contribution function was calculated as
\[C_{I_{\nu}}(z)\equiv\frac{\mathrm{d}I_{\nu}(z)}{\mathrm{d}z}=S_{\nu}\ \tau_{\nu}\mathrm{e}^{-\tau_{\nu}}\ \frac{\chi_{\nu}}{\tau_{\nu}}. \tag{1}\]
The first term on the right-hand side gives the total source function \(S_{\nu}\). Here, \(S_{\nu}\) is dependent on frequency because we assume PRD. The next term is the optical depth factor \(\tau_{\nu}\mathrm{e}^{-\tau_{\nu}}\), which represents the Eddington-Barbier part of the contribution function. The optical depth factor has a maximum at \(\tau_{\nu}=1\). The final term, \(\chi_{\nu}/\tau_{\nu}\), is the ratio of the opacity to the optical depth. The term is responsible for line asymmetries due to its sensitivity to velocity gradients in the atmosphere. In the presence of strong velocity gradients, the opacity is typically large at small optical depths, and \(\chi_{\nu}/\tau_{\nu}\) is the dominant factor in the contribution function.
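A schematic implementation of Eq. (1) for a single frequency along a vertical column is given below; it is a sketch under the assumptions that height increases upwards and that the opacity is given per cm.

```python
import numpy as np

def contribution_function(z, chi, S):
    """C_I(z) = S_nu * (tau_nu * exp(-tau_nu)) * (chi_nu / tau_nu), Eq. (1),
    for one frequency. z: height [cm], increasing upwards; chi: opacity
    [cm^-1]; S: source function. The factorisation mirrors Eq. (1); the
    product reduces analytically to S * chi * exp(-tau)."""
    dz = np.abs(np.gradient(z))
    # Optical depth measured from the top of the column downwards.
    tau = np.cumsum((chi * dz)[::-1])[::-1]
    C = S * (tau * np.exp(-tau)) * (chi / tau)
    I = np.sum(C * dz)        # emergent intensity: integral of C over height
    return tau, C, I
```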
### Locations of interest
The locations of interest were selected based on the electron acceleration regions. Figure 2 shows the net electron beam heating power integrated vertically (upper panel) and horizontally (lower panel) over the simulation domain. The electrons are mostly accelerated along the magnetic field, where negative average beam heating power (blue regions) indicates where part of the energy is transported away from the reconnection site. As shown in the upper panel, we have chosen three areas (orange, green, and blue circles) at magnetic field footpoints that are associated with field lines were electrons are accelerated, and a reference area (red circle) without beam impact. The crosses (upper panel) and dashed lines (lower panel) represent the specific locations L1, L2, L3, and L4 that we subsequently analyse in detail.
The specific locations were found using CRISPEX (Vissers & Rouppe van der Voort, 2012), which is a widget-based tool developed to browse and analyse large observational data sets. However, CRISPEX can also be used to analyse synthetic spectra from simulations by creating a data cube that is readable by the tool. We formatted the synthetic Mg ii k, Ca ii K, and Si iv 140.3 nm lines from Bifrost at \(t^{\prime}=28\ \mathrm{s}\) as a multidimensional data cube that is readable by CRISPEX, and chose locations based on the intensity and complexity of the profiles by browsing through the spectra within the different areas. We further note that the prominent low altitude current sheet at \((x,y)=(10,11)\) was found to produce an Ellerman bomb and UV burst in the detailed analysis by Hansteen et al. (2019).
## 3 Results
### Evolution of the Bifrost atmosphere
Figure 3 shows the time evolution of temperature \(T\), vertical velocity \(v_{z}\), and electron number density \(n_{\mathrm{e}}\) in the Bifrost simulation at 1 s intervals. The rows represent the different quantities, while the columns represent the specific locations shown in Fig. 2. The temperature in the four different panels does not experience significant increases or decreases over time. However, the atmospheric structure varies from location to location. This is better seen in Fig. 4, showing the temperature at a vertical cut in the \(xz\)-plane taken at the location of the \(y\)-coordinate of L1, L2, L3, and L4, at a single instance in time (\(t^{\prime}=28\ \mathrm{s}\)).
This makes it difficult to recognise an unambiguous footprint left by the electrons.
The temperature at L3 (panel (c)) does not change significantly over the duration of the simulation. The TR is located around 1.5 Mm, and panels (g) and (k) show that at this height there is a weak plasma upflow and an increase in electron number density. We note that the change over time is so small that the line of the last time step covers small variations. There are generally more visible changes over time in the corona compared to the lower atmosphere. The beam heating rate in panel (o) shows that the electrons are both accelerated and depositing their energy between 2.3-3.1 Mm, which is in the corona. This is because the cut-off energy \(E_{\rm C}\) at coronal heights is low (around 1 keV), and the electrons lose most of their energy through interactions with the coronal plasma. This leads us to suspect that particular features in the synthetic spectra are not caused by the electrons, as signatures are potentially only visible in coronal spectral lines and we focus here on spectral lines formed deeper in the atmosphere.
The temperature structure at L4 (see panel (d) in Figs. 3 and 4) is much more complex compared to the other locations. The cool plasma from the magnetic bubble intersects the column at several different heights, making the atmospheric structure difficult to analyse. Panel (p) shows that the beam heating rate is almost balanced out by the energy transferred to the electrons at the reconnection site. This means that the electrons that get accelerated at approximately 2 Mm deposit their energy right away, and do not contribute to any noticeable heating.
It is important to note that Fig. 3 shows the evolution of the atmosphere along the \(z\)-axis as seen from directly above, and not along the magnetic field lines. It is beneficial to choose pixel locations that are situated at magnetic field footpoints, as it is possible to detect spectral line signatures from electrons accelerated along the magnetic field connected to the particular footpoint. However, because of the low cross-field transport of energy, we do not expect a direct effect of heating by accelerated electrons to propagate along \(z\), because the effect is isolated to the specific field line where the heating is taking place. This is most likely not along \(z\), as the field lines can at best only be regarded as straight at the very bottom of the atmosphere. This is better illustrated in Fig. 5, which shows the angle between the field and the vertical direction, calculated as
\[\theta_{B}\equiv\tan^{-1}\!\left(\frac{\sqrt{B_{x}^{2}+B_{y}^{2}}}{|B_{z}|}\right)=\tan^{-1}\!\left(\frac{|B_{\rm h}|}{|B_{z}|}\right), \tag{2}\]
at the magnetic footpoint locations (L2, L3, and L4) over the duration of the simulation. The figure shows that the angle increases with \(z\), reaching 90\({}^{\circ}\) in the corona where the field lines are mostly horizontal. We note that at L3 and L4, the angle is 90\({}^{\circ}\) at \(z=0\) Mm because the magnetic field is highly complex in the convection zone, and the surface is not always located at exactly 0 Mm. This means that the magnetic field might not be aligned with \(z\) at 0 Mm, hence we see that the angle between \(z\) and the magnetic field is large rather than small. We also note that even though the angle is smaller at low heights (for instance around 1 Mm at L3), we are not able to see direct effects of
Figure 3: Evolution of temperature \(T\), vertical velocity \(v_{z}\), electron number density \(n_{\rm e}\), and beam heating rate \(Q_{\rm b}\) in the Bifrost simulation. The quantities are plotted in the range \(z\in[0,14]\) Mm at 1 s intervals for the duration of the time series. Each column represents the specific locations (L1, L2, L3, and L4) from the chosen areas. Negative (positive) velocities correspond to upflows (downflows). The insets in panels (n)–(p) show the electron beam heating in a sub-region, where the \(y\)-axes are limited to better show the details of the variations in \(Q_{\rm b}\).
non-thermal electrons potentially depositing their energy at these heights unless \(\theta_{B}\) is zero. The heating signatures from the electron beams seen in Fig. 3 (n)-(p) do not originate from vertical field lines that are aligned with \(z\), but rather from reconnection events along the magnetic field connected to the footpoints.
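Eq. (2) amounts to the following trivial sketch; the inputs may be scalars or arrays over the simulation grid.

```python
import numpy as np

def field_inclination(Bx, By, Bz):
    """Angle theta_B between the magnetic field and the vertical direction,
    Eq. (2), in degrees; 0 means a vertical field and 90 a horizontal one."""
    B_h = np.hypot(Bx, By)                       # horizontal field strength
    return np.degrees(np.arctan2(B_h, np.abs(Bz)))
```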
### Emission from synthetic spectra
Figure 6 represents the time evolution of the synthetic Ca ii 854.2 nm, Ca ii H, Mg ii k, and Si iv 140.3 nm lines at the four selected locations, where we have added individual line profiles at \(t^{\prime}=0\) s in each panel (orange line) to indicate what the spectra look like. We note that our findings from the Ca ii K, Mg ii h, and Si iv 139.4 nm lines are similar to Ca ii H, Mg ii k, and Si iv 140.3 nm, respectively, hence these results are not shown. At L1, the minimum intensity of Ca ii 854.2 nm at \(t^{\prime}=28\) s is redshifted to a value between 1 and 2 km s\({}^{-1}\). The line seems to narrow towards the end of the simulation, but the general shape of the profile persists. At \(t^{\prime}=28\) s, the minimum intensity of Ca ii H and Mg ii k and the single peak of Si iv 140.3 nm are redshifted to approximately +5 km s\({}^{-1}\). The Ca ii H line profile has increased emission in the blue wing and peak that becomes weaker over time, and there are small variations in the intensity of the line core and the red peak. Over time, the minimum intensity of Mg ii k and the single peak of Si iv 140.3 nm shifts periodically between approximately 0 km s\({}^{-1}\) and +5 km s\({}^{-1}\).
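The Doppler offsets quoted here follow from the usual wavelength-to-velocity conversion; a minimal sketch is given below, using the sign convention of the figures (positive offsets are redshifts).

```python
import numpy as np

C_KMS = 299_792.458  # speed of light [km/s]

def doppler_offset(wav, lambda0):
    """Doppler offset [km/s] of each wavelength sample relative to the rest
    wavelength lambda0 (same units as wav); positive values are redshifts."""
    return (wav - lambda0) / lambda0 * C_KMS

def core_shift(wav, profile, lambda0):
    """Doppler shift of the intensity minimum of an absorption profile."""
    return doppler_offset(wav, lambda0)[np.argmin(profile)]
```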
At the L2 location, the synthetic spectra are redshifted to varying degrees. The minimum intensities of the Ca ii and Mg ii lines are redshifted to a value between 1 and 2 km s\({}^{-1}\), while the Si iv line exhibits a much stronger redshift. The latter is also significantly broadened, most likely due to the downflows at TR heights. Si iv forms higher in the atmosphere compared to the other spectra, and it is therefore more likely to be affected by the strong downflow seen between 1-2 Mm in Fig. 3 (f). In the beginning of the simulation, there is strong emission in the absorption feature so that the profile almost looks single peaked. As time progresses, the spectral profile becomes broader and the line is redshifted up to approximately +30 km s\({}^{-1}\). From around 14 s, the red and blue peaks become more pronounced due to increased emission in these components. The Ca ii lines show increased emission in the red wing that is due to the weak downflowing velocity around 0.4 Mm (see Fig. 3 (f)). Over time, the red wings of the profiles become less broad. This feature is not seen in Mg ii k, suggesting that the line forms slightly above this height.
The evolution of the spectra at L3 shows that the different spectra are experiencing oscillations. This behaviour is not detectable in the Ca ii 854.2 nm panel, but further investigation shows that this line, along with the other spectral lines, is subject to shock waves passing through the atmosphere. Around the formation height of Ca ii 854.2 nm, we see temperature oscillations varying between 6 500 and 6 700 K, but the intensity amplitude is too small to see because of the large intensity range between the core and the wings. At the formation heights of Ca ii H, Mg ii k, and Si iv 140.3 nm, the temperature oscillations vary between 7 200 and 7 800 K, 8 500 and 11 000 K, and 8 000 and 14 000 K, respectively. These values are low for Si iv, but we note that the temperatures are taken at the \(\tau=1\) height and that the contribution function covers a wider range. The temperature oscillations at the formation height of Si iv 140.3 nm exhibit the largest variation, hence the line is showing the strongest modulation in intensity. As the difference in minimum and maximum temperature decreases, the oscillating pattern in the intensity panels becomes weaker.
At L4, the Ca ii spectral profiles have line cores at approximately 0 km s\({}^{-1}\) at the beginning of the simulation that are redshifted to a value between 1 and 2 km s\({}^{-1}\) over time. Both profiles are double peaked with a slightly more intense blue peak, but the Ca ii 854.2 nm profile becomes single peaked and less intense as time progresses. The Ca ii H profile keeps its double peaked shape, but the red peak becomes less intense from around 20 s. The Mg ii k profile is similar to that of L2, with a blue peak that is more intense than the red peak and an absorption feature that is redshifted to a value between 1 and 2 km s\({}^{-1}\). At \(t^{\prime}=12\) s, there is a sudden increase in intensity of the entire line profile that is also faintly seen in the Mg ii k panel at the L2 location. This is due to a sudden increase in temperature at the formation height of the spectral line. The Si iv 140.3 nm line profile is severely broadened over the entire duration of the simulation. The initial profile (orange line) has a red and blue peak and a central reversal of the line core at approximately +6 km s\({}^{-1}\) that has a higher intensity than the peaks.
Figure 4: Vertical cut in the \(xz\)-plane of the temperature structure at \(t^{\prime}=28\) s. The vertical cut is taken at the location of the \(y\)-coordinate of L1, L2, L3, and L4. The dashed lines are drawn at the \(x\)-coordinate of the different locations.
Figure 5: Angle between the magnetic field and the vertical direction as a function of \(z\) for the entire Bifrost time series at the L2, L3, and L4 locations.
Similar to the L3 location, the Si iv line is subject to shock waves, where the temperature oscillations around its formation height vary between 26 000 and 29 000 K. At around 26 s, the temperature stabilises and the shape of the profile is similar to the initial profile (\(t^{\prime}=0\) s).
Figure 6 shows that the strongest emission of the different spectra is found at L2. This location is promising in terms of electron beam heating, as it is located at the footpoint that is connected to the longest coronal loops. The upper panel in Fig. 2 shows a large number of electron acceleration sites that are connected to the particular magnetic field footpoint. Even though the atmospheric response to the electrons has proven difficult to single out, the TR and chromospheric spectra might still be affected
Figure 6: Spectral evolution of Ca ii 854.2 nm, Ca ii H, Mg ii k, and Si iv 140.3 nm at the locations of interest. The \(x\)-axes are in units of Doppler offset, where negative (positive) velocities indicate blueshifts (redshifts). The intensity is shown in units of brightness temperature. The orange line profiles are taken at \(t^{\prime}=0\) s, where the highest (lowest) intensity of the profiles in each row corresponds to the maximum (minimum) intensity of the respective colourbars. We note that the intensity of Si iv 140.3 nm at the L1 location has a maximum of \(I_{\nu}=7\) kK in order to visually enhance the features of the relatively weak emission. The orange line profiles give a better indication of the difference in intensity across the Si iv row.
by the electrons depositing their energy along the loops. Figure 3 (n) shows that the electrons along the line of sight deposit most of their energy at TR and chromospheric heights, hence it is possible that the increased intensity seen in the L2 column of Fig. 6 is caused by local heating events.
### Line formation
Figure 7 shows the Ca ii 854.2 nm, Ca ii H, and Mg ii k line cores (top row) and \(\tau=1\) heights (bottom row) for the entire simulation domain at a single snapshot in time (\(t^{\prime}=28\) s). For simplicity, the line core is defined at \(v_{\rm D}=0\) (see the dashed line in Fig. 6), even though the concept of a single line core vanishes when analysing complex atmospheres with multi-layered structures. We have chosen one of the later Bifrost snapshots so that the electrons will have affected the atmosphere for a longer duration (the electrons are present from \(t^{\prime}=0\) s). Panels (a)-(c) show emission of the line cores at the magnetic field footpoints. The prominent current sheet at \((x,y)=(10,11)\) has enhanced intensity in the line cores, see Hansteen et al. (2019) for a detailed analysis of the associated Ellerman bomb and UV burst emission. The emission from the spectral lines consists of long strands outlining the loops above the flux emergence region. This region is seen in panels (d)-(f) as the structure with the highest \(z_{\tau=1}\) values. The structure is not as clearly outlined in panel (d) as in the other panels. This is because Ca ii 854.2 nm forms deeper in the atmosphere, hence there is less emission of the line core above the flux emergence region. This is also seen in panel (a), where there are fewer long emission strands outlining the magnetic bubble compared to the other upper panels. The Ca ii H and Mg ii k line cores form higher in the chromosphere, hence larger portions of the flux emergence region are outlined in the lower panels.
Figure 8 shows the integrated Si iv 140.3 nm intensity calculated using two different line synthesis approaches, as well as the ratio of the Si iv 139.4 nm to Si iv 140.3 nm line. In the left panel, the emission is calculated employing an optically thin approach using CHIANTI. The middle panel shows the intensity as output from the RH1.5D code allowing the Si iv resonance lines to form under both optically thin and thick conditions. The intensity in both panels is normalised between 0 and 1 as we aim to do a qualitative comparison between the two intensity maps. A detailed quantitative comparison is difficult given differences between the two approaches in for example silicon abundance and temperature coverage. Hence Fig. 8 aims to visualise the impact of using a model atom that includes potential opacity effects. At first glance, the integrated intensity maps look similar. The general structure of the simulation is outlined by the intensity in both panels, with long strands spanning the flux emergence region. However, the structures are smoother in the left panel where the emission forms under optically thin conditions. The middle panel has features that appear to be below the loops that outline the flux emergence region. These features are either weak or not seen in the left panel, suggesting that it is necessary to include potential opacity effects when calculating Si iv synthetic spectra.
Figure 7: Ca ii 854.2 nm, Ca ii H, and Mg ii k nominal line core intensity (upper panels) and \(\tau=1\) heights (lower panels) at \(t^{\prime}=28\) s. The colourbar of the intensity is clipped at 8.5 kK to emphasise the less bright features of the Ca ii line cores.
This is further supported by the right panel, which shows the ratio of the Si iv 139.4 nm to Si iv 140.3 nm line. In the optically thin limit the ratio should be equal to two, which is the ratio of their oscillator strengths. While the figure shows that most of the Si iv lines form under optically thin conditions, there are also darker areas where the ratio is below two. Similar results were discovered in Skan et al. (2023), where the wavelength integrated Si iv ratio was found to be between 1.6 and 1.8 at four different locations in a loop-like structure in a MURaM simulation. Our results show that a few of the darker areas where the ratio is below two stretch along the strands above the flux emergence region, which is consistent with their findings. Our results underline the risk of assuming that all Si iv emission forms under optically thin conditions in the solar atmosphere, and motivate our choice of using a more advanced approach when calculating the Si iv synthetic spectra.
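The ratio map in the right panel amounts to the following sketch; the threshold of two is the optically thin value quoted above.

```python
import numpy as np

def si_iv_ratio(I_1394, I_1403):
    """Ratio map of wavelength-integrated Si IV intensities. Values close to 2
    (the ratio of the oscillator strengths) indicate optically thin formation;
    values clearly below 2 hint at opacity effects."""
    ratio = I_1394 / I_1403
    return ratio, ratio < 2.0   # ratio map and a crude optically thick flag
```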
### Contribution function to the intensity
Figures 9-11 show four \(2\times 2\) diagrams of the intensity formation of Ca ii 854.2 nm, Ca ii H, Mg ii k, and Si iv 140.3 nm at the L2, L3, and L4 locations at \(t^{\prime}=28\) s. The panels in each subfigure represent the individual terms in Eq. 1 as well as the total contribution function to the line intensity.
Panels (a) and (b) in Fig. 9 show the intensity formation of Ca ii 854.2 nm and Ca ii H, respectively. There are two distinct downward velocity gradients that are reflected in the \(\chi_{\nu}/\tau_{\nu}\) term around 0.5 Mm and 1.05 Mm. The velocity gradient at 1.05 Mm seems to be located just above the maximum formation height of both spectral lines, and does not affect the contribution to the line intensity. The velocity gradient at 0.5 Mm (see also Fig. 3 (b)) causes small emission features in the red wing, which can be seen in the emergent intensity profile that is shown in the lower right panels. The source function is coupled to the Planck function from the photosphere up to 0.3 Mm, where a narrow cold region is seen as a dip in the Planck function and as a darker band in \(S_{\nu}\). Above the cold region, the source function is more closely following the Planck function again. For Ca ii 854.2 nm, the functions decouple around 0.6 Mm, while for Ca ii H this does not happen until 0.8 Mm. There is a strong increase in the Planck function, and hence also temperature, around 1 Mm. This gives rise to an increase in the source function around the maximum height of formation for the two lines. In panel (a), the increase in source function is responsible for the central reversal of the Ca ii 854.2 nm line core (\(v_{\rm D}=0\)), while the absorption feature at \(-1\) km s\({}^{-1}\) is caused by the decline in source function just above the narrow cold band. In panel (b), the increase in source function gives rise to two emission peaks in the Ca ii H intensity profile. The line core is formed at the maximum height of the optical depth unity curve (\(z=1\) Mm), where the central reversal is caused by the source function declining from the peak at around 0.95 Mm.
Figure 9 (c) shows the intensity formation of Mg ii k. The velocity gradients at 0.5 Mm and 1.05 Mm are reflected in the \(\chi_{\nu}/\tau_{\nu}\) term. The gradient at 0.5 Mm occurs below the formation height of the spectral profile, while the gradient at 1.05 Mm does not affect the total contribution function significantly because the other terms are too small. In turn, the contribution function is dominated by the peak in source function, which is caused by the sudden increase in temperature seen in the Planck function. The central emission is caused by the increasing source function, while there is a shallow central absorption caused by a declining source function in line centre.
Figure 9 (d) shows the formation of the Si iv 140.3 nm line. The downward velocity gradient at 1.05 Mm is clearly seen in the \(\chi_{\nu}/\tau_{\nu}\) term. The \(\tau=1\) height reaches high altitude, up to 1.7 Mm, at about +35 km s\({}^{-1}\) Doppler offset. We note that all the spectral profiles presented in Fig. 9 are shifted to the red to varying degrees. The strongest redshift is seen in Si iv, which has an average redshift of +6 km s\({}^{-1}\) in the entire box. This is due to the overall positive velocity at the formation height shifting the profile to the red. In panel (d), the strong velocity gradient also causes a broadening of the asymmetric profile. The contribution function panel in the lower right shows that the part of the profile formed above approximately 1 Mm is formed under optically thick conditions, as this is where the \(\tau=1\) curve follows the peak of the contribution function. Below 1 Mm on the other hand, the \(\tau=1\) curve departs from the contribution function, which means that the formation for this part of the line is under optically thin conditions.
Figure 10 represents the intensity formation of the four different spectral lines at the L3 location. Panel (a) shows that Ca ii 854.2 nm is less affected by PRD than the other lines, as the source function, which appears like a horizontal band, shows little variation along the \(x\)-axis. The maximum height of the optical depth unity curve is slightly below the height of both
Figure 8: Integrated Si iv 140.3 nm intensity calculated using CHIANTI (left, marked as optically thin) and RH1.5D (middle, marked as optically thin+thick), and ratio of the Si iv resonance lines (right). The three panels show their respective quantities at \(t^{\prime}=28\) s. The integrated intensity maps are normalised between 0 and 1. The \(I_{1394}/I_{1403}\) ratio is calculated using the intensity output from the RH code.
the downward velocity gradient and the sudden temperature increase, hence these features do not contribute to the intensity and are not reflected in the emergent intensity profile. The intensity of the line is therefore just a map of the source function at optical depth unity. The source function decreases with height up to where the line core forms, causing the overall absorption profile without emission features.
Panels (b) and (c) in Fig. 10 show the formation of Ca ii H and Mg ii k, respectively. The maximum height of the Ca ii H optical depth unity curve is just below that of Mg ii k. The Ca ii H
Figure 9: Intensity formation of the Ca ii 854.2 nm (a), Ca ii H (b), Mg ii k (c), and Si iv 140.3 nm (d) spectral lines at the L2 location at \(t^{\prime}=28\) s. Each subfigure consists of four panels, where the quantities given in the top left corners are shown in greyscale as functions of frequency from the line centre (in units of Doppler offset) and atmospheric height. The \(\tau_{\nu}=1\) height (purple solid) and vertical velocity (red dotted) are displayed in all panels. Negative (positive) velocities correspond to upflows (downflows). The top right panels display the source function at \(v_{\rm D}=0\) (yellow dashed) and Planck function (green dashed) in units of brightness temperature specified along the top (we note that the temperature range in (d) is larger because Si iv is sensitive to much higher temperatures). Multiplication of the first three panels produces the contribution function in the bottom right panel. This panel also contains the intensity profile (pink dashed) in units of brightness temperature. Gamma correction is added to the \(C_{I_{\nu}}\) term to amplify the weaker values.
profile has two peaks as a result of the peak in source function. There is an increase in the source function just after it decouples from the Planck function at approximately 0.7 Mm that gives rise to a small intensity increase in the red peak. The velocity gradient also makes a small contribution to the emission of the red peak, which has a slightly higher intensity than the blue peak. The rest of the line profile forms similarly to Ca ii 854.2 nm, where the intensity maps the source function. The red and blue peaks of the Mg ii k profile are caused by the velocity gradient and temperature increase around 1.35 Mm, while the absorption feature is caused by the decline in the source function at the maximum height of the optical depth unity curve. The source function decouples from the Planck function at a higher height (around 1.25 Mm) compared to Ca ii H, giving a larger rise in source function that leads to more pronounced peaks.
Figure 10 (d) shows the intensity formation of Si iv 140.3 nm. The contribution function is dominated by the \(\chi_{\nu}/\tau_{\nu}\) term, which is caused by the weak downflowing velocity around \(z=1.35\) Mm. The velocity causes the single peaked profile to shift to the red. We know from Fig. 6 that the lines forming at the L3 location are subject to shock waves passing through the atmosphere, where the oscillations in
Figure 10: Intensity formation of the Ca ii 854.2 nm (a), Ca ii H (b), Mg ii k (c), and Si iv 140.3 nm (d) spectral lines at the L3 location at \(t^{\prime}=28\) s. See the caption of Fig. 9 for more details.
temperature contribute to increases and decreases in intensity. The \(2\times 2\) diagrams only show a single instance in time, and at \(t^{\prime}=28\) s the maximum formation height of the Si iv line is just below the height of the temperature increase. Hence we can assume that the line is formed between the shock waves.
Figure 11 shows the formation of the different spectral profiles at the L4 location. The panels in (a) represent the intensity formation of Ca ii 854.2 nm. The line forms below the height of both the velocity gradient at 1.1 Mm and the temperature increase at 1 Mm. There is a decrease in the source function after it decouples from the Planck function. When the temperature starts to increase around 0.7 Mm, the source function increases too. This, together with the increase in \(\chi_{\nu}/\tau_{\nu}\) at the maximum \(\tau=1\) height, causes an emission feature in the line core.
Panels (b) and (c) in Fig. 11 show the intensity formation of Ca ii H and Mg ii k, respectively. Both lines are double peaked, with a blue peak that has higher intensity than the red peak. The two peaks in the Ca ii H profile are caused by the increase in source function around 0.95 Mm. The blue peak is more intense because the \(\chi_{\nu}/\tau_{\nu}\) term gives a stronger contribution on the blue side. The decline in source function at the highest height of the \(\tau=1\) curve causes the absorption feature at approximately
Figure 11: Intensity formation of the Ca ii 854.2 nm (a), Ca ii H (b), Mg ii k (c), and Si iv 140.3 nm (d) spectral lines at the L4 location at \(t^{\prime}=28\) s. We note that the maximum height in (d) is larger than the other panels. See the caption of Fig. 9 for more details.
\(+2\) km s\({}^{-1}\). We note that there is a strong velocity gradient around 1.1 Mm that does not contribute to the formation of the Ca ii H line intensity. However, the Mg ii k line forms at this height, and the strong downward velocity gradient causes the line core to shift to the red. The blue peak is more intense than the red peak because it gets a larger contribution from the \(\tau_{\nu}\mathrm{e}^{-\tau_{\nu}}\) term, but both peaks get a significant contribution from the increase in temperature and source function at 1.05 Mm. The absorption feature is caused by the decline in source function at 1.1 Mm. The increased emission of the red wing (around \(+25\) km s\({}^{-1}\)) and blue wing (around \(-10\) km s\({}^{-1}\)) is most likely caused by the bright columns seen in the \(\tau_{\nu}\mathrm{e}^{-\tau_{\nu}}\) panel, even though the contribution is too small to be visible in the \(C_{I_{\nu}}\) panel.
Figure 11 (d) shows the formation of the Si iv 140.3 nm line. The complex structure of the atmosphere at L4 (see Fig. 3 (d)) results in the line features forming at very different heights (we note that the height in Fig. 11 (d) ranges from 0 to 4 Mm, whereas the height in the other panels ranges from 0 to 2 Mm). We can distinguish both a blue and a red component in the line profile. These are formed at \(z=3\) and \(z=3.2\) Mm, where the \(\tau=1\) heights have peaks at \(-10\) and \(+25\) km s\({}^{-1}\). These velocity components contribute to the broadening of the spectral profile. The line core forms around 1.05 Mm, where there is a downward velocity component that causes it to shift to the red. The emission of the line core is caused by the sudden increase in temperature and source function.
## 4 Discussion
The analysis presented in this work shows that the chromospheric and TR spectra are highly affected by strong velocity gradients and sudden variations in temperature. It is difficult to determine if these variations are due to the non-thermal electrons depositing their energy along the magnetic field, especially since the simulation is multi-dimensional and potential effects that occur are not aligned with the particular vertical columns used for calculating the emergent spectra. Even though we cannot make a firm conclusion about the effect the non-thermal electrons have on the synthetic spectra, our results contribute to the continued pursuit of understanding small-scale reconnection events and their impact on the solar atmosphere.
To determine signatures in the synthetic spectra that may arise from the non-thermal electrons, we studied the evolution of the atmosphere and the response to the accelerated electrons. Frogner et al. (2020) have shown that the energy transport by accelerated electrons and thermal conduction differs greatly with depth in the lower atmosphere. Heating by thermal conduction dominates at TR heights, but decreases towards the chromosphere due to the temperature drop. The non-thermal electrons are not directly affected by the sudden decrease in temperature at TR heights, and beam heating generally exceeds conductive heating in the chromosphere. This means that synthetic spectra forming at TR heights, such as the Si iv resonance lines, are likely to be affected by both electron beam heating and thermal conduction, while the synthetic chromospheric spectra should be mostly affected by the non-thermal electrons. However, Fig. 3 shows that there is no clear indication that the electron beams are affecting the evolution of the atmosphere. This is most likely due to the low value of \(E_{C}\) for the non-thermal electrons in the corona, but also because the energy transport is very low. Small values of \(E_{C}\) (around 1-2 keV) imply that the effect of non-thermal electrons on the TR and chromosphere is similar to that of thermal conduction (Testa et al., 2014; Polito et al., 2018; Testa et al., 2020). Additionally, signatures from non-thermal electrons in chromospheric spectra greatly diminish when the electrons deposit their energy in the corona. In an attempt to add maximum power to the electron beams, we performed an experiment where all the energy from the reconnection events was transferred to the electrons (\(p=1\)). In this experiment, the atmospheric structure was almost identical to the original simulation where \(p=0.2\), and the impact on the spectral diagnostics was insignificant. The only notable difference was in the beam heating, which was increased by a factor of 5. This tells us that the beam heating events in this simulation are too weak to significantly affect the low atmosphere, even when the electron beams carry the maximum amount of energy that is possible in this simulation.
The low level of change in our simulation might be due to the relatively short time that the electrons are present. Robinson et al. (2022) have demonstrated that it takes approximately 800 s (from the time the magnetic field is ordered) for the field in a Bifrost simulation of the quiet Sun to generate enough magnetic energy to produce heating events of typical nanoflare energies (\(10^{24}\) erg). Guerreiro et al. (2017) have shown that most reconnection events in a Bifrost simulation similar to ours have lifetimes of roughly 40 s, with a weighted average of around 50-60 s. During that lifetime, the energy released by the small-scale events typically ranges from \(10^{20}\)-\(10^{24}\) erg, which is the same as what Frogner et al. (2020) have predicted for longer lasting electron beam heating events. Our 36 s of simulation time is of the order of the shortest events presented in similar simulations, as the high computational cost has so far limited the running time. A longer simulation, including more heating events evolving over their full time-scales, would most likely produce more locations in the chromosphere where the effect of the electron beams would leave their imprint. At this point there is no plan to shoulder the computational cost required without also changing the solar environment to a more active region. It can therefore not be guaranteed that a strong signal would show up in this simulation if it were run for a longer time.
The travel distance from the site of reconnection to the site of the deposited electron energy is affected by the power-law index \(\delta\). A low power-law index allows the electrons to penetrate deeper into the atmosphere, while larger values lead to energy being deposited higher in the atmosphere. This is because the rate of deposited energy increases more rapidly for larger values of \(\delta\), meaning that the amount of energy deposited in the lower atmosphere is less than for smaller values of \(\delta\). Consequently, the spectra forming higher in the atmosphere, such as the Si iv resonance lines, are more likely to be affected by the non-thermal electrons compared to spectra forming at lower heights. Generally, we expect to see a large difference in the intensity and shape of the spectral lines when the time offset between the non-thermal electrons and the thermal conduction front is the greatest. In reality, this happens if a reconnection event occurs high in the atmosphere, meaning that a relatively large amount of energy is transferred to the electrons and the electrons travel a great distance. This comes from the fact that travel distance increases linearly with height, while the available energy decreases exponentially with height. In this analysis, we have chosen columns that are situated at the magnetic field footpoints of the simulation. Even though L2, L3, and L4 are connected to field lines showing large changes in average electron beam heating power, we do not know if energetic events in the corona have an effect on the lower atmosphere. The most significant beam heating is seen at L2 (Fig. 3 (n)), where energy from the non-thermal electrons is deposited at TR and chromospheric heights. What is unique about L2 is that the electrons responsible for the peak in beam heating around 1 Mm are not accelerated from local
connection in the lower atmosphere, as we do not see negative values of \(Q_{\rm b}\) of the same magnitude at approximately the same height. At L3 and L4, the electrons deposit their energy almost immediately after they are accelerated. At L2, there are two acceleration sites (at \(z=2\) and 9 Mm) where energy is transferred to the electrons. However, since the angle between the magnetic field and the vertical direction at these heights differs from the angle at 1 Mm, these events are not related. It is therefore possible that the electrons depositing their energy at this height might be accelerated from reconnection events in the corona.
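The qualitative dependence on \(\delta\) described above follows from the standard thick-target scalings. As a minimal sketch, assuming the usual power-law beam description from flare physics (the notation is generic and not specific to this simulation):
\[
\mathcal{F}(E)\propto E^{-\delta}\quad(E\geq E_{C}),\qquad N_{\mathrm{stop}}(E)\propto E^{2},
\]
where \(\mathcal{F}(E)\) is the electron flux spectrum and \(N_{\mathrm{stop}}\) is the column density an electron of energy \(E\) traverses before thermalising through Coulomb collisions. A larger \(\delta\) concentrates the beam energy near \(E_{C}\), so most of it is deposited at small column depths (greater heights), while a smaller \(\delta\) leaves relatively more energy in high-energy electrons that reach the deeper, denser layers.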
L2 is the most promising location in terms of signatures from non-thermal electrons. The electron energy deposited at 1 Mm is consistent with the upflows of hot plasma into the TR and corona, and moreover occurs around the formation height of the Ca ii and Mg ii lines. However, it is difficult to know if the strong velocity gradient is caused by the electrons depositing their energy at this height, especially since we see velocity gradients that are consistent with the sudden temperature increase from the chromosphere to the TR at all four locations. Additionally, the velocity gradients at L3 and L4 do not seem to be directly affected by the deposited electron energy, which gives reason to believe that this is not the case at L2 either. This is further supported by comparing the spectral lines, where the shape and features of the Mg ii k line at L2 and L4 are both caused by steep velocity gradients and sudden increases in temperature. If the electrons had a significant impact on the temperature and velocity at L2, we would expect to see a larger difference between the spectral profiles at the two locations. However, we cannot be certain that the electrons do not affect the atmosphere, and hence also the spectra, even though there are no significant effects from the energy deposited directly in the TR and chromosphere. We know that the electrons are continuously accelerated throughout the simulation, and they might be affecting the result more passively compared to larger energy releases. The Ca ii H, Mg ii k, and Si iv 104.3 nm lines show similarities to those produced by some of the RADYN models in Polito et al. (2018), Testa et al. (2020), and Bakke et al. (2022), in particular the low-temperature (1 MK) loop models. The similarities include increased emission of the Ca ii H and Mg ii k blue peaks, slight redshift of the Mg ii k line core, emission of the blue wing of Mg ii k, and single-peaked Si iv 104.3 nm profiles that are strongly redshifted. The fact that we see spectral features that are similar to the signatures caused by non-thermal electrons in RADYN models suggests that the accelerated electrons in the Bifrost simulation have an impact on the atmosphere. However, even though we have an idea of the mechanisms behind small-scale heating events and the transport of non-thermal electrons, it is difficult to draw firm conclusions from our simulation without observational proof.
The results of the spectral line analysis can give an indication of what to look for in observations. The non-thermal electrons present in the Bifrost simulation might have an impact on the atmosphere, even though spectral line features that arise as a consequence of the beam heating have proven difficult to identify. We find that the changes to the synthetic spectra over time are relatively small. If the features of the spectral profiles are caused by the non-thermal electrons, and these features are more or less sustained over the simulation duration, small-scale events should be detectable by instruments with slower cadence than the 1 s time step in this simulation, as the signal remains relatively unchanged. The spectral line diagnostics in this work include the Ca ii lines, which opens the potential for observing small-scale events with ground-based telescopes, such as the Swedish 1-m Solar Telescope (SST; Scharmer et al., 2003), the Daniel K. Inouye Solar Telescope (DKIST; Rimmele et al., 2020), and the planned European Solar Telescope (EST; Quintero Noda et al., 2022). It is beneficial to include lines in the visible, as ground-based telescopes allow for higher spatial resolution compared to millimetre observations and extreme-UV diagnostics observed from space. Coordinated observations with, for instance, SST and IRIS would be advantageous to provide more constraints on small-scale heating events, even below the nanoflare limit.
In this paper, we have investigated the effect of non-thermal electrons in a 3D Bifrost simulation by performing a detailed analysis of synthetic chromospheric and TR spectral lines. We have demonstrated that there is a clear difference between the spectra forming in regions subject to electron beams and those forming in regions that are not. We show that the spectral lines are highly affected by variations in vertical velocity and temperature, but the complexity of the atmospheric response in the Bifrost simulation makes it challenging to determine specific signatures that arise uniquely from the non-thermal electrons. Based on the simulations presented here, we cannot conclude that a clear and consistent signature will arise when higher beam energies are included. Additionally, the time span of the simulation is shorter than the typical lifetimes of small-scale heating events. A simulation with a longer time span and with higher-energy beam heating events would be interesting to investigate when available. Still, the spectral line analysis performed in this work can contribute to the understanding of small-scale heating events in the solar atmosphere.
###### Acknowledgements.
We thank Paola Testa for the useful comments that helped improve the paper. This research was supported by the Research Council of Norway, project numbers 250810 and 325491, and through its Centres of Excellence scheme, project number 262622. Computational resources have been provided by Sigma2 - the National Infrastructure for High-Performance Computing and Data Storage in Norway.
|
2304.14749 | Understanding accountability in algorithmic supply chains | Academic and policy proposals on algorithmic accountability often seek to
understand algorithmic systems in their socio-technical context, recognising
that they are produced by 'many hands'. Increasingly, however, algorithmic
systems are also produced, deployed, and used within a supply chain comprising
multiple actors tied together by flows of data between them. In such cases, it
is the working together of an algorithmic supply chain of different actors who
contribute to the production, deployment, use, and functionality that drives
systems and produces particular outcomes. We argue that algorithmic
accountability discussions must consider supply chains and the difficult
implications they raise for the governance and accountability of algorithmic
systems. In doing so, we explore algorithmic supply chains, locating them in
their broader technical and political economic context and identifying some key
features that should be understood in future work on algorithmic governance and
accountability (particularly regarding general purpose AI services). To
highlight ways forward and areas warranting attention, we further discuss some
implications raised by supply chains: challenges for allocating accountability
stemming from distributed responsibility for systems between actors, limited
visibility due to the accountability horizon, service models of use and
liability, and cross-border supply chains and regulatory arbitrage | Jennifer Cobbe, Michael Veale, Jatinder Singh | 2023-04-28T10:43:39Z | http://arxiv.org/abs/2304.14749v2 | # Understanding accountability in algorithmic supply chains
###### Abstract.
Academic and policy proposals on algorithmic accountability often seek to understand algorithmic systems in their socio-technical context, recognising that they are produced by 'many hands'. Increasingly, however, algorithmic systems are also produced, deployed, and used within a supply chain comprising multiple actors tied together by flows of data between them. In such cases, it is the working together of an _algorithmic supply chain_ of different actors who contribute to the production, deployment, use, and functionality that drives systems and produces particular outcomes. We argue that algorithmic accountability discussions must consider supply chains and the difficult implications they raise for the governance and accountability of algorithmic systems. In doing so, we explore algorithmic supply chains, locating them in their broader technical and political economic context and identifying some key features that should be understood in future work on algorithmic governance and accountability (particularly regarding general purpose AI services). To highlight ways forward and areas warranting attention, we further discuss some implications raised by supply chains: challenges for allocating accountability stemming from _distributed responsibility_ for systems between actors, limited visibility due to the _accountability horizon_, service models of use and liability, and cross-border supply chains and regulatory arbitrage.
Algorithmic accountability, supply chains, AI as a Service, general purpose AI, political economy, accountability horizon
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-42/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-42/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-42/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-42/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-42/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-42/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-42/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-42/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-4/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-4/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-4/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-4/306.
+
Footnote †: 2023 Copyright held by the owner/undert(s). ACM ISBN 979-8-4-007-1912-4/306.
## 1. Introduction
The 'many hands' problem holds that accountability is difficult where many people have together contributed to an activity or outcome, as it may be impossible to allocate responsibility to any one of them (Han et al., 2007; Han et al., 2008; Han et al., 2009). Writing in 1996, Nissenbaum argued that computer systems raise a particular form of this problem - they are usually produced by groups or organisations rather than individuals, and may include components developed by others (Han et al., 2008). Addressing the 'many hands' problem is increasingly a concern in the algorithmic accountability literature (implicitly or explicitly), with proposals in recent years for accountability for algorithmic systems to operate at an organisational level (Han et al., 2008; Han et al., 2008; Han et al., 2008; Han et al., 2008; Han et al., 2008; Han et al., 2009; Han et al., 2009; Han et al., 2009).
Yet today's computer systems--including AI technologies--are increasingly modular, dependent on cloud-based technologies, and interconnected. The 'agile turn' of recent decades transformed software development, distribution, and infrastructure, directly influencing how businesses are organised and computing resources are distributed (Han et al., 2008). _Agile development_ means software (including 'AI' models) is now produced in short development cycles with continuous testing and iterative revision after deployment (Han et al., 2008). Computing resources are now generally modularised and distributed _as a service_, with a client-server model in which the server performs the computation (Han et al., 2008; Han et al., 2008). The challenges of scaling services and increasingly portable client devices drove advances in data centres with flexible resources, and software became increasingly _cloud-based_ (Han et al., 2008; Han et al., 2008). Consequently, software development now often involves, to various degrees, integrating pre-built modular components provided _as services_ and controlled by others into a complete product: not simply a system, but a _system-of-systems_ (Han et al., 2008).
As a result, digital technologies across society and the economy are increasingly organised around _data-driven supply chains_ involving several interconnected actors and their systems. In these supply chains, data flows between actors, linking systems designed, developed, owned, and controlled by different people and organisations (Han et al., 2008): a sensor system (controlled by one actor) might connect to an analytics system (controlled by another) which itself outputs into a decision-making system (controlled by a third). This is often so even for seemingly simple applications; for example, a home thermostat can be driven by data from a national weather service, which is itself fed data from thermometers owned and operated by different actors. In such supply chains, the _working together_ of services and systems controlled by different actors produces particular outcomes--hardware capabilities, software functionalities, the workings of commercial and industrial processes, 'AI' decisions and outputs, and more. Supply chains are _data-driven_ in that the flow of data between actors links them together, allowing a system controlled by one actor to interact with those controlled by others and together produce some functionality (Han et al., 2008; Han et al., 2008). In the context of AI and algorithmic systems, **algorithmic supply chains** are those where several actors contribute towards the production, deployment, use, and functionality of AI technologies. In these supply chains, AI 'as a service' providers often play key roles (Han et al., 2008).
By reconfiguring software production and distribution, the agile turn also had significant political economic and other ramifications (Han et al., 2017; Krawczyk et al., 2018). In bringing services together to produce functionality through supply chains, developers now delegate control over much of the underlying technologies to others, complicating the governance of those technologies and the products they are part of. It is no longer necessarily true that computer systems are produced by a group of developers or an organisation, or by a vendor simply integrating various standalone components into one product (itself raising the 'many hands' problem (Krawczyk et al., 2018)). Instead, they often now involve a group of organisations arranged together in a data-driven supply chain, _each retaining control over component systems they provide as services to others_. Moreover, certain key actors in supply chains--in particular, major cloud providers who often control underlying technologies--provide many services to millions of customers, holding important positions across supply chains in many sectors (Han et al., 2017; Krawczyk et al., 2018). The agile turn has thus reorganised many areas of social and economic life - now reconstituted around data-driven supply chains with a few systemically important actors providing the core infrastructure that underpins contemporary society.
Algorithmic supply chains bring significant implications for governance and accountability frameworks and mechanisms relevant to algorithmic systems. Allocating accountability across supply chain actors for producing, deploying, and using algorithmic systems is relevant to general academic, policy, and regulatory discussions around algorithmic accountability, and to more specific legislative efforts around regulation of AI. Here we argue that _governance of and accountability for algorithmic systems_ as deployed and used in the real world must _operate across the supply chains_ which will increasingly underpin, drive, and produce their outputs and effects.
Much of the policy and academic literature, however, is grounded in an organisation-focused understanding of digital technologies. Even recent work which seeks to address the 'many hands' problem through a relatively broad view of accounting for algorithmic systems is typically focused on making specific stages of the system lifecycle more transparent (Krawczyk et al., 2018; Krawczyk et al., 2018; Krawczyk et al., 2018; Krawczyk et al., 2018) or framed around the perspective of a single organisation (Han et al., 2017; Krawczyk et al., 2018; Krawczyk et al., 2018). This attention now paid to organisations' accountability for their algorithmic systems was long overdue, but the focus on organisational accountability has largely obscured the dynamics of algorithmic supply chains. We therefore still lack ways to conceive of these chains, to bring them within legal, regulatory, and governance mechanisms, and to appropriately distribute responsibility and accountability.
This paper contributes to understanding these challenges. First (SS2), we discuss recent trends in algorithmic accountability and identify limitations regarding supply chains. Next (SS3), we describe and contextualise AI services and algorithmic supply chains and identify key features of how they are structured and operate. Then (SS4) we discuss important implications of these features for accountability: the distributed nature of responsibility in supply chains (SS4.1); the limited understanding individual actors may have of the broader chain due to the 'accountability horizon' (SS4.2); and providers' efforts to structure supply chains to maximise control and commercial advantage while minimising legal risk and accountability (SS4.3).
In all, we argue, algorithmic accountability work must urgently address the technological, legal, and political economic dynamics of algorithmic supply chains. We do not offer concrete technical or organisational proposals to improve accountability in these supply chains, but instead hope to produce a shift in focus for algorithmic accountability as a field and indicate new research directions.
## 2. Accountability in Algorithmic Systems
Significant academic and policy work has sought various forms of accountability for algorithmic systems (Krawczyk et al., 2018). Accountability is often seen either as a _mechanism_ (particularly in Europe and non-US Anglophone countries) or a _virtue_ (particularly in the US) (Krawczyk et al., 2018). As a mechanism, it is an institutional arrangement whereby an _actor_ provides accounts to a _forum_, who deliberates on those accounts and may impose consequences (Han et al., 2017). A developer might provide information to a regulator about their system, for example, with the regulator then issuing a penalty or requiring design changes. Some algorithmic accountability literature explicitly views accountability as a mechanism for holding actors to account for their systems (Krawczyk et al., 2018; Krawczyk et al., 2018; Krawczyk et al., 2018; Krawczyk et al., 2018). By contrast, accountability as a virtue is a normative concept, a set of standards for evaluating behaviour--often tied to being transparent, responsible, and responsive--with 'being accountable' seen as a positive quality of particular actors (Krawczyk et al., 2018). Some (predominantly technical) work has thus sought to improve the accountability of certain technologies by imbuing them with such positive qualities. Yet applying accountability to algorithmic systems in this way--rather than to the organisations responsible for them--often equates accountability with technical functionality (for example, building 'Accountable AI') rather than with human virtues which are not reducible to technically tractable concepts (Krawczyk et al., 2018).
We treat accountability as a mechanism, whereby actors are held accountable for technologies they are responsible for. However, accountability for digital technologies is often challenging. The 'many hands' problem--that often no one person is responsible for outcomes which multiple people helped produce--has long been recognised: computer systems are rarely produced by an individual who can be held accountable, but by teams and organisations with many people contributing (Krawczyk et al., 2018). Moreover, modular software development--where software developed by one organisation uses a library developed by another, for example--further complicates things (Krawczyk et al., 2018). Much software is too complex, relying on too many components, for any one person to account for all of its workings.
In the context of algorithmic accountability, specifically, a key conceptual shift has been in understanding these systems not as 'algorithms' but as _algorithmic systems_: "intricate, dynamic arrangements of people and code" (Krawczyk et al., 2018). This recognises that 'algorithms' are produced and work within human contexts and in practice cannot be understood separately from them. Simultaneously, explanations of (ML) model workings are increasingly recognised as insufficient to account for algorithmic systems (Han et al., 2017; Krawczyk et al., 2018; Krawczyk et al., 2018). Much research has therefore gradually moved away from seeking transparency or explanations of models (though this remains an important area of work) to understanding algorithmic systems more broadly as socio-technical phenomena. Much of this reflects--implicitly or explicitly--an understanding that algorithmic systems are often the result of 'many hands': produced by and deployed and used within teams and organisations. To account for an algorithmic system,
one needs to account for the collective efforts of the organisational processes involved in producing, deploying, and using it.
The term 'algorithmic system' is now widely used in algorithmic accountability, with academic and policy literature commonly suggesting ways to improve accountability for their organisational aspects. Some proposals seek lower-level mechanisms to document the choices and decisions made by people in developing, deploying, or using a system, such as for datasheets (Krishnan et al., 2017) or data cards (Krishnan et al., 2017) to describe datasets, or model cards (Krishnan et al., 2017; Krishnan et al., 2017) and factsheets (Bartos et al., 2017) to describe model specification and capabilities. Such proposals often recognise accountability as a positive quality (i.e. a virtue) and seek improved transparency of algorithmic production and deployment processes. Higher-level proposals seek to integrate lower-level mechanisms and provide ways of understanding holistically the _process_ of producing, deploying, and using algorithmic systems, such as for auditability (Krishnan et al., 2017), reviewability (Krishnan et al., 2017), contestability (Krishnan et al., 2017), traceability (Krishnan et al., 2017), and others (Krishnan et al., 2017). These have mainly reflected accountability as a mechanism, and sought ways to support institutional mechanisms and accountability relationships between actors and forums. Though coming from different perspectives, these various lower- and higher-level mechanisms all essentially recognise that algorithmic accountability--either as a virtue or a mechanism--must reflect the 'many hands' nature of AI technologies.
More recently, a 'second wave' of algorithmic accountability research has sought to address more structural concerns around the development, deployment, and effects of algorithmic systems (Krishnan et al., 2017). This work moves from creating better methods to scrutinise systems _in situ_ to considering _whether_ such systems should be built at all, how, why, and who gets to govern them. This echoes longer-standing critical work in fields such as surveillance studies, which has considered the structural impacts of technologies of sorting and profiling on societies, and in which arguments exist against using these technologies altogether (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). While literature in these fields considers issues such as the cumulative effects of systems on individuals and communities (Krishnan et al., 2017), it typically considers systems themselves through an organisation-centric lens - equating particular functionality (credit scoring, criminal profiling, airport screening, targeted advertising) with either the actor authorising the action (bank, police department, interior ministry, online platform), or a particular technology provider or contract.
Yet following the agile turn, an organisation-centric view is a less meaningful frame for analysis. Consequential algorithmic systems are commonly produced, deployed, used, and have effects through and within supply chains. It is therefore no longer the case that software is generally developed by particular teams or organisations (who may have integrated components developed by others into their finished product). Instead, as we argue, functionality results from the _working together of multiple actors_ across various stages of production, deployment, and use of AI technologies (connected by data flows across organisational, legal, technical, and visibility boundaries). This does not mean that a particular single organisation will never be appropriate to hold to account, but that identifying the actors and processes that led to the functionality of any particular algorithmic system becomes significantly less straightforward.
We next (SS3) explore key features of algorithmic supply chains, followed by their challenges and implications for the mechanism of algorithmic accountability (SS4).
## 3. AI Services and Supply Chains
Significant barriers to entry limit the number of organisations that can produce bespoke state-of-the-art AI technologies in-house, either for their own use or to bring to market (Krishnan et al., 2017). Developing, maintaining, and renewing advanced AI technologies typically requires large and relevant quantities of data, potentially from multiple sources and labelled or moderated, relating to many use-cases, contexts, and subjects. Cutting-edge model development requires scarce expertise in model training, testing and deployment, all with significant storage, compute, and networking needs.
Companies with these capabilities now offer commercial access to cloud-based AI technologies 'as a service' (AlaaS) (Krishnan et al., 2017; Krishnan et al., 2017). Major companies including Amazon (Amazon, 2010), Microsoft (Krishnan et al., 2017), Google (Alphabet) (Krishnan et al., 2017), and IBM (Krishnan et al., 2017) offer networked access to various state-of-the-art AI capabilities, including both model-building services and pre-built (and 'general purpose') models in areas such as language, speech, vision, and analytics (see (Krishnan et al., 2017)), or generative models for producing text, images, audio, or video. Some companies offer specific services to customers, such as facial recognition (Krishnan et al., 2017), hiring (Krishnan et al., 2017; Krishnan et al., 2017), or medical diagnostics (Krishnan et al., 2017; Krishnan et al., 2017). And some operate as platforms for all the above, looking to connect developers, clients and infrastructure providers, among others, in a multi-sided market - Amazon and Microsoft, for example, offer access to models from other providers alongside their own (Krishnan et al., 2017; Krishnan et al., 2017), whereas other platforms are primarily an intermediary (such as HuggingFace (Krishnan et al., 2017)). AI services can be integrated into apps and Web services, analytics systems, business and industrial processes, workflows, and with IoT devices with real-world physical effects (collectively: '_applications_'). Low marginal cost and effort means this will likely become the primary way that organisations integrate AI capabilities (Krishnan et al., 2017).
AI services take various forms (Krishnan et al., 2017). Here we focus on services offering access to pre-built 'general purpose' models and to customised models tailored using tools offered by providers. In these, providers take major roles in the technology's production and distribution, developing and hosting systems on their (owned or managed) infrastructure. Services are typically accessed through application programming interfaces ('APIs') controlled by providers, which allow the underlying system's capabilities to be integrated into applications by customers (Fig. 1). This client-server model thus allows providers' algorithmic systems to run on their infrastructure, under their control, even while they are deployed by customers in applications across many contexts and use-cases. There are typically few (if any) checks on customers' identities or intentions, services use standard-form contracts (at least for smaller clients), and customers are billed for the API calls made (Krishnan et al., 2017).
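To make this client-server pattern concrete, the following is a minimal illustrative sketch of a typical AIaaS call; the endpoint, key, and response fields are hypothetical placeholders, not those of any particular provider:

```python
import requests  # standard HTTP client library

# Hypothetical endpoint and key -- placeholders, not any real provider's API.
API_URL = "https://api.example-provider.com/v1/vision/labels"
API_KEY = "customer-api-key"  # issued by the provider; also the basis of per-call billing

def label_image(image_bytes: bytes) -> list:
    """Send input data to the provider's API. The model runs on the
    provider's infrastructure; the customer only receives the output."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": ("frame.jpg", image_bytes)},
        timeout=30,
    )
    response.raise_for_status()
    # The customer never sees the model itself -- only this returned result.
    return response.json()["labels"]
```

The provider's system performs the computation server-side and returns only the result; everything upstream of the API boundary remains under the provider's control.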
An application's supply chain may involve several AI and non-AI services, potentially from multiple providers. Indeed, an AI service may be only one part of a broader, more complex chain for a given application, which may integrate multiple AI and other services. Actors in these chains are broadly 'upstream' or 'downstream' from the perspective of others - though this distinction can blur where actors take multiple positions in a chain. AI services themselves have supply chains, such as for dataset production activities like data gathering and labelling (Krishnan et al., 2017) (see SS4.3.2), typically involving data from customers' application deployments (Krishnan et al., 2017) and from providers' own user-facing platforms and services. Customers therefore appear in
supply chains for deployment _and_ production of some AI services, while end-users of customers' applications and providers' services are themselves drawn into production. Firms using AI services may themselves provide services to their own customers (Yam et al., 2017), such as proctoring software sold to universities repackaging Amazon's facial recognition service (Beng et al., 2017), or copywriting software repackaging OpenAI's text generation model (GPT-3 (Kumar et al., 2018)).
We identify several key features of algorithmic supply chains:
* production, deployment, and use are split between several interdependent actors;
* supply chain actors and data flows perpetually change;
* major providers' operations are increasingly integrated across markets and between production and distribution; and
* supply chains are increasingly consolidating around systemically important providers.
We draw out their implications for accountability in SS4.
### Supply chains split production, deployment, and use between interdependent actors
In algorithmic supply chains, different aspects of production, deployment, and use of AI technologies are split between multiple actors tied together by data flows. As a result, the activities of the various actors in supply chains each depend on the actions of others. This may involve various interacting AI and non-AI technologies, such as cloud services, servers, data centres, data sources, and content delivery networks, controlled by different actors. The _working together_ of the various actors who control these technologies--each doing something that enables, supports, or facilitates the actions of others--produces a particular outcome (see Fig. 2). Each actor in a supply chain may not be aware of the others, nor have consciously decided to work together towards that outcome - indeed, they may have limited understanding of actors even one or two steps removed (see SS4.2). However, each depends on something done by others, and their role in a supply chain is contingent on the activities of actors both up- and downstream of them.
Actors in algorithmic supply chains are thus _interdependent_, each doing something to fulfil the needs of others (such as processing a particular data input and returning an output, or providing infrastructure to support application deployment). The interdependence of the various actors responsible for developing, deploying, and operating algorithmic systems in supply chains means they are not individual, independent actors _as such_. Instead, these actors, their relations, and their role in the workings and effects of AI technologies can only be understood _in the context of that supply chain_. Studying an actor and their systems in isolation from supply chain contexts is akin to studying an algorithmic model in isolation from its broader organisational context (the limitations of which are increasingly recognised (see SS2)). The dynamics of interdependence in algorithmic supply chains--how they are structured, the relative importance of actors, and how problems spread--are therefore key considerations for algorithmic accountability, as we now explore.
#### 3.1.1. Supply chain interdependencies are structured by technological, legal, and political economic factors
Interdependence between actors gives algorithmic supply chains their structure and functionality. Certain actors--typically (AI and non-AI) cloud service providers--have leveraged AI technologies they own, production processes they control, and cheaply-accessed networking technologies to pursue particular interdependencies with others and strategically position themselves in markets and supply chains of many kinds. Technologies afford certain capabilities to those who use or control them (Kumar et al., 2018; Yam et al., 2018; Yam et al., 2018). They can therefore also afford the ability to do things that fulfil the needs of others. Because, as we discuss in SS3.1, people doing things for each other produces interdependence between them (Yam et al., 2018), different technologies can afford different kinds of interdependencies. Networking and data processing technologies, for example, allow the stages of production and deployment of AI technologies to be distributed geographically. They can therefore be done by different people, each of whom does something for the others, producing interdependence between them.
However, technologies and their affordances cannot _determine_ interdependencies or the structure of supply chains. Affordances are not objective properties of technologies, but depend on context and perspective (Kumar et al., 2018; Yam et al., 2018; Yam et al., 2018; Yam et al., 2018). How providers can strategically position themselves is thus shaped both by their technologies' affordances _and_ by social, legal, and political economic factors which also influence how actors relate to each other, what they do for each other, and the interdependencies that arise. Accordingly, to position themselves in supply chains and markets, providers have also leveraged political economic factors such as economies of scale and favourable legal frameworks such as intellectual property, intermediary liability, and data protection (Yam et al., 2018; Yam et al., 2018; Yam et al., 2018). Political economic and legal factors--not just technological--are thus important in producing and structuring algorithmic supply chains.
#### 3.1.2. Some actors are core players in supply chains
Supply chain actors are generally not equal in their interdependence with each other, and some may do things that others particularly depend
Figure 1. Sequence diagram of a simplified data-driven supply chain with an AI service. The customer sends input data to the provider’s API, which performs some computation before returning the results to the customer. Some of the broader supply chain is illustrated, where the customer has previously received data from other third parties, and later, sends some data to another.
on. Their services, for example, may be relied upon by multiple others, as is often the provider's aim in offering general purpose services. Providers may depend on each customer only a little, while customers may depend on the provider for business-critical application functionality. Supply chain interdependencies are thus often _asymmetric_, with certain actors--typically including at least those responsible for production of AI technologies--performing core functions for others, while others are more peripheral. Various contextual signs might indicate that an actor is core in a particular chain. For example, they may be the application developer who calls the supply chain into existence. They may perform some function (such as providing an AI service) which is crucial to application functionality. They may provide a key step between actors (such as offering access to another provider's technology through their API) upon which subsequent steps of the chain depend. Or they may appear at multiple different points in the chain providing cloud-based technical architecture on which functions performed by other actors rely. Some actors may also be more interchangeable and replaceable than others - the barrier to entry for applying a specific off-the-shelf API is typically substantially lower than that for generating the underlying technology to begin with.
These asymmetries of interdependence produce asymmetries in power (Srivastava et al., 2017): where one actor depends more on things done by another than that other depends on them, the balance of power between them favours the second actor (Srivastava et al., 2017). An application developer, for example, who uses a major provider's AI service, will depend more on that provider than the provider--who has many customers--will depend on that one developer. Power balances in algorithmic supply chains thus arise relationally yet asymmetrically and change over time as the relations and interdependencies between actors evolve (Srivastava et al., 2017). These power balances are not determined by actors' relations to the _technologies_ involved, but through their relation to and interdependence with _each other_. As we note (SS3.1.1), this is subject to many potential influences, of which factors like control of production processes and APIs are just some. But, by leveraging their technologies alongside legal, social, and political economic factors, providers can hold significantly asymmetrical positions as core actors in many supply chains, with power balances between them and others heavily in their favour.
#### 3.1.3. Interdependence helps problems propagate
Supply chain interdependencies mean problems with one actor's technologies can propagate through other actors' systems. Where an AI service is biased in some way (such as facial recognition performing poorly on particular demographics (Kang et al., 2016)), that bias will be inherited by applications relying on that service (Srivastava et al., 2017). Such a cascade's effects may be complex and unpredictable given the dynamic and largely undocumented set of actors and interdependencies found in many chains. Statistical guarantees may not hold when systems are composed together, and it is not straightforward to evaluate a whole system when each individual component may have been evaluated under different threat models (or other criteria) (Srivastava et al., 2017; Srivastava et al., 2017). Unless identified by the provider, actors 'downstream' from them may be unaware of a problem until they notice some unexpected behaviour. Even then, because they have delegated key aspects of production (and possibly deployment) of the technologies their application relies on to other actors (such as AI service providers), customers may struggle to understand where in a supply chain the problem has arisen, why, and what they can do to mitigate it.
### Supply chains are transient and dynamic with unstable interdependencies
Agile development combined with services-based distribution models has produced algorithmic supply chains which operate as dynamic _processes_ of data flow between a changing number and arrangement of actors. Just as critical engagement with algorithmic systems must recognise that they change over time (Srivastava et al., 2017), so do algorithmic supply chains. Indeed, because data flow ties actors together, a chain may differ each time it is instantiated. At various points there may be multiple possible directions for data to flow between actors depending on the outcomes of analyses performed on it. A face detected in a video stream using one service, for example, might trigger a flow to a separate facial recognition service to identify the person (with its own supply chain and associated data flows) and back again. This might trigger a flow to a third system to record and alert of the presence of a particular individual. Supply chains can thus be dynamically instantiated, and their structure may vary depending on the input data and the outputs of component systems. A supply chain's structure--the actors involved, what they do for each other, the interdependencies and power balances between them--may therefore only be fully apparent once the functionality or outcome has been produced.
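To illustrate how a chain's structure can depend on intermediate outputs, the following is a minimal runnable sketch of the video example above; each function is a hypothetical stand-in for a network call to a different actor's service, and none corresponds to a real API:

```python
# Each function stands in for a call to a service controlled by a different
# actor; the stub bodies simply let the sketch run end-to-end.
def detect_faces(frame: bytes) -> list:          # provider A: face detection
    return [frame[:16]] if frame else []

def identify_face(face: bytes) -> str:           # provider B: recognition (own supply chain)
    return "person-42"

def raise_alert(identity: str) -> None:          # provider C: recording/alerting
    print(f"alert: {identity} detected")

def process_frame(frame: bytes, watchlist: set) -> None:
    """The instantiated chain depends on the data: flows to providers B and C
    occur only when upstream outputs take particular values."""
    for face in detect_faces(frame):             # no faces: the instance ends here
        identity = identify_face(face)           # triggers B's own data flows
        if identity in watchlist:
            raise_alert(identity)                # a third actor is drawn in

process_frame(b"\x00" * 64, watchlist={"person-42"})
```

Which actors participate, and in what order, is thus only fully determined at run time, once the data has flowed.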
However, technical, legal, or economic relations between actors do often persist across multiple instances of a particular supply chain. An application developed to use a particular provider's service will typically use that service repeatedly, even if the path of data flow between actors differs between instances. As such, while
Figure 2. A representative AI supply chain. The application developer (blue) initiates a series of data flows by sending input data to an AI service provider (grey). One AI service provider (red) appears at multiple key points in the supply chain – providing infrastructure (A) for the AI service offered by the first provider (grey); providing an AI service (B) to another cloud service provider (orange); and providing technical infrastructure (C) for application deployment.
supply chains may change overall, bilateral relations between particular actors may remain relatively consistent. However, the nature of their relationship--the services provided and used, for example--may still change over time. An application developer may introduce new features, for instance, which use additional services offered by the particular provider. They may deprecate features such that particular services are no longer needed. They might employ additional support services for rapid growth in application resource requirements (for example, where an application 'goes viral'). The provider may change their terms of service (altering the legal relationship between them) or withdraw particular services (resulting in changes in the developer's application). These are just some of the ways that relationships between actors may change.
### Some actors are integrated across markets and between production and distribution
Some providers of AI and other cloud services commonly found in algorithmic supply chains have reached high levels of integration; both horizontally (across markets and sectors), and vertically (across production and distribution processes). This has implications for their positioning and role in algorithmic supply chains.
#### 3.3.1. Horizontal integration
Horizontally integrated companies operate across markets and sectors. The most prominent cloud providers (Amazon, Microsoft, Google, Alibaba, IBM) offer various services across many related and adjacent markets and may appear repeatedly in a supply chain. Some such services are AI-related; others are infrastructure for applications (storage, database, content delivery, credential management, and so on); still others are user-facing, from business and consumer web-based services (such as maps or photo backup) to software packages for customers and their users (such as Microsoft 365). This allows a single provider to re-purpose their AI and other technologies across a range of services, both infrastructural and user-facing. It is common for providers to purchase potential competitors and new market entrants, either to obtain intellectual property, to expand their services across markets, or to stifle emerging competition in existing markets. Providers can also simplify how existing customers bring AI services within their applications by providing tools to facilitate integrating them with their other services. Providers may financially incentivise customers to use several of their services instead of those of competitors.
#### 3.3.2. Vertical integration
Vertically integrated companies control multiple stages of production and distribution. Several major AI providers--primarily Amazon, Microsoft, and Google--own key infrastructure for producing their systems and distributing them as services across markets: data centres and servers; content delivery networks; APIs and customer-facing interfaces; and network infrastructure. Vertical integration offers bespoke technical infrastructure specific to these providers' needs which they can use for many services across markets to exploit economies of scale. High resolution media (requiring significant resources), for example, thus encourages vertical integration, as does state-of-the-art AI production (requiring more data, bigger and more complex models, intensive compute, and sophisticated training and testing processes). Vertically integrated providers can link deployment of systems by customers to their production processes, testing and further refining those AI technologies using customers' input data, applications, and real-world use cases (Han et al., 2017). This allows providers to reduce the resources needed to improve models, while offsetting some research and development costs by bringing it into a process paid for by customers (Han et al., 2017). They can thus lower the net cost of developing more accurate and more generalisable systems (Han et al., 2017).
However, vertical integration has limits. AI providers might not operate in-house data cleaning and labelling processes, for instance (key parts of training, testing, and updating models). The business benefits to providers of bringing these processes 'in house' are potentially outweighed by the commercial advantages of extending supply chains across borders to exploit differences in laws (SS4.3.2). Aspects of AI production are often instead outsourced to low-paid and insecure workers in the Global South (Han et al., 2017; Han et al., 2017) (through data cleaning and labelling services offered by companies like Sama AI (Sama et al., 2018), or through Mechanical Turk (Bradner et al., 2018)). Moreover, some major providers now offer access through their services to generative (foundation) models produced and controlled by others, marking a shift towards _less_ integration in some emerging product sectors.
#### 3.3.3. Providers all the way down
Though some prominent AI providers are both horizontally and vertically integrated, most are not. Instead, they tend to specialise in a few closely-related services, such as algorithmic recruitment, processing legal documents, certain medical processes (Shen et al., 2018), and even 'algorithmic governance' and 'ethical AI' (see (Shen et al., 2018)), without operating across traditional cloud service markets. These specialist providers typically 'rent' technological infrastructure from a larger provider rather than operating their own (OpenAI, for example, exclusively uses Microsoft's Azure cloud services (Same et al., 2018)). This reflects the fact that developing advanced AI technologies and operating them at scale will in many cases require technical resources beyond the means of all but the biggest providers. As a result, whether through their own AI services or through those of others who depend on their cloud infrastructure, major providers like AWS, Microsoft Azure, and Google Cloud will be crucial players in future AI development and distribution.
Some AI-specific providers' services can be accessed only through a larger provider's interface and brought by customers into applications through that specific provider's cloud, rather than through a competitor (OpenAI's commercial services can be accessed _only_ through Azure (Same et al., 2018)). The larger provider's cloud offering thus operates as a platform through which they facilitate and can gatekeep market access to the smaller provider's service. In some cases, one cloud provider's interface may be used to access a specialist AI provider's model (Same et al., 2018), where that specialist provider itself uses a _different_ cloud provider for their supporting infrastructure for development (Bradner et al., 2018). That is to say, several larger cloud providers may be involved at different stages of production and deployment of specialist AI providers' services (and indeed, those of others).
### Supply chains are increasingly consolidating around systemically important providers
The dynamics of interdependence and integration mean that algorithmic supply chains are increasingly consolidating around (primarily) Amazon, Microsoft, and Google (Jiang et al., 2019; Li et al., 2020; Li et al., 2020). Several factors tend towards consolidation, including competitive advantages offered by integration. These companies span markets, offering developers 'all-in-one' packages with easy access to state-of-the-art technologies, which readily scale and enable 'global' reach. In AI production, they leverage bespoke and advanced computing resources and expertise, significant quantities of data representing real-world deployments and use-cases, and economies of scale across AI and non-AI customer bases. They can therefore offer services at lower cost, broader scale, greater technical sophistication, and with potentially easier access than many competitors. Moreover, their substantial financial resources help consolidate their position through purchases of and investments in potential competitors (such as Google's purchase of DeepMind (Li et al., 2020), or Microsoft's investment in OpenAI (Li et al., 2020)).
As a result, major providers are _systemically important_ for the political economy, governance, and accountability of AI. Even where an application does not use a major provider's AI services (using the developer's own AI technology, for example, or obtaining it from a smaller provider), major providers' non-AI services may form significant parts of supply chains for either that application or the AI service it uses (or both). These providers are therefore core actors in many supply chains, strategically positioning themselves across markets in a process of enclosure of AI-technological infrastructure and, by extension, of businesses, institutions, organisations, and sectors relying on supply chains involving their services. They are thus positioned in commercially beneficial interdependencies both with other actors in particular supply chains, but also more broadly - a few dominant providers underpin important social and economic processes while themselves depending to various degrees on many actors in social, legal, technological, and political economic processes which help produce and maintain their position.
## 4. (Implications for) accountability in algorithmic supply chains
Algorithmic supply chains bring difficult implications for governance and accountability. Much algorithmic accountability research reflects an organisation-focused understanding of accountability (SS2). Yet the production, deployment, and use of AI technologies in supply chains are split between multiple actors who together produce their workings and effects and whose part in producing functionality cannot be understood separately from the chain (SS3.1). Organisation-focused framings cannot properly capture this distribution of responsibilities between actors across the stages of the AI lifecycle, which also challenges assignments of accountability in relevant legal frameworks (SS4.1). Moreover, problems with systems can propagate widely downstream through supply chains (SS3.1.3), yet particular actors are often unaware of the broader chain, and the limits of visibility across supply chains make interventions like risk assessments difficult (SS4.2). It is therefore crucial for governance and accountability mechanisms to understand the actors in supply chains, what they do for each other, which of them take core roles, and the interconnections and interdependencies between them. At the same time, however, the dynamic, transient nature of supply chains (SS3.2)--which can potentially be instantiated each time and unfold differently as data is processed--is also challenging.
Moreover, algorithmic supply chains are structured through interactions between technological, legal, social, and political economic factors (SS3.1.1). It is therefore not enough to attend only to ways of making the technology more transparent or understandable (though this can help understand specific points in particular systems' lifecycle). Instead, algorithmic accountability work must consider broader factors: how providers leverage technology and law to structure interdependencies, integrate their operations (SS3.3), consolidate their position (SS3.4), increase their control and power while minimising legal accountability (SS4.3.1), and extend their supply chains across borders to minimise cost and legal risk and maximise commercial benefit (SS4.3.2). The dynamics of supply chains, the legal and political economic factors influencing their structure, and the relations and interdependencies between actors that result are all significant considerations from a view of accountability _as a mechanism_--one, in particular, for investigating, understanding, assessing, challenging, and contesting power. They are also important in considering who should be accountable, to whom, for what, and through which mechanisms and institutional arrangements.
### Responsibility for algorithmic systems is distributed between several actors
Governance and accountability mechanisms around algorithmic systems should address the _distributed responsibility_ in algorithmic supply chains. Different actors control aspects of commissioning, designing, developing, deploying, using, or monitoring a particular AI technology. Responsibility for the workings and outcomes of supply chains is thus distributed among several actors who may be neither straightforward to identify nor consistent across instances. Even when some actors are influential, there is therefore typically no one actor in overall control of a supply chain. Existing accountability literature, however, typically assumes that (while models or input data might change) the actors and components remain relatively stable. Yet directing governance and accountability mechanisms at, or allocating accountability to, the wrong actors in supply chains risks undermining the stated goals of these mechanisms. Accountability involves a relationship where an actor provides accounts of their activities to a forum, who imposes consequences to correct the actor if needed (Li et al., 2020). For accountability mechanisms to succeed, it is therefore crucial that the right actors are assigned to the appropriate relationships. In this context, those who are factually responsible for various aspects of production, distribution, and use of algorithmic systems must be identified correctly so that accountability can be allocated accordingly.
#### 4.1.1. Legal accountability and distributed responsibility
Some jurisdictions have sought to address distributed responsibility in data-driven supply chains more generally. The Court of Justice of the European Union (CJEU) has attempted to contend with this in data protection law, for example. A key question in data protection law is who is a _data controller_ - factually in control of, and therefore primarily responsible in law for, personal data processing (Miller et al., 2019). The
CJEU has repeatedly held that multiple parties can be controllers for some or all aspects of a chain of processing (Kraus et al., 2015; Kraus et al., 2016; Kraus et al., 2017; Kraus et al., 2018; Kraus et al., 2019). Where several actors have common interests in the processing, they may be _joint controllers_ (Kraus et al., 2016); where their interests in the processing diverge, they may be _separate controllers_. In doing so, the Court made several observations: actors can be controllers if they have influence despite not having actual access to the personal data (Kraus et al., 2015; Kraus et al., 2016; Kraus et al., 2018); controllers are not typically responsible for parts of the chain before or after those they actually influence (Kraus et al., 2018); and using another actor's platform does not exempt a controller from their obligations (Kraus et al., 2016).
Recognising the plurality of actors in chains of processing is welcome, but even data protection law's more nuanced assignment of roles and responsibilities may not readily map to algorithmic supply chains (Kraus et al., 2016; Kraus et al., 2018; Kraus et al., 2019). Under current understandings, AI service customers are likely data controllers (the dominant party, ultimately responsible for compliance and accountability), while providers may be _data processors_ (the subordinate party, acting only under the instruction of a controller, with limited obligations) (Kraus et al., 2016; Kraus et al., 2019; Kraus et al., 2019). Yet this assignment of legal roles and responsibilities does not describe the real interdependencies and power relations between AI service providers (who are in control of their technologies, often core actors in supply chains, potentially systemically important more generally, typically presenting customers with 'take-it-or-leave-it' contracts, and to a large extent determining AI-driven functionality in customers' applications through their production processes) and their customers (potentially small companies without AI expertise, typically with no access to the provider's systems, control over them, or knowledge of how they work) (Kraus et al., 2019). Even where providers _are_ likely controllers for aspects of the service--such as where they use customer data for service improvement--they typically attempt to minimise responsibility by claiming in their service agreements to be processors (Kraus et al., 2019). Yet the CJEU has consistently held that the factual situation outweighs contractual or other arrangements, and regulators have contradicted claimed assignments of legal roles in other kinds of data-driven supply chains (Kraus et al., 2019). Challenging providers' claims, however, would involve litigation or regulatory investigation. Moreover, given the need for joint controllers to agree the division of controllers' duties and responsibilities between themselves (Kraus et al., 2016), it is not clear how joint controllership can work where actors don't necessarily know of each other or have any direct relationship.
The EU's proposed AI Act suffers from related tensions. It recognises that the 'user' of an AI system (in this context, generally the customer of an AI service) may differ from its 'provider', and envisages circumstances where a user of an AI service does so for a purpose not intended by the provider, and thus in law becomes responsible for the underlying system (Kraus et al., 2019). However, this does not reflect supply chain interdependencies and dynamics, where production, deployment, and use are distributed between actors. Instead, in this circumstance, the Act would potentially make actors several steps downstream from production responsible for ensuring that the AI technology complies with production-related legal requirements around training and testing, accountability, and risk management. While the user would inevitably be unable to comply (due to the actual distribution of practical responsibilities in algorithmic supply chains), the actor who developed and controls that technology and is thus factually responsible for production would face no obligations. This may incentivise actors who can never provide assurance of compliance to pretend they can - easily done due to the Act's self-regulatory framework and limited planned regulatory capacity (Kraus et al., 2019). Regulatory systems which hold supply chain actors downstream of production to account for design and development may do little to regulate those who are factually responsible for production and who benefit financially from potentially unlawful API queries, effectively shielding them from liability.
#### 4.1.2. Allocating accountability
Governance and accountability mechanisms should therefore be grounded more clearly in and emphasise an understanding of the distribution of responsibility in algorithmic supply chains. Not _every_ actor in a supply chain will be responsible for the outcome of the algorithmic system - some will provide only supporting services which do not meaningfully affect outcomes. Neither will actors who _are_ in some way responsible be _equally_ responsible - some play a bigger role than others in determining outcomes. Nor will they be responsible for the _whole_ supply chain - different actors control different aspects of it. Accountability should thus be allocated to actors across supply chains based on a proper understanding of their technological and political economic dynamics. This requires processes and criteria for identifying the distribution of responsibility across supply chains and allocating accountability to those actors, for which activities, accounting to whom, and with what possible consequences.
It is therefore important to understand the distribution of responsibility in algorithmic supply chains in terms of who is doing what for whom, who is performing what key functions for others, who is core to certain supply chains, and who is systemically important. Particular attention is due to systemically important actors - primarily Amazon, Microsoft, Google, and perhaps a few others. Though technological and political economic dynamics tend towards consolidation around these companies, and though non-AI services often provide supporting infrastructure, it is still important to have ways to determine which aspects of supply chains are key to their outcomes and effects, as opposed to those which could be interchanged without affecting those things. The latter, while potentially significant, are perhaps less of an urgent subject of governance and accountability mechanisms than the former.
### The _accountability horizon_ limits visibility across supply chains
A significant challenge for governance and accountability mechanisms in algorithmic supply chains is the **accountability horizon** - the point beyond which an actor cannot 'see', which depends on the actor and the chain. Supply chain actors will generally be able to know whom they interact with directly (a first 'step' in the chain), and perhaps whom those first-step actors interact with in turn (a second 'step'), but may not be able to know about the data flows and interconnections beyond (Kraus et al., 2016; Kraus et al., 2019). Moreover, distributed responsibility between actors (§4.1) means each has incomplete information even if they _do_ know who is up- and downstream of them. AI service providers, in control of production, may lack knowledge of downstream contexts and use-cases of application deployments (Kraus et al., 2019). Those responsible for deployment and use typically lack access to models and often to information about their
specification, training, testing, validation, and so on (and thus may have limited understanding of their capabilities and limitations).
The accountability horizon is thus a problem for producers of algorithmic systems in the earliest stages of developing their technologies (_problem framing_, §4.2.1) and for legal and other governance frameworks based around _risk management_ (§4.2.2).
#### 4.2.1. The accountability horizon makes problem framing difficult
Many algorithmic issues stem from choices around problem definition and framing that inform system design. Complex concepts may be formalised poorly, tasks may be incompletely captured, and different contexts may be insufficiently considered (Sandel, 2017). The 'many hands' problem makes critical questions of who framed the problem, and when it was framed, difficult to answer (Sandel, 2017). Supply chain dynamics _giving rise to the accountability horizon_ complicate this further. Those responsible for production have limited capacity to understand the contexts of deployment and use by others, while the actors closest to the problem--those deploying or using the system--are generally unable to influence its design. Moreover, due to the split between production and deployment, application developers necessarily engage in their own problem framing - determining whether they need an AI service to address a particular problem and, if so, which is most suitable. Yet they may lack capacity to determine which service (if any) is most appropriate to their needs (particularly if organisations swap organisational and IT know-how for license managers (Bowdhury et al., 2019)). This is further complicated by the fact that not all services are fungible, or adaptable to a range of different framings. Services may only accept certain kinds of input data, produce certain kinds of output data, or be amenable to certain kinds of alteration and customisation. They may be developed with particular underlying assumptions which can (or should) preclude their deployment or use in other contexts (Sandel, 2017). Supply chain integration may further reduce flexibility in problem framing, as technical hurdles limiting interoperability, together with cost implications, make components less readily swappable (particularly where services are strategically bundled by providers).
More cynically, actors may encourage problem framing which increases demand for their own products and services. For example, organisations selling technologies for input data, such as cameras and other environmental sensors, may also sell workplace monitoring tools which take advantage of the data produced by these sensors. Application developers with low problem framing capacity might adopt these tools without properly identifying whether they need workplace monitoring at all. The dependency of application developers on supply chains may therefore risk the autonomy of those organisations (Bowdhury et al., 2019). Indeed, using AI services leaves healthcare, education and other established sectors vulnerable to unbundling and rebundling of their fundamental operations, leaving each stage amenable to value extraction through servitisation (Bowdhury et al., 2019).
#### 4.2.2. The accountability horizon makes risk management difficult
Many academic, policy, and legislative initiatives propose impact assessments, risk assessments, and risk management mechanisms to mitigate harms of AI technologies (for example, (Bowdhury et al., 2019; Sandel, 2017;
what form. They may also not know which actors upstream from them they can obtain accounts from. In general, the difficulties raised by the accountability horizon are not easily overcome.
### Providers structure supply chains to minimise accountability
Supply chain dynamics allow providers to maximise commercial benefit while minimising legal accountability by (_i_) extending control over downstream deployment of AI technologies (§4.3.1) and (_ii_) extending their own supply chains across borders to engage in regulatory arbitrage (§4.3.2). Providers thus use a techno-legal strategy to position themselves advantageously in markets and shape supply chains to maximise revenue and reduce risk (Zhou et al., 2017; Li et al., 2018).
#### 4.3.1. Servitised distribution models give providers control beyond deployment
Nissenbaum observed that, in the mid-1990s, software vendors often demanded property protection for their products while denying, as far as possible, accountability for them (Zhou et al., 2017). Software licensing agreements precluded _ownership_ by users and emphasised the producer's rights, while disclaiming their legal accountability for the software or anything it might do - even where harms resulted directly from defects in it (Zhou et al., 2017). Developers thus attempted to maintain control over their software to the extent possible given the distribution model at the time (typically physical media), while generating artificial scarcity for an information product to maximise revenue with minimal risk and responsibility. This produced, as Nissenbaum puts it, a 'vacuum' of accountability (Zhou et al., 2017).
Software's distribution has moved away from (licensed) physical media to the service-based, API-centric models described (Zhou et al., 2017). Combined with asymmetrical interdependence in algorithmic supply chains (§3.1.2), service models offer providers new ways to extend control past the point of deployment. Because providers depend less on individual customers than customers depend on them, providers can impose contractual service agreements and use APIs as tools to advantageously structure their relations with customers and others. Where vendors once sought expansive intellectual property protections, today providers seek to use service agreements to maximise control over deployment of their technologies by reserving rights to dictate terms of use and change, withdraw, or cancel products and services at will. Providers disclaim legal accountability for things that happen through use of their services (Zhou et al., 2017; Li et al., 2018; Li et al., 2018), and attempt to position themselves as data processors (§4.1) even when they use customer data for their own purposes and are thus likely the controller for that processing (Zhou et al., 2017). Providers can also use APIs as 'projections' (Li et al., 2018) of the asymmetric balances of power with customers, to destabilise attempts to hold providers to account: using APIs as tools to shutter businesses, undermine research, and evade scrutiny (Bianchi et al., 2018), while contractually reserving those rights for themselves and using changes in information policy to retain control (Zhou et al., 2018).
#### 4.3.2. Cross-border supply chains permit regulatory arbitrage
As we describe, data processing and networking technologies afford a geographical distribution of AI production and deployment (§3.1.1). The same technologies allow various production-related activities to themselves be distributed geographically, incentivised by jurisdictional differences in cost and regulation. This allows regulatory arbitrage, where companies in one jurisdiction exploit legal and political economic conditions in other jurisdictions to maximise commercial benefit while minimising legal accountability. This often involves contracting third-parties (such as Sama AI (Sama, 1995) or Supahands (Suzuki et al., 2019)) to undertake some aspects of production. For example, differences in privacy and data protection laws and labour protections can lower the legal risk of dataset production activities like data cleaning and labelling (Sama et al., 1995; Li et al., 2018; Li et al., 2018). Environmental factors like cheap water and energy and lax planning and waste laws can influence the location of compute and storage (Zhou et al., 2017).
While some laws--such as the EU's data protection law (Zhou et al., 2017) and AI Act (Li et al., 2018) and California's Consumer Privacy Act (Li et al., 2018)--have sought extra-territorial effect to address regulatory arbitrage, the cross-border nature of supply chains and difficulties of enforcement remain a significant accountability challenge.
## 5. Conclusions and Further Research
The 'many hands' problem has motivated efforts to provide information about the production, deployment, and use of algorithmic systems by teams and organisations (§2). The emergence of AI 'as a service' (or 'general purpose AI') and developments associated with cloud computing and the services model of software distribution (§3) challenge organisation-focused understandings of algorithmic accountability (§4) in ways that have not been widely addressed.
AI technologies now often involve _algorithmic supply chains_, with their production, deployment, and use split between multiple actors who _together_ produce the technology's outcomes and functionality (§3.1). Major providers--now highly integrated both horizontally and vertically (§3.3)--are systemically important players (§3.1.2), and supply chains are increasingly consolidating around them (§3.4). Issues with particular systems can propagate through supply chains (§3.1.3), while supply chains often change between instances, making it difficult to understand how they operate or who is involved (§3.2). Together, these dynamics of interdependence, perpetual change, integration, and consolidation produce supply chains in which responsibility for algorithmic systems is distributed between interdependent actors (§4.1) and visibility across the actors involved is low (§4.2). This challenges existing legal accountability frameworks while limiting the effectiveness of mechanisms like risk assessments. Moreover, splitting production and deployment makes it difficult to appropriately develop or choose AI services (§4.2.1). At the same time, the services distribution model allows providers to use terms of service and APIs to minimise legal accountability and maximise control over technologies beyond deployment (§4.3.1), while simultaneously extending their own production processes across borders to exploit differences in regulatory regimes (§4.3.2).
In all, the characteristics of algorithmic supply chains we have identified and the implications they raise challenge existing approaches to algorithmic accountability. Future algorithmic accountability research must therefore contend with supply chain dynamics: how they are structured, how they develop over time, how AI's functionality and effects are produced through them, and importantly--how distributed responsibility challenges governance mechanisms and the accountability horizon limits visibility. This requires a broad view of supply chains, seeking to understand who is involved, what they are doing, and how to allocate accountability between them. Importantly, supply chains are structured by legal
and political economic factors, which must be properly understood, as well as technological ones. If governance and accountability mechanisms are to hold those responsible for developing, deploying, and using AI technologies to account for their workings and effects, the dynamics of supply chains must be urgently addressed.
###### Acknowledgements.
JC and JS are members of the Compliant & Accountable Systems Group, which acknowledges the financial support of UK Research & Innovation (EP/P024394/1, EP/R033501/1, ES/T006315/1), The Alan Turing Institute, and Microsoft (via the Microsoft Cloud Computing Research Centre). MV is supported by the Fondation Botnar.
|
2303.11696 | Regular black holes: A short topic review | The essential singularity in Einstein's gravity can be avoidable if the
preconditions of Penrose's theorem can be bypassed, i.e., if the strong energy
condition is broken in the vicinity of a black hole center. The singularity
mentioned here includes two aspects: (i) the divergence of curvature
invariants, and (ii) the incompleteness of geodesics. Both aspects are now
taken into account in order to determine whether a black hole contains
essential singularities. In this sense, black holes without essential
singularities are dubbed regular (non-singular) black holes. The regular black
holes have some intriguing phenomena that are different from those of singular
black holes, and such phenomena have inspired numerous studies. In this review,
we summarize the current topics that are associated with regular black holes. | Chen Lan, Hao Yang, Yang Guo, Yan-Gang Miao | 2023-03-21T09:33:36Z | http://arxiv.org/abs/2303.11696v4 | # Regular black holes: A short topic review
###### Abstract
The essential singularity in Einstein's gravity can be avoided if the preconditions of Penrose's theorem can be bypassed, i.e., if the strong energy condition is broken in the vicinity of a black hole center. The singularity mentioned here includes two aspects: (i) the divergence of curvature invariants, and (ii) the incompleteness of geodesics. Both aspects are now taken into account in order to determine whether a black hole contains essential singularities. In this sense, black holes without essential singularities are dubbed regular (non-singular) black holes. The regular black holes have some intriguing phenomena that are different from those of singular black holes, and such phenomena have inspired numerous studies. In this review, we summarize the current topics that are associated with regular black holes.
###### Contents
* 1 Introduction
* 2 Construction of regular black holes
* 2.1 How to construct non-rotating regular black holes?
* 2.1.1 The case with one shape function
* 2.1.2 The case with two shape functions
* 2.2 How many curvature invariants do we need to define a regular black hole?
* 2.3 How to construct rotating regular black holes?
* 2.3.1 What are the problems faced by the Newman-Janis algorithm?
* 2.3.2 How to modify the Newman-Janis algorithm?
* 2.3.3 What are the regularity conditions of rotating regular black holes?
* 3
Interpretation of regular black holes * 3.1 How to understand regular black holes correctly? * 3.2 How to find the sources of non-rotating regular black holes? * 3.3 What are the difficulties for us to find the sources of rotating regular black holes? * 3.4 Can regular black holes have scalar hairs?
* 4 Energy conditions of regular black holes
* 4.1 Is the strong energy condition a key to lead to a regular black hole?
* 4.2 What are the energy conditions of regular black holes?
* 5 Thermodynamics of regular black holes
* 5.1 What is the entropy of regular black holes?
* 5.2 What is the correct first law of thermodynamics for regular black holes?
* 6 Regular black hole chemistry and thermodynamic geometry
* 6.1 What is the regular black hole chemistry?
* 6.1.1 Thermodynamic phase transition and shift of critical points
* 6.1.2 Regular black hole as a heat engine
* 6.2 How to eliminate the singularity of the thermodynamic geometry for regular black holes?
* 6.2.1 Construction of thermodynamic geometry
* 6.2.2 Singularity of thermodynamic geometry and elimination
* 7 Conclusion and outlook
## 1 Introduction
Regular black holes (RBHs) are a collection of black holes (BHs) that have coordinate singularities (horizons) but lack essential singularities in the entire spacetime. In most cases, the strategy to determine a RBH refers [1, 2, 3] to the spacetime with _finite curvature invariants_\({}^{1}\) everywhere, particularly at the BH center. This is related to Markov's limiting curvature conjecture [4, 5, 6, 7], which states that the curvature invariants must be uniformly restricted by a certain universal value. However, such a strategy fails [8, 9] in the well-known Taub-NUT BH because the null and timelike geodesics are incomplete at the horizon, which contradicts [10, 11] the alternative strategy to determine a regular spacetime based on _geodesic completeness_.\({}^{2}\) The strategy of complete geodesics also encounters [12, 13] counterexamples, see e.g. Refs. [14, 15], where the geodesics are complete but the curvature invariants are divergent, which in turn contradicts Markov's limiting curvature conjecture. In this sense, the two strategies should be complementary to each other in order to judge RBHs.
The studies of RBHs date back to Sakharov and Gliner's works [16, 17], where they stated that the essential singularities can be avoided if the vacuum is replaced by a vacuum-like medium endowed with a de Sitter metric. This idea has been developed further by Dymnikova, Gurevich, and Starobinsky [18, 19]. The first model of RBHs was implemented by Bardeen [20], now called the Bardeen BH, which was constructed via simply replacing the mass of Schwarzschild BHs with an \(r\)-dependent function. As a result, the essential singularity of the Kretschmann scalar is removed in the Bardeen BH; meanwhile, the core of this BH is of de Sitter, i.e., the Ricci curvature is positive in the vicinity of this BH center.
Three decades after Bardeen's proposal, Ayon-Beato and Garcia provided [21] the first interpretation of the Bardeen BH in field theory, i.e., they _speculated_ a source, a magnetic monopole in the context of nonlinear electrodynamics, which can lead to the Bardeen BH solution from Einstein's field equations. Recently, a large number of RBH models have been given in this way. In particular, such an approach has been extended [3, 22, 23, 24, 25] to explain all RBH models with spherical symmetry. It differs from the usual way of finding BH solutions by solving Einstein's field equations. According to this approach, one writes the desired RBH and magnetic monopole solutions at first, and then determines the corresponding action of nonlinear electrodynamics. Because of such a special logic, i.e., from the solutions to the matter action, the RBHs have nontrivial (phantom) scalar hairs [26, 23, 27]. Moreover, they are regarded as classical objects as they are the solutions of Einstein's field equations.
Correspondingly, there are two different ways to construct RBH models: one is to _solve_ Einstein's field equations associated with special kinds of sources, e.g., matter with spatial distributions [1, 28, 29, 30, 31, 32, 33, 34]; and the other is to _derive_ RBHs as quantum corrections to the classical BHs with singularity, e.g., via loop quantum gravity and the asymptotic safety method [35, 36, 37, 38, 39, 40, 41, 42, 43]. RBHs constructed in the former way behave semiclassically, whereas those derived in the latter exhibit quantum behaviors.
Besides the structures of RBHs [44, 45, 46, 47, 48, 49, 50, 51], the study also extends to other areas of BH physics, including thermodynamics [52, 53, 54, 55, 22], dynamics [56, 57, 58, 59, 60, 61], shadows [62, 63, 64, 65, 66, 67, 68, 69, 70], quasinormal modes [71, 72, 56, 73], superradiance [74], and synchrotron radiation [75], etc. Currently, the study of RBHs has made great progress in depth and breadth, which makes it necessary to summarize the new results in a systematic way. Compared with the previous reviews [19, 76, 29], we gather the progress developed recently. This is the main motivation for drafting this review.
Our review is organized as follows. In Sec. 2, we address the issue of the construction of both non-rotating and rotating RBHs, where we also discuss the number of curvature invariants among the Zakhary-Mcintosh invariants needed for determining whether a BH is regular or not. Sec. 3 includes the clarification for understanding RBHs and the establishment of the sources of RBHs in terms of Petrov's approach. We end this section by showing the peculiarity of RBHs regarding scalar hairs, which does not break the no-hair theorem. In Sec. 4 we demonstrate the role played by the strong energy condition in RBHs, and provide an illustration or a resolution of the issue of the violation of the other energy conditions in RBHs. Sec. 5 is dedicated to the thermodynamics of RBHs, where we give a discussion on the entropy-area law, based on which the self-consistent first law of thermodynamics is given in Sec. 5.2. In Sec. 6, we discuss the chemistry and thermodynamic geometry for RBHs because they are the nontrivial extensions of interesting issues associated with
singular BHs (SBHs). Finally, we conclude in Sec. 7 with some outlook.
## 2 Construction of regular black holes
In this section, we summarize the approaches for both the non-rotating and rotating RBHs, meanwhile we analyze the minimum set of curvature invariants that are needed to judge a RBH. Here the curvature invariants refer to the _Zakhary-Mcintosh (ZM) invariants_[77, 78] that form a complete set of Riemann invariants and contain seventeen elements. The reason that we adopt the ZM invariants rather than the usual ones, such as the Ricci scalar and Kretschmann scalar [6, 22, 79], will be explained in Sec. 2.2.
### How to construct non-rotating regular black holes?
Despite the complexity of the ZM invariants, their calculation becomes simple for BHs with spherical symmetry. The RBHs with spherical symmetry have two types of metrics: the first type involves one shape function, with the squared line element
\[\mathrm{d}s^{2}=-f(r)\mathrm{d}t^{2}+f^{-1}(r)\mathrm{d}r^{2}+r^{2}\mathrm{d} \Omega^{2}, \tag{1}\]
and the second type involves two shape functions, with the squared line element
\[\mathrm{d}s^{2}=-f(r)\mathrm{d}t^{2}+f^{-1}(r)A^{2}(r)\mathrm{d}r^{2}+r^{2} \mathrm{d}\Omega^{2}, \tag{2}\]
which is equivalent to
\[\mathrm{d}s^{2}=-f(\xi)\mathrm{d}t^{2}+f^{-1}(\xi)\mathrm{d}\xi^{2}+r^{2}(\xi)\mathrm{d}\Omega^{2}, \tag{3}\]
where \(\xi\) is a newly defined variable,
\[\xi\equiv\int\mathrm{d}r\,A(r). \tag{4}\]
In the remainder of this subsection, we give some restrictions on the two types of metrics, which will reveal that the RBHs described by them have finite curvature invariants.
#### 2.1.1 The case with one shape function
It is quite general for us to write the shape function as follows,
\[f(r)=1-\frac{2M\sigma(r)}{r}, \tag{5}\]
where \(\sigma(r)\) is a function of the radial variable \(r\) and \(M\) is the BH mass. In order to examine the regularity, we expand \(\sigma(r)\) in a power series around \(r=0\),
\[\sigma(r)=\sigma_{1}r+\sigma_{2}r^{2}+\sigma_{3}r^{3}+O(r^{4}), \tag{6}\]
where \(\sigma_{i}\) are constant coefficients. Then, by substituting Eqs. (1), (5) and (6) into the ZM invariants, we can find the conditions of finite curvatures, that is, the coefficients \(\sigma_{1}\) and \(\sigma_{2}\) must vanish,
\[\sigma_{1}=0=\sigma_{2}. \tag{7}\]
As examples, we write down the behaviors of three usual candidates among the seventeen ZM invariants around \(r=0\): The Ricci scalar \(R\), the Weyl scalar \(W\), and the Kretschmann scalar \(K\) have the asymptotic behaviors for RBHs,
\[R=24M\sigma_{3}+O(r),\qquad W=O(r^{2}),\qquad K=96M^{2}\sigma_{3}^{2}+O(r). \tag{8}\]
Alternatively, we can select three curvatures from the seventeen ZM invariants and write \(\sigma(r)\), \(\sigma^{\prime}(r)\), and \(\sigma^{\prime\prime}(r)\) as functions of these three curvatures because the ZM invariants contain \(\sigma(r)\) and only its first and second order derivatives, \(\sigma^{\prime}(r)\) and \(\sigma^{\prime\prime}(r)\). Then, requiring the finiteness of the three curvatures, we can find the behavior of \(\sigma(r)\) around the center \(r=0\), i.e., \(\sigma(r)\) should not vanish more slowly than \(r^{3}\) as \(r\) approaches zero [80], otherwise some of the ZM invariants will diverge at \(r=0\).
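These conditions can also be verified mechanically with a computer algebra system. The following sketch is a minimal illustration (assuming SymPy is available; all variable names are ours, not from the literature) that computes \(R\) and \(K\) from first principles for the metric Eq. (1) with the minimal regular profile \(\sigma(r)=\sigma_{3}r^{3}\), reproducing the finite limits in Eq. (8); re-running it with \(\sigma_{1}\neq 0\) or \(\sigma_{2}\neq 0\) in Eq. (6) instead yields curvatures diverging as \(r\to 0\).

```python
import sympy as sp

# Coordinates and parameters; sigma_3 is the first non-trivial
# coefficient in Eq. (6) once sigma_1 = sigma_2 = 0 is imposed.
t, ph = sp.symbols('t phi')
r, th = sp.symbols('r theta', positive=True)
M, s3 = sp.symbols('M sigma_3', positive=True)

f = 1 - 2*M*(s3*r**3)/r                          # Eqs. (5)-(7)
x = [t, r, th, ph]
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # metric Eq. (1)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
             - sp.diff(g[b, c], x[d])) for d in range(4))/2
         for c in range(4)] for b in range(4)] for a in range(4)]

# Riemann tensor R^a_{bcd}
def riem(a, b, c, d):
    expr = sp.diff(Gam[a][b][d], x[c]) - sp.diff(Gam[a][b][c], x[d])
    expr += sum(Gam[a][c][e]*Gam[e][b][d] - Gam[a][d][e]*Gam[e][b][c]
                for e in range(4))
    return sp.simplify(expr)

Rm = [[[[riem(a, b, c, d) for d in range(4)] for c in range(4)]
       for b in range(4)] for a in range(4)]

# Ricci scalar R = g^{bd} R^a_{bad}
R = sp.simplify(sum(ginv[b, d]*Rm[a][b][a][d]
                    for a in range(4) for b in range(4) for d in range(4)))

# Kretschmann scalar K = R_{abcd} R^{abcd}; for a diagonal metric the
# contractions reduce to a weighted sum of squares.
K = sp.simplify(sum(g[a, a]*ginv[b, b]*ginv[c, c]*ginv[d, d]*Rm[a][b][c][d]**2
                    for a in range(4) for b in range(4)
                    for c in range(4) for d in range(4)))

print(R.series(r, 0, 1))   # 24*M*sigma_3, cf. Eq. (8)
print(K.series(r, 0, 1))   # 96*M**2*sigma_3**2, cf. Eq. (8)
```

The same script applies verbatim to the metric Eq. (2) after replacing the \(g_{rr}\) entry by \(A^{2}/f\).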
#### 2.1.2 The case with two shape functions
For the RBHs with two shape functions [6, 81, 82, 23], we apply a procedure similar to the above, that is, we expand both shape functions (see Eq. (2)) in power series,
\[A(r)=A_{0}+A_{1}r+A_{2}r^{2}+O(r^{3}), \tag{9a}\] \[f(r)=B_{0}+B_{1}r+B_{2}r^{2}+O(r^{3}). \tag{9b}\]
After substituting Eqs. (2), (9a) and (9b) into the ZM invariants, we can find the conditions for finite curvatures,
\[A_{0}=B_{0},\qquad A_{1}=B_{1}=0, \tag{10}\]
i.e., the first-order terms in \(r\) must be absent in the power expansions. We again give three curvature invariants for RBHs as examples as \(r\) goes to zero,
\[R=\frac{6(A_{2}-2B_{2})}{A_{0}}+O(r),\qquad W=O(r^{2}),\qquad\mathcal{S}=\frac {3A_{2}^{2}}{A_{0}^{2}}+O(r), \tag{11}\]
where \(\mathcal{S}\) and \(\mathcal{S}_{\mu\nu}\) are defined [6] by \(\mathcal{S}\equiv\mathcal{S}^{\mu\nu}\mathcal{S}_{\mu\nu}\) and \(\mathcal{S}_{\mu\nu}\equiv R_{\mu\nu}-g_{\mu\nu}R/4\), respectively.
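A quick cross-check of Eqs. (10) and (11): taking \(A(r)=A_{0}=1\) (so \(A_{1}=A_{2}=0\)) and \(f(r)=1+B_{2}r^{2}\) (so \(B_{0}=A_{0}\) and \(B_{1}=0\)), the metric Eq. (2) reduces to Eq. (1), which is the de Sitter spacetime for \(B_{2}=-1/L^{2}\). Eq. (11) then gives

\[R=\frac{6(A_{2}-2B_{2})}{A_{0}}=-12B_{2}=\frac{12}{L^{2}},\qquad\mathcal{S}=\frac{3A_{2}^{2}}{A_{0}^{2}}=0,\]

which are indeed the constant Ricci scalar of de Sitter space and the vanishing of \(\mathcal{S}_{\mu\nu}\) for an Einstein space.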
### How many curvature invariants do we need to define a regular black hole?
Generally, the _finite curvature invariants_ and _geodesic completeness_ are not equivalent to each other,\({}^{c}\) but they can be regarded as two independent necessary conditions for checking whether a BH is regular. Moreover, the former is coordinate independent, while the latter depends on the choice of coordinates. For instance, in the Rindler spacetime [83, 11], \(\mathrm{d}s^{2}=-z^{2}\mathrm{d}t^{2}+\mathrm{d}x^{2}+\mathrm{d}y^{2}+\mathrm{d}z^{2}\), the geodesics cannot be extended along the \(z\)-direction because the corresponding affine parameter is finite at \(z=0\). In other words, the point \(z=0\) acts as a singularity in this spacetime. However, after an appropriate transformation,
Footnote c: For certain cases, these two conditions are equivalent, e.g., for spherically symmetric BHs with one shape function, i.e., \(\mathrm{d}s^{2}=-f(r)\mathrm{d}t^{2}+f^{-1}(r)\mathrm{d}r^{2}+r^{2}\mathrm{d} \Omega^{2}\).
\[t\rightarrow\tanh^{-1}\frac{T}{Z},\qquad x\to X,\qquad y\to Y, \qquad z\rightarrow\sqrt{Z^{2}-T^{2}}, \tag{12}\]
the original metric converts to that of the Minkowski spacetime, \(\mathrm{d}s^{2}=-\mathrm{d}T^{2}+\mathrm{d}X^{2}+\mathrm{d}Y^{2}+\mathrm{d}Z^{2}\), i.e., there are no singularities anywhere. From this point of view, we can see that the condition of _finite curvature invariants_ has its advantage, i.e., one does not need to be concerned about selecting appropriate coordinates.
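Explicitly, the inverse of Eq. (12) reads \(T=z\sinh t\) and \(Z=z\cosh t\) (together with \(X=x\), \(Y=y\)), so that

\[-\mathrm{d}T^{2}+\mathrm{d}Z^{2}=-(\sinh t\,\mathrm{d}z+z\cosh t\,\mathrm{d}t)^{2}+(\cosh t\,\mathrm{d}z+z\sinh t\,\mathrm{d}t)^{2}=-z^{2}\mathrm{d}t^{2}+\mathrm{d}z^{2},\]

where the cross terms cancel and \(\cosh^{2}t-\sinh^{2}t=1\) has been used.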
Nevertheless, it is unclear how many curvatures have to be used in order to determine a RBH. Usually, there are three candidates\({}^{d}\) connected by the Ricci decomposition, i.e., the Ricci scalar \(R\), the Kretschmann scalar \(K\) and the contraction of two Ricci tensors \(R_{2}\). The question is whether \(R\), \(K\) and \(R_{2}\) are enough to reveal the singularities in all the seventeen curvature invariants. The answer is of course negative. Let us see the well-known Taub-NUT BH [84, 85] as an example,
Footnote d: Alternatively, \(K\) and \(R_{2}\) are replaced by \(W\equiv C_{\mu\nu\alpha\beta}C^{\mu\nu\alpha\beta}\), the contraction of two Weyl tensors, and \(\mathcal{S}\equiv\mathcal{S}_{\mu\nu}\mathcal{S}^{\mu\nu}\), where \(\mathcal{S}_{\mu\nu}\equiv R_{\mu\nu}-g_{\mu\nu}R/4\)[6].
\[\mathrm{d}s^{2}=-f(r)\left[\mathrm{d}t+2n\cos(\theta)\mathrm{d}\phi\right]^{2 }+\frac{\mathrm{d}r^{2}}{f(r)}+\zeta^{2}\left[\mathrm{d}\theta^{2}+\sin^{2}( \theta)\mathrm{d}\phi^{2}\right] \tag{13}\]
with
\[f(r)=\frac{\Delta}{\zeta^{2}},\qquad\Delta=r^{2}-2Mr-n^{2},\qquad\zeta=\sqrt{r ^{2}+n^{2}}, \tag{14}\]
where \(M\) is the mass, \(n\), called the NUT parameter, is positive, and the horizon is located at \(r_{\mathrm{H}}=M+\sqrt{M^{2}+n^{2}}\). It is shown [9, 86] that the geodesics are incomplete at the horizon.
We turn to the investigation of the curvature invariants of the Taub-NUT BH. The Ricci tensor vanishes, \(R_{\mu\nu}=0\). The Kretschmann scalar reads
\[K\sim\frac{48\left(n^{2}-M^{2}\right)}{n^{6}}+O\left(r\right), \tag{15}\]
which is finite as \(r\) approaches to zero. Moreover, \(R_{2}\) is also finite when \(r\) goes to zero. As a result, the three curvature invariants, \(R\), \(K\) and \(R_{2}\) are finite in the Taub-NUT BH spacetime. But we cannot conclude that the other ZM invariants are regular everywhere. In fact, there are two ZM invariants that are divergent at the horizon. They are constructed [78] by the Weyl tensor, \(C_{\mu\nu\alpha\beta}\), and its dual, \(C^{*}_{\mu\nu\alpha\beta}=\epsilon_{\mu\nu\rho\sigma}C^{\rho\sigma}_{\ \ \alpha\beta}/2\), as follows:
\[I_{2}=-C^{\ \alpha\beta}_{\mu\nu}C^{*\ \mu\nu}_{\alpha\beta}=\frac{\Phi(r, \theta)}{\Delta^{2}\sin^{2}\theta},\qquad I_{4}=-C^{\ \alpha\beta}_{\mu\nu}C^{*\ \gamma\zeta}_{\alpha\beta}C^{\ \mu\nu}_{\gamma\zeta}=\frac{\Psi(r, \theta)}{\Delta^{2}\sin^{2}\theta}, \tag{16}\]
where \(\Phi(r,\theta)\) and \(\Psi(r,\theta)\) are holomorphic functions of \(r\) and \(\theta\) and have no zeros at the horizon, \(r_{\mathrm{H}}\), and at \(\theta=0,\uppi\). Since \(I_{2}\) and \(I_{4}\) are proportional to \(1/\Delta^{2}\), the essential singularity reappears at \(r_{\mathrm{H}}\), which coincides with the result deduced from the incomplete geodesics [9, 86]. Moreover, \(I_{2}\) and \(I_{4}\) reveal additional singularities at \(\theta=0,\ \uppi\), where \(\theta=\uppi\) is known as the "Misner string" [86].
What we learn from this example is that the finiteness of the usual candidates of curvature invariants is not sufficient in general, which leads to the conclusion that a complete set of Riemann invariants is necessary from the perspective of finite curvatures in order to determine whether there exist essential singularities.
### How to construct rotating regular black holes?
It is considerably difficult to obtain rotating RBH solutions from the Einstein field equations because the complexity of Einstein's field equations in the rotating case is much greater than in the static case. Therefore, the widely-used method for constructing rotating BHs is the Newman-Janis algorithm (NJA) [87].
The NJA originated from the connection between a static BH and a rotating one in general relativity. It is well known that the Schwarzschild, Reissner-Nordstrom (RN), Kerr, and Kerr-Newman (KN) BHs were obtained by solving Einstein's field equations in vacuum or in the presence of an electromagnetic field. These solutions have clear physical explanations. By comparing the metrics of these BHs, Newman and Janis proposed the NJA to mathematically describe the transformation from spherically symmetric Schwarzschild BHs to axially symmetric Kerr BHs. The algorithm can also describe the transformation from RN BHs to KN BHs.
Subsequently, Gurses and Gursey extended [88] this algorithm to the Kerr-Schild type of BHs, namely, the metric with one shape function mentioned in Sec. 2.1. Further, Drake and Szekeres generalized [89] the NJA to general spherically symmetric BHs. Based on the above algorithms, many spherically symmetric RBHs have been extended to their axially symmetric counterparts, such as the noncommutative BHs [90, 29, 91], loop quantum corrected BHs [92, 93], Bardeen BHs [94, 20], and Hayward BHs [95, 94], etc.
#### 2.3.1 What are the problems faced by the Newman-Janis algorithm?
The NJA faces an ambiguity in the complex transformation of metrics. This problem arises from the complex transformation of coordinates [96, 97, 98]:
\[r\to r+\mathrm{i}a\cos\theta,\qquad u\to u-\mathrm{i}a\cos\theta, \tag{17}\]
where \((u,r,\theta,\varphi)\) are the advanced null coordinates and \(a\) is the rotation parameter. According to the transformation, we need to convert a static, spherically symmetric metric function into a rotating, axially symmetric one, and ensure that the latter is real rather than complex. As a result, this conversion of metric functions must follow certain rules. However, such rules are ambiguous in the NJA.
The commonly-used rule is obtained by comparing the RN metric with the KN metric. The \(tt\)-component of the RN metric reads
\[g_{\mathrm{(RN)tt}}=1-\frac{2M}{r}+\frac{q^{2}}{r^{2}}, \tag{18}\]
where \(M\) is the mass and \(q\) is the charge. The \(tt\)-component of the KN metric takes the form,
\[g_{\mathrm{(KN)tt}}=1-\frac{2Mr}{r^{2}+a^{2}\cos^{2}\theta}+\frac{q^{2}}{r^{2} +a^{2}\cos^{2}\theta}. \tag{19}\]
The conversion rule is as follows:
\[r^{2}=rr^{*}\rightarrow(r+ia\cos\theta)(r-ia\cos\theta)=r^{2}+a^{2}\cos^{2}\theta, \tag{20}\] \[\frac{1}{r}=\frac{1}{2}\left(\frac{1}{r}+\frac{1}{r^{*}}\right) \rightarrow\frac{1}{2}\left(\frac{1}{r+ia\cos\theta}+\frac{1}{r-ia\cos\theta} \right)=\frac{r}{r^{2}+a^{2}\cos^{2}\theta}. \tag{21}\]
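For instance, applied to the Schwarzschild function \(1-2M/r\), the rule Eq. (21) immediately yields

\[1-\frac{2M}{r}\ \rightarrow\ 1-\frac{2Mr}{r^{2}+a^{2}\cos^{2}\theta},\]

which is precisely the \(tt\)-component of the Kerr metric, cf. Eq. (24) below.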
However, the above conversion rule may not be applicable due to the complexity of RBH metrics. Here we take the black-bounce spacetime [81] as an example, whose metric's \(tt\)-component reads
\[g_{\rm(BB)tt}=1-\frac{2M}{\sqrt{r^{2}+l^{2}}}, \tag{22}\]
where \(l\) is the regularization parameter. When \(l\) vanishes, the metric becomes Schwarzschild's. Therefore, the rotating version of this metric should reduce to the Kerr metric when \(l=0\). But this is not the case: the rotating black-bounce metric under the above conversion rule takes the form,
\[g_{\rm(rBB)tt}=1-\frac{2M}{\sqrt{r^{2}+a^{2}\cos^{2}\theta+l^{2}}}, \tag{23}\]
whereas the \(tt\)-component of the Kerr metric is
\[g_{\rm(K)tt}=1-\frac{2Mr}{r^{2}+a^{2}\cos^{2}\theta}. \tag{24}\]
Obviously, \(g_{\rm(rBB)tt}\) does not reduce to \(g_{\rm(K)tt}\) when \(l=0\). As a result, the above conversion rule does not apply to the black-bounce spacetime. Due to the failure of this rule, we need to find a rule that is applicable to more models. Furthermore, the ambiguity related to coordinate transformations leads to difficulties in the generalization of the NJA.
#### 2.3.2 How to modify the Newman-Janis algorithm?
In order to avoid the ambiguity caused by the complex transformation, Azreg-Ainou modified [96, 97] the NJA as follows.
For a general static metric,
\[\mathrm{d}s^{2}=-G(r)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{F(r)}+H(r)\left( \mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\varphi^{2}\right), \tag{25}\]
one introduces the advanced null coordinates \((u,r,\theta,\varphi)\) defined by
\[\mathrm{d}u=\mathrm{d}t-\frac{\mathrm{d}r}{\sqrt{FG}}, \tag{26}\]
and expresses the contravariant form of the metric in terms of a null tetrad,
\[g^{\mu\nu}=-l^{\mu}n^{\nu}-l^{\nu}n^{\mu}+m^{\mu}m^{*\nu}+m^{\nu}m^{*\mu}, \tag{27}\]
where
\[l^{\mu}=\delta^{\mu}_{r}, \tag{28a}\] \[n^{\mu}=\sqrt{\frac{F}{G}}\delta^{\mu}_{u}-\frac{F}{2}\delta^{\mu}_{r},\] (28b) \[m^{\mu}=\frac{1}{\sqrt{2H}}\left(\delta^{\mu}_{\theta}+\frac{i}{\sin\theta} \delta^{\mu}_{\varphi}\right),\] (28c) \[l_{\mu}l^{\mu}=m_{\mu}m^{\mu}=n^{\nu}n_{\nu}=l_{\mu}m^{\mu}=n_{\mu}m^{\mu}=0, \tag{28d}\]
\[l_{\mu}n^{\mu}=-m_{\mu}m^{*\mu}=-1. \tag{28e}\]
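Indeed, in the coordinates of Eq. (26) the metric Eq. (25) takes the form \(\mathrm{d}s^{2}=-G\,\mathrm{d}u^{2}-2\sqrt{G/F}\,\mathrm{d}u\mathrm{d}r+H\mathrm{d}\Omega^{2}\) (the \(\mathrm{d}r^{2}\) terms cancel), and one can check directly that the tetrad Eq. (28) reproduces its inverse through Eq. (27), e.g.,

\[g^{uu}=-2l^{u}n^{u}=0,\qquad g^{ur}=-(l^{u}n^{r}+l^{r}n^{u})=-\sqrt{\frac{F}{G}},\qquad g^{rr}=-2l^{r}n^{r}=F,\qquad g^{\theta\theta}=2m^{\theta}m^{*\theta}=\frac{1}{H}.\]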
Then, one introduces the rotation via the complex transformation, Eq. (17), under which \(\delta^{\mu}_{\nu}\) transform as vectors:
\[\delta^{\mu}_{r}\rightarrow\delta^{\mu}_{r},\qquad\delta^{\mu}_{u}\to \delta^{\mu}_{u},\qquad\delta^{\mu}_{\theta}\rightarrow\delta^{\mu}_{\theta}+ ia\sin\theta(\delta^{\mu}_{u}-\delta^{\mu}_{r}),\qquad\delta^{\mu}_{\varphi} \rightarrow\delta^{\mu}_{\varphi}. \tag{29}\]
For a SBH, the metric function of its rotating counterpart can be determined under the above complex transformation. However, such a transformation does not work well for a RBH as we discussed in Sec. 2.3.1. Thus, one assumes that \(\{G,F,H\}\) transform to \(\{A,B,\Psi\}\):
\[\{G(r),F(r),H(r)\}\rightarrow\{A(r,\theta,a),B(r,\theta,a),\Psi(r,\theta,a)\} \tag{30}\]
where \(\{A,B,\Psi\}\) are real functions to be determined, and they should recover their static counterparts in the limit \(a\to 0\), namely,
\[\lim_{a\to 0}A(r,\theta,a)=G(r),\qquad\lim_{a\to 0}B(r,\theta,a)=F(r), \qquad\lim_{a\to 0}\Psi(r,\theta,a)=H(r). \tag{31}\]
According to Eq. (29) and Eq. (30), the null tetrad becomes
\[l^{\mu}=\delta^{\mu}_{r}, \tag{32a}\] \[n^{\mu}=\sqrt{\frac{B}{A}}\delta^{\mu}_{u}-\frac{B}{2}\delta^{\mu}_{r},\] (32b) \[m^{\mu}=\frac{1}{\sqrt{2\Psi}}\left[\delta^{\mu}_{\theta}+ia\sin\theta( \delta^{\mu}_{u}-\delta^{\mu}_{r})+\frac{i}{\sin\theta}\delta^{\mu}_{\varphi} \right], \tag{32c}\]
and the corresponding metric with rotation takes the form,
\[\begin{split}\mathrm{d}s^{2}=&-A\mathrm{d}u^{2}-2 \sqrt{\frac{A}{B}}\mathrm{d}u\mathrm{d}r-2a\sin^{2}\theta\left(\sqrt{\frac{A} {B}}-A\right)\mathrm{d}u\mathrm{d}\varphi+2a\sin^{2}\theta\sqrt{\frac{A}{B}} \mathrm{d}r\mathrm{d}\varphi\\ &+\Psi\mathrm{d}\theta^{2}+\sin^{2}\theta\left[\Psi+a^{2}\sin^{ 2}\theta\left(2\sqrt{\frac{A}{B}}-A\right)\right]\mathrm{d}\varphi^{2}.\end{split} \tag{33}\]
Next, one rewrites the above metric in the Boyer-Lindquist coordinates and requires the metric to have only one off-diagonal term \(g_{t\phi}\). To reach this aim, one needs the following coordinate transformation,
\[\mathrm{d}u=\mathrm{d}t+\lambda(r)\mathrm{d}r,\quad\mathrm{d}\varphi=\mathrm{ d}\phi+\chi(r)\mathrm{d}r, \tag{34}\]
where \(\{\lambda(r),\chi(r)\}\) must depend only on \(r\) to ensure integrability. If the transformation Eq. (30) is a priori determined, \(\{\lambda(r),\chi(r)\}\) may not exist. Considering these constraints, one has the formulations of \(\{A(r,\theta,a),B(r,\theta,a),\lambda(r),\chi(r)\}\),
\[A(r,\theta)=\frac{(FH+a^{2}\cos^{2}\theta)\Psi}{(K+a^{2}\cos^{2}\theta)^{2}}, \tag{35a}\] \[B(r,\theta)=\frac{FH+a^{2}\cos^{2}\theta}{\Psi}, \tag{35b}\]
\[\lambda(r) =-\frac{K+a^{2}}{FH+a^{2}}, \tag{35c}\] \[\chi(r) =-\frac{a}{FH+a^{2}}, \tag{35d}\]
where \(K(r)\) is defined by
\[K(r)\equiv\sqrt{\frac{F(r)}{G(r)}}H(r). \tag{36}\]
As a result, one obtains the metric for rotating RBHs with the Kerr-like form,
\[\mathrm{d}s^{2}=\frac{\Psi}{\rho^{2}}\left[-\left(1-\frac{2f}{\rho^{2}} \right)\mathrm{d}t^{2}+\frac{\rho^{2}}{\Delta}\mathrm{d}r^{2}-\frac{4af\sin^{ 2}\theta}{\rho^{2}}\mathrm{d}t\mathrm{d}\phi+\rho^{2}\mathrm{d}\theta^{2}+ \frac{\Sigma\sin^{2}\theta}{\rho^{2}}\mathrm{d}\phi^{2}\right], \tag{37}\]
where
\[\begin{array}{ll}&\rho^{2}\equiv K+a^{2}\cos^{2}\theta,\qquad 2f(r)\equiv K -FH\\ &\Delta(r)\equiv FH+a^{2},\qquad\Sigma\equiv(K+a^{2})^{2}-a^{2}\Delta\sin^{2} \theta.\end{array} \tag{38}\]
In the above metric, \(\Psi(r,\theta,a)\) remains unknown and may be determined by some specific physical interpretations. For example, if the source is interpreted as an imperfect fluid rotating about the \(z\) axis, \(\Psi\) obeys [96] the Einstein field equations,
\[\left(K+a^{2}y^{2}\right)^{2}(3\Psi_{,r}\Psi_{,y^{2}}-2\Psi\Psi_{,ry^{2}})=3a ^{2}K_{,r}\Psi^{2}, \tag{39}\]
\[\left[K_{,r}^{2}+K(2-K_{,rr})-a^{2}y^{2}(2+K_{,rr})\right]\Psi+(K+a^{2}y^{2})( 4y^{2}\Psi_{,y^{2}}-K_{,r}\Psi_{,r})=0 \tag{40}\]
where "," is the indexical notation for derivatives and \(y\equiv\cos\theta\). However, it is almost impossible to determine \(\Psi(r,\theta,a)\) in this way because of the high complexity. For the RBH metrics mentioned in Sec. 2.1, one usually chooses
\[\Psi(r,\theta,a)=H(r)+a^{2}\cos^{2}\theta. \tag{41}\]
While this choice may sacrifice a reasonable physical interpretation, whether it actually does so needs to be tested case by case. It is worth mentioning that Eq. (41) is compatible with the NJA and is available for constructing a rotating RBH, but it is still unclear whether Eq. (41) is the only choice.
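As a sanity check, for the Schwarzschild seed \(G=F=1-2M/r\) and \(H=r^{2}\), Eqs. (36), (38) and (41) give

\[K=r^{2},\qquad\Psi=\rho^{2}=r^{2}+a^{2}\cos^{2}\theta,\qquad 2f=K-FH=2Mr,\qquad\Delta=FH+a^{2}=r^{2}-2Mr+a^{2},\]

so that Eq. (37) reduces to the Kerr metric in the Boyer-Lindquist coordinates, as it must.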
#### 2.3.3 What are the regularity conditions of rotating regular black holes?
For the seed metric with one shape function, the metric of rotating RBHs takes the form via the NJA,
\[\mathrm{d}s^{2}=-\frac{\Delta}{\rho^{2}}(\mathrm{d}t-a\sin^{2}\theta\mathrm{ d}\phi)^{2}+\frac{\rho^{2}}{\Delta}\mathrm{d}r^{2}+\rho^{2}\mathrm{d}\theta^{2}+ \frac{\sin^{2}\theta}{\rho^{2}}\left[a\mathrm{d}t-(r^{2}+a^{2})\mathrm{d} \phi\right]^{2}, \tag{42}\]
where
\[\rho^{2}=r^{2}+a^{2}\cos^{2}\theta,\qquad\Delta=r^{2}-2M\sigma(r)r+a^{2}. \tag{43}\]
This metric reduces to the Kerr metric when \(\sigma(r)=1\), and to the KN metric when \(\sigma(r)=1-q^{2}/(2Mr)\).
Further, the metric Eq. (42) belongs [76, 99] to the Petrov type \(\mathbf{D}\) because, in a suitable frame, \(\Psi_{2}\) is the only possibly non-vanishing scalar, while \(\Psi_{0},\Psi_{1},\Psi_{3}\) and \(\Psi_{4}\) vanish simultaneously, where the five complex scalar functions can be expressed by the Weyl tensor \(C_{\kappa\lambda\mu\nu}\) as follows:
\[\Psi_{0}=C_{\kappa\lambda\mu\nu}l^{\kappa}m^{\lambda}l^{\mu}m^{\nu}, \tag{44a}\] \[\Psi_{1}=C_{\kappa\lambda\mu\nu}l^{\kappa}k^{\lambda}l^{\mu}m^{\nu},\] (44b) \[\Psi_{2}=C_{\kappa\lambda\mu\nu}l^{\kappa}m^{\lambda}m^{*\mu}k^{\nu},\] (44c) \[\Psi_{3}=C_{\kappa\lambda\mu\nu}k^{\kappa}l^{\lambda}k^{\mu}m^{*\nu},\] (44d) \[\Psi_{4}=C_{\kappa\lambda\mu\nu}k^{\kappa}m^{*\lambda}k^{\mu}m^{*\nu}. \tag{44e}\]
Therefore, the algebraically complete set of second order invariants is \(\{R,I,I_{6},K\}\)[76, 77, 99], which means that Eq. (42) is regular if the set of invariants does not diverge anywhere. The definitions of \(R\) and \(K\) have been introduced in Sec. 2.2, and the definitions of \(I\) and \(I_{6}\) are
\[I\equiv\frac{1}{24}C^{*}_{\alpha\beta\gamma\delta}C^{*\alpha\beta\gamma\delta}, \tag{45}\]
\[I_{6}\equiv\frac{1}{12}\mathcal{S}_{\alpha}{}^{\beta}\mathcal{S}_{\beta}{}^{ \alpha}, \tag{46}\]
which also belong to the seventeen ZM invariants. According to this set of invariants, one can deduce the necessary and sufficient condition for the regularity of Eq. (42): if \(\sigma(r)\) is a \(C^{3}\) function, then regularity demands
\[\sigma(0)=0,\qquad\sigma^{\prime}(0)=0,\qquad\sigma^{\prime\prime}(0)=0. \tag{47}\]
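For example, the Bardeen BH corresponds to \(\sigma(r)=r^{3}/(r^{2}+g^{2})^{3/2}\) (cf. Eq. (48) below), whose expansion around \(r=0\),

\[\sigma(r)=\frac{r^{3}}{g^{3}}-\frac{3r^{5}}{2g^{5}}+O(r^{7}),\]

shows that Eq. (47) is satisfied, so the rotating Bardeen metric of the form Eq. (42) is regular.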
For the seed metric with two shape functions, a general analytical method for the regularity conditions of rotating RBHs is still lacking, and in most cases one can only verify the regularity by calculating \(R\) and \(K\) [100, 101, 102].
## 3 Interpretation of regular black holes
A complete RBH theory requires a physical interpretation, either from a quantum theory of gravity, such as loop quantum gravity and asymptotic safety, or from the construction of gravitational sources in the context of classical field theory. In this section, we first explain RBHs from the perspective of coordinate transformations, and then we summarize the techniques for constructing gravitational sources for both non-rotating and rotating RBHs. At the end of this section, we give a short discussion on the scalar hair of RBHs because it relates to the classical field interpretations.
### How to understand regular black holes correctly?
It is a confusing issue whether RBHs exist in nature or whether they are just mathematical tricks; this was emphasized in Refs. [81, 103], where a RBH is constructed by a seeming "coordinate transformation". To make the following discussions clear, we ask the question in another way:
_Is a RBH a full spacetime, or is it just a "good" coordinate system that does not cover the entire spacetime in the radial direction?_
Let us examine the Schwarzschild BH as an example to see the essence of the above question, where the Penrose diagram is shown in Fig. 1(a).
In order to construct a RBH, we drag the coordinate system downward by a transformation \(r\to r(\xi)\), in such a way that the coordinate system cannot cover the singularity after the operation, see Fig. 1(b), where \(r\to\sqrt{\xi^{2}+l^{2}}\) with \(l>0\). In other words, we put the singularity \(r=0\) down on the "non-physical" domain in the new coordinate system if the new radial coordinate \(\xi\) is defined in \(\xi\in[0,\infty)\), i.e., the singularity is dragged to the imaginary axis after analytically continuing \(\xi\) into a complex plane. This operation is equivalent to restricting the old radial coordinate to \(r\in[l,\infty)\) by hand. The singularity is subtracted from the "old" spacetime, such that the "new" spacetime is regular. In other words, the "new" coordinates cover a smaller portion of the manifold than the old coordinates in the Schwarzschild spacetime, but the topology of the manifold never changes. We call the Schwarzschild BH in \(\xi\) a _fake_ RBH.
Now let us see a _real_ RBH which does not redisplay the singularity by a transformation. Taking the Bardeen BH as an example [20],
\[\mathrm{d}s^{2}=-f\mathrm{d}t^{2}+f^{-1}\mathrm{d}r^{2}+r^{2}\mathrm{d}\Omega ^{2},\qquad f=1-\frac{2Mr^{2}}{(r^{2}+g^{2})^{3/2}}, \tag{48}\]
where \(M\) is the mass and \(g\) is the magnetic charge of monopoles, we know that the Kretschmann scalar is regular in \(r\in[0,\infty)\). After a replacement, \(r^{2}\to\xi^{2}-g^{2}\), the metric Eq. (48) becomes
\[\mathrm{d}s^{2}=-f\mathrm{d}t^{2}+\frac{f^{-1}\xi^{2}}{\xi^{2}-g^{2}}\mathrm{d }\xi^{2}+(\xi^{2}-g^{2})\mathrm{d}\Omega^{2},\qquad f=1-\frac{2M(\xi^{2}-g^{2} )}{\xi^{3}}. \tag{49}\]
If we take Eq. (49) as an independent metric describing a "new" spacetime, the corresponding Kretschmann scalar diverges around \(\xi=0\),
\[K\sim\frac{900g^{8}M^{2}}{\xi^{14}}+O\left(\frac{1}{\xi^{13}}\right). \tag{50}\]
It seems that the Bardeen BH redisplays the singularity at \(\xi=0\), and even that almost all the RBHs regain singularities under a replacement \(r\to r(\xi)\). However, this is not the case, because \(\xi\) could never be smaller than \(g\); otherwise, the signature in Eq. (49) becomes ill-defined and the integral measure \(\sqrt{-g}\) becomes complex.

Figure 1: Penrose diagrams of a Schwarzschild BH in two different radial coordinates.
The above two examples reflect the fact that the singularity cannot be resolved by a coordinate transformation, which is consistent with the essence of singularities in BHs.
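In fact, since the Kretschmann scalar transforms as a scalar, \(K\) for the metric Eq. (49) follows from substituting \(r^{2}=\xi^{2}-g^{2}\) into \(K\) of the Bardeen BH, Eq. (51) below: the denominator becomes \((g^{2}+r^{2})^{7}=\xi^{14}\), while the numerator tends to \(12M^{2}(4+47+12+8+4)g^{8}=900M^{2}g^{8}\) as \(\xi\to 0\), which reproduces Eq. (50).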
Further, if one analytically continues the radial coordinate to a complex plane, the complex singularities may emerge again. Considering the Kretschmann scalar of Bardeen BHs,
\[K=\frac{12M^{2}\left(-4g^{6}r^{2}+47g^{4}r^{4}-12g^{2}r^{6}+8g^{8}+4r^{8} \right)}{\left(g^{2}+r^{2}\right)^{7}}, \tag{51}\]
we observe that the singularities are moved to the non-physical domain, e.g., \(r=\pm\mathrm{i}g\in\mathbb{C}\). Generally, one can classify RBHs into three types by the characteristics of singularities [73]:
* The first type corresponds to those RBHs whose geodesics are complete in the domain of \(r\in[0,\infty)\) although their curvature invariants have essential singularities at \(r=0\) from the perspective of complex analysis, e.g., \(\sigma(r)=\mathrm{e}^{-1/r}\), see Ref. [79];
* The second type corresponds to those RBHs whose singularities of curvature invariants are moved to the non-physical domain, e.g., the Bardeen and Hayward BHs [104];
* The third type corresponds to those RBHs whose curvature invariants have no singularities on the entire complex plane, e.g., the noncommutative geometry inspired BH [28].
This classification directly affects the calculation of the asymptotic frequencies of quasinormal modes (QNMs) by the monodromy method [73].
### How to find the sources of non-rotating regular black holes?
Gliner discussed [17] an algebraic property of a four-dimensional energy-momentum tensor (EMT) denoted by [(1111)], where the symbol 1 corresponds to one diagonal component of the EMT and the parentheses imply equal components,\({}^{e}\) see Refs. [84, 105] for the Segre notations. The matter with the algebraic property [(1111)], called a \(\mu\)-vacuum, has a de Sitter-like metric and thus avoids singularities. Later, Gliner's work was extended [106, 1], where there are four types of algebraic properties in general for spherically symmetric BHs,
Footnote e: As in Ref. [17], we do not distinguish between time and space components by a comma.
\[[(1111)],\qquad[(11)(11)],\qquad[11(11)],\qquad[(111)1]. \tag{52}\]
The matter with these algebraic properties can generate RBHs.
For instance, one RBH given in Ref. [1] has the property [(11)(11)]. Generally, all RBHs with metric Eq. (1) can have this algebraic property because the Einstein tensor is of the following form,
\[G^{0}_{\;0}=G^{1}_{\;1}=\frac{f^{\prime}(r)}{r}+\frac{f(r)}{r^{2} }-\frac{1}{r^{2}}, \tag{53a}\] \[G^{2}_{\;2}=G^{3}_{\;3}=\frac{f^{\prime}(r)}{r}+\frac{f^{\prime \prime}(r)}{2}, \tag{53b}\]
where we have supposed that Einstein's theory of gravity still holds, \(G^{\mu}{}_{\nu}=8\pi\!T^{\mu}{}_{\nu}\), thus we can discuss the algebraic properties of EMTs in terms of Einstein's tensor.
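As a simple illustration, inserting the de Sitter shape function \(f(r)=1-r^{2}/L^{2}\) into Eq. (53) yields

\[G^{0}{}_{0}=G^{1}{}_{1}=G^{2}{}_{2}=G^{3}{}_{3}=-\frac{3}{L^{2}},\]

i.e., \(T^{\mu}{}_{\nu}\propto\delta^{\mu}{}_{\nu}\) with the algebra [(1111)], which is nothing but Gliner's \(\mu\)-vacuum at the core of a RBH.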
One example given in Ref. [81] has the property [11(11)]. For the metric with the form of Eq. (3), one has the components of the Einstein tensor,
\[G^{0}{}_{0}=\frac{f^{\prime}\rho^{\prime}}{\rho}+\frac{f\rho^{\prime 2}}{\rho^{ 2}}+\frac{2f\rho^{\prime\prime}}{\rho}-\frac{1}{\rho^{2}}, \tag{54a}\] \[G^{1}{}_{1}=\frac{f^{\prime}\rho^{\prime}}{\rho}+\frac{f\rho^{\prime 2}}{ \rho^{2}}-\frac{1}{\rho^{2}},\] (54b) \[G^{2}{}_{2}=G^{3}{}_{3}=\frac{f^{\prime}\rho^{\prime}}{\rho}+ \frac{f^{\prime\prime}}{2}+\frac{f\rho^{\prime\prime}}{\rho}, \tag{54c}\]
where \(\rho\equiv r(\xi)\) and primes denote derivatives with respect to \(\xi\); the equality \(G^{0}{}_{0}=G^{1}{}_{1}\) does not hold in general. However, when \(\rho\propto\xi\), the equality \(G^{0}{}_{0}=G^{1}{}_{1}\) appears, and then [11(11)] reduces to [(11)(11)].
For the EMT with the algebraic property [(111)1], there is an example in Refs. [23, 45]. Here [(111)1] implies \(G^{0}{}_{0}=G^{2}{}_{2}=G^{3}{}_{3}\).
At first sight, the algebraic properties seem to offer no aid in the construction of RBHs. However, they are extremely important: they are closely associated with a construction of RBHs that differs from that of SBHs, as we detail below.
Generally, a complete theory of RBHs can be established with one of two distinct logics. The first is the so-called _bottom-up_ approach, in which the metric with finite curvature invariants is derived based on the First Principle, such as the loop quantum gravity or the theory of asymptotic safety, etc. The second logic is the so-called _top-down_ approach, in which the metric with finite curvature invariants or complete geodesics is postulated at first, and the classical field that yields such a metric is then sought. Therefore, it is necessary to clarify the algebraic properties of the gravitational field when searching for matter sources in the second approach.
For instance, the RBH with metric Eq. (1) cannot be interpreted by a scalar phantom field depending only on the radial coordinate, but the RBH with metric Eq. (3) can. The reason is that the algebraic property of a scalar phantom field EMT is consistent with the Einstein tensor based on Eq. (3), i.e., the components of Einstein's tensor match the algebra, [(111)1].
Furthermore, the algebraic properties depend on specific gravitational theories. For the metric Eq. (1), the algebra is [(11)(11)] in the Einstein gravity, but it changes in the \(F(R)\) theory. For instance, if we choose a special case of Starobinsky's action [107, 108, 109],
\[F(R)=R+\alpha R^{2}, \tag{55}\]
the gravitational equations read
\[F^{\mu}{}_{\nu}\equiv F^{\prime}(R)R^{\mu}{}_{\nu}-\frac{1}{2}F(R)g^{\mu}{}_{ \nu}-[\nabla^{\mu}\nabla_{\nu}-g^{\mu}{}_{\nu}\Box]F^{\prime}(R)=8\pi\!T^{\mu} {}_{\nu}. \tag{56}\]
For the metric Eq. (1), the components of tensor \(F^{\mu}{}_{\nu}\) take the forms,
\[\begin{split} 2r^{4}F^{0}{}_{0}=&-4\alpha-2r^{3}f^{\prime}\left(2\alpha f^{\prime\prime}+\alpha rf^{(3)}-1\right)\\ &+2f\left[12\alpha-2\alpha r^{2}\left(r^{2}f^{(4)}+2f^{\prime\prime}+6rf^{(3)}\right)+8\alpha rf^{\prime}+r^{2}\right]\\ &+4\alpha r^{2}f^{\prime 2}+\alpha r^{4}f^{\prime\prime 2}-20\alpha f^{2}-2r^{2},\end{split} \tag{57a}\]
\[2r^{4}F^{1}{}_{1}= -4\alpha-2r^{3}f^{\prime}\left(2\alpha f^{\prime\prime}+\alpha rf^ {(3)}-1\right) \tag{57b}\] \[+2f\left[-12\alpha-4\alpha r^{2}\left(4f^{\prime\prime}+rf^{(3)} \right)+8\alpha rf^{\prime}+r^{2}\right]\] \[+4\alpha r^{2}f^{\prime 2}+\alpha r^{4}f^{\prime\prime 2}+28\alpha f ^{2}-2r^{2},\] \[2r^{4}F^{2}{}_{2}=2r^{4}F^{3}{}_{3}= 4\alpha+2rf^{\prime}\left[-12\alpha-2\alpha r^{2}\left(5f^{ \prime\prime}+rf^{(3)}\right)+r^{2}\right]\] (57c) \[+4\alpha f\left[2r^{2}f^{\prime\prime}-r^{3}\left(5f^{(3)}+rf^{(4 )}\right)+8rf^{\prime}+6\right]\] \[+8\alpha r^{2}f^{\prime 2}+r^{4}f^{\prime\prime}\left(1-\alpha f^{ \prime\prime}\right)-28\alpha f^{2}.\]
Generally, we have the algebraic structure, [11(11)]. In other words, the RBH with the metric Eq. (1) can be generated by the matter with the algebra, [11(11)].
Similarly, the change of algebraic structures appears in the conformal gravity with the following action [110],
\[S=\int\mathrm{d}^{4}x\sqrt{-g}\ W, \tag{58}\]
where \(W\) is the Weyl scalar, defined by contracting two Weyl tensors. The variation of the action gives a purely gravitational tensor, \(B^{\mu}{}_{\nu}\), which is called the Bach tensor. For the metric Eq. (1), we obtain
\[24r^{4}B^{0}{}_{0}= -4rf\left[r\left(r^{2}f^{(4)}-f^{\prime\prime}+3rf^{(3)}\right)+2 f^{\prime}\right] \tag{59a}\] \[+r^{2}\left(rf^{\prime\prime}-2f^{\prime}\right)^{2}-2r^{4}f^{(3)} f^{\prime}+4f^{2}-4,\] \[24r^{4}B^{1}{}_{1}= -2r^{3}f^{(3)}\left(rf^{\prime}-2f\right)+\left[r\left(rf^{\prime \prime}-2f^{\prime}\right)+2f\right]^{2}-4,\] (59b) \[24r^{4}B^{2}{}_{2}=24r^{4}B^{3}{}_{3}= -r^{2}\left(rf^{\prime\prime}-2f^{\prime}\right)^{2}+2r^{4}f^{(3) }f^{\prime}-4f^{2}+4\] (59c) \[+2rf\left[r\left(r^{2}f^{(4)}-2f^{\prime\prime}+2rf^{(3)}\right)+4f^{ \prime}\right],\]
whose algebraic structure is also [11(11)].
### What are the difficulties for us to find the sources of rotating regular black holes?
The physical interpretation of a rotating RBH should coincide with that of its non-rotating counterpart, called a seed metric. For the seed metric with one shape function, the physical interpretation contains two aspects: an imperfect fluid, or a gravitational field coupled to nonlinear electrodynamics. Although the interpretation in terms of an imperfect fluid is trivial, it is often adopted for spacetimes without electromagnetic fields. Meanwhile, the resulting rotating RBH matches [76, 99, 111] the Segre type [(11)(11)]. So the interpretation works well in the aspect of imperfect fluids for the models in Refs. [90, 96].
For the spacetime with electromagnetic fields, the interpretation that a gravitational field is coupled to nonlinear electrodynamics is widely used [2, 21, 79]. However, it is difficult to extend this interpretation to a rotating spacetime. The main reason is that the introduction of rotation changes the number of non-zero components of \(F_{\mu\nu}\) from one to four, that is, \(F_{01},F_{02},F_{13}\) and \(F_{23}\) are non-trivial, where the field strength is defined by \(F_{\mu\nu}\equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\). In the metric Eq. (42), these four components satisfy [112, 113] the relations,
\[F_{31}=a\sin^{2}\theta F_{10},\qquad aF_{23}=(r^{2}+a^{2})F_{02}. \tag{60}\]
The gravitational field coupled to nonlinear electrodynamics is described by the action,
\[S=\frac{1}{16\pi}\int\mathrm{d}^{4}x\sqrt{-g}[R-\mathscr{L}(F)], \tag{61}\]
\[F=F_{\mu\nu}F^{\mu\nu}. \tag{62}\]
Using the Einstein field equations,
\[G^{\mu}{}_{\nu}=2\mathscr{L}_{F}F^{\mu\alpha}F_{\nu\alpha}-\frac{1}{2}\delta^{\mu}{}_{\nu}\mathscr{L}, \tag{63}\]
one can determine \(\mathscr{L}\) and \(\mathscr{L}_{F}\), where \(\mathscr{L}_{F}\equiv\mathrm{d}\mathscr{L}/\mathrm{d}F\). To further determine \(F_{\mu\nu}\) one needs to utilize the dynamic equations. The variation of the action with respect to \(A^{\mu}\) yields the dynamic equations,
\[\nabla_{\mu}\left(\mathscr{L}_{F}F^{\mu\nu}\right)=0,\qquad\nabla_{\mu}{}^{*} F^{\mu\nu}=0, \tag{64}\]
where \({}^{*}F^{\mu\nu}\equiv\frac{1}{2}\eta^{\mu\nu\alpha\beta}F_{\alpha\beta}\), and \(\eta^{0123}=-1/\sqrt{-g}\). Then one obtains that the non-zero components of \(F_{\mu\nu}\) satisfy the following equations,
\[\frac{\partial}{\partial r}\left[(r^{2}+a^{2})\sin\theta\mathscr{L}_{F}F_{10} \right]+\frac{\partial}{\partial\theta}\left[\sin\theta\mathscr{L}_{F}F_{20} \right]=0, \tag{65a}\] \[\frac{\partial}{\partial r}\left[a\sin\theta\mathscr{L}_{F}F_{10}\right]+\frac{ \partial}{\partial\theta}\left[\frac{1}{a\sin\theta}\mathscr{L}_{F}F_{20} \right]=0,\] (65b) \[\frac{\partial}{\partial r}F_{20}-\frac{\partial}{\partial\theta}F_{10}=0,\] (65c) \[\frac{\partial}{\partial\theta}\left[a^{2}\sin^{2}\theta F_{10}\right]-\frac{ \partial}{\partial r}\left[(r^{2}+a^{2})F_{20}\right]=0. \tag{65d}\]
Because \(\mathscr{L}_{F}\) is quite complicated and these equations are highly nonlinear, it is almost impossible to solve them directly. Instead, one turns to the nonlinear electromagnetic field by studying [114, 115, 116] how the gauge fields change under the NJA. When the RN metric is transformed into the KN metric, the gauge potential \(A_{\mu}\) changes [117] in the following way.
In the RN metric, \(A_{\mu}\) can be written as
\[A_{\mu}=\frac{q}{r}\delta^{u}_{\mu}, \tag{66}\]
and its contravariant counterpart takes the form,
\[A^{\mu}=-\frac{q}{r}\delta_{r}^{\mu}=-\frac{q}{r}l^{\mu}, \tag{67}\]
where \(l^{\mu}\) is the tetrad in Eq. (28). Under the transformation governed by Eqs. (20) and (29), the gauge potential becomes
\[\tilde{A}^{\mu}=-\frac{qr}{\rho^{2}}\delta_{r}^{\mu}, \tag{68}\]
and its 1-form reads
\[\tilde{A}=\frac{qr}{\rho^{2}}(\mathrm{d}u-a\sin^{2}\theta\mathrm{d}\phi), \tag{69}\]
which can be written as
\[\tilde{A}=\frac{qr}{\rho^{2}}\left(\mathrm{d}t-\frac{\rho^{2}}{\Delta}\mathrm{d}r-a\sin^{2}\theta\mathrm{d}\phi\right), \tag{70}\]
due to \(\mathrm{d}u=\mathrm{d}t-\frac{\rho^{2}}{\Delta}\mathrm{d}r\). Because the coefficient of \(\mathrm{d}r\), \(\frac{qr}{\Delta}\), depends only on \(r\), the \(\mathrm{d}r\) term can be removed by a gauge transformation, and the gauge potential can finally be simplified to
\[\tilde{A}=\frac{qr}{\rho^{2}}\left(\mathrm{d}t-a\sin^{2}\theta\mathrm{d}\phi \right), \tag{71}\]
which is just the gauge potential of the KN metric.
However, this method encounters a problem in the NJA, that is, the conversion rule Eq. (20) may not be applicable to the gauge potentials in RBHs. For example, the gauge field of spherically symmetric RBHs with the magnetic charge \(Q_{m}\) is \(A_{\mu}=Q_{m}\cos\theta\delta^{\phi}_{\mu}\), then the gauge field of rotating RBHs will become [116]
\[A_{\mu}=-\frac{Q_{m}a\cos\theta}{\rho^{2}}\delta^{t}_{\mu}+\frac{Q_{m}(r^{2}+a ^{2})\cos\theta}{\rho^{2}}\delta^{\phi}_{\mu}, \tag{72}\]
if one uses the above method. But \(\mathscr{L}_{F}\) calculated by Eq. (72) is different [118] from that by Eq. (63), which means that this method should be modified in the case of RBHs.
For the seed metric with two shape functions, there are mainly two types of RBHs: one is associated with loop quantum gravity, and the other is associated with black-bounce spacetimes. For the first type, the physical interpretation of rotating metrics comes directly from loop quantum gravity [43, 102]. For the second type, the physical interpretation involves two aspects: one is the stress-energy tensor of a scalar field with a nonzero self-interaction potential, and the other is a magnetic field in the framework of nonlinear electrodynamics [119]. In addition, there are two different physical interpretations for rotating black-bounce spacetimes: one is a gravitational field coupled with a nonlinear electrodynamic field together with a contribution of charged dust, and the other is an anisotropic fluid [100]. However, further study is needed to judge which interpretation is more reasonable.
### Can regular black holes have scalar hairs?
SBHs are governed by the no-scalar-hair theorem [120, 121]. For RBHs, the situation is improved. For instance, the metric of conformal RBHs in Ref. [110] reads
\[\mathrm{d}s^{2}=\left(1+\frac{L^{2}}{r^{2}}\right)^{2n}\left(-f\mathrm{d}t^{ 2}+f^{-1}\mathrm{d}r^{2}+r^{2}\mathrm{d}\Omega^{2}\right), \tag{73}\]
where \(L\) is a regularization parameter with the dimension of length and \(f=1-2M/r\). This RBH model can be produced by the following action,
\[I_{\mathrm{conf}}=-\frac{1}{2}\int\mathrm{d}^{4}x\sqrt{-g}\phi\left(\frac{1}{ 6}R\phi-\Box\phi\right), \tag{74}\]
where \(\phi\) is a scalar field. The equation of motion for the scalar field \(\phi\) is
\[\Box\phi=\frac{1}{6}R\phi, \tag{75}\]
and its solution takes [122] the form,
\[\phi=\left(1+\frac{L^{2}}{r^{2}}\right)^{-n}\left[\frac{c_{1}}{2M}\ln\left(1- \frac{2M}{r}\right)+c_{2}\right], \tag{76}\]
where \(c_{1}\) and \(c_{2}\) are integration constants. This solution is divergent at the horizon, \(r_{\rm H}=2M\), which implies \(c_{1}=0\), i.e., the solution can be simplified to be
\[\phi=c_{2}\left(1+\frac{L^{2}}{r^{2}}\right)^{-n}. \tag{77}\]
Since \(n\geq 1\), \(\phi\) is bounded by \(0\leq\phi\leq c_{2}\). In other words, Eq. (74) has a non-trivial scalar hair because of a non-minimal coupling [120].
Another example is shown in Ref. [23, 45], where the action is cast in the Einstein gravity with a minimally coupled (phantom) scalar field \(\phi\),
\[I_{\rm phantom}=\int\mathrm{d}^{4}x\sqrt{-g}\left[R-\partial_{\mu}\phi\partial^ {\mu}\phi-2V(\phi)\right], \tag{78}\]
where \(V(\phi)\) is the potential; see Refs. [23, 45] for its formulation. This model gives a RBH solution that has the form of Eq. (3) with two shape functions, where one of the functions reads
\[f(\rho)=1-\frac{\rho_{0}\left(\pi\!b^{2}-2b\rho+\pi\!\rho^{2}\right)}{2b^{3}}+ \frac{\rho_{0}\left(b^{2}+\rho^{2}\right)}{b^{3}}\tan^{-1}\left(\frac{\rho}{b} \right), \tag{79}\]
where we have used the condition \(2bc=-\pi\!\rho_{0}\) with \(\rho_{0}>0\) and \(b>0\) to replace \(c\) in the original formula in Ref. [23, 45]. In addition, the potential takes the form,
\[V(\phi)=-\frac{\rho_{0}}{2b^{3}}\left[2\sqrt{2}\phi+3\sin\!\left(\sqrt{2}\phi \right)+(\pi\!-\!\sqrt{2}\phi)\cos\!\left(\sqrt{2}\phi\right)-2\pi\!\right], \tag{80}\]
thus we obtain
\[\phi V^{\prime}(\phi)=\frac{\rho_{0}\phi}{2b^{3}}\left[(\sqrt{2}\pi-2\phi)\sin\!\left(\sqrt{2}\phi\right)-2\sqrt{2}\left(1+\cos\!\left(\sqrt{2}\phi\right)\right)\right]. \tag{81}\]
Because \(\phi V^{\prime}(\phi)\) is not always positive, it is not restricted [120] by the no-hair theorem. Thus the non-trivial scalar-hair solution exists,
\[\phi=\pm\sqrt{2}\tan^{-1}\left(\frac{\rho}{b}\right)+\phi_{0}, \tag{82}\]
where \(\phi_{0}\) is an integration constant, and the field is bounded by \(\phi_{0}-\pi/\sqrt{2}<\phi<\phi_{0}+\pi/\sqrt{2}\).
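Since Eq. (81) follows from Eq. (80) by direct differentiation, it can be checked mechanically; here is a minimal sympy verification (the symbol names are ours).

```python
import sympy as sp

phi, rho0, b = sp.symbols('phi rho_0 b', positive=True)

# Potential of Eq. (80)
V = -rho0/(2*b**3)*(2*sp.sqrt(2)*phi + 3*sp.sin(sp.sqrt(2)*phi)
                    + (sp.pi - sp.sqrt(2)*phi)*sp.cos(sp.sqrt(2)*phi) - 2*sp.pi)

# phi V'(phi), compared with Eq. (81)
rhs = rho0*phi/(2*b**3)*((sp.sqrt(2)*sp.pi - 2*phi)*sp.sin(sp.sqrt(2)*phi)
                         - 2*sp.sqrt(2)*(1 + sp.cos(sp.sqrt(2)*phi)))
print(sp.simplify(phi*sp.diff(V, phi) - rhs))   # 0
```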
## 4 Energy conditions of regular black holes
The energy conditions are important to the study of RBHs. On the one hand, they are related to the formation of RBHs; on the other hand, they are regarded as a criterion for whether a RBH is realistic or not. In this section, we explain these two aspects.
### Is the strong energy condition a key to lead to a regular black hole?
It was originally thought [1, 123] that RBHs can be constructed when the singularity at their centers is replaced by the de Sitter core, which implies the violation of the strong energy condition (SEC).
Because of this, RBHs are not governed by the Penrose singularity theorem, which can be understood from the Raychaudhuri equation [124, 83],
\[\frac{\mathrm{d}\Theta}{\mathrm{d}\tau}=-R_{\mu\nu}u^{\mu}u^{\nu}, \tag{83}\]
where \(\tau\) is the proper time, \(u^{\mu}\) is the four-velocity and \(\Theta\) depicts the expansion of a geodesic congruence. For simplicity, we have ignored the higher-order terms associated with expansion, rotation and shear on the right-hand side of Eq. (83). Then, choosing \(u^{\mu}=(1,0,0,0)\), we arrive at
\[\frac{\mathrm{d}\Theta}{\mathrm{d}\tau}=-R_{00}=-4\uppi G\left(\rho+\sum_{i=1 }^{3}p_{i}\right), \tag{84}\]
where \(\rho\) is the energy density and \(p_{i}\) are the three components of pressure. The violation of the SEC, \(\rho+\sum_{i=1}^{3}p_{i}<0\), implies an increasing \(\Theta\) along the proper time, i.e., the interaction is repulsive.
Nevertheless, it was also discovered [125, 126, 79] that RBHs can have an anti-de Sitter or a flat core. For instance, the RBH constructed in Ref. [125] is described by
\[\mathrm{d}s^{2}=-f\mathrm{d}t^{2}+f^{-1}\mathrm{d}r^{2}+r^{2}\mathrm{d} \Omega^{2},\qquad f=1+\frac{r^{4}}{r^{4}+2qQ_{m}^{2}}\left(-\frac{2M}{r}+ \frac{Q_{m}^{2}}{r^{2}}\right), \tag{85}\]
where \(q\) is a _positive_ parameter describing the non-minimal coupling of Yang-Mills fields, \(Q_{m}\) is the magnetic charge, and the cosmological constant is set to zero in order to highlight the essence. The anti-de Sitter core can be seen clearly from the following asymptotic relations,
\[f\sim 1+\frac{r^{2}}{2q}+O(r^{3}),\qquad R\sim-\frac{6}{q}+O\left(r\right) \tag{86}\]
as \(r\) approaches zero. Moreover, a spherically symmetric RBH model with a flat core is highlighted in Refs. [126, 127], where the shape function reads [128]
\[f=1-\frac{2M}{r}\mathrm{e}^{-a/r}. \tag{87}\]
It is obvious that \(f\to 1\) and \(R\to 0\) as \(r\) approaches zero because the parameter \(a\) is positive. These two examples meet the SEC in their cores because the AdS and Minkowski spacetimes satisfy the SEC.
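Both core behaviors can be checked quickly with a series expansion; below is a small sympy sketch for the shape functions of Eqs. (85) and (87), with the cosmological constant already set to zero as above.

```python
import sympy as sp

r, M, q, Qm, a = sp.symbols('r M q Q_m a', positive=True)

f_ads = 1 + r**4/(r**4 + 2*q*Qm**2)*(-2*M/r + Qm**2/r**2)   # Eq. (85)
print(sp.series(f_ads, r, 0, 3))     # 1 + r**2/(2*q) + O(r**3), cf. Eq. (86)

f_flat = 1 - 2*M/r*sp.exp(-a/r)      # Eq. (87)
print(sp.limit(f_flat, r, 0))        # 1: the core is Minkowski-like
```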
A problem immediately arises: if the SEC is not violated, i.e., gravity remains attractive in the core, how can the collapse be avoided? One interesting resolution is based [46] on the introduction of the Tolman mass, which can be regarded as a kind of _integral_ SEC [129, 130],
\[m_{\mathrm{T}}=\frac{1}{4\uppi}\int\sqrt{-g}R_{00}\mathrm{d}^{3}x=\int r^{2}R _{00}\mathrm{d}r. \tag{88}\]
The _integral_ SEC is violated in the core (\(r\in[0,r_{-}]\), with \(r_{-}\) the innermost horizon) if the Tolman mass is negative. Due to the negative Tolman mass in the two models described by Eq. (85) and Eq. (87), we can conclude that both violate the integral SEC in their cores.
In summary, it is the impressionist SEC that plays the key role in constructing RBHs, not the pointillist one [131]. Because of this, it is not important whether the cores of RBHs are de Sitter, anti-de Sitter or flat although the violation of SEC is a necessary condition for the formation of RBHs from gravitational collapse [132].
### What are the energy conditions of regular black holes?
If the SEC answers how RBHs are formed, then the other three energy conditions actually focus on whether RBHs are realistic [133, 134, 135, 136, 44, 112] through the characteristics of _classical_ matters.
Besides the SEC, the other three energy conditions are the weak energy condition (WEC), the null energy condition (NEC), and the dominant energy condition (DEC) [131]. The violation of WEC implies a negative local energy density, and the violation of DEC means the superluminal speed of energy density flow. Moreover, the NEC is a relaxation of WEC, i.e., the energy density can be negative as long as the sum of energy density and pressure is positive [83].
For the RBHs with one shape function, see Eq. (5), the other three energy conditions except the SEC can be reduced to the following differential inequalities,
\[\begin{split}\text{WEC}:&\quad\sigma^{\prime} \geq 0\;\cup\;r\sigma^{\prime\prime}\leq 2\sigma^{\prime},\\ \text{NEC}:&\quad r\sigma^{\prime\prime}\leq 2 \sigma^{\prime},\\ \text{DEC}:&\quad\sigma^{\prime}\geq 0\;\cup\;-2 \sigma^{\prime}\leq r\sigma^{\prime\prime}\leq 2\sigma^{\prime},\end{split} \tag{89}\]
where the prime denotes the derivatives w.r.t. \(r\). The relation between these three energy conditions can be represented by
\[\text{NEC}\subset\text{WEC}\subset\text{DEC}. \tag{90}\]
Among these three independent differential inequalities in Eq. (89), \(\sigma^{\prime}\geq 0\) implies that \(\sigma\) is a monotone increasing function of \(r\); while \(r\sigma^{\prime\prime}\leq 2\sigma^{\prime}\) provides \(\sigma\leq\sigma_{0}r^{3}\), where \(\sigma_{0}\equiv\lim_{r\to 0}\sigma/r^{3}\); the last one \(r\sigma^{\prime\prime}\geq-2\sigma^{\prime}\) gives a solution, \(r\sigma\geq 0\), under the boundary conditions, \(\sigma|_{r=0}=0=\sigma^{\prime}|_{r=0}\).
For the RBHs with two shape functions, see Eq. (3), the situation becomes complicated. The differential inequalities now involve an additional unknown function, \(r(\xi)\), which makes them unsolvable without extra specific constraints. Therefore, we cannot extract any valuable information from these differential inequalities.
In addition, some RBH models violate [137, 73, 138] the three energy conditions. For instance, the well-known Bardeen and Hayward BHs break the DEC, and the RBH generated by the non-minimally coupled Wu-Yang monopole breaks [139, 140, 32, 33, 68] the WEC, etc. A remedy is proposed in Ref. [141] by deforming the shape function. For a generic \(\sigma\) function, its deformed formulation reads
\[\sigma=\frac{M^{\mu\nu-3}r^{3}}{\left(r^{\mu}+q^{\mu}\right)^{\nu}}, \tag{91}\]
where \(M\) is the mass and \(q\) is a regularization parameter; this formulation contains the Bardeen and Hayward BHs as special cases. Meanwhile, it meets the three energy conditions if the parameters \(\mu\) and \(\nu\) take values in the following regions,
\[\frac{2}{\nu}<\mu\leq\frac{1}{2}\sqrt{\frac{49\nu+96}{\nu}}-\frac{7}{ 2} \text{when}\quad\frac{2}{5}<\nu\leq 3; \tag{92a}\] \[\frac{2}{\nu}<\mu\leq\frac{3}{\nu} \text{when}\quad\nu>3, \tag{92b}\]
where \(M^{\mu\nu-3}\) is introduced for balancing the dimension, and this parameterization is not unique.
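As a concrete illustration of the inequalities in Eq. (89) (with \(\cup\) read as the conjunction of the constraints), the following numerical scan uses the Hayward shape function \(\sigma=r^{3}/(r^{3}+g)\) with \(g=2Ml^{2}\); it confirms that the WEC holds everywhere while the DEC fails at large \(r\), as stated above. The closed forms of \(\sigma^{\prime}\) and \(\sigma^{\prime\prime}\) are computed by hand, and the parameter values are arbitrary.

```python
import numpy as np

M, l = 1.0, 0.5
g = 2*M*l**2                                # Hayward: sigma = r^3/(r^3 + g)

r = np.linspace(0.01, 10, 2000)
d1 = 3*g*r**2/(r**3 + g)**2                 # sigma'
d2 = 6*g*r*(g - 2*r**3)/(r**3 + g)**3       # sigma''

wec = (d1 >= 0) & (r*d2 <= 2*d1)            # both constraints of Eq. (89)
dec = wec & (r*d2 >= -2*d1)
print("WEC holds everywhere:", wec.all())   # True
print("DEC holds everywhere:", dec.all())   # False: violated for r^3 > 2g
```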
## 5 Thermodynamics of regular black holes
The thermodynamics of RBHs is a rather confusing battleground in the study of RBHs. The confusion comes from the existence of extra terms in the first law of _mechanics_ such that the correspondence between mechanical and thermodynamic quantities is problematic. In this section, we give our clarifications on these problems.
### What is the entropy of regular black holes?
It was reported [142, 143, 144, 145, 146, 147] that RBHs have a deviation term in entropy, i.e., the entropy of RBHs breaks the area law, \(S\neq A/4\), but the opposite opinion was also reported [148, 149, 55, 22, 52]. This puzzle affects both the first law of thermodynamics and the interpretation of RBHs: for the former, ambiguous deviation terms appear; for the latter, the verification of Hawking's quantum theory is hard to carry out.
Taking the Hayward BH [95] as an example, whose shape function is
\[f=1-\frac{2M}{r}\frac{r^{3}}{r^{3}+2Ml^{2}}, \tag{93}\]
where \(l\) is a length scale introduced for regularization, we can obtain the entropy from the first law of thermodynamics, \(\mathrm{d}M=T\mathrm{d}S\),
\[S=\int_{r_{-}}^{r_{+}}\frac{\mathrm{d}M}{T}=S_{\mathrm{BH}}+\Delta S, \tag{94}\]
where \(r_{+}\) and \(r_{-}\) are outer and inner horizons, respectively, \(S_{\mathrm{BH}}\) is the Bekenstein-Hawking entropy,
\[S_{\mathrm{BH}}=\uppi\left(r_{+}^{2}-r_{-}^{2}\right), \tag{95a}\]
and \(\Delta S\) is a deviation term,
\[\Delta S=\frac{\uppi l^{4}\left(r_{+}^{2}-r_{-}^{2}\right)}{\left(r_{-}^{2}-l^{2}\right)\left(r_{+}^{2}-l^{2}\right)}+2\uppi l^{2}\ln\left(\frac{r_{+}^{2}-l^{2}}{r_{-}^{2}-l^{2}}\right). \tag{95b}\]
We note that \(\Delta S>0\) because \(r_{+}\geq r_{-}>l\); otherwise there would be no horizons. As a result, if Eq. (94) is still used to calculate the entropy, the area law is violated, \(S\neq A/4\), even though a pressure term,
\[\mathcal{P}=-\frac{3}{8\uppi l^{2}}, \tag{96}\]
is added [151].
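Eqs. (94)-(95) can be reproduced directly. From \(f(r_{\rm H})=0\) in Eq. (93) one finds \(M(r_{\rm H})=r_{\rm H}^{3}/(2(r_{\rm H}^{2}-l^{2}))\) and \(T=f^{\prime}(r_{\rm H})/(4\uppi)=(r_{\rm H}^{2}-3l^{2})/(4\uppi r_{\rm H}^{3})\); a minimal sympy check, with arbitrary sample values \(l=1\), \(r_{-}=2\), \(r_{+}=3\), reads:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
lv, rm, rp = 1, 2, 3                            # sample values with r_+ > r_- > l

M = r**3/(2*(r**2 - lv**2))                     # mass from f(r_H) = 0, Eq. (93)
T = (r**2 - 3*lv**2)/(4*sp.pi*r**3)             # Hawking temperature f'(r_H)/(4 pi)
S = sp.integrate(sp.diff(M, r)/T, (r, rm, rp))  # Eq. (94)

S_BH = sp.pi*(rp**2 - rm**2)
dS = (sp.pi*lv**4*(rp**2 - rm**2)/((rm**2 - lv**2)*(rp**2 - lv**2))
      + 2*sp.pi*lv**2*sp.log(sp.Rational(rp**2 - lv**2, rm**2 - lv**2)))
print(sp.N(S - S_BH - dS))                      # ~0: matches Eqs. (95a)-(95b)
```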
As a matter of fact, if we interpret the metric Eq. (1) as a spacetime produced by a field of Dirac magnetic monopoles [152, 2], the entropy of such a RBH can be derived by Hawking's path-integral method.
To start with, we write down the full action,
\[I=\frac{1}{16\pi}\int\mathrm{d}^{4}x\sqrt{-g}\,\left[R-\mathcal{L}(F)\right], \tag{97}\]
where \(F\) is the contraction of electromagnetic tensors, \(F\equiv F_{\mu\nu}F^{\mu\nu}=2q^{2}/r^{4}\), and \(q\) is the magnetic charge of the monopole. By solving one of Einstein's equations [22],
\[\frac{\mathcal{L}}{2}-\frac{2M\sigma^{\prime}(r)}{r^{2}}=0, \tag{98}\]
and replacing \(r\) by \(r=[2q^{2}/F]^{1/4}\), we can determine the Lagrangian \(\mathcal{L}(F)\).
To calculate the entropy, we apply the path-integral method in the zero-loop approximation [153],
\[Z=\int Dg\,DA\;\mathrm{e}^{-I}\approx\mathrm{e}^{-I_{\mathrm{cl}}}, \tag{99}\]
where the full Euclidean action \(I_{\mathrm{cl}}\) consists of four parts,
\[I_{\mathrm{cl}}=I_{\mathrm{EH}}+I_{\mathrm{GHY}}-I_{0}+I_{M}. \tag{100}\]
\(I_{\mathrm{EH}}\) is the Einstein-Hilbert action,
\[I_{\mathrm{EH}}=-\frac{1}{16\pi}\int_{\mathcal{M}}\mathrm{d}^{4}x\sqrt{-g}\,R, \tag{101}\]
\(I_{\mathrm{GHY}}\) is the Gibbons-Hawking-York boundary term,
\[I_{\mathrm{GHY}}=-\frac{1}{8\pi}\int_{\partial\mathcal{M}}\mathrm{d}^{3}x\sqrt {-h}\,K, \tag{102}\]
\(I_{0}\) is the subtraction term,
\[I_{0}=-\frac{1}{8\pi}\int_{\partial\mathcal{M}}\mathrm{d}^{3}x\sqrt{-h}\,K_{0}, \tag{103}\]
and \(I_{M}\) is the matter action of nonlinear electrodynamic source, where \(R\) is the Ricci curvature of bulk space, \(K\) and \(K_{0}\) are extrinsic curvatures of surface and background reference, respectively.
For the metric Eq. (1), we obtain
\[I_{\mathrm{EH}}=\frac{\beta}{2}\left(r_{\mathrm{H}}-M\right)-\uppi r_{\mathrm{H}}^{2}, \tag{104}\]
where \(\beta\) is the period of Euclidean time, i.e., the inverse Hawking temperature, and \(r_{\mathrm{H}}\) is the horizon radius, while the two boundary terms, Eqs. (105) and (106), combine to give
\[I_{\mathrm{GHY}}-I_{0}=\frac{\beta M}{2}, \tag{105,106}\]
where the subtraction term is evaluated with respect to a background chosen such that the metric Eq. (1) asymptotically approaches the
Schwarzschild BH at infinity [79], i.e., \(\lim_{r\to\infty}\sigma=1\) and \(\lim_{r\to\infty}r\sigma^{\prime}=0\). Meanwhile, the action of matter can be determined with the help of one component of Einstein's equations, Eq. (98),
\[I_{M}=\beta M-\frac{\beta r_{\rm H}}{2}. \tag{107}\]
Substituting Eqs. (104)-(107) into Eq. (100), we arrive at the total Euclidean action,
\[I_{\rm cl}=\beta M-\uppi r_{\rm H}^{2}. \tag{108}\]
On the other hand, the thermodynamic law for the canonical ensemble is \(\mathcal{F}=M-TS\), where \(\mathcal{F}=TI_{\rm cl}\) is the Helmholtz free energy. Thus, we can read off \(S=\uppi r_{\rm H}^{2}\) from Eq. (108), which exhibits the well-known entropy-area law of BHs. The same result can be obtained by Wald's Noether-charge approach. Since the Lagrangian of the model Eq. (1) is just \(R\), the entropy density can be read off directly from Table I in Ref. [154].
However, the problem is far more complicated than it appears -- the above calculation relies on the interpretation of the metric. That is, the calculation depends on the nonlinear magnetic interpretation of metrics, see Eq. (97). If we reinterpret the source as dyons, the path-integral method is not applicable because no actions for dyons can be constructed [22]. Furthermore, if we interpret a metric by using an alternative gravity, such as \(f(R)\), the entropy-area law will be changed.
### What is the correct first law of thermodynamics for regular black holes?
One feature of RBHs is that their _mechanical_ theorems differ from those of SBHs [155]. More precisely, unexpected extra terms appear in the first _mechanical_ laws of RBHs, and the number of extra terms depends on the number of parameters involved in the Lagrangian of matter.
For instance, the Lagrangian of Bardeen BHs contains two parameters, mass \(M\) and magnetic charge \(q\)[21], thus the first _mechanical_ law reads [155]
\[{\rm d}M=\frac{\kappa}{8\uppi}{\rm d}A+\Psi_{H}{\rm d}q+K_{M}{\rm d}M+K_{q}{ \rm d}q, \tag{109}\]
where \(\kappa\) is the surface gravity, \(\Psi_{H}\) is the magnetic potential and the last two terms are extra. As a result, one encounters several problems when attempting to construct the first _thermodynamic_ law, e.g., what is the correspondence between mechanical and thermodynamic variables and what is the dimension of the thermodynamic phase space?
To construct the first _thermodynamic_ law, Fan and Wang [22] introduced a parameter \(\alpha\) in the action of nonlinear electrodynamics. According to their method, the first thermodynamic law for the Bardeen BH, Eq. (48), can be cast in the following form,
\[{\rm d}E=T{\rm d}S+\Psi_{H}{\rm d}Q_{m}+\Pi{\rm d}\alpha, \tag{110}\]
where the three thermodynamic variables
\[E=M,\qquad Q_{m}=\sqrt{Mq/2},\qquad\alpha=q^{3}/M=8Q_{m}^{6}/M^{4} \tag{111}\]
are not independent from each other in the phase space. In other words, the dimension of thermodynamic phase space is two because \(\alpha\) is a redundant dimension. The other correspondences between thermodynamic and mechanical variables are
\[T\longleftrightarrow\frac{\kappa}{2\uppi},\qquad S\longleftrightarrow\frac{A}{ 4}. \tag{112}\]
Nevertheless, Eq. (110) is still problematic because
\[\frac{A}{4}=S\neq\int\frac{\mathrm{d}E}{T}=\int\frac{\mathrm{d}M}{T}, \tag{113}\]
where the integral is calculated under the condition \(\mathrm{d}Q_{m}=0=\mathrm{d}\alpha\). The reason for \(S\neq\int\mathrm{d}M/T\) or \(T\neq(\partial E/\partial S)_{Q,\alpha}\) is that the relations \(\mathrm{d}Q_{m}=0=\mathrm{d}\alpha\) imply that \(M\) is also a constant, see Eq. (111), which leads to a trivial integral, \(\int\mathrm{d}M/T\equiv 0\).
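The dependence among the variables in Eq. (111), which underlies the trivial integral above, can be made explicit symbolically:

```python
import sympy as sp

M, q = sp.symbols('M q', positive=True)
Qm, alpha = sp.sqrt(M*q/2), q**3/M
print(sp.simplify(alpha - 8*Qm**6/M**4))       # 0: alpha is fixed by Q_m and M

# fixing Q_m and alpha therefore fixes M: alpha = 8 Q_m^6 / M^4
Q0, a0 = sp.symbols('Q_0 alpha_0', positive=True)
print(sp.solve(sp.Eq(a0, 8*Q0**6/M**4), M))    # a single positive root for M
```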
If only one parameter is fixed, say \(\mathrm{d}Q_{m}=0\), we have
\[\frac{A}{4}=\int\frac{\mathrm{d}M}{T}\left(1+\frac{32Q_{m}^{6}\Pi}{M^{5}} \right)\neq\int\frac{\mathrm{d}M}{T}, \tag{114}\]
namely, the simple thermodynamic relation, \(S=\int\mathrm{d}M/T\), is broken. This result gives rise to a tendency to abandon [142, 143, 154, 155, 156, 157, 158] the area-entropy law,
\[S=\int\frac{\mathrm{d}E}{T}\neq\frac{A}{4}. \tag{115}\]
In the above treatment, the entropy \(S\) is not an independent variable. In other words, it cannot be determined without the first thermodynamic law. The worst point is that the broken area-entropy relation contradicts the results obtained from either Hawking's path integral or Wald's entropy formula.
Therefore, the question naturally arises -- what is the correct first thermodynamic law? We list its most important features below:
1. The area law should be maintained, i.e., \(S=A/4\), if one explains RBHs in the context of Einstein's gravity. Generally, the entropy in the first thermodynamic law should be consistent with that calculated from either Hawking's path-integral or Wald's entropy formula.
2. Every thermodynamic variable should be independent from the first thermodynamic law, i.e., it can be determined without the first thermodynamic law, but the thermodynamic formula, \(S=\int\mathrm{d}E/T\), must hold. Here are some counterexamples: temperature is not independent in Ref. [159], deviation of internal energy \(X\) is not independent in Ref. [160], etc.
3. Every thermodynamic variable should be independent from each other, e.g., \(\mathrm{d}M=T\mathrm{d}S+K_{1}\mathrm{d}\alpha+K_{2}\mathrm{d}\beta+\ldots\), is ill-defined if \(\alpha=M\) and \(\beta=TM\) that make \(\alpha\) and \(\beta\) dependent on \(M\) in the thermodynamic phase space.
In order to establish a well-defined first thermodynamic law that meets the above conditions for RBHs with two parameters, such as Bardeen BHs, we have proposed [80] the following form,
\[\mathrm{d}U=T\mathrm{d}S-P_{+}\mathrm{d}V, \tag{116}\]
where the total internal energy reads
\[U=\frac{r_{+}}{2}, \tag{117}\]
\(P_{+}\) and \(V\) are the thermodynamic pressure and volume, respectively,
\[P_{+}=\left.\frac{G^{r}_{\,r}}{8\pi}\right|_{r=r_{+}},\qquad V=\frac{4}{3}\pi r _{+}^{3}. \tag{118}\]
## 6 Regular black hole chemistry and thermodynamic geometry
Thermodynamic geometry is a very powerful tool for exploring the microstructures of RBHs, and more recently BH chemistry has made it possible to understand BHs in a different way. The study of BH chemistry and Ruppeiner geometry can help us to further understand the thermodynamic properties of RBHs.
### What is the regular black hole chemistry?
#### 6.1.1 Thermodynamic phase transition and shift of critical points
In recent years, there have been many studies on _BH chemistry_ showing that RBHs exhibit chemical phenomena very similar to those of SBHs, such as gas/liquid phase transitions [161, 162, 156], van der Waals-like fluid properties [163, 164, 165, 166], triple points [167, 168], and heat engines [169, 170, 171, 172, 173]. BH chemistry is based on equations of state. For a given equation of state, the critical point (\(T_{c}\), \(P_{c}\), \(V_{c}\)) of phase transitions can be determined by the following conditions,
\[\left(\frac{\partial P}{\partial V}\right)_{T}=0,\qquad\left(\frac{\partial^{ 2}P}{\partial V^{2}}\right)_{T}=0. \tag{119}\]
The behaviors of critical points allow us to make a deep analogy between BHs and gas/liquid systems. In particular, the phase transitions of van der Waals-like fluid have been well tested [174, 175] in different theories of gravity. However, due to the problematic first law of thermodynamics, the Maxwell equal area law is no longer valid for RBHs. The exotic equal area law for Hayward AdS BHs reads [52]
\[\oint VdP=-\oint\Psi dQ_{m}+\oint\text{extra term}, \tag{120}\]
which makes Maxwell's equal area law invalid in the \(P-V\) plane. Thus, the critical point (\(T_{c}\), \(P_{c}\)) of the first order small/large BH transition does not coincide with the inflection point of isotherms.
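To recall how the conditions in Eq. (119) pin down a critical point, here is the textbook van der Waals computation in sympy; this is the ordinary fluid, not a BH equation of state, and \(a\), \(b\) are the usual van der Waals constants.

```python
import sympy as sp

T, V, a, b = sp.symbols('T V a b', positive=True)
P = T/(V - b) - a/V**2                         # van der Waals equation of state

sol = sp.solve([sp.diff(P, V), sp.diff(P, V, 2)], [V, T], dict=True)[0]
Vc, Tc = sol[V], sol[T]
Pc = P.subs({V: Vc, T: Tc})
print(Vc, Tc, sp.simplify(Pc))                 # 3b, 8a/(27b), a/(27b^2)
print(sp.simplify(Pc*Vc/Tc))                   # 3/8: the universal critical ratio
```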
#### 6.1.2 Regular black hole as a heat engine
That BHs behave like the van der Waals fluid and have gas/liquid phase transitions also makes it possible to treat BHs as heat engines. The first holographic heat engine was proposed [176] by Johnson, where BHs act as a working substance. The heat-engine flows in a cycle are shown in Fig. 2, where \(Q_{H}\) is the net input heat flow and \(Q_{C}\) is the net output heat flow.
Using equations of state of AdS BHs, for instance, we can compute the total mechanical work as follows,
\[W=Q_{H}-Q_{C}. \tag{121}\]
Therefore, the efficiency of heat engines is defined by
\[\eta\equiv\frac{W}{Q_{H}}=1-\frac{Q_{C}}{Q_{H}}. \tag{122}\]
Recently, some progress has been made [169, 171, 177, 178] in the study of heat engines for RBHs, where the relation between efficiency and entropy (or pressure) has been obtained [171], and comparisons of efficiency among RBHs have also been made. We note that it is subtle to construct heat engines, and especially the engine cycles in the \(P-V\) plane, for RBHs, and that some other progress has provided [179, 180, 181] a new perspective on the thermodynamic properties of RBHs.
### How to eliminate the singularity of the thermodynamic geometry for regular black holes?
#### 6.2.1 Construction of thermodynamic geometry
A similar situation occurs in the studies of thermodynamic geometry, where such a geometry has been shown to be a powerful tool for understanding the thermodynamic properties and microstructures of SBHs. The Ruppeiner metric is defined [182] by the Hessian matrix of the thermodynamic entropy,
\[g^{\rm R}_{\mu\nu}\equiv-\frac{\partial^{2}S(X)}{\partial X^{\mu}\partial X^{ \nu}}, \tag{123}\]
and the Weinhold metric is defined [183] by the Hessian matrix of BH mass,
\[g^{\rm W}_{\mu\nu}=\frac{\partial^{2}M(X)}{\partial X^{\mu}\partial X^{\nu}}, \tag{124}\]
where \(X^{\mu}\) denote thermodynamic variables. Further, we can see that there is the following conformal relationship between the two kinds of thermodynamic geometries if we write the metrics in terms of line elements,
\[\mathrm{d}s_{\mathrm{R}}^{2}=\frac{1}{T}\mathrm{d}s_{\mathrm{W}}^{2}. \tag{125}\]
Figure 2: The heat engine flows.
Note that the above two kinds of thermodynamic geometries are valid only for SBHs because they are based on the corresponding first law of thermodynamics. For RBHs, however, the first law of thermodynamics contains additional terms. Thus, we have to restrict the first law to a subphase space and rewrite it; as an example, for a RBH described by a four-dimensional subphase space,
\[\mathrm{d}M=\frac{T}{1-\Pi}\mathrm{d}S-\frac{P}{1-\Pi}\mathrm{d}V, \tag{126}\]
where \(\Pi\) is related to additional terms. In this case, if we constructed thermodynamic geometry from the problematic first law Eq. (126), the thermodynamic metric would also contain additional terms associated with \(\Pi\). But \(\Pi\) has no thermodynamic counterpart. Correspondingly, the conformal relationship between Ruppeiner geometry and Weinhold geometry becomes
\[\mathrm{d}s_{\mathrm{R}}^{2}=\frac{1-\Pi}{T}\mathrm{d}s_{\mathrm{W}}^{2}. \tag{127}\]
It is important to emphasize that the construction of thermodynamic geometry for RBHs is a challenging topic, and it must be based on the correct first law of thermodynamics. The microstructure of RBHs has also been studied [163, 184, 185] in different gravity theories with the help of thermodynamic geometry. As is known, RBHs are very special BH models and have attracted wide attention [167, 173, 186, 187] recently. Their first law of thermodynamics may need to be modified, and accordingly their thermodynamic geometry may be modified as well. It is hoped that the thermodynamics of RBHs can be understood more deeply from the perspective of thermodynamic geometry.
#### 6.2.2 Singularity of thermodynamic geometry and elimination
A very special feature of charged AdS BHs is that the heat capacity at constant volume vanishes, i.e., \(C_{V}=0\). This property makes the thermodynamic line element, Eq. (127), singular, so that the corresponding thermodynamic information is unavailable from thermodynamic geometry. A new normalized scalar curvature is thus defined [188],
\[R_{N}=R\,C_{V}, \tag{128}\]
under which the divergence of the Ruppeiner scalar curvature can subtly be eliminated. For RBHs, extra terms will appear in the thermodynamic line elements due to the modification of the first law of thermodynamics, which can be used to eliminate the divergence of the thermodynamic scalar curvature. Therefore, RBHs can be viewed as one option for addressing the divergence of the thermodynamic scalar curvature.
## 7 Conclusion and outlook
We have presented several topics of RBHs in the current review, where the importance of topics plays a major role in our choice. There are still three interesting issues we want to mention here. The first is whether RBHs are trivial from the point of view of coordinate transformations, the second is what the sources that generate rotating RBHs are, and the last is what the differences between RBHs and singular ones are.
For the first issue, it is known that every RBH corresponds to a metric with several coordinate singularities. Thus, one can always find a transformation to remove these singularities. Consequently, the recast metric is completely equivalent to the previous one from the spacetime perspective, but it no longer has horizons, i.e., no more so-called _black holes_. Let us phrase the question more clearly: Is a RBH a realistic object or just a mathematical expression of a spacetime?
The second issue arises due to the construction of rotating RBHs. Bambi and Modesto utilized [94] the NJA to create the metrics of rotating RBHs. On the one hand, the NJA depends on theories of gravity. Taking a RBH model in the Chern-Simons gravity [189] as an example, the NJA can provide a rotating metric, but the Pontryagin density, \({}^{*}RR\), does not vanish [190], which means that the constructed metric does not satisfy the equations of motion. On the other hand, as far as we know, there is still a lack of investigation into the classical field source of rotating RBHs.
At last, the most intriguing issue is possibly what the differences between RBHs and singular ones are. To answer this question, one must distinguish which differences are caused by the absence of singularities and which are caused by the models' characteristics, such as the differences between the Schwarzschild and RN BHs. This issue will play a fundamental role in the future study of RBHs.
## Acknowledgments
This work is supported in part by the National Natural Science Foundation of China under Grant No. 12175108.
|
2307.14219 | A survey of universal quantum von Neumann architecture | The existence of universal quantum computers has been theoretically well
established. However, building up a real quantum computer system not only
relies on the theory of universality, but also needs methods to satisfy
requirements on other features, such as programmability, modularity,
scalability, etc. To this end, we study the recently proposed model of quantum
von Neumann architecture, by putting it in a practical and broader setting,
namely, the hierarchical design of a computer system. We analyze the structures
of quantum CPU and quantum control unit, and draw their connections with
computational advantages. We also point out that a recent demonstration of our
model would require less than 20 qubits. | Y. -T. Liu, K. Wang, Y. -D. Liu, D. -S. Wang | 2023-07-26T14:35:40Z | http://arxiv.org/abs/2307.14219v2 | # A survey of universal quantum von Neumann architecture
###### Abstract
The existence of universal quantum computers has been theoretically well established. However, building up a real quantum computer system not only relies on the theory of universality, but also needs methods to satisfy requirements on other features, such as programmability, modularity, scalability, etc. To this end, we study the recently proposed model of quantum von Neumann architecture, by putting it in a practical and broader setting, namely, the hierarchical design of a computer system. We analyze the structures of quantum CPU and quantum control unit, and draw their connections with computational advantages. We also point out that a recent demonstration of our model would require less than 20 qubits.
## I Introduction
At the origin of quantum computing, physicists such as R. Feynman and D. Deutsch realized that universal quantum computing is possible [1]. We shall also notice that at that time classical computers were just being built. After decades of evolution, classical computers have become more and more advanced. In the meantime, the field of quantum information science has grown, and nowadays physicists and engineers can control quantum processors of tens or even hundreds of qubits.
As the foundation of computation, physics is not only crucial to guide the finding of elementary devices such as transistors, but also crucial to set the principles of computation regarding space, time, energy, efficiency, etc. However, physics itself is not enough. For the building of classical computers, some other disciplines of study also played essential roles, in particular, the theories of information, system, and control. The information theory, established by C. Shannon [2], borrows ideas from thermodynamics but it reveals far more properties of information. The system theory, pioneered by von Bertalanffy [3], has more connections with many-body physics and it emphasizes more on the structure, correlation, etc rather than the individual participant. The control theory, with N. Wiener [4], studies the interplay between the controller and the target system to achieve a certain goal. These studies go beyond the traditional scope of physics. Together with computational complexity theory [5], they form the theoretical foundation to make classical computers real.
Modern quantum physics, especially quantum information science, is not traditional physics; instead, it shares features of engineering. It does not only study systems passively, namely, only those that exist naturally, but also studies systems actively, e.g., how to make an artificial system for a certain purpose. Therefore, it needs both physicists and engineers to make quantum computers real, too.
The power of quantum computing is currently mainly demonstrated by quantum algorithms. Given a problem, a quantum algorithm is constructed in the framework of a universal quantum computing model, such as the circuit model [6], or a Hamiltonian-based model [7; 1]. An algorithm is realized by a sequence of elementary operations available in a model, such as the CNOT gates and qubit gates [8]. However, from a modern design of computers [9], the above is not enough to guide the design of a real quantum computer. A computer system is far more complicated than a physical experimental device. In the hierarchy of the layers of abstraction, the physical devices and gates are at the bottom layer, while algorithms and applications are at the top layer. See Fig. 1. There is a gap between them. Some investigations on quantum high-level programming and layered design have been undertaken; e.g., Refs. [10; 11]. Filling up the gap, although it may take decades, is necessary to build real quantum computing systems.
To this goal, we need to understand how to satisfy the requirements of programmability, modularity, automation, etc besides the basic requirement of universality. A computing device or system is programmable if it can realize a broad range of programs (or algorithms) without almost any change of its physical structure. That is, programs can be loaded as software. A system is modular if the connections among different parts of it, known as units, are device-independent, namely, a unit can be detached or replaced without affecting other units. A system
is automated if it can realize hierarchical or concatenated tasks without active interference in the middle. Realizing these features has greatly benefited modern computers and also engineering.
With the methodology above, in this work we present a survey of quantum von Neumann architecture. There were explorations on this subject in the literature [12; 13; 14]; however, they did not present a universal model with explicit stored quantum programs. Based on channel-state duality [15], a universal model for quantum von Neumann architecture was recently developed [16; 17; 18; 19]. In this work, we further study it by focusing on a few subjects, especially the structure of the quantum CPU, also known as QPU, and the structure of the quantum control unit (QCU). We also survey the elementary requirements for a near-term demonstration of this architecture. Our study is purely theoretical without referring to any actual quantum computing platforms. With this survey, we hope to explain some details of the model and identify some research directions to investigate in the near future.
This work contains the following parts. In Section II, we first review the principle of classical computer. We then review the basics of quantum computing in Section III. We then survey the basic operations in quantum von Neumann architecture (QvN) in Section IV. After these, in Sections V we discuss the features of our model compared with other quantum computing models. In Section VI, we survey algorithm designs in QvN and their possible computational advantages. We then study the basic requirement for a NISQ implementation of QvN in Section VII. We then conclude in Section VIII with open questions and perspectives.
## II Classical Computer
It would be interesting to review how a classical computer is built, mainly what the underlying principles are. This will help to understand the current situation of quantum computing. In this section, we start from a few universal computing models, and then move on to the layers of structures for the design of a modern computer.
### Computing models
A universal computing model is a framework to design algorithms for solving problems. The most well known model is the circuit model based on Boolean logic, while at the same time there are a few equivalent ones, including the Turing machine, cellular automata, etc. Their logic building blocks are different, but can simulate each other efficiently [5].
We start from the circuit model. Information or data are represented as bit strings, and the basic Boolean gates on bits include NOT, AND, NAND, OR, etc. An important theorem is that there exists a universal set of gates so that any Boolean function \(f:\{0,1\}^{n}\mapsto\{0,1\}\) can be expressed as a sequence of these gates, forming a circuit. Such circuits are not invertible as some bits are lost, but they can be made invertible by using the Toffoli gate to simulate them. The Toffoli gate is
\[\text{Tof}=P_{0}\otimes\mathds{1}+P_{1}\otimes\text{CNOT}, \tag{1}\]
for the controlled-not gate
\[\text{CNOT}=P_{0}\otimes\mathds{1}+P_{1}\otimes\text{NOT}, \tag{2}\]
with \(P_{0}\) (\(P_{1}\)) as projection on bit-value 0 (1). Despite this, a circuit is often designed using the Boolean gates.
The circuit model is fundamental for the characterization of universality and also the design of algorithms. However, it is still abstract without specifying components for building a real computer. The foundation for the design of modern computers is the so-called von Neumann architecture (vNA) [20], which contains a few modular parts, including the central processing unit (CPU), memory, control, internet, and in/out units. All these can be described by the circuit model, but it is crucial to separate them. In particular, the stored programs in the memory unit are essential to realize universality and programmability. Namely, a stored program as bit strings can be read and then loaded to the programmable CPU, without physically changing the structure of the CPU in order to run different algorithms (i.e., programs). Formally, this realizes
\[G(\vec{b}\times\vec{b}_{A})=A\vec{b}\times\vec{b}_{A}^{\prime}, \tag{3}\]
for \(G\) as the CPU, \(\vec{b}\) as an input bit string, \(\vec{b}_{A}\) as the bit-string encoding of an algorithm \(A\). The desired output is \(A\vec{b}\). The final \(\vec{b}^{\prime}_{A}\) is often ignored but can be used to recover \(\vec{b}_{A}\). The program also contains control signals for precise timing and addressing of data and operations or commands. Although the internet was invented later than the vNA itself, and there are also many types of communication, the download and upload of data is an indispensable part of vNA.
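A toy classical illustration of Eq. (3) may be helpful here: the program is plain data (a list of encoded Toffoli instructions) that a fixed CPU routine interprets, so different algorithms run without changing the CPU. Everything below (names, encoding) is our own minimal sketch.

```python
def toffoli(bits, c1, c2, t):
    """Flip the target bit when both control bits are 1."""
    if bits[c1] and bits[c2]:
        bits[t] ^= 1

def cpu(data, program):
    """G(b, b_A) -> (A b, b_A): run the stored program A on the data bits."""
    bits = list(data)
    for c1, c2, t in program:      # each instruction encodes one Toffoli gate
        toffoli(bits, c1, c2, t)
    return bits, program           # here b'_A = b_A: the program is unchanged

# NOT on bit 2, built from a Toffoli whose control bits are fixed to 1
print(cpu([1, 1, 0], [(0, 1, 2)])[0])   # [1, 1, 1]
```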
Although it seems vNA is a step closer to a real computer than the circuit model itself, vNA is still an abstract model. A modern computer is far more complicated than the abstraction of vNA. In particular, it follows a hierarchical design of layers of abstraction, with the physical logic devices at the bottom, and algorithms and applications at the top. For instance, there are many types of memory, such as the internal storage, hard disk (as external storage), and flash memory etc playing distinct roles in computers and also microchips.
### Hierarchical design
The hierarchical layers of abstraction of a computer architecture are a crucial step to build a real universal programmable computer [9]. Here we take a brief overview of them, mainly the physical aspects; see also Figure 1. The hierarchy contains both hardware and software layers, with different programming languages associated with them.
* The physical devices: this is to find a physical system as the carrier of bits. For instance, they are the transistors as the basic element to construct logic gates, or the magnetic domain for storage.
* The gates and circuits: this is to design the elementary Boolean gates and also elementary circuits, such as the adder, multiplexer, decoder, and a few sequential circuits, such as Latch, Flip-Flop, Register. This is on the level of machine language.
* The micro-processor: this is based on vNA. It can realize instructions such as if, else, when, while, for, shift, branch, etc for the purpose of programming.
* The instruction-set architecture: this is to design instructions, operand locations, memory, control, etc; such as CISC, MIPS. This is on the level of assemble language.
* The operating system: this decides how people can use a computer, such as how to make input/output, how to input command.
* The algorithm and software: these are programs for solving certain class of problems. This is on the level of advanced language.
Figure 1: Layers of abstraction of a computer.
From the hierarchy of the layers of abstraction, we can see that quantum computing is still at an early stage. Currently, what people mostly aim at is a quantum CPU that can run simple circuit-level quantum algorithms, while all other parts can be classical. Namely, it uses classical control, classical memory, and also a classical operating system. The quality of qubits and also circuits is getting better, but these are at the lower levels of the hierarchy. There is no real logical qubit yet, which shall be error-correcting, either self-correcting or actively corrected. The construction of quantum micro-processors and instruction-set architectures is still in its infancy, and this requires a better understanding of the roles of quantum memory and quantum control, and the roles of being quantum in other devices.
## III Basics of quantum computing
In this section, we briefly review the basics of quantum computing [8] and set the stage for our study. We focus on finite-dimensional Hilbert spaces. For a Hilbert space \(\mathcal{H}\), quantum states are known as density operators \(\rho\in\mathcal{D}(\mathcal{H})\), forming a convex set of positive semi-definite operators with trace \(1\). A state is pure if it is also a projector. Quantum evolution is in general described by completely-positive trace-preserving (CPTP) maps [21], or known as quantum channels. A fundamental principle is the quantum channel-state duality, i.e., the Choi-Jamiolkowski isomorphism [15; 22] that maps a channel \(\mathcal{E}\) into a quantum state
\[\omega_{\mathcal{E}}:=\mathcal{E}\otimes\mathds{1}(|\omega\rangle\langle \omega|), \tag{4}\]
called Choi state in this work, for
\[|\omega\rangle:=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|i,i\rangle\in\mathcal{H} \otimes\mathcal{H} \tag{5}\]
as a Bell state, known as an ebit, with \(d=\dim(\mathcal{H})\).
A channel can also be written as a Kraus operator-sum representation
\[\mathcal{E}(\rho)=\sum_{i=1}^{r}K_{i}\rho K_{i}^{\dagger}, \tag{6}\]
for \(K_{i}\) as Kraus operators [21] with \(\sum_{i}K_{i}^{\dagger}K_{i}=\mathds{1}\). This can be found from the eigen-decomposition of \(\omega_{\mathcal{E}}\), and \(r\) is the rank of \(\omega_{\mathcal{E}}\).
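The duality and the Kraus extraction can be made concrete numerically; the following numpy sketch builds the Choi state of the amplitude-damping channel via Eq. (4), recovers Kraus operators from its eigendecomposition, and checks that the recovered set realizes the same channel (Kraus representations agree only up to a unitary mixing).

```python
import numpy as np

d, p = 2, 0.3
K = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),      # amplitude damping, Eq. (6)
     np.array([[0, np.sqrt(p)], [0, 0]])]

w = np.zeros(d*d); w[0] = w[3] = 1/np.sqrt(d)      # |omega>, Eq. (5)
choi = sum(np.kron(k, np.eye(d)) @ np.outer(w, w) @ np.kron(k, np.eye(d)).conj().T
           for k in K)                             # Choi state, Eq. (4)

vals, vecs = np.linalg.eigh(choi)                  # rank r = number of Kraus terms
K_rec = [np.sqrt(d*v)*vecs[:, i].reshape(d, d)
         for i, v in enumerate(vals) if v > 1e-12]

rho = np.array([[0.7, 0.2], [0.2, 0.3]])
out = sum(k @ rho @ k.conj().T for k in K)
out_rec = sum(k @ rho @ k.conj().T for k in K_rec)
print(np.allclose(out, out_rec))                   # True
```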
Unitary evolution and quantum measurement can both be viewed as channels. A unitary \(U\in SU(d)\) is rank \(1\) with \(U^{\dagger}U=UU^{\dagger}=\mathds{1}\). Its dual state is a pure state \(|\omega_{U}\rangle=(U\otimes\mathds{1})|\omega\rangle\), and we will use the notation \(|U\rangle\) for simplicity. A quantum measurement is a POVM (positive operator-valued measure), which is a set of positive operators \(\{M_{i}\}\) with \(\sum_{i}M_{i}=\mathds{1}\). It is clear to see each effect \(M_{i}\) can be realized as \(M_{i}=K_{i}^{\dagger}K_{i}\) for a Kraus operator \(K_{i}\), therefore the POVM is realized by a channel.
A channel can be described as an isometry \(V\) with \(V=\sum_{i}|i\rangle K_{i}\), so it can be realized by a unitary \(U\) with \(V=U|0\rangle\) as the first block column of \(U\), and \(|0\rangle\) as the initial ancillary state. This is Stinespring's dilation theorem, which guarantees that it is enough to consider unitary evolution on pure states, since non-unitary channels and mixed states can be realized by ignoring the ancilla or subsystems.
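The dilation itself can also be checked in a few lines: stack the Kraus operators into the isometry \(V\) and complete its columns to a unitary, e.g., via a QR factorization; this is one generic numerical recipe, not a unique choice.

```python
import numpy as np

p = 0.3
K = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),
     np.array([[0, np.sqrt(p)], [0, 0]])]
V = np.vstack(K)                                   # V = sum_i |i> (x) K_i
print(np.allclose(V.conj().T @ V, np.eye(2)))      # True: V is an isometry

# extend the columns of V to an orthonormal basis, giving a unitary dilation U
rng = np.random.default_rng(0)
Q, R = np.linalg.qr(np.hstack([V, rng.normal(size=(4, 2))]))
U = Q @ np.diag(np.sign(np.diag(R)))               # fix column signs
print(np.allclose(U[:, :2], V))                    # True: V = U on the |0> ancilla
```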
In the quantum circuit model, we consider unitary evolution on multi-qubit states followed by measurement. Similar to the classical circuit model, there also exist universal gate sets to decompose an arbitrary unitary operator [23]. Two well known examples are the set \(\{\)H,T,CNOT\(\}\) and \(\{\)H,Tof\(\}\), for the Hadamard gate H and the T gate as the fourth root of the Pauli Z operator. The Toffoli gate is universal for classical computing, but together with the H gate, they are universal for quantum computing. The H gate exchanges the Pauli X and Z operators
\[HXH=Z,HZH=X, \tag{7}\]
while with \(T^{2}\), which is the phase gate S, and the CNOT, they form the Clifford group [24] that preserves the set of (tensor-product of) Pauli operators. It is known that Clifford circuits are not even universal for classical computing. The non-Clifford gates such as the T and Tof are necessary to achieve quantum universality.
We see that quantum measurement is needed to read out the results, which can be viewed as the expectation value of an observable on the final state. It is also possible to encode expectation values as bit strings and require the final state of quantum algorithms to be bit strings, such as using the amplitude amplification algorithm [25], but this
will cost more quantum resources. To estimate expectation values, we often run the same circuit multiple times to get the necessary probabilities. Namely, to measure \(\text{tr}(A\rho)\) for a hermitian observable \(A\) on the final state \(\rho\), the eigenspectrum of \(A=\sum_{i}a_{i}|i\rangle\langle i|\) is needed, and probabilities \(p_{i}=\langle i|\rho|i\rangle\) are obtained by repeated measurements so that
\[\text{tr}(A\rho)=\sum_{i}p_{i}a_{i}. \tag{8}\]
That is, there are two primary but fundamental differences between the classical and quantum cases: the quantum evolution is unitary but non-unitary measurement is needed for readout. It is more appropriate to treat quantum algorithms as extensions of probabilistic algorithms, which not only use Boolean circuits acting on bits but also random numbers, in the form of pbits. Qubits can be understood as a combination of bits and pbits in the sense that its basis for a Hilbert space are bits while its amplitudes in this basis are the source of pbits.
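A tiny numerical illustration of Eq. (8): sample the eigenvalues of \(A\) with the Born probabilities from the final state and compare the sample mean with the exact expectation. This is a single-qubit sketch with \(A=Z\); the state is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.diag([1.0, -1.0])                    # Pauli Z, eigenvalues a_i = +-1
psi = np.array([np.cos(0.3), np.sin(0.3)])  # final (pure) state

p = np.abs(psi)**2                          # p_i = <i|rho|i> in the eigenbasis of A
shots = rng.choice([1.0, -1.0], size=10000, p=p)
print(shots.mean(), psi @ A @ psi)          # sample mean ~ exact tr(A rho), Eq. (8)
```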
## IV Basics of quantum von Neumann architecture
In this section, we discuss the basic model of quantum von Neumann architecture (QvN) based on our recent work [16; 17; 18; 19], and here we aim to explain the details of the elementary operations in our model. Note we do not study how to physically construct or encode a logical qubit, or physically construct a unit, which are separate important subjects.
### The basic model
We describe the basic model as shown in Fig. 2. This is the analog of what exists nowadays for modern computer system. Of course, we only discuss the primary abstract process. A user aims to perform a quantum algorithm, while the algorithm or program is provided by the host through a quantum channel, which can be monitored by an eavesdropper Eve, or suffers from noises. Quantum codes are needed to protect information against noises and Eve, and they are also needed for the computers.
In practice, a host or data centre may have a different design from a desktop computer. However, for simplicity we assume a quantum host follows a similar design with a quantum computer. The program may come from a host or another user. Without digging into the structures of a user or host computer, below we explain the elementary operations that need to be performed.
### Read and write on memory
Given a quantum program encoded in a quantum state, one has to execute it. Using Choi state, the underlying scheme is that the action of a channel \(\mathcal{E}\) on state \(\rho\) is recovered as
\[\mathcal{E}(\rho)=d\ \text{tr}_{\text{B}}[\omega_{\mathcal{E}}(\mathds{1} \otimes\rho^{t})] \tag{9}\]
Figure 2: The model we use for quantum von Neumann architecture in this work. The user aims to perform a quantum algorithm, while the algorithm or program is provided by the host through a quantum channel, which can be monitored by an eavesdropper Eve.
for \(\rho^{t}\) as the transpose of a state \(\rho\). The partial trace \(\mathrm{tr}_{\mathrm{B}}\) is on the 2nd part of \(\omega_{\mathcal{E}}\). Below and most of the time in this work, we only consider unitary programs. A program \(U\) is stored as its Choi state \(|U\rangle=U\otimes\mathds{1}|\omega\rangle.\) See the figure
The curve is the Bell state \(|\omega\rangle\). Given \(|U\rangle\), how can it be used? The basic usage is to let it act on an input. In our scheme, an input state is injected by a binary projective measurement, and the output is also obtained by a projective measurement (PVM).
Suppose the initial state is \(|0\rangle\), and we need to obtain
\[p_{i}=|\langle\psi_{i}|U|0\rangle|^{2}. \tag{10}\]
The binary PVM for initial-state injection is \(\{P_{0},\bar{P}_{0}\}\) for \(\bar{P}_{0}=\mathds{1}-P_{0}\). The PVM for readout is \(\{|\psi_{i}\rangle\langle\psi_{i}|\}\). As measurement outcomes are random, the initial state is only realized with finite probability. However, this is not a problem. For the case of \(P_{0}\), we obtain \(p_{i}\). For the case of \(\bar{P}_{0}\), we get \(p_{i}^{\prime}=1-p_{i}\), so \(p_{i}\) can also be obtained [16].
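As an illustration of Eqs. (9) and (10), the following numpy sketch builds the Choi state of a unitary program, recovers the channel action via Eq. (9), and reproduces the readout probabilities \(p_{i}\) after injecting \(|0\rangle\) at the tail with the projector \(P_{0}\); the random program and the computational readout basis are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2

# A random unitary program U (an assumption for illustration).
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

# Choi state |U> = (U x 1)|omega>, subsystem order (head, tail).
omega = np.eye(d).reshape(d * d) / np.sqrt(d)
ket_U = np.kron(U, np.eye(d)) @ omega
w = np.outer(ket_U, ket_U.conj())                      # omega_E for this unitary channel

# Eq. (9): E(rho) = d * tr_B[ omega_E (1 x rho^t) ].
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
M = d * (w @ np.kron(np.eye(d), rho.T))
E_rho = np.trace(M.reshape(d, d, d, d), axis1=1, axis2=3)
print(np.allclose(E_rho, U @ rho @ U.conj().T))        # True

# Eq. (10): inject |0> at the tail with the projector P0, then read the head
# in the computational basis, taken here as the readout basis {|psi_i>}.
P0 = np.zeros((d, d)); P0[0, 0] = 1.0
v = np.kron(np.eye(d), P0) @ ket_U
v = v / np.linalg.norm(v)
head = np.trace(np.outer(v, v.conj()).reshape(d, d, d, d), axis1=1, axis2=3)
print(np.allclose(np.diag(head).real, np.abs(U[:, 0])**2))  # p_i = |<i|U|0>|^2
```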
If the dimension of \(U\) is \(d\), then we need a qubit ancilla to realize the binary PVM. See the figure for an \(n\)-qubit input:
The Toffoli-like gate (on the left) is needed to extract the parity to the ancilla. A PVM on the ancilla will realize the initial-state injection. For convenience, we often call the 2nd part of a Choi state the 'tail', which serves as the 'port' for the initial-state injection, a 'write' operation, and the 1st part the 'head', which is the port for the 'read' operation.
Besides the Choi-state form, there are also other ways to store a program. Note that a program \(U\) can be decomposed as a circuit of elementary gates \(U\approx\prod_{i}U_{i}\) with a fixed accuracy \(\epsilon\). Here we discuss a few of them.
* A quantum encoding: use the Choi state \(|U\rangle\).
* A classical encoding: use bits \([U]\) to represent \(U\) as a matrix, or as a sequence of gates forming a circuit, with the location and type of each gate encoded by bits.
* A hardware encoding: a gate is stored in a hardware device, just like the optical elements in photonic quantum computing [8].
Different schemes can be applied in different settings. They will also affect the construction of the QPU. Note there are also other ways. There is a nonlocal Choi-state-like form so that a program can be executed blindly [26], but this requires a lot more resources; therefore, we do not study this form in this work. The classical encoding \([U]\) is the most popular nowadays. It can be used as classical control signals to guide the execution of gates. This applies to the current framework based on the circuit model, such as superconducting circuits. Below we will study how to use the quantum encoding to construct a QPU.
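As a toy illustration of the classical encoding, the sketch below represents \([U]\) as a list of (gate type, location) entries over the universal set {H, T, CZ} and compiles it into a matrix; the encoding format and the restriction to consecutive qubits are simplifying assumptions of the sketch.

```python
import numpy as np

# Elementary gates of the universal set {H, T, CZ}.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
CZ = np.diag([1.0, 1.0, 1.0, -1.0]).astype(complex)

def embed(gate, loc, n):
    """Embed a gate acting on consecutive qubits starting at `loc` into an n-qubit unitary."""
    k = int(np.log2(gate.shape[0]))
    return np.kron(np.kron(np.eye(2**loc), gate), np.eye(2**(n - loc - k)))

def compile_program(program, n):
    """Turn a toy classical encoding [U], i.e., (gate type, location) pairs in time order,
    into the n-qubit matrix U = ... U_2 U_1."""
    table = {'H': H, 'T': T, 'CZ': CZ}
    U = np.eye(2**n, dtype=complex)
    for name, loc in program:
        U = embed(table[name], loc, n) @ U
    return U

program = [('H', 0), ('T', 1), ('CZ', 0), ('H', 1)]   # an example encoding [U]
U = compile_program(program, n=2)
print(np.allclose(U @ U.conj().T, np.eye(4)))         # unitarity check
```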
### Teleportation
Teleportation has been used in many ways, e.g., in quantum communication, in fault-tolerant schemes, and in measurement-based quantum computing. For QvN, teleportation is used both for communication and computation. In communication, it has been well established that teleportation can replace the transmission of qubits by bits given distributed ebits [27]. For computation, teleportation is employed to realize gate operations, similar to the measurement-based model. Here we recall its definition and motivate the covariant teleportation.
One often starts from a bipartite nonlocal setting in which Alice and Bob already share ebits, and Alice aims to send qubits (or qudits) to Bob without quantum communication. The scheme is shown in the figure
The Pauli byproducts \(\sigma_{i}\in\{I,X,Y,Z\}\) are corrected by sending the measurement outcomes \(i\) from the Bell measurement of Alice to Bob.
There is an interesting symmetry of this scheme. Each Pauli byproduct is obtained with the same probability. The operators \(\sigma_{i}\) form a projective representation of the group \(Z_{2}\times Z_{2}\). Actually, this fact has been used to define group-based teleportation [28].
This can also be understood from the point of view of tensors. The set of Pauli byproducts forms a three-leg tensor, and it has the full symmetry \(SU(2)\) if the identity operator is absent [29]. This also applies to any \(SU(d)\) and leads to the covariant teleportation [16] by grouping the nontrivial Pauli byproducts together, namely, using a qubit ancilla to extract the binary distinction of byproducts.
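A minimal numpy sketch of the standard teleportation scheme, showing that each of the four Bell outcomes occurs with probability 1/4 and heralds a correctable Pauli byproduct; the input state is a random qubit chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
Y = 1j * X @ Z
paulis = [I2, X, Y, Z]

# Bell basis |B_i> = (sigma_i x 1)|omega>; outcome i heralds the byproduct sigma_i.
omega = np.eye(2).reshape(4) / np.sqrt(2)
bell = [np.kron(s, I2) @ omega for s in paulis]

# Random qubit to teleport (illustrative input).
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Qubit 1: Alice's data; qubits 2,3: the ebit shared by Alice and Bob.
state = np.kron(psi, omega).reshape(2, 2, 2)

# Bell measurement on qubits (1,2): Bob's unnormalized branch for each outcome.
branches = [np.tensordot(b.conj().reshape(2, 2), state, axes=([0, 1], [0, 1])) for b in bell]
probs = np.array([np.vdot(v, v).real for v in branches])
i = rng.choice(4, p=probs / probs.sum())

# Bob holds sigma_i^dagger |psi>; the two classical bits i allow him to correct it.
bob = branches[i] / np.linalg.norm(branches[i])
print(np.round(probs, 3), np.abs(np.vdot(psi, paulis[i] @ bob)))  # [0.25 x 4], 1.0
```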
### Switchable composition of programs
The covariant teleportation, also called universal quantum teleportation (UQT), can be used to compose two programs together. Namely, two program states \(\ket{U}\) and \(\ket{V}\) can be composed together deterministically to be \(\ket{UV}\), or \(\ket{VU}\), depending on the direction of information flow. See the figure:
The shaded region is for the UQT. It requires a qubit ancilla and a Toffoli-like gate, and also the adjoint form \(T\), also known as the affine form, of a gate [16]. For instance, a qubit gate \(U\in SU(2)\) corresponds to an orthogonal rotation \(R\in SO(3)\). A PVM on the qubit ancilla leads to either trivial or nontrivial Pauli byproducts, conditioned on which the correction \(T\) is applied. Note that in order to complete the composition, the programs need to be known, i.e., as white boxes. This can be used to generate large programs from smaller ones. When only elementary programs are composed, such as \(\ket{H}\), \(\ket{T}\), \(\ket{CZ}\) for the Hadamard gate H, T gate, and CZ gate, only the adjoint form of H and T needs to be implemented. As H exchanges Pauli X and Z, while T can generate superpositions of Pauli X and Y, it is easy to see that the affine form of H is a swap gate, while that of T is a Hadamard-like gate [16].
The ebits used in the composition have a unique feature. A state injected at its tail can propagate 'backward' to its head, following from the channel-state duality. This leads to a switchable construction of the composition. For instance, for a qubit program one attaches one ebit to it and then applies a few CZ gates, as shown in the figure
The top panel shows the circuit, while the bottom panel shows the operations on the qubits explicitly. Blue lines are CZ gates. The box is the stored program. This forms a 'pre-compose' step between the previous program and the current one. There are then two possible paths for the information flow, one with the current program, 1\(\rightarrow\)2\(\rightarrow\)3\(\rightarrow\)4\(\rightarrow\)5, the other without it, 1\(\rightarrow\)4\(\rightarrow\)5. This serves as a switch for turning the gate on or off depending on the control signal. To complete the composition, one path needs to be chosen while closing the other, and there will also be a correctable Pauli byproduct after the composition. This will be used to construct the QPU.
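To illustrate the composition of program states, the following sketch post-selects on the trivial Bell outcome when connecting the head of \(|U\rangle\) to the tail of \(|V\rangle\), yielding \(|VU\rangle\); the deterministic version would instead correct the Pauli byproducts via UQT as described above, which is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 2

def rand_u(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

def choi(W):
    """|W> = (W x 1)|omega>, subsystem order (head, tail)."""
    return np.kron(W, np.eye(d)) @ (np.eye(d).reshape(d * d) / np.sqrt(d))

U, V = rand_u(d), rand_u(d)

# Joint state |U>_{hU,tU} x |V>_{hV,tV}; one tensor axis per subsystem.
joint = np.kron(choi(U), choi(V)).reshape(d, d, d, d)   # axes: hU, tU, hV, tV

# Project (hU, tV) onto |omega>: the trivial-byproduct branch of the Bell measurement.
omega_mat = np.eye(d) / np.sqrt(d)
out = np.tensordot(omega_mat, joint, axes=([0, 1], [0, 3]))  # remaining axes: (tU, hV)

# The result is the Choi matrix of VU (head = hV, tail = tU), up to normalization.
composed = out.T / np.linalg.norm(out)
print(np.allclose(composed, V @ U / np.sqrt(d)), np.vdot(out, out).real)  # True, 1/d^2
```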
### Program conversion
It is also useful if a program can be changed into another. This requires the operation of quantum superchannels [30; 31; 32]. For notation, we use a hat on the symbols for superchannels. The circuit representation of a superchannel is
\[\hat{\mathcal{S}}(\mathcal{E})(\rho)=\mathrm{tr}_{\mathrm{a}}\mathcal{V}\ \mathcal{E}\ \mathcal{U}(\rho\otimes|0\rangle\langle 0|), \tag{11}\]
for \(\rho\in\mathcal{D}(\mathcal{H})\), where \(\mathcal{U}\) and \(\mathcal{V}\) are unitary, and a is an ancilla. The dimension of \(V\) can be larger than that of \(U\)[33], but we do not need the details here. This can also be represented as the action on the Choi state with
\[\hat{\mathcal{S}}(\mathcal{E})(\rho)=\mathrm{tr}_{\mathrm{A}}\mathcal{V} \otimes\tilde{\mathcal{U}}(\omega_{\mathcal{E}}\otimes\omega)(\mathds{1}\otimes\rho^{ t}\otimes|0\rangle\langle 0|). \tag{12}\]
The unitary \(\tilde{\mathcal{U}}\) is the transpose of \(\mathcal{U}\) conjugated by a swap. The trace is over the subsystems except the top one, A. We see that ebits are needed in order to realize nontrivial superchannels. See the figure
The top wire carries the output. It has been shown [17] that a sequence of superchannels acting on Choi states can be composed together with the tool of UQT. This realizes the so-called quantum comb, which indeed is a composition of superchannels. This has found applications in quantum estimation, learning, optimization, etc., and we will further study this in Section VI.
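Below is a minimal numpy sketch of Eq. (11) for a unitary input channel; the pre- and post-processing unitaries and the single-qubit ancilla are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 2

def rand_u(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

# Illustrative pre/post unitaries on system x ancilla (assumptions of the sketch).
Upre, Vpost = rand_u(d * d), rand_u(d * d)

def superchannel(W, rho):
    """Eq. (11) for a unitary input channel E(.) = W . W^dag:
    S(E)(rho) = tr_a[ V (W x 1_a) U (rho x |0><0|) U^dag (W x 1_a)^dag V^dag ]."""
    anc = np.zeros((d, d), dtype=complex); anc[0, 0] = 1.0
    L = Vpost @ np.kron(W, np.eye(d)) @ Upre
    big = L @ np.kron(rho, anc) @ L.conj().T
    return np.trace(big.reshape(d, d, d, d), axis1=1, axis2=3)

W = rand_u(d)                                  # the program being transformed
rho = np.diag([0.6, 0.4]).astype(complex)
out = superchannel(W, rho)
print(np.isclose(np.trace(out).real, 1.0), np.allclose(out, out.conj().T))  # True True
```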
### QCU
Control plays essential roles in classical computers. The simplest example is the CNOT gate, which uses a bit to control another bit. The clock is another notable example, which is a building block of classical sequential circuits. There are also schemes which use electric circuits to achieve control of analog signals. Here we analyze the construction of the quantum control unit (QCU) for QvN.
First, there are different layers of control tasks. The most familiar one is to use bits to control quantum gates. In the circuit model, each gate has a definite spacetime location, i.e., when and where it acts on qubits. Such classical information serves as bits to control the execution of quantum gates. There is no entanglement between the control bits and data qubits.
A semi-classical scheme is to use lasers to interact with qubits, a seminal field of AMO physics and also the most familiar paradigm of quantum control. There is no entanglement between the qubits and the lasers. The dynamical decoupling [34] is a notable example.
One can also use qubits to control quantum gates. This actually has been a quite common scheme for designing quantum algorithms, such as the swap test, DQC1, and also quantum phase estimation [18]. This can lead to interference of quantum gates, and this has been used in the linear combination of unitaries (LCU) algorithm [35, 36] and also in a model of contextual quantum computing [18].
Using a QCU also leads to certain issues. A first nontrivial issue arises if the target quantum gate is unknown, i.e., a black box. This applies to the situation of modular design, for instance. It was proven that quantum control over an unknown gate is impossible [37]. This is because the operation \(U\mapsto\Lambda U\) is not valid as it converts the unphysical global phase of \(U\) to a local phase of \(U\) in \(\Lambda U\). Here \(\Lambda U\) is the controlled-\(U\) gate. A solution for this is to know an eigenstate of \(U\), which serves as a 'flag' for it. The following circuit realizes the desired quantum control
The top wire is the control register, the second wire is the data register, and the third one is the flag. Now the gate is 'grey' instead of black since an eigenstate, and also its eigenvalue, are known.
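One concrete way to realize such flag-based control (a sketch, not necessarily the exact circuit in the figure) is to sandwich an uncontrolled \(U\) between two controlled-swaps with the flag register, and then undo the known eigenphase on the control:

```python
import numpy as np

rng = np.random.default_rng(5)

def rand_u(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

U = rand_u(2)
evals, evecs = np.linalg.eig(U)
lam, flag = evals[0], evecs[:, 0]        # the known eigenvalue and eigenstate (the flag)

# Wires: control (c), data (d), flag (f); CSWAP swaps d and f when c = 1.
I2, I4 = np.eye(2, dtype=complex), np.eye(4, dtype=complex)
SWAP = np.eye(4)[[0, 2, 1, 3]].astype(complex)
CSWAP = np.block([[I4, np.zeros((4, 4))], [np.zeros((4, 4)), SWAP]])

# CSWAP, then U on the flag wire, CSWAP back, then undo the known eigenphase on c = 0.
circ = np.kron(np.diag([np.conj(lam), 1.0]), I4) @ CSWAP @ np.kron(I4, U) @ CSWAP

# Check: acts as the controlled-U on (c, d), with the flag left disentangled.
psi = rng.normal(size=2) + 1j * rng.normal(size=2); psi /= np.linalg.norm(psi)
for c, cket in [(0, np.array([1.0, 0])), (1, np.array([0, 1.0]))]:
    out = circ @ np.kron(np.kron(cket, psi), flag)
    want = np.kron(np.kron(cket, U @ psi if c else psi), flag)
    print(c, np.allclose(out, want))     # True, True
```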
This method can be used to run quantum control over unknown programs, and then realize LCU algorithms. We require that each program state \(|U\rangle\) be given with a flag \(|\lambda_{U}\rangle\). A flag state can be injected using our initialization method. See the figure for the linear combination of two unknown programs
The control signal (c) itself is not pre-stored, although it can be quantum. That is, the control signal is taken as deterministic input, and is not injected by measurement, which is random. Now the question is: can it also be pre-stored as quantum states? For comparison, random input data induced by a PVM are okay since these input signals are orthogonal and their results are effectively equivalent. A random control signal induced by a PVM will lead to an uncertain operation, say, \(U_{1}\) or \(U_{2}\), which are not orthogonal in general. It seems there is no easy solution to this. In order to make orthogonal control signals, the nonlocal scheme [26] can be used, which turns a set of \(\{U_{i}\}\) into approximately orthogonal states, but this requires a lot more quantum resources. Therefore, we do not require pre-stored quantum control signals.
For all the above, the control unit is required to be separable from the data unit at the output. This is necessary since, by assumption, the control unit shall not carry final results. They can get entangled during a computation, but at the end they shall be disentangled. This issue has been analyzed in the setting of the quantum Turing machine [38, 39, 40, 41, 42], and also in a recent study of a quantum control machine [43]. For instance, in the model of a local quantum Turing machine [42], by expressing the final quantum state carrying the results as a matrix-product state
\[|\psi\rangle=\sum_{i}A(i_{n})\cdots A(i_{2})A(i_{1})|\ell\rangle|i_{n}\cdots i _{2}i_{1}\rangle, \tag{13}\]
the machine register, with an edge state \(|\ell\rangle\) serving as the control register, can be disentangled from the data register at the end. Another method is to use measurement feedback to disentangle the controller and data [18], which was used to define a contextual quantum computing model. These examples also show that, due to entanglement, the interplay between the control flow and data flow needs more study.
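As a small illustration of Eq. (13), the sketch below builds the state from random site tensors and an edge state, and computes the Schmidt spectrum between the machine and data registers (a single nonzero Schmidt value would mean they are disentangled); the tensors here are arbitrary, so generically the two registers remain entangled.

```python
import numpy as np

rng = np.random.default_rng(7)
chi, n = 2, 4                     # bond dimension of the machine register; n data qubits

A = rng.normal(size=(2, chi, chi))       # site tensors A(i), arbitrary for illustration
edge = np.zeros(chi); edge[0] = 1.0      # the edge state |l>

# Build |psi> = sum A(i_n)...A(i_1)|l> |i_n...i_1> explicitly, as in Eq. (13).
psi = np.zeros((chi,) + (2,) * n)
for idx in np.ndindex(*(2,) * n):        # idx = (i_n, ..., i_1)
    v = edge.copy()
    for i in reversed(idx):              # apply A(i_1) first, A(i_n) last
        v = A[i] @ v
    psi[(slice(None),) + idx] = v
psi = psi.reshape(chi, 2 ** n)
psi /= np.linalg.norm(psi)

# Schmidt spectrum between machine and data registers.
print(np.round(np.linalg.svd(psi, compute_uv=False), 3))
```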
### QPU
A CPU usually contains a control unit and an ALU (arithmetic logic unit). For the classical case, to run a program \(A\), which is stored as bits \([A]\) on the hard disc, the \([A]\) is first loaded as control signals and operations on a programmable circuit, a.k.a. a chip. The internal storage is also used to store temporary data. In this section, we study the primary structure of the QPU in our model, and compare it with other existing ones.
For the quantum case, the starting point is the circuit model. However, there are different approaches. We find there are two dual ones:
* The type-I: gates are stored as hardware while qubits are sort of 'not there'; this applies to linear optics, which uses optical elements as gates and photons as qubits;
* The type-II: qubits are stored as hardware while gates are sort of 'not there'; this applies to superconducting circuits and trapped ions, which use laser pulses (interacting with matter particles) as gates and particles (e.g., electrons) as qubits.
For both of them, a program \([U]\) that is used as control signals is classical. We call this 'classical programmability'. In contrast, we will define a quantum programmability for our model of the QPU. It relies on the quantum encoding of programs, or a semi-quantum one, namely, using Choi states \(|H\rangle\), \(|T\rangle\), \(|CX\rangle\) to store the elementary gates, or other Choi states to store blocks of gates, and using bits \([U]\) to store their spacetime locations. Besides, we need the toolboxes of switchable composition, quantum superchannel, and also the quantum control unit.
We have seen that the control flow is distinct from the data flow. Actually, control sequences can also be stored as programs. But in order to compose them, another level of control is still needed as long as the QPU is not automatic. That is, after all, control signals are needed to monitor the evolution. Namely, the \([U]\) is used as a control signal to apply composition and other operations on primary Choi states. To run \(U\), qubits in the QPU will be measured. After the run, qubits need to be refreshed to the right Choi states.
For instance, consider the programmable realization of a sequence of H and T gates to approximate a qubit rotation. See the figure
\begin{tabular}{|c|c|c|} \hline & qubit & gate \\ \hline type-I & time & space \\ \hline type-II & space & time \\ \hline type-III & spacetime & spacetime \\ \hline \end{tabular}
Table 1: Comparison of three basic types of constructions of QPU. The type-III is used in QvN.
According to Table 1, qubits exist in the time domain and gates in the space domain for type-I, and the opposite holds for type-II. For the type-III, qubits are not used to carry data but are used to encode programs. The actual data qubits are prepared by measurements when an algorithm is to be run, so they exist in the space and also the time domain. A gate also exists both in the form of a program and as the operation in a composition, so in the spacetime domain.
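Referring back to the H and T example above, here is a minimal brute-force sketch of compiling a target qubit rotation into a sequence of H and T gates (real compilers such as Solovay-Kitaev are far more efficient); the target angle and the maximum sequence length are arbitrary choices.

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

theta = 0.7                                   # an arbitrary target rotation R_z(theta)
target = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

best, best_seq = 2.0, ''
for L in range(1, 12):                        # brute force up to length 11 (arbitrary cap)
    for seq in product('HT', repeat=L):
        U = np.eye(2, dtype=complex)
        for g in seq:
            U = (H if g == 'H' else T) @ U
        dist = 1 - abs(np.trace(target.conj().T @ U)) / 2   # phase-invariant distance
        if dist < best:
            best, best_seq = dist, ''.join(seq)
print(best_seq, best)   # for theta near pi/4, a short sequence such as 'T' is found
```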
What is the advantage of using type-III? At present, this is not fully clear yet. It apparently consumes a lot more qubits to realize an algorithm due to the usage of teleportation, as in the case of the measurement-based model [44]. That said, teleportation shall bring some advantages (see section IV.3). For instance, for the architecture design the physical qubits can tolerate more decoherence since they only carry the data for depth one before a composition occurs. Also, only qubits need to be manufactured, and gates are applied on them. Finally, there is a clear resource-theoretic characterization of QvN, by treating quantum memory as the universal resource [18]. This can benefit the understanding of quantum superalgorithms, which rely on quantum memory.
### Program download/upload
After a computation, the program state is consumed/destroyed. This is not a flaw, however. This is also present in the usual quantum circuit model: after the computation, the qubits need to be refreshed for the next task. It even exists in classical computers, which use temporary data such as caches and buffers.
For QvN, the program states are likely stored in the memory unit. A computation would, basically, turn some of the states into garbage. The user has to restore the program. This is achieved by downloading them through the internet from a software producer, or a host. In order to be secure, here we consider a quantum internet via quantum communication. The usual carrier, although it does not have to be, is photons. Therefore, the user needs to have the ability to receive photons, store and measure them. We find there are four primary schemes:
1. use qubits to send bits: the host employs the bit-string description \([U]\) of a program \(U\), and then encrypts the bits, \([[U]]\). Quantum cryptography such as the BB84 scheme can be used to send the bits [45]. At the user side, a control device is needed to receive the bits and apply the gate sequence, without revealing the bits to the user. This is very much like delegated computing or remote state preparation [46], but now the user is the remote site, and the host does not need to verify the user.
2. use ebits to send bits: one can use ebit-based quantum cryptography to send the bit-string description of the gate sequence \([U]\).
3. use qubits to send qubits: the host prepares photons in the state \(|U\rangle\) directly, and sends them to the user, who then applies quantum teleportation between the photons and the memory qubits to teleport/download the program from the photons to the memory qubits.
4. use ebits to send qubits: the host and user first establish many pairs of ebits of photons, and then the user applies quantum teleportation between some photons and the memory qubits to teleport/download the program from the photons to the memory qubits. Namely, if \(|U\rangle=V|0\rangle\), the host applies \(V^{t}\) and then the projection \(|0\rangle\langle 0|\) on his side, and that will prepare the photons at the user as \(|U\rangle\). The host needs to use our initial-state injection technique, and the effect on the final readout at the user's side can be easily dealt with.
One may wonder which scheme is preferred. For the first two, the goal is to send bits, which need to encode both the space and time information of the gate sequence in a program. For the last two, the goal is to send qubits, which do not need to encode the time information, hence apparently consuming a smaller number of qubits than the number of bits. However, currently qubits are much more expensive than bits. The choice of a scheme would depend on many practical conditions.
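For the fourth scheme, the identity \((V^{t}\otimes\mathds{1})|\omega\rangle=(\mathds{1}\otimes V)|\omega\rangle\) underlies the remote preparation; below is a minimal numpy sketch of the successful branch (host outcome \(|0\rangle\), which occurs with probability \(1/D\)). The preparation unitary \(V\) is a random illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(6)
D = 4                                    # dimension of |U> (e.g. head+tail of a qubit program)

V, _ = np.linalg.qr(rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D)))
ket_U = V[:, 0]                          # the program state |U> = V|0>

# Host and user share |omega> on D x D (host subsystem first).
omega = np.eye(D).reshape(D * D) / np.sqrt(D)

# Host applies V^t on his half; the branch with host outcome |0> carries |U> to the user.
after = np.kron(V.T, np.eye(D)) @ omega
branch = after.reshape(D, D)[0]          # host index = 0
p = np.vdot(branch, branch).real         # success probability = 1/D
print(np.isclose(p, 1 / D), np.allclose(branch / np.sqrt(p), ket_U))  # True True
```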
### Program verification
When a user decides to download a program from a host, the user has to verify that the host indeed has the promised program. This is a quantum verification task [47; 48], which has been widely studied in recent years. Here we discuss how the program-verification would work, but we do not specify all the details since there could be various schemes depending on practical settings.
The verification can be interactive. In the framework of an interactive-proof system [47], the user serves as a verifier, and the host serves as a prover. Usually, the verifier is required to be computationally in BPP, while the prover is in BQP. However, here in our setting the verifier is also in BQP but only has a limited number of copies of the unknown program \(\ket{U}\). That is, the user, as the verifier, can only do verification instead of a full tomography.
It is not hard to determine the number of samples of \(\ket{U}\) the user needs to download. From verification theory, which specifies an infidelity parameter \(\epsilon\) and a confidence parameter \(\delta\), the number of samples scales as
\[N\in O\left(\frac{1}{\epsilon}\log\frac{1}{\delta}\right), \tag{14}\]
ignoring other factors that do not matter for our discussion here. Although the scaling with respect to \(\epsilon\) is not efficient, for the purpose of verification a moderate fidelity is acceptable, and the confidence is usually more important.
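For concreteness, a tiny sketch evaluating Eq. (14) with an assumed \(O(1)\) prefactor:

```python
import numpy as np

def n_samples(eps, delta, c=1.0):
    """Eq. (14) with an assumed O(1) prefactor c."""
    return int(np.ceil(c / eps * np.log(1.0 / delta)))

print(n_samples(eps=0.05, delta=0.01))   # ~93 copies for 5% infidelity, 99% confidence
```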
Given a few samples of \(\ket{U}\), the user can also do quantum estimation or learning. It is also well established that the infidelity scales as \(N^{-2}\) for optimal joint global operations on them [49]. This is the so-called Heisenberg limit.
For a full process tomography, the user has to use a number of measurement operations, hence a number of samples, that scales with the dimension of \(\ket{U}\), which is exponential in the number of qubits it acts on. Therefore, we find that as long as the number of samples is much smaller than that for tomography, the verification can be done efficiently. In addition, there is also another level of sampling, which is to obtain the final expectation value of an observable. In modern terms this is a special instance of shadow tomography, which can be done with a small number of samples [48].
Verification is an important subject in the study of blind or delegated quantum computing. We will study its difference from QvN in section V.3.
## V Difference from other models
In this section, we analyze the primary differences between QvN and some other models.
### Circuit model
Here we compare QvN with the quantum circuit model (QCM), including the execution of an algorithm, security, verification, and other issues. We assume the usual scheme of QCM, which realizes a quantum algorithm as a three-stage process: initial state preparation, gate execution, and measurement. We have already seen their differences in our study of the programmable QPU.
Figure 3: The four basic schemes to realize the download of quantum programs. The upload would be similar.
Actually, it is also fine to treat QvN in the framework of QCM, as the primary operations are either unitary gates or measurements. However, it is necessary to make a distinction between them since conceptually QvN involves more requirements. This is similar for the classical case.
The scheme to realize a circuit can be seen from this figure:
The top register is classical, and \([U]\) is the classical representation of the program \(U\). The output from a quantum algorithm is assumed to be the expectation value of a hermitian operator, which reduces to the estimation of a set of probabilities, \(p_{i}\). The implementation of a quantum algorithm can be done efficiently provided that:
* the initial state can be prepared quantum efficiently;
* the program \(U\) can be stored classically efficiently, as \([U]\);
* the program \(U\) can be realized quantum efficiently;
* the measurement for readout can be realized quantum efficiently;
* the number of samples scales efficiently to estimate each \(p_{i}\).
The classical description \([U]\) is often a bit-string of the gate sequence in the circuit, if \(U=\prod_{i}U_{i}\), while bits are used to encode the spacetime location and type of each primary gate \(U_{i}\) from a universal gate set, e.g., {H,T,CZ}.
After the execution, all qubits are measured and need to be refreshed for further usage. The program is stored as bits \([U]\), though, so it does not need refreshing. The composition of two programs \(U_{1}\) and \(U_{2}\) is simple, namely, just implement them sequentially. The initial state needs to be prepared before the application of gates, which is not the case for QvN.
Another notable difference is that QvN requires the download of quantum programs, since they cannot be cloned if a bit-string description of them is unavailable. The security of quantum communication, relying on the uncertainty principle, ensures the security of the quantum programs. For the circuit model, a circuit is often not secure, i.e., there is a classical circuit _diagram_ which can be easily seen and copied. However, there are also secure protocols relying on QCM. A notable example is the blind or delegated quantum computing (DQC) [50], which, though initially formulated via MBQC, can also be formulated via QCM. This leads to the discussion in the following two subsections.
### MBQC
Besides the circuit model, QvN also has close connections with MBQC. In the standard MBQC, also known as the one-way model [44], a resource state such as the 2D cluster state is given, and then a sequence of local adaptive measurements is performed to execute gates. The basic underlying mechanism is 1-bit teleportation [29], and a spatial direction is chosen as the 'teleported' evolution direction. Furthermore, it is equivalent to the model based on Bell measurement for the standard teleportation, which is 2-bit [51]. Here we will denote this as the teleportation-based model (TBQC), although sometimes it is treated as a special case of MBQC.
For clarity, we summarize the comparison in Table 2. The TBQC is often used for fault-tolerant execution of gates, hence its byproducts are extended to Clifford operations, which still preserve Pauli gates. Due to the covariant teleportation used in QvN, the stored program can be fully quantum, namely, the whole group SU(d) of gates.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & byproduct & type & quantum program & switchability \\ \hline TBQC & Clifford & 2-bit & qubit gates & no \\ \hline MBQC & Pauli & 1-bit & CZ & no \\ \hline QvN & Pauli & covariant & SU(d) & yes \\ \hline \end{tabular}
\end{table}
Table 2: Comparison between QvN and MBQC.
On the contrary, in MBQC qubit gates are induced by local measurements in rotated bases, while in TBQC the entangling gates such as CZ or CNOT are done by two parallel Bell measurements consuming ebits. One should note that in MBQC the local measurements are relatively simple, compared with Bell measurements, and the resource state can be prepared offline.
Another important difference is that, in QvN the information, injected at the 'tail', is always carried by the 'head' of a Choi state. There is no such explicit head-tail structure for MBQC, nor for TBQC. The information flow is shown in Figure 4. For QvN, the information never 'crosses' a composition 'box', but this is the opposite for MBQC. Treating a composition as a single depth, each physical qubit for a tail or head only has depth one. This leads to the switchability of the composition, also illustrated in the figure. As has been discussed, the switchability could be useful to construct the QPU.
### Delegated quantum computing
An important model for secure computation is delegated quantum computing (DQC), which was much earlier known as blind quantum computing [50]. In this model, a user, as a verifier, aims to delegate computation to a prover without revealing the computation to the prover. See the figure for QCM in subsection V.1, where the classical and quantum registers belong to the verifier and prover, respectively. The verifier is in BPP while the prover is in BQP. Usually, the verifier knows what to compute, but does not have the capability to do so. This model may apply to the current era of quantum computing, in which only a few labs or companies have powerful quantum computers, and customers can use them blindly and confidently. The input, output, and the computation itself can all be blind to the prover.
This is different from the program-verification in QvN. For QvN, the host has both \([U]\) and \(|U\rangle\), while the user only has \(|U\rangle\). The user will use the program blindly by making measurements on it. Given limited samples of a program, the user cannot do tomography, i.e., cannot obtain its classical description \([U]\). In DQC, the prover can do \(U\), which is equivalent to the ability to prepare \(|U\rangle\), while the verifier has \([U]\). In QvN, the user side is BQP, and the host/prover is also BQP. The purpose of verification in DQC is to verify \([U]\), while the purpose of verification in QvN is to verify \(|U\rangle\). There is no apparent delegation in QvN. See Table 3.
## VI Quantum algorithms in QvN
In this section, we study the design of quantum algorithms in QvN. This has been analyzed in our previous work [17; 52], while here our discussion will be more specific, drawing the connection with computational advantages.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & user/verifier & host/prover \\ \hline DQC & \([U]\) & \(|U\rangle\) \\ \hline QvN & \(|U\rangle\) & \(|U\rangle\), \([U]\) \\ \hline \end{tabular}
\end{table}
Table 3: Comparison between DQC and QvN.
Figure 4: Information flow in QvN (left) and MBQC (right). The vertical line represents the CZ gate. For QvN, a box represents a composition which is a measurement for teleportation. The curve with a dot represents a qubit program. A switchable program is also shown. For MBQC, a box represents a measurement on a site that realizes teleportation.
### Quantum superalgorithm
A quantum algorithm is usually specified by a quantum circuit and a measurement procedure, as has been shown. On top of that, there is also a classical algorithm which designs the quantum algorithm. See the figure
Here a triangle represents a measurement. This design can be iterative, with measurement outcomes fed forward to the classical algorithm, \(A\), which then optimizes the parameterized quantum circuit, \(U\). Some examples are the Solovay-Kitaev algorithm for gate compiling [53], quantum channel simulation [54, 33], and quantum approximate optimization [55]. This actually forms a classical comb of a classical-quantum hybrid algorithm, using the terminology of quantum superchannel theory [30, 31, 32].
One can also pose the following question: can we use a quantum algorithm to design another quantum algorithm? Such a scheme works for the classical case, namely, there are classical algorithms that design classical algorithms. Such algorithms are often known as 'meta' algorithms, or 'hyper' algorithms, since they contain some meta or hyper variables that need to be optimized. This plays essential roles in machine learning [56].
For the quantum case, it has been confirmed that, indeed, we can use a quantum algorithm to design another quantum algorithm. This follows from nothing but the quantum superchannel theory. The superchannel plays the role of the 'meta' algorithm, while the channels acted upon by the superchannel serve as the input to it. We will call these algorithms quantum superalgorithms to be consistent with the superchannel theory. It has the following structure
From the composition technique, it can also be realized by a sequence of compositions [17]. This is what a QPU can do, from section IV.7. Compared with the former scheme above, it is clear to see that here quantum memory (the bottom register) is used as a resource to realize superalgorithms [18].
Many quantum algorithms are of this form, although sometimes they are not under the name of 'superalgorithm'. This includes quantum estimation and learning algorithms [56], quantum channel discrimination [57], schemes for quantum games [58], quantum optimization [59], and quantum machine learning [60, 61, 62, 63]. The recent quantum singular-value transformation [64], which can unify some quantum algorithms, is also a special type of superalgorithm [52].
The theory of superchannels also allows the so-called higher-order operations [32], which are superchannels that act on superchannels, by using the channel-state duality iteratively. These higher-order superchannels can still be viewed as superchannels but with a more complicated multipartite structure [17].
Besides, one may wonder whether a 'mother' algorithm is still needed to design a quantum superalgorithm, and whether such a mother algorithm can be classical. This is indeed the case, but what matters is that quantum memory, and also control, are used as resources in addition to the current framework based on the circuit model. Recently, it has been found that quantum memory can lead to exponential advantages for solving some problems [65, 63].
Finally, the output for the final result often contains probabilities. This requires many runs of an algorithm to estimate them. However, there is a method to convert probabilities into amplitudes of quantum states, and then use the quantum amplitude estimation algorithm to obtain them in the form of bit strings [25]. This is the analog of quantum phase estimation, which can be used to estimate unknown parameters by encoding them as phase factors. Such algorithms use quantum controlled operations. From matrix decomposition, a controlled operation can be decomposed as the product of a few operations without control and simple controlled operations such as controlled-not gates. This means that these estimation algorithms can also be put into the framework of quantum superalgorithms.
### Computational advantages
Finding computational advantages would depend on the structure of certain problems. If there is no structure, quantum computing can only provide quadratic advantages. For instance, Grover's algorithm shows that
for a structure-less database of size \(N\), a quantum computer will take time of order \(O(\sqrt{N})\) rather than \(O(\log N)\), i.e., only a quadratic improvement over the classical \(O(N)\) [8]. The so-called Heisenberg limit from the uncertainty principle sets the bound for the precision of estimating an unknown parameter, also with a quadratic improvement of precision [66]. This relates to the fact that the square of amplitudes yields probabilities. In the model of QvN, we analyze the potential advantages by using the combination of quantum CPU, control, memory, and also internet, compared to the classical case and also other quantum computing models. Our analysis, however, is preliminary, and we hope this can inspire more investigations.
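As a reminder of the quadratic scaling, here is a minimal amplitude-level simulation of Grover's algorithm; the database size and the marked item are arbitrary choices.

```python
import numpy as np

N, marked = 1 << 10, 123                 # database size and marked item (arbitrary)
state = np.ones(N) / np.sqrt(N)          # uniform superposition

steps = int(round(np.pi / 4 * np.sqrt(N)))   # ~ (pi/4) sqrt(N) iterations
for _ in range(steps):
    state[marked] *= -1.0                    # oracle: phase-flip the marked item
    state = 2 * state.mean() - state         # diffusion: inversion about the mean
print(steps, state[marked] ** 2)             # 25 iterations, success probability ~ 1
```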
#### iv.2.1 Storage
It seems the amplitudes of qubits can significantly increase the ability for storage. A pure state \(|\psi\rangle=\sum_{i}\psi_{i}|i\rangle\) already stores the amplitudes \(\psi_{i}\) in it without encoding them as bit strings. However, this turns out not to be completely true, since quantum measurements are needed in order to know \(\psi_{i}\), and this requires a lot of copies of the qubit \(|\psi\rangle\).
The seminal result is the Holevo bound [67], which was established originally in the setting of quantum communication. It states that the quantum capacity of a channel is half of its classical capacity. Namely, a qubit can only be used to store or transmit two bits. This may not be hard to understand from the point of view of error correction, where correcting Pauli X and Z is enough to correct any linear combination of them. This is also demonstrated by quantum teleportation and dense coding [27]. Using an ebit, a qubit can be used to transmit two bits, and two bits can be used to transmit a qubit. It also relates to the quadratic speedup of Grover's algorithm. An \(n\)-qubit state can be viewed as \(2n\) bits, hence representing \(2^{2n}\) different values forming a database whose size is the square of that from \(n\) bits. Quantum search, if treated as a state preparation, cannot be faster since otherwise a qubit would carry more than two bits. However, this does not mean there are no larger advantages for specific tasks.
In the circuit model, a state \(|\psi\rangle\) can be represented by its preparation circuit \(U\). As has been discussed, there is an efficient bit-string encoding \([U]\) of the circuit, by only encoding the type of each gate and its spacetime location. However, using qubits to store quantum states can offer advantages, e.g., as we have mentioned for quantum learning [65; 63]. For instance, it has been proven that although learning statistical averages may not offer an advantage, accurately predicting the value of any observable can have an exponential advantage by using quantum memory [63]. Therefore, it is promising to explore further the primacy of quantum memory.
#### iv.2.2 Speed
The term quantum advantage is often used interchangeably with speedup. This is basically because in principle other resource costs can be treated as a cost of computing time; but still, we will distinguish them since this could inspire different intuitions for solving different problems.
First, we need to distinguish an algorithmic speedup from the speed of a quantum computer. The former is on the software level, while the latter also depends on the hardware. The same quantum algorithm would take different amounts of time on different quantum computers.
The underlying mechanism for algorithmic speedup relates to interference. Compared to the classical case, quantum evolution is unitary, i.e., coherent, and there is a significant amount of interference between trajectories, given a fixed basis of the underlying Hilbert space. A speedup occurs if the interference can enhance the probability of the desired trajectory. Note that the pre-condition for a speedup is the accuracy of the computing result. The more accurate, the more time is needed.
To achieve a speedup is one of the central tasks of the QPU, besides programmability. In the model of QvN, the interplay between the QPU and other units will also affect the speedup. This is also the case for classical computers, and that is why a cache is used in place of the hard drive for temporary data. If a truly quantum computer can be built in the future, following a certain QvN and the hierarchical design, there will certainly be various kinds of quantum memory, control, communication, and even input and output devices, etc. The speed of such a quantum computer would depend on many factors that are still hard to sketch at present.
#### iv.2.3 Security
Quantum cryptography to realize security was one of the promoters of quantum information science, with the BB84 scheme of quantum key distribution as the notable example [45]. It is based on the no-cloning theorem, which is equivalent to the Heisenberg uncertainty principle.
The program download/upload process can be seen as a special task in quantum communication. So it shares features of standard quantum communication. Here, we point out that it is secure in two senses. First, the program generated by the host is secure against the user. Second, the communication between the user and host is secure against an eavesdropper. There are also other settings of secure quantum computing, such as delegated or blind quantum computing [50, 68], discussed in section V.3, and multi-party quantum computing [69]. For the latter, no single party knows the result. Instead, they must communicate to extract the result.
Attacks in quantum cryptography have been well studied [70]. An attack can be detected but not prevented. In the model of QvN, data are stored as qubits instead of bits. Then one may be curious whether quantum data can be hacked. Although qubits encoding data such as passwords cannot be accurately cloned and hence leaked, they can be measured. A simple quantum virus can be an instruction to make the most trivial measurement, which will erase any data. Such a virus is actually classical since it can take the same form and can copy itself. Although there are costs, an attack is possible at any time and cannot be prevented in principle. So we suppose that this stands as a 'no-go' for the hope of virus-free quantum computers. Despite this, quantum computers have potential for more applications in cryptography due to resources such as coherence and entanglement.
#### iv.2.4 Energy
Reducing energy consumption in computation was one of the original motivations for quantum computing. Landauer showed that erasing bits will cost energy, while with Toffoli gates, a classical computation can be made reversible [71]. It was at that time realized by Feynman and others that a quantum computer can be reversible since its evolution is unitary [1]. However, compared with other features, the study of energy consumption in quantum computing is rare [72, 73, 74].
In the circuit model, initialization and readout by measurement will cost energy. A recent pioneering work studied the energy cost in unitary evolution using superchannel theory and resource theory [74], which showed that the energy cost relates to the accuracy of the computation. However, a systematic understanding of the thermodynamics of unitary evolution is lacking. This is not obvious as thermodynamics often deals with non-unitary dissipative evolution.
Here, we point out that the energy issue could be relevant for the design of quantum control schemes, compared with classical ones. However, it is not always straightforward to assess the amount of energy cost since some schemes are 'semi-classical'. For instance, cooling a qubit by a reservoir in order to suppress decoherence can be considered semi-classical. Using quantum error correction can also suppress decoherence, but it is not easy to compare the energy costs during the cooling and the error correction.
In the model of QvN, the quantum control unit is used to enact quantum operations, and also to form ingredients of quantum algorithms. When the control signal is not a part of the final output, erasing or resetting it will cost energy. In contrast, using classical control cannot generate entanglement between the controller and the target. At present, it is not clear how to find quantum advantages in energy cost over classical control schemes.
## VII NISQ implementation
In this section, we study how to implement a small-scale QvN on noisy intermediate-scale quantum (NISQ) devices. This would not include massive quantum error correction and quantum verification, for instance, which require more quantum resources.
We can compare with the basic requirements of the circuit model. These date back twenty years [75]: a scalable system of qubits, initialization of qubits, sufficient coherence to carry out an algorithm, a universal set of unitary gates, and measurement for readout.
These requirements have also been strengthened or expanded for further purposes. They are also the basic requirements to realize a QvN. A few additional requirements are needed. (1) First, it requires the ability to execute multi-qubit controlled gates, such as the Toffoli gate. Such gates can be decomposed into elementary one- or two-qubit gates,
but it would be better if they could be directly realized. These gates are needed for the initialization, composition, and also quantum control. (2) Second, it requires quantum communication. This was also an extra requirement when flying qubits such as photons are needed to connect a few separate quantum stations, such as in the trapped-ion setup [8]. The above two requirements can already be satisfied by some systems [76; 77; 78]. Therefore, it is possible to demonstrate prototypes of QvN.
Here we describe almost the smallest system of QvN, with the processes of read-write, download, composition, control, and superchannels. They are listed as follows:
* The read-write on a program: It needs two qubits to store a qubit gate, and four to store a CZ gate. The initial-state injection (i.e. write) for a qubit program on a standard basis (\(|0\rangle\) and \(|1\rangle\)) does not require an ancilla, and the same holds for the read operation. For the CZ program, the write operation on a standard basis (\(|00\rangle\), \(|01\rangle\), \(|10\rangle\), \(|11\rangle\)) does require a qubit ancilla and the Toffoli gate, but the read operation does not.
* The download process: For the scheme using ebits to send qubits, the state-injection at the host side requires a Toffoli gate and an ancilla. For a qubit program, the teleportation at the user side is on 4 qubits. The download in total involves 9 qubits. It is easy to verify that for a two-qubit program, the teleportation is on 8 qubits. The download in total involves 17 qubits. For other schemes of the download, it requires fewer qubits and hence also fewer gates.
* The composition: To compose two qubit-program states deterministically, this needs 5 qubits with one as ancilla. To compose a qubit program with the CZ program deterministically, it needs 6 qubits if the qubit program applies earlier, while 7 if it applies after the CZ. However, if the Pauli byproduct is not required to be corrected, fewer qubits are needed. This reduces to 4 and 6, respectively. In addition, to make the composition switchable, extra ebits are needed.
* The quantum control: To realize quantum control of an unknown qubit program, it requires 5 qubits, with one as the control qubit, two for the qubit program, and two for the data register. Recall that an eigenstate of the unknown qubit gate shall be known, and it will be injected by measurement. For the control of an unknown two-qubit program, it needs 9 qubits.
* The quantum superchannel: To realize an arbitrary qubit superchannel, it needs 6 qubits, with two for the qubit program and four as ancillas. However, with a convex-sum decomposition algorithm [33], two ancillary qubits can be saved.
* A quantum superalgorithm: For a simple quantum superalgorithm formed by a sequence of compositions, its cost is determined by the composition. Superchannels can also be included within such a superalgorithm, in which case its cost will be higher. For a simple demonstration, however, the Pauli byproduct can be left uncorrected, and even the initialization can be probabilistic. This will realize a probabilistic or random superalgorithm.
We see that fewer than 20 qubits are enough to realize the primary operations in a QvN. Quantum systems nowadays already have far more qubits than this. Therefore, more complicated operations, such as control or superalgorithms, can also be realized.
## VIII Conclusion
To conclude, we presented a systematic survey of the recently introduced model of quantum von Neumann architecture. We put it in the more complete picture of a hierarchical design principle of modern computers, which, given sufficient space and time, can realize not only universality, but also programmability, modularity, scalability, etc. We also briefly drew its connection with other quantum computing models and algorithmic advantages.
On the theoretical side, there are also many interesting open questions. Here we list a few of them as our conclusion.
* Types of quantum memory unit. A quantum RAM model of states was developed [79], which could find a specific state faster than classical ones. Such a scheme can be used for the storage of Choi program states. We mentioned there are various types of classical memory, and also memory devices. This is not clear for the quantum case. Our scheme for the quantum programs is more like internal memory, instead of external memory, i.e., a hard disc. Although in the early days of computers gates were indeed applied on hard memory, nowadays there is a clear distinction between internal and external memory. It remains to investigate the role of external quantum memory.
* The roles of quantum control. As we have shown, using a quantum instead of a classical control unit will cause issues such as entanglement between the control flow and data flow. We have also mentioned a few tools to deal with this. However, a general principle for the design of the quantum control unit is still needed. Meanwhile, specific examples and application settings are also needed to show the necessity of it, instead of a classical one. We pointed out that energy consumption may relate to quantum control, by studying the dynamics of work, heat, entropy, etc., i.e., the thermodynamics of quantum computing.
* Quantum 'sequential' circuit. A large class of classical circuits is known as the sequential circuits, which, roughly speaking, are circuits with memory or loops [9]. They are essential for electric circuit design. There is no apparent quantum analog as quantum circuits do not form loops, despite some explorations [80]. Namely, an output from a quantum process cannot be an input again unless it is trivial, i.e., it is a fixed point of the process. This relates to the quantum closed timelike curve [81]. However, using Bell states and Bell measurements, loops can be formed [17], as we have seen that a Bell state or ebit is expressed as half of a loop. The tricky part is that there are Pauli byproducts in Bell measurements. Also, using measurements makes the process non-unitary. At present, it is unclear what could be the proper quantum notion of a loop, leading to a quantum analog of classical sequential circuits.
## Acknowledgements
This work is funded by the National Natural Science Foundation of China under Grants 12047503 and 12105343.
|
2304.12338 | The MillenniumTNG Project: The impact of baryons and massive neutrinos
on high-resolution weak gravitational lensing convergence maps | We study weak gravitational lensing convergence maps produced from the
MillenniumTNG (MTNG) simulations by direct projection of the mass distribution
on the past backwards lightcone of a fiducial observer. We explore the lensing
maps over a large dynamic range in simulation mass and angular resolution,
allowing us to establish a clear assessment of numerical convergence. By
comparing full physics hydrodynamical simulations with corresponding
dark-matter-only runs we quantify the impact of baryonic physics on the most
important weak lensing statistics. Likewise, we predict the impact of massive
neutrinos reliably far into the non-linear regime. We also demonstrate that the
"fixed & paired" variance suppression technique increases the statistical
robustness of the simulation predictions on large scales not only for time
slices but also for continuously output lightcone data. We find that both
baryonic and neutrino effects substantially impact weak lensing shear
measurements, with the latter dominating over the former on large angular
scales. Thus, both effects must explicitly be included to obtain sufficiently
accurate predictions for stage IV lensing surveys. Reassuringly, our results
agree accurately with other simulation results where available, supporting the
promise of simulation modelling for precision cosmology far into the non-linear
regime. | Fulvio Ferlito, Volker Springel, Christopher T. Davies, César Hernández-Aguayo, Rüdiger Pakmor, Monica Barrera, Simon D. M. White, Ana Maria Delgado, Boryana Hadzhiyska, Lars Hernquist, Rahul Kannan, Sownak Bose, Carlos Frenk | 2023-04-24T18:00:00Z | http://arxiv.org/abs/2304.12338v2 | The Millennium TNG Project: The impact of baryons and massive neutrinos on high-resolution weak gravitational lensing convergence maps
###### Abstract
We study weak gravitational lensing convergence maps produced from the Millennium TNG (MTNG) simulations by direct projection of the mass distribution on the past backwards lightcone of a fiducial observer. We explore the lensing maps over a large dynamic range in simulation mass and angular resolution, allowing us to establish a clear assessment of numerical convergence. By comparing full physics hydrodynamical simulations with corresponding dark-matter-only runs we quantify the impact of baryonic physics on the most important weak lensing statistics. Likewise, we predict the impact of massive neutrinos reliably far into the non-linear regime. We also demonstrate that the "fixed & paired" variance suppression technique increases the statistical robustness of the simulation predictions on large scales not only for time slices but also for continuously output lightcone data. We find that both baryonic and neutrino effects substantially impact weak lensing shear measurements, with the latter dominating over the former on large angular scales. Thus, _both_ effects must explicitly be included to obtain sufficiently accurate predictions for stage IV lensing surveys. Reassuringly, our results agree accurately with other simulation results where available, supporting the promise of simulation modelling for precision cosmology far into the non-linear regime.
keywords: gravitational lensing: weak - methods: numerical - large-scale structure of the Universe
## 1 Introduction
Cosmological observations show that the majority of the present-day energy density of the Universe is composed of two mysterious "dark" components, with \(\approx 70\%\) made up of Dark Energy and \(\approx 25\%\) of Dark Matter; only the remaining \(\approx 5\%\) is baryonic. Understanding the physical nature of these two dark entities is one of the major challenges of modern cosmology.
One particular cosmological probe that can help us to shed light on the dark sector is weak gravitational lensing (hereafter WL; for reviews see, e.g., Bartelmann & Schneider, 2001; Hoekstra & Jain, 2008; Kilbinger, 2015; Mandelbaum, 2018). This effect has already shown its potential in constraining cosmological parameters with the results of Stage-III surveys like the KiDS (Hildebrandt et al., 2016; Heymans et al., 2021), DES (Abbott et al., 2022) and HSC (Aihara et al., 2022). Upcoming Stage-IV WL surveys from Rubin (LSST Science Collaboration et al., 2009), Euclid (Amendola et al., 2018), and Roman (Spergel et al., 2015) will have higher resolution and larger sky coverage. They are poised to increase our knowledge of the dark Universe substantially. In order to exploit the full potential of such surveys it is nevertheless necessary to have access to sufficiently accurate theoretical predictions for WL.
Numerical simulations are the main tool for investigating the non-linear regime of WL. They offer a powerful way to identify potential systematics in observations. Even more importantly, simulations including the physics of baryons and/or massive neutrinos are required to understand how their effects impact the angular power spectrum and other WL observables. Motivated by this, several numerical methodologies for studying WL have been developed in the last decades. Among these are ray-tracing algorithms (e.g. Hilbert et al., 2009), the production of full-sky maps (e.g. Fabbian et al., 2018) and on-the-fly computation (e.g. Barreira et al., 2016) of WL. Different numerical codes implementing these approaches have been compared in Hilbert et al. (2020), which found them to produce consistent results provided certain resolution requirements are met.
Upcoming observations will, however, require WL simulations with very high angular resolution that go beyond modeling based on CDM alone and on purely gravitational interactions. This is why
there is now increasing interest in WL predictions from high-fidelity cosmological simulations including additional components. Some recent papers along these lines include Osato et al. (2021), Coulton et al. (2020) and Gouin et al. (2019), which focus on the impact of baryons, as well as Fong et al. (2019) and Liu et al. (2018) who study the impact of neutrinos. Their results suggest that both baryonic and neutrino effects should be included when interpreting data from upcoming stage-IV surveys.
Previous work has also demonstrated that the WL signal contains important information beyond that in its two-point statistics (Van Waerbeke et al., 2001; Bernardeau et al., 2002; Kilbinger and Schneider, 2005); this information can help break degeneracies in the cosmological parameter space, especially that between \(\sigma_{8}\) and \(\Omega_{m}\), thus shedding light on the so-called \(S_{8}\) tension (see e.g. Asgari et al., 2020). Different higher-order observables have been considered, with popular examples including counts of peaks and minima in the convergence field (Davies et al., 2022; Coulton et al., 2020; Fluri et al., 2018), the one-point PDF of convergence (Thiele et al., 2020; Liu and Madhavacheril, 2019), three-point correlations (Jung et al., 2021; Dodelson and Zhang, 2005), and Minkowski functionals (Shirasaki and Yoshida, 2014; Kratochvil et al., 2012).
In this paper we introduce our own method for computing high-resolution full-sky WL convergence maps, starting from the mass-shell outputs produced by our simulation set; we also present a way of efficiently partitioning a full-sky map into smaller square patches, which is based on the Fibonacci sphere distribution. We apply our WL machinery to the MillenniumTNG (MTNG) state-of-the-art simulation suite to study the impact of baryonic physics, massive neutrinos, and angular resolution. We also test how the use of fixed and paired initial conditions (see Angulo and Pontzen, 2016) can improve the statistical robustness of WL simulations obtained from simulation boxes of limited size. The observables considered in this study are the angular power spectrum of the WL convergence, its one-point probability distribution function (PDF), and peaks and minima counts in the corresponding maps.
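As a sketch of the point distribution underlying the patch centres (the actual MTNG partitioning builds the square patches on top of this and may differ in detail), a Fibonacci sphere can be generated in Python as follows:

```python
import numpy as np

def fibonacci_sphere(n):
    """Quasi-uniform points on the unit sphere from the golden-angle (Fibonacci) lattice."""
    i = np.arange(n)
    golden = (1.0 + np.sqrt(5.0)) / 2.0
    phi = 2.0 * np.pi * i / golden            # longitudes stepped by the golden angle
    z = 1.0 - (2.0 * i + 1.0) / n             # latitudes uniform in cos(theta)
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

centres = fibonacci_sphere(256)               # e.g. 256 patch centres covering the sky
print(centres.shape, np.allclose(np.linalg.norm(centres, axis=1), 1.0))
```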
This study is part of the introductory paper set of the MTNG project. In Hernandez-Aguayo et al. (2022), the technical aspects of the simulations are introduced together with a high-level analysis of matter and halo statistics. Pakmor et al. (2022) provide more detail of the hydrodynamical simulations, focussing, in particular, on the galaxy cluster population. Barrera et al. (2022) present an updated version of the L-Galaxies semi-analytic modeling code and apply it to obtain lightcone output for the dark-matter-only simulations. Hadzhiyska et al. (2022a,b) present improved halo occupation distribution models for the halo-galaxy connection, focusing on the one-halo and two-halo terms, respectively. Bose et al. (2022) analyze galaxy clustering, in particular as a function of colour selection. Delgado et al. (2023) investigate the intrinsic alignment of galaxy shapes and large-scale structure, and how it is affected by baryonic physics. Kannan et al. (2022) study the properties of the predicted galaxy population at \(z>8\) in the full-hydro run. Finally, Contreras et al. (2022) show how the cosmological parameters of MTNG can be recovered from mock SDSS-like galaxy samples, using an extended subhalo abundance matching technique combined with a fast-forward prediction model.
This paper is organized as follows. In the remainder of this section, we introduce the mathematical formalism for weak lensing. In Section 2 we describe the methods we use to compute our WL maps and the associated observables. In particular, after giving an overview of the MillenniumTNG simulation suite (Sec. 2.1), we describe the "mass-shell" outputs (Sec. 2.2) and how these are used in our code to produce WL convergence maps (Sec. 2.3). We then introduce our method for partitioning a full-sky map efficiently into square patches via the Fibonacci sphere distribution (Sec. 2.4), and we briefly describe how the observables are extracted from the maps (Sec. 2.5). In Section 3, we begin by comparing results from maps with different angular resolution (Sec. 3.1). We then show the impact of baryonic effects (Sec. 3.2) and massive neutrinos (Sec. 3.3) on WL statistics. Lastly, we study the extent to which the use of fixed and paired initial conditions improves statistical robustness (Sec. 3.4). In Section 4 we compare our findings on the impact of baryons and massive neutrinos to results from similar recent studies. Finally, in Section 5 we summarise our findings, concluding that WL simulations aimed at informing stage-IV surveys must have high angular resolution and correctly model both baryonic and neutrino effects.
### Weak gravitational lensing formalism
During its travel from the source to the observer, light is deflected by the gravity of structures present along the path. Let us consider a Friedmann-Lemaitre-Robertson-Walker universe with weak perturbations and denote with \(\mathbf{\theta}\) and \(\mathbf{\beta}\), respectively, the true and the observed position angles of a source which is located at a comoving line-of-sight distance \(\chi_{\rm s}\) from the observer, corresponding to a redshift \(z_{\rm s}=z(\chi_{\rm s})\). These two angles are related through the Newtonian gravitational potential \(\Phi\) by means of the lens equation:
\[\mathbf{\beta}(\mathbf{\theta},z_{\rm s})=\mathbf{\theta}-\frac{2}{{\rm c}^{2}}\int_{0}^{ \chi_{\rm s}}{\rm d}\chi_{\rm d}\,\frac{f_{\rm ds}}{f_{\rm d}f_{\rm s}}\mathbf{ \nabla}_{\mathbf{\beta}}\Phi(\mathbf{\beta}(\mathbf{\theta},\chi_{\rm d}),\chi_{\rm d},z_ {\rm d})\,, \tag{1}\]
where \({\rm c}\) is the speed of light, \(f_{K}(\chi)\) is the comoving angular diameter distance related to the comoving line-of-sight distance \(\chi\), and thus \(f_{\rm ds}=f_{K}(\chi_{\rm s}-\chi_{\rm d})\), \(f_{\rm d}=f_{K}(\chi_{\rm d})\) and \(f_{\rm s}=f_{K}(\chi_{\rm s})\), and \(\nabla_{\mathbf{\beta}}\) is the gradient with respect to the angular position \(\mathbf{\beta}\). In the flat sky approximation, the Jacobian
\[\frac{\partial\mathbf{\beta}}{\partial\mathbf{\theta}}=\begin{pmatrix}1-\kappa-\gamma_{ 1}&-\gamma_{2}-\omega\\ -\gamma_{2}+\omega&1-\kappa+\gamma_{1}\end{pmatrix} \tag{2}\]
can be written in terms of the lensing convergence \(\kappa\), the lensing shear \(\gamma=\gamma_{1}+i\gamma_{2}\), and the lensing rotation \(\omega\). Equation (1) can be differentiated with respect to \(\mathbf{\theta}\), yielding:
\[\frac{\partial\beta_{i}(\mathbf{\theta},z_{\rm s})}{\partial\theta_{j}}=\delta_{ij}-\frac{2}{{\rm c}^{2}}\int_{0}^{\chi_{\rm s}}{\rm d}\chi_{\rm d}\,\frac{f_{\rm ds}}{f_{\rm d}f_{\rm s}}\,\frac{\partial^{2}\Phi(\mathbf{\beta}(\mathbf{\theta},\chi_{\rm d}),\chi_{\rm d},z_{\rm d})}{\partial\beta_{i}\partial\beta_{k}}\,\frac{\partial\beta_{k}(\mathbf{\theta},\chi_{\rm d})}{\partial\theta_{j}}\,, \tag{3}\]
where we introduced the Kronecker delta symbol \(\delta_{ij}\). We now apply the Born approximation, that is, we carry out the integral of the gradient of the potential along the unperturbed radial path \(\mathbf{\theta}\), rather than along the actual light path, yielding:
\[\frac{\partial\beta_{i}(\mathbf{\theta},z_{\rm s})}{\partial\theta_{j}}=\delta_{ij}- \frac{2}{{\rm c}^{2}}\int_{0}^{\chi_{\rm s}}{\rm d}\chi_{\rm d}\,\frac{f_{\rm ds }}{f_{\rm d}f_{\rm s}}\frac{\partial^{2}\Phi(\mathbf{\theta},\chi_{\rm d},z_{\rm d} )}{\partial\theta_{i}\partial\theta_{j}}\,. \tag{4}\]
This approximation is valid for the small deflections of light rays expected in the weak lensing regime.
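For completeness, we note the standard intermediate step that connects Eq. (4) to the convergence: taking the trace of Eq. (4) and comparing it with Eq. (2), for which \(\mathrm{tr}\,(\partial\mathbf{\beta}/\partial\mathbf{\theta})=2(1-\kappa)\), gives

\[\kappa(\mathbf{\theta},z_{\rm s})=\frac{1}{{\rm c}^{2}}\int_{0}^{\chi_{\rm s}}{\rm d}\chi_{\rm d}\,\frac{f_{\rm ds}}{f_{\rm d}f_{\rm s}}\left(\frac{\partial^{2}}{\partial\theta_{1}^{2}}+\frac{\partial^{2}}{\partial\theta_{2}^{2}}\right)\Phi(\mathbf{\theta},\chi_{\rm d},z_{\rm d})\,.\]

Rewriting the angular derivatives as comoving transverse ones, \(\partial/\partial\theta_{i}=f_{\rm d}\,\partial/\partial x_{\perp,i}\), then allows the Poisson equation to be applied in the next step.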
We can now make use of the Poisson equation for the gravitational potential \(\Phi\) and neglect boundary terms at the observer and source positions to obtain the following expression for the convergence:
\[\kappa(\mathbf{\theta},z_{\rm s})=\int_{0}^{\chi_{\rm s}}{\rm d}\chi_{\rm d}\,q_{ \rm ds}\,\delta_{\rm m}(\mathbf{\theta},\chi_{\rm d},z_{\rm d})\,, \tag{5}\]
where \(\delta_{\rm m}\) is the density contrast and we introduced the lensing efficiency factor \(q_{\rm ds}\), defined as:
\[q_{\rm ds}=\frac{3H_{0}^{2}\Omega_{m}}{2{\rm c}^{2}}(1+z_{\rm d})f_{\rm d} \frac{f_{\rm ds}}{f_{\rm s}}\,. \tag{6}\]
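As a concrete illustration, Eq. (6) can be evaluated numerically. The following is a minimal sketch, not code from our actual pipeline; it assumes a flat universe (so that \(f_K(\chi)=\chi\)) and uses astropy with the MTNG740 cosmological parameters quoted later in Sec. 2.1:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.74, Om0=0.3089)  # MTNG740 values (Sec. 2.1)
C_KMS = 299792.458                           # speed of light [km/s]

def lensing_efficiency(z_d, z_s):
    """q_ds of Eq. (6) in units of 1/Mpc, for a flat universe."""
    chi_d = cosmo.comoving_distance(z_d).value   # f_d [Mpc]
    chi_s = cosmo.comoving_distance(z_s).value   # f_s [Mpc]
    # prefactor 3 H0^2 Omega_m / (2 c^2), in 1/Mpc^2
    prefactor = 1.5 * cosmo.Om0 * (cosmo.H0.value / C_KMS) ** 2
    return prefactor * (1.0 + z_d) * chi_d * (chi_s - chi_d) / chi_s
```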
By assuming statistical isotropy and applying a Limber-type approximation (Limber, 1953; LoVerde and Afshordi, 2008), one can furthermore obtain an equation that connects the angular power spectrum of the convergence \(C_{\kappa}(\ell)\) to the three-dimensional matter power spectrum \(P_{\rm m}\) (see, e.g., Hilbert et al., 2020, for the complete derivation):
\[C_{\kappa}(\ell)=\int_{0}^{\chi_{\rm s}}{\rm d}\chi_{\rm d}\,\frac{q_{\rm ds}^{2}}{f_{\rm d}^{2}}\,P_{\rm m}(\ell/\chi_{\rm d},z_{\rm d})\,, \tag{7}\]
where \(P_{\rm m}(\ell/\chi_{\rm d},z_{\rm d})\) is the matter power spectrum evaluated at redshift \(z_{\rm d}\) and wave number \(k=\ell/\chi_{\rm d}\). This last equation shows how the convergence power spectrum mixes different 3D \(k\)-modes into 2D \(\ell\)-modes through the line-of-sight integration. It is possible to show that with the approximations made so far, i.e. Limber, flat-sky, and Born, the angular power spectra of the shear (E and B modes), convergence, and rotation, are related as follows:
\[C_{\gamma}^{\rm(EE)}(\ell) =C_{\kappa}(\ell)\,, \tag{8a}\] \[C_{\gamma}^{\rm(BB)}(\ell) =C_{\omega}(\ell)=0\,. \tag{8b}\]
Therefore in the present work, the WL convergence will be the only quantity taken into consideration.
## 2 Methods
### The MTNG project
The MillenniumTNG (MTNG) project is based on a suite of high-resolution cosmological structure formation simulations. The project focuses on the connection between galaxy formation and large-scale structure by combining the statistical power reached with the large box size of the Millennium simulation (Springel et al., 2005) with the high mass-resolution and sophisticated baryonic physics modeling of the IllustrisTNG project (Nelson et al., 2018; Springel et al., 2018; Marinacci et al., 2018; Pillepich et al., 2018; Naiman et al., 2018; Pillepich et al., 2019; Nelson et al., 2019). The goal of this synthesis, which inspired the name of the MTNG project, is to realize accurate and reliable theoretical predictions for galaxy formation throughout volumes large enough to be adequate for the upcoming surveys of cosmic large-scale structure.
The initial conditions of MTNG were generated at \(z=63\) with an updated version of the N-GenIC code, directly incorporated in Gadget-4. The algorithm is based on second-order Lagrangian perturbation theory, and the input linear theory power spectrum was the same as the one used for the IllustrisTNG simulations (based on Planck15 cosmological parameters). A new transfer function with updated cosmological parameters was adopted for the simulations with massive neutrinos.
The dark matter (DM)-only simulations were run with the Gadget-4 code (Springel et al., 2021), using the variance-suppression technique introduced by Angulo and Pontzen (2016), so that for every resolution there are two simulations (which we refer to as A- and B-series) whose initial conditions are characterized by perturbations with opposite phases but the same amplitude, fixed to the _rms_ value predicted by the power spectrum. The hydrodynamical simulations start from the same initial conditions as the DM A-series and were performed with the moving-mesh Arepo code, featuring the same galaxy formation model as IllustrisTNG (Weinberger et al., 2017; Pillepich et al., 2018), modulo very small changes1.
Footnote 1: Magnetic fields were not included, and the metallicity tracking was simplified. Both were necessary to reduce the memory consumption of the production run to make it fit into the available memory.
The main characteristics of the MTNG simulations that are primarily used in this work are summarised in Table 1. For the bulk of our analysis, and in particular for studying the impact of resolution and baryons, we use a box size of \(500\,h^{-1}{\rm Mpc}\simeq 740\,{\rm Mpc}\). For the cosmological parameters, we use the Planck Collaboration (2016) cosmology, which is consistent with what had been used for IllustrisTNG: \(\Omega_{\rm m}=\Omega_{\rm cdm}+\Omega_{\rm b}=0.3089\), \(\Omega_{\rm b}=0.0486\), \(\Omega_{\Lambda}=0.6911\), \(h=0.6774\), \(\sigma_{8}=0.8159\) and \(n_{s}=0.9667\). For the study of the impact of massive neutrinos, we use a slightly smaller box size of \(430\,h^{-1}{\rm Mpc}\simeq 630\,{\rm Mpc}\), and updated cosmological parameters that also take the different neutrino masses into account (Abbott et al., 2022). We consider three cases for the neutrino masses, \(\Sigma m_{\nu}=0\) meV (massless), 100 meV and 300 meV (see Table 1). The reader is referred to Hernandez-Aguayo et al. (2022) and Pakmor et al. (2022) for a more detailed description of the MTNG simulations. We simulate the effect of massive neutrinos using the so-called \(\delta f\) method introduced in Elbers et al. (2021), however, we refer to Hernandez-Aguayo et al. (2023, in prep) for a detailed description of the technical aspects of the simulations with neutrinos.
### Mass-shell outputs
Along with snapshot and lightcone data (see Hernandez-Aguayo et al., 2022, for further details), the MTNG simulations provide "mass-shell" outputs, introduced as a new feature in Gadget-4. These consist of a series of onion-shell-like full-sky maps built on-the-fly which store the line-of-sight projected matter field of the full-sky lightcone. Each shell consists of a HEALPix map (Hivon et al., 1999), where each pixel contains, in turn, the total mass of all particles
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline Type & Run name & Series & Box size & \(N_{\rm cdm}\) & \(N_{\rm gas}\) & \(N_{\nu}\) & Mass-shell \(N_{\rm side}\) & \(\sum m_{\nu}\) & \(\epsilon_{\rm cdm}\) \\ & & & \([h^{-1}{\rm Mpc}]\) & & & & & [eV] & \([h^{-1}{\rm kpc}]\) \\ \hline DM only & MTNG740-DM-1 & A/B & 500 & \(4320^{3}\) & \(-\) & \(-\) & 12288 & \(-\) & 2.5 \\ & MTNG740-DM-2 & A/B & 500 & \(2160^{3}\) & \(-\) & \(-\) & 8192 & \(-\) & 5 \\ & MTNG740-DM-3 & A/B & 500 & \(1080^{3}\) & \(-\) & \(-\) & 4096 & \(-\) & 10 \\ & MTNG740-DM-4 & A/B & 500 & \(540^{3}\) & \(-\) & \(-\) & 2048 & \(-\) & 20 \\ & MTNG740-DM-5 & A/B & 500 & \(270^{3}\) & \(-\) & \(-\) & 1024 & \(-\) & 40 \\ \hline Hydro & MTNG740-1 & A & 500 & \(4320^{3}\) & \(4320^{3}\) & \(-\) & 12288 & \(-\) & 2.5 \\ \hline Neutrinos & MTNG3000-DM-0.1\(\nu\) & A & 2040 & \(10240^{3}\) & \(-\) & \(2560^{3}\) & 12288 & 0.1 & 4 \\ & MTNG630-DM-0.3\(\nu\) & A/B & 430 & \(2160^{3}\) & \(-\) & \(540^{3}\) & 12288 & 0.3 & 4 \\ & MTNG630-DM-0.1\(\nu\) & A/B & 430 & \(2160^{3}\) & \(-\) & \(540^{3}\) & 12288 & 0.1 & 4 \\ & MTNG630-DM-0.0\(\nu\) & A/B & 430 & \(2160^{3}\) & \(-\) & \(540^{3}\) & 12288 & 0.0 & 4 \\ \hline \end{tabular}
\end{table}
Table 1: Specifications of the simulations of the MillenniumTNG project used in this paper.
that intersect the lightcone's time-variable hypersurface2 at a comoving distance that falls within the shell boundaries, and at an angular position that falls within the solid angle corresponding to the pixel. For all the simulations of the MTNG suite, we fixed the comoving depth of these shells to \(25\,h^{-1}{\rm Mpc}\). The angular resolution of a HEALPix map is set by the \(N_{\rm side}\) parameter, which determines the total number of pixels through \(N_{\rm pix}=12\,N_{\rm side}^{2}\). For simulations with increasing mass resolution, we typically constructed mass-shells with increasing \(N_{\rm side}\), as can be seen in Table 1. The highest angular resolution we reach with the mass-shells is \(0.28\) arcmin, given by \(N_{\rm side}=12288\), which corresponds to approximately 1.8 billion pixels in the sky.
Footnote 2: This is simply the spherical surface whose comoving radius varies with time according to the finite propagation speed of light, and reaches the observer at the present time.
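As a quick check of the quoted numbers, the mean angular size of a HEALPix pixel follows directly from \(N_{\rm pix}=12\,N_{\rm side}^{2}\); the short sketch below reproduces the \(\approx 0.28\)–\(0.29\) arcmin figure for \(N_{\rm side}=12288\) (the small difference to the quoted value is rounding):

```python
import numpy as np

nside = 12288
npix = 12 * nside ** 2                     # ~1.81e9 pixels over the full sky
pix_area_sr = 4.0 * np.pi / npix           # HEALPix pixels have equal area
res_arcmin = np.degrees(np.sqrt(pix_area_sr)) * 60.0
print(f"N_pix = {npix:.3g}, mean pixel scale = {res_arcmin:.3f} arcmin")
```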
### Computation of full-sky convergence maps
Starting from the mass-shell output, we developed a Gadget-4 post-processing Python package for the computation of full-sky convergence maps in the Born approximation. This works as follows. The \(i\)-th mass shell can be converted into an angular surface mass density distribution \(\Sigma\) by dividing the mass at each pixel's angular position by the area of each pixel in steradians (given by \(A_{\rm pix}=4\pi/N_{\rm pix}\), since HEALPix has equal-area pixels):
\[\Sigma^{(i)}(\mathbf{\theta})=\frac{M(\mathbf{\theta})}{A_{\rm pix}}. \tag{9}\]
Every shell is then treated as a lens. For a fixed source redshift \(z_{s}\), the convergence in the Born approximation will be given by integrating over the surface mass density at every lens plane (i.e. at every shell) between the source and the observer, weighted by the lensing efficiency factor:
\[\kappa(\mathbf{\theta},\chi_{s})=\frac{4\pi\mathrm{G}}{\mathrm{c}^{2}}\frac{1}{f_{s}}\sum_{i}(1+z_{\rm d}^{(i)})\frac{f_{\rm ds}^{(i)}}{f_{\rm d}^{(i)}}\left[\Sigma^{(i)}(\mathbf{\theta})-\bar{\Sigma}^{(i)}\right]\,, \tag{10}\]
where \(\bar{\Sigma}^{(i)}\) is the mean angular surface mass density of the \(i\)-th shell. In order to optimize computational efficiency, this calculation is parallelized with mpi4py (Dalcin & Fang, 2021).
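A minimal serial sketch of Eqs. (9) and (10) is given below, complementing the parallelized pipeline described above; the function signature and array layout are hypothetical, and a flat universe is assumed so that \(f_K(\chi)=\chi\):

```python
import numpy as np

G_MPC = 4.30091e-9      # gravitational constant [Mpc (km/s)^2 / Msun]
C_KMS = 299792.458      # speed of light [km/s]

def born_convergence(shell_masses, z_shells, chi_shells, chi_s, npix):
    """Convergence map in the Born approximation, Eqs. (9)-(10).

    shell_masses : iterable of HEALPix arrays with the mass per pixel [Msun]
    z_shells, chi_shells : lens redshift and comoving distance [Mpc] per shell
    chi_s : comoving distance to the source plane [Mpc]
    """
    a_pix = 4.0 * np.pi / npix             # equal-area HEALPix pixels [sr]
    kappa = np.zeros(npix)
    for mass, z_d, chi_d in zip(shell_masses, z_shells, chi_shells):
        if chi_d >= chi_s:
            break                          # only lenses in front of the source
        sigma = mass / a_pix               # angular surface density, Eq. (9)
        kappa += (1.0 + z_d) * (chi_s - chi_d) / chi_d * (sigma - sigma.mean())
    return 4.0 * np.pi * G_MPC / C_KMS ** 2 / chi_s * kappa
```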
### Partitioning into square patches
Once a full-sky map is created, one may want to partition it into smaller non-overlapping square patches in order to simplify the analysis. Performing this operation in an efficient way, i.e. covering as much as possible of the sphere's surface while avoiding overlap, is not a trivial task. An example is found in Davies et al. (2019), where a HEALPix-based partitioning is performed to extract 192 maps with size \(10\times 10\,\mathrm{deg}^{2}\); this scheme covers \(\approx 47\%\) of the sphere's surface.
In this work, we introduce a new and more efficient way of partitioning the sphere into smaller square maps. This is directly inspired by a botanical phenomenon known as _phyllotaxis_ (from Greek "leaf arrangement"), which refers to the way in which plants arrange their repeating parts (leaves, seeds, florets, etc.) in order to maximize the space occupation (see e.g. Conway & Guy, 1996, p. 113). It turns out that in many cases (e.g. for dandelion seeds or the florets on a sunflower head) the spatial distribution of points is mathematically described by the so-called Fibonacci grid. As shown in Swinbank & Purser (2006), the spherical coordinates which describe the \(i\)-th point on a Fibonacci grid with a total of \(2N+1\) points are given by,
\[\sin\theta_{i}=\frac{2i}{2N+1},\ \phi_{i}=\frac{2\pi i}{\varphi},\ -N\leq i\leq N,\ -\pi/2\leq\theta_{i}\leq\pi/2, \tag{11}\]
where \(\varphi\approx 1.618\) is the golden ratio. We use these coordinates as the centers of our maps. In addition, for square patches, we find that the coverage of the sphere is maximized when one diagonal of the squares lies on a meridian. Using this method, we place \(1195\) square patches of size \(5\times 5\,\mathrm{deg}^{2}\), thereby covering \(\approx 72\%\) of the sphere's solid angle (the same approximate percentage would also be reached in the case of \(10\times 10\,\mathrm{deg}^{2}\) square patches). The arrangement of
Figure 1: Orthographic projection from the side (left) and from above (right) of the 1195 square maps with size \(5\times 5\,\mathrm{deg}^{2}\) that we extract from a full-sky map. The method we use is based on the Fibonacci grid and manages to cover \(\approx 72\%\) of the sphere surface.
Figure 2: Lensing convergence power spectra (upper panel) of MTNG740-DM-1B obtained with our code from \(N=1195\) non-overlapping \(5\times 5\,\mathrm{deg}^{2}\) square maps (light blue line), and from the full-sky map (green line). These are compared with the convergence power spectrum obtained starting from the redshift-dependent non-linear 3D matter power spectra of the simulation (yellow line) and from 3D power spectra given by the Halofit formula (black line). In both cases, the 3D matter power spectra are integrated according to Eq. (7). Ratios relative to the full-sky map are shown in the lower panel.
spherical squares is shown in Figure 1. Every square patch we extract is sampled on a regular grid with \(2048^{2}\) pixels, resulting in a pixel size of about \(0.14\,\mathrm{arcmin}\).
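A minimal sketch of Eq. (11) for generating the patch centres is given below; the orientation of the squares, with one diagonal along a meridian, would be applied in a subsequent step:

```python
import numpy as np

def fibonacci_patch_centers(n_patches=1195):
    """Centres of square patches on a Fibonacci grid, Eq. (11).

    n_patches must be odd (n_patches = 2N + 1); returns latitudes and
    longitudes in radians.
    """
    golden = (1.0 + np.sqrt(5.0)) / 2.0       # the golden ratio, ~1.618
    N = (n_patches - 1) // 2
    i = np.arange(-N, N + 1)
    lat = np.arcsin(2.0 * i / (2 * N + 1))    # theta_i in [-pi/2, pi/2]
    lon = np.mod(2.0 * np.pi * i / golden, 2.0 * np.pi)  # phi_i
    return lat, lon
```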
### Computation of the observables
We compute angular power spectra by means of the HEALPix anafast routine. This operation has been performed for maps with resolution up to \(N_{\mathrm{side}}=8192\), which marks the maximum resolution for which the HEALPix library is able to perform a spherical harmonics decomposition. In the case of square patches, the power spectra are calculated with Fourier transforms on a regular grid with \(2048^{2}\) pixels in the flat-sky approximation, which is valid for the small field-of-view covered by their relatively small area (\(5\times 5\,\mathrm{deg}^{2}\)). The full-sky spectra are then binned into 80 equally spaced logarithmic bins in the range \(\ell\in[10^{0},10^{4}]\). The spectra extracted from the square patches are binned into 20 equally spaced logarithmic bins in the range \(\ell\in[10^{2},10^{4}]\). Before computing the probability distribution function for the convergence, and its peaks and minima statistics, all the square maps are smoothed with a Gaussian kernel characterized by a standard deviation of 1 arcmin. We compute the PDF in 50 linearly spaced convergence bins in the range \(\kappa\in[-0.05,0.1]\). We identify peaks and minima as pixels in the maps that are greater or smaller than their 8 nearest neighboring pixels, respectively. We bin the peak counts into 50 equally spaced bins with \(\kappa\in[-0.1,0.25]\), and the minima counts into 50 equally spaced bins with \(\kappa\in[-0.07,0.06]\). We have fewer maps for the case that includes baryons, as explained in Sec. 3.2; therefore the peak counts are binned into 12 equally spaced bins with \(\kappa\in[-0.02,0.1]\) and the minima counts into 16 equally spaced bins with \(\kappa\in[-0.07,0.06]\). Unless stated otherwise, all the observables are computed for a source redshift of \(z_{s}=1.0\). Finally, we do not include galaxy shape noise in this analysis, as the focus of this paper is to investigate the properties of the physical signal.
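The following sketch illustrates the smoothing and the 8-neighbour extremum criterion described above for a single square patch; it uses scipy as a stand-in for whatever routines the actual pipeline employs:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def peaks_and_minima(kappa_patch, sigma_pix):
    """Peak/minimum values of a smoothed convergence patch (cf. Sec. 2.5).

    kappa_patch : 2D array sampled on a regular grid
    sigma_pix   : std of the Gaussian smoothing kernel in pixels
    """
    sm = gaussian_filter(kappa_patch, sigma_pix)
    # a pixel is flagged when it equals the local 3x3 extremum; this also
    # flags plateau pixels, which a strict ">8 neighbours" criterion excludes
    is_peak = sm == maximum_filter(sm, size=3)
    is_min = sm == minimum_filter(sm, size=3)
    interior = np.s_[1:-1, 1:-1]           # drop the patch boundary
    return sm[interior][is_peak[interior]], sm[interior][is_min[interior]]
```

With the \(\approx 0.14\) arcmin pixels of Sec. 2.4, the 1 arcmin smoothing corresponds to `sigma_pix` of roughly 7.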
## 3 Results
We begin the presentation of our results with the following sanity check shown in Figure 2. We compute the convergence power spectrum in four different ways:
* We take the average of the convergence power spectrum computed on a large number of \(5\times 5\,\mathrm{deg}^{2}\) square maps extracted from the MTNG740-DM-1-A full-sky map.
* We compute the angular power spectrum of the full-sky map of the MTNG740-DM-1-A simulations by means of the HEALPix anafast routine.
* We use Eq. (7) to obtain the convergence power spectrum by integrating over the 3D matter power spectra measured for MTNG740-DM-1-A at the discrete set of snapshot times (a numerical sketch of this integration is given after this list).
* Finally, we use the same approach but plug in the 3D matter power spectrum as predicted by the Halofit fitting formula (Takahashi et al., 2012) using the CLASS code (Blas et al., 2011).
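A minimal sketch of the Limber integration of Eq. (7) is shown below, assuming a flat universe and an externally supplied interpolator `pk_interp` for the non-linear matter power spectrum (a hypothetical interface, e.g. wrapping CLASS output or tabulated simulation spectra); the redshift grid should start above \(z=0\) to avoid the \(\chi\to 0\) endpoint:

```python
import numpy as np
from scipy.integrate import simpson
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.74, Om0=0.3089)
C_KMS = 299792.458

def limber_cl(ells, z_s, z_grid, pk_interp):
    """C_kappa(ell) from Eq. (7) for a flat universe, f_K(chi) = chi.

    pk_interp(k, z) : non-linear matter power spectrum in Mpc^3
    z_grid : increasing redshifts with z_grid[0] > 0 and z_grid[-1] = z_s
    """
    chi = cosmo.comoving_distance(z_grid).value      # [Mpc]
    chi_s = cosmo.comoving_distance(z_s).value
    # lensing efficiency q_ds of Eq. (6), in 1/Mpc
    q = (1.5 * cosmo.Om0 * (cosmo.H0.value / C_KMS) ** 2
         * (1.0 + z_grid) * chi * (chi_s - chi) / chi_s)
    cl = []
    for ell in np.atleast_1d(ells):
        pk = np.array([pk_interp(ell / x, z) for x, z in zip(chi, z_grid)])
        cl.append(simpson(q ** 2 / chi ** 2 * pk, x=chi))
    return np.array(cl)
```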
Figure 3: The top left shows a full-sky convergence map with \(z_{s}=1.0\) computed with our code from DM-only runs with the same initial conditions but increasing resolution both in mass and in angle; the zoom on the top right focuses on a single \(5\times 5\,\mathrm{deg}^{2}\) square patch. The bottom panels show a further zoom onto a square region of \(0.5\times 0.5\,\mathrm{deg}^{2}\); these all represent the same region with increasing resolution from left to right.
As Figure 2 shows, we find quite good agreement between the four spectra. Those computed from the full-sky map and from the square patches differ by less than \(2.5\%\) over \(100\lesssim\ell\lesssim 4000\), indicating the validity of the flat-sky approximation in this regime. The increasing discrepancy at smaller angular scales, i.e. for \(\ell\gtrsim 4000\), is consistent with our predictions for different angular resolutions of the maps, as we will discuss in detail in the next section. The loss of power for Halofit (black curve) at \(\ell\gtrsim 4000\) is most likely explained by the fact that this model was calibrated on simulations with lower resolution.
### Numerical resolution study
In Figure 3 we show an example of a full-sky convergence map computed with our code. We zoom in on such a map to show an example of an extracted \(5\times 5\,{\rm deg}^{2}\) square patch. A further zoom-in to a \(0.5\times 0.5\,{\rm deg}^{2}\) square region is performed to give a visual impression of how the different angular resolutions look at the smallest scales. By comparing this zoomed region for the \(N_{\rm side}=1024,2048\) and the \(N_{\rm side}=8192,12288\) cases, one can see that information on structures on the smallest angular scales is progressively lost as the resolution becomes lower. In this subsection, we investigate how this loss of angular resolution, in combination with reduced mass
Figure 4: Top left: WL convergence power spectrum; top right: WL convergence PDF; bottom left: WL convergence peak counts; bottom right: WL convergence minimum counts. All the observables are computed on the B realization of the MTNG740-DM runs taking \(z_{s}=1.0\). The solid lines indicate the mean of 1195 \(5\times 5\,{\rm deg}^{2}\) square maps with increasing darkness representing increasing resolution both in mass and \(N_{\rm side}\). The dotted line refers to the case with the highest mass resolution but down-sampled to \(N_{\rm side}=1024\). In each lower sub-panel, we show the ratio w.r.t. the reference case with \(N_{\rm part}=4320^{3}\) and \(N_{\rm side}=12288\) (noted with the subscript “ref”).
resolution in the simulation itself, can impact weak lensing observables extracted from the corresponding convergence maps. All the following observables are computed as averages of the 1195 square maps of size \(5\times 5\deg^{2}\) extracted from the full-sky maps from the B realization of the MTNG740-DM runs.
The first observable we study is the convergence power spectrum, shown in the top left panel of Figure 4. The solid lines refer to simulations with increasing resolution both in mass and in angle. This shows that decreasing the resolution (both in mass and angle) reduces the power at progressively larger scales. We verified that this effect is generally independent of \(z_{s}\) over the range \(z_{s}\in[0.2,3.0]\). In order to understand how much of this reduction in power is due to the decrease in angular resolution and how much is due to the decrease in mass resolution, we now consider the convergence maps of the simulation with the highest mass resolution but down-sample the maps to \(N_{\rm side}=1024\); this is represented by the dotted line. We see that for the fixed angular resolution \(N_{\rm side}=1024\) we obtain essentially the same result, independent of the mass resolution. The small-scale suppression we find with decreasing resolution is consistent with the one found in previous studies (see e.g. Takahashi et al., 2017).
Next, we consider how changes in the resolution impact the WL one-point PDF; the results are shown in the top right panel of Figure 4. First, we see that the PDF is characterized by an asymmetrical shape that reflects the non-Gaussian nature of the WL convergence. The solid lines (which refer to increasing angular and mass resolution) show a broadening of the distribution when the resolution is increased. Since the maps with the higher angular resolution are able to capture the details of the smaller (angular) structures, more extreme convergence values are resolved; as seen in Figure 3, this results in a broader PDF. This explanation is supported by the dotted line (referring to the simulation with the highest mass resolution, but down-sampled to \(N_{\rm side}=1024\)), which is almost indistinguishable from the solid line which refers to maps computed with the same angular resolution but from a simulation with much lower mass resolution. We find that angular resolution affects the convergence PDF similarly for source redshifts over the full range \(0.2\leq z_{s}\leq 3\). The narrowing of the PDF at lower resolution is consistent with the suppression of the power spectrum seen previously. The comparison
Figure 5: Left panel: one of our \(5\times 5\deg^{2}\) square maps at \(z_{s}=0.5\) from MTNG DM-only run. Right panel: the same map but with the map from the corresponding MTNG hydro run (with the same initial conditions) subtracted.
Figure 6: Power spectra of full-sky lensing convergence maps assuming \(z_{s}=0.5\). The red and green lines indicate results for the DM-only and Hydro runs, respectively. The purple line indicates the power spectrum of the difference between the two maps.
between these two indicates that the most extreme values of \(\kappa\) are contained in the smallest angular scales.
Finally, we present the same investigation for the WL peak and minimum counts; the results are shown in the left and right lower panels of Figure 4, respectively. For the peak counts, we find that when increasing resolution in angle and mass, the high-\(\kappa\) tail is more extended and the amplitude of the distribution increases. For the distribution of minimum counts, we also see an increase in amplitude with increasing resolution. In both cases, the suppression of the counts with decreasing resolution is not uniform; in particular, it is stronger for higher \(\kappa\) values. Again, we conclude that the differences are dominated by the angular resolution, since the high-mass-resolution simulation, once down-sampled to \(N_{\rm side}=1024\), is again almost indistinguishable from the low-mass-resolution simulation analyzed with the same \(N_{\rm side}\). As before, we find qualitatively similar results for source redshifts throughout the range \(z_{s}\in[0.2,3.0]\). The results we find for peak and minimum abundance are consistent with what has been seen for the PDF: decreasing the resolution will narrow the PDF, therefore damping the most extreme values of \(\kappa\), which in turn will result in fewer counts of peaks and minima.
### Impact of baryons
In this section, we study the impact of baryonic physics on weak lensing observables. In the left panel of Figure 5, we show a \(5\times 5\,{\rm deg}^{2}\) square patch with \(z_{s}=0.5\) extracted from a full-sky map for MTNG740-DM-1-A. We do not separately show the corresponding convergence map for the hydro run (which was run with the same initial conditions) as the difference with respect to the DM case is almost imperceptible by eye. Instead, in the right panel, we show the difference between the two maps. By comparing the two panels we notice that the regions where the baryonic physics has the strongest impact (redder and bluer areas in the right panel) roughly correspond to regions where the convergence map has high values (lighter areas in the left panel, corresponding to massive structures).
Figure 7: Top left: WL convergence power spectrum; top right: WL convergence PDF; bottom left: WL convergence peak counts; bottom right: WL convergence minimum counts. All the panels show the ratio of the results computed from full-hydro and DM-only runs obtained with our code considering \(z_{s}\in[0.2,1.4]\) with \(\Delta z_{s}=0.2\). The solid lines indicate the mean of \(125\,5\times 5\,{\rm deg}^{2}\) square maps, with the shaded regions representing the standard errors on the means.
We also see that the difference is often characterized by a dipole pattern (neighboring red-blue pairs). This largely reflects the fact that the same objects can end up having slightly different positions when baryonic physics is included and does not necessarily signal a significant difference in internal structure between the two cases.
In order to quantify how baryonic processes affect different angular scales, we compute the power spectrum of the difference between the two full-sky maps; this is shown as a purple line in Figure 6, which is compared to the power spectra of the two individual maps. We see that the power spectrum of the difference map drops rapidly and approximately as a power law towards large scales. The impact of baryonic physics increases strongly towards smaller angular scales over the range of \(\ell\)-values considered here.
We now consider results for the four primary observables considered previously, adopting a set of source redshifts over the range \(z_{s}\in[0.2,1.4]\) with \(\Delta z_{s}=0.2\). For the hydrodynamic simulation MTNG740-1, a code configuration error, unfortunately, caused the loss of the original full-sky mass-shells for \(z>0.5\), which were intended to be produced on-the-fly. However, it proved possible to reconstruct these data partially in post-processing, because the full-particle lightcone of the simulation was stored for one octant of the sky out to \(z=1.5\). Straightforwardly binning this data onto HEALPix arrays thus allows lensing maps to be recovered over 1/8-th of the full sky out to this redshift. While this restricts us to a direct comparison of just 125 square maps (those that fall into the first octant), resulting in a somewhat larger statistical error (as indicated by the shaded regions that give standard errors), this does not
Figure 8: Top left: WL convergence power spectrum; top right: WL convergence PDF; bottom left: WL convergence peak counts; bottom right: WL convergence minimum counts. The orange, pink, and violet curves indicate the mean of \(1195\,5\times 5\,\mathrm{deg}^{2}\) square maps computed on simulations with summed neutrino masses equal to \(0,100,300\,\mathrm{meV}\), respectively. The lower subpanels show the ratio of each distribution to that of the case with zero neutrino masses.
substantially weaken our ability to assess the small-scale impact of baryonic physics.
In the top left panel of Figure 7, we show the ratio between the convergence power spectra of the full-hydro and DM-only runs. We observe a small and almost constant suppression at the larger angular scales and a stronger, scale-dependent suppression at smaller scales. The transition between these two regimes takes place at \(\ell\approx 10^{3}\) and happens at progressively larger \(\ell\) with increasing \(z_{s}\). The overall effect produces a spoon-shaped suppression which reaches \(\approx 15\%\). The dominant component of the power suppression can be explained in terms of feedback from black hole accretion and supernova explosions, which blow away matter from the central regions of the halos. This will primarily affect relatively small physical (and consequently angular) scales, but the associated redistribution of baryons induces an impact also on larger scales, particularly due to AGN, as they are capable of affecting very massive halos. The shift of the spoon feature that we observe can be explained by considering that the physical scale at which the effect of baryons suppresses the power spectrum the most will correspond to smaller angular scales (and therefore higher values of \(\ell\)) for increasing \(z_{s}\).
We show the ratio between the WL PDF in the Hydro and in the DM-only cases in the top right panel of Figure 7. We find a roughly constant \(\approx 5-10\%\) suppression in the high-\(\kappa\) tail for the Hydro run relative to the DM-only run. In the low-\(\kappa\) regime, there is a suppression as well, and this increases dramatically as \(\kappa\) becomes more negative. The central region of the PDF is in turn enhanced by \(\approx 2-3\%\). These changes impact a progressively broader \(\kappa\) range as \(z_{s}\) increases. Finally, we assess the effect of baryonic physics on the WL peaks and minima, shown in the lower left and lower right panels of Figure 7, respectively. In the case of the peak abundance, we observe a suppression of \(\approx 5-15\%\) for \(\kappa\gtrsim 0.02\) which is stronger for decreasing \(z_{s}\), although the results are noisier in the case of lower \(z_{s}\). The distribution of the minima shows suppression in the baryonic case in both the high-\(\kappa\) and low-\(\kappa\) tails, and this effect increases the more \(\kappa\) reaches extreme values. The trend is approximately symmetric and becomes broader in \(\kappa\) as \(z_{s}\) increases.
The effects we observe are consistent with the physical explanation given previously for the power spectrum: feedback processes redistribute matter from denser regions to lower-density regions. This manifests in a narrower PDF, and in a suppression of the peaks and minima counts. In particular, the high-\(\kappa\) peaks are expected to correspond mostly to the presence of galaxy clusters along the line of sight, while the low-\(\kappa\) peaks could be produced by haloes in voids or chance alignments of small haloes along the line of sight. We, therefore, expect the baryons to impact the peak abundance in a \(\kappa\)-dependent fashion. This could help in explaining the upturn we see for increasing \(\kappa\) (for a more detailed discussion we direct the reader to e.g. Liu & Haiman, 2016; Yang et al., 2011; White et al., 2002).
Finally, we notice that the impact of baryonic physics on all the four observables is progressively stronger with decreasing \(z_{s}\): we indeed expect this to happen because, at lower redshifts, baryonic processes have had more time to take place and therefore influence the overall cosmic structure.
### Impact of neutrinos
Another important element that influences structure formation, and therefore WL observables, is the presence of massive neutrino species. In the early Universe, these act as an additional relativistic component, thus delaying the onset of structure formation and suppressing the formation of structures below the free-streaming scale (for reviews, the reader is referred to e.g. Lesgourgues & Pastor, 2006; Wong, 2011). We, therefore, expect massive neutrinos to reduce the WL signal at those scales. Consequently, the use of WL has been suggested as a tool to constrain the neutrino mass (see, e.g., Cooray, 1999). In the following, we show results obtained by comparing MTNG DM-only runs with runs that include neutrino components with different overall mass contributions, corresponding to summed neutrino rest masses of \(\sum m_{\nu}=[0,100,300]\,\mathrm{meV}\).
We start by considering the angular power spectrum, which is shown in the top left panel of Figure 8. We notice that this is suppressed by \(\approx 5\) and \(15-20\%\) for \(\sum m_{\nu}=0.1\) and \(0.3\)\(\mathrm{eV}\), respectively, relative to the massless case. The suppression is slightly greater for intermediate angular scales (\(\ell\approx 1000\)); this effect, which is barely noticeable in the \(\sum m_{\nu}=0.1\)\(\mathrm{eV}\) case, becomes more prominent for \(\sum m_{\nu}=0.3\)\(\mathrm{eV}\). Such an effect is consistent with massive neutrino species suppressing structure formation on small scales. We verified that, as expected, this effect decreases significantly at the smallest \(\ell\)-values.
We show the convergence PDF in the top right panel of Fig. 8. Here the distribution is enhanced in its central region (for \(-0.015\leq\kappa\leq 0.015\)) of order \(\approx 4\) and \(11\%\) for the \(\sum m_{\nu}=0.1\) and \(0.3\)\(\mathrm{eV}\) cases, respectively. On the other hand, we see that the PDF is progressively suppressed in the tails, an effect that gets stronger for higher neutrino masses. Interestingly, we find that the impact of massive neutrinos on the convergence PDF is quite similar to that induced by decreasing the angular resolution of the map, as one can notice by comparing this panel with the top left panel of Fig. 4. The effect we observe is consistent with the physical interpretation given above: massive neutrinos will smooth out the density field, therefore narrowing the PDF of the WL convergence.
Finally, in the bottom panels of Fig. 8 we consider the effect of neutrinos on the convergence peak and minimum counts. For the peak counts, we see an enhancement for \(-0.015\leq\kappa\leq 0.015\), reaching \(\approx 11\%\) for \(\sum m_{\nu}=0.3\)\(\mathrm{eV}\) and \(\approx 4\%\) for \(\sum m_{\nu}=0.1\)\(\mathrm{eV}\). In the case of the minimum counts, the enhancement is present at \(-0.015\leq\kappa\leq 0.01\) and reaches \(\approx 6\%\) and \(20\%\) for \(\sum m_{\nu}=0.1\) and \(0.3\)\(\mathrm{eV}\), respectively. Both the peak and minima counts are progressively suppressed along the tails of the distribution, and this effect becomes again stronger when the mass of neutrinos increases. What we observe is consistent with the effects on the PDF and the power spectrum. Massive neutrino species will tend to fill the emptiest regions, thus suppressing the negative-\(\kappa\) tail of the minimum counts, and oppose the formation of large structures, thus damping the high-\(\kappa\) tail of peak counts.
### Paired and fixed initial conditions
To conclude the presentation of our primary results, we investigate the impact of the variance suppression technique introduced by Angulo & Pontzen (2016). As they show, averaging the 3D matter power spectra of two simulations with fixed and paired initial conditions can reduce the noise due to cosmic variance very significantly (for another work that employs a variance suppression technique inspired by the previous citation, we direct the reader to Harnois-Deraps et al., 2019). Here we test whether this approach also helps in the case of the convergence angular power spectrum. This requires a redshift integration over the lightcone, rather than a single time-slice of the underlying simulation, so it remains to be validated that the cancellation of second-order deviations from linear theory will work equally well in this case.
We show our results for this in Figure 9, where the blue and red lines indicate the angular power spectra of full-sky convergence maps computed for the A and B versions of MTNG630-DM-0.1\(\nu\)
respectively, while the green dashed line shows their mean. For comparison, the power spectrum obtained from the full-sky lightcone of the A version of MTNG3000-DM-0.1\(\nu\) is shown as a green solid line. Because of its larger box, the initial conditions of this simulation contain about 100 times as many modes on each scale as those of the smaller box simulations, and so the cosmic variance in its power spectrum is expected to be about 10 times smaller. We find that the power spectra of the smaller simulations differ from each other by up to 5% and from the power spectrum of the big simulation by up to 3% for \(300\leq\ell\leq 10^{4}\). Their mean, however, differs from the power spectrum of the big run by a maximum of 1% and by much less at the smaller angular scales. Thus, although the suppression of cosmic variance is less strong than found by Angulo and Pontzen (2016) for the power spectra of the dark matter distribution in simulation snapshots, it is still very substantial, thus supporting the notion that the fixed and paired technique is an effective way to reduce cosmic variance uncertainties in simulation results also for WL observables.
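Operationally, the paired-and-fixed estimator is simply the average of the A- and B-series spectra; a minimal sketch with hypothetical file names:

```python
import numpy as np

# hypothetical inputs: binned spectra of the A and B realizations and of the
# large-box reference run, all on the same ell bins
cl_A = np.loadtxt("cl_kappa_MTNG630_A.txt")
cl_B = np.loadtxt("cl_kappa_MTNG630_B.txt")
cl_ref = np.loadtxt("cl_kappa_MTNG3000.txt")

cl_mean = 0.5 * (cl_A + cl_B)   # paired-and-fixed estimator
for name, cl in [("A", cl_A), ("B", cl_B), ("mean", cl_mean)]:
    dev = np.max(np.abs(cl / cl_ref - 1.0))
    print(f"{name}: max deviation from reference = {100 * dev:.1f}%")
```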
## 4 Discussion
We now discuss the implications of the results presented in the previous sections, in particular, for the relative impact of baryons and massive neutrinos on WL observations. Further, we compare our estimates of these effects with results from other recent studies.
The four panels of Figure 10 show results for the four convergence map observables that we focus on in this paper, the angular power spectrum (top-left), the one-point PDF (top right), and the peak and minimum counts (bottom left and bottom right). In this case, in order to make the comparison consistent with other works, we smoothed the square maps with a Gaussian kernel with a standard deviation of \(2\,\mathrm{arcmin}\) when studying the PDF, peak, and minimum counts. As in Figure 7 and in the lower subpanels of Figure 8 we plot the ratios of results obtained in a simulation including either baryons or massive neutrinos to those for a simulation from identical initial conditions that followed only the CDM. First, we discuss the new results from this work, which is represented by the thicker lines: green for the baryons, blue for \(\sum m_{\nu}=0.1~{}\mathrm{eV}\), and dark blue for \(\sum m_{\nu}=0.3~{}\mathrm{eV}\). For the last case, neutrino effects dominate baryonic effects for all four observables: the suppression of the power spectrum, the distortion induced in the PDF, and the modification of the peak and minimum counts are all substantially stronger. On the other hand, for \(\sum m_{\nu}=0.1~{}\mathrm{eV}\) the baryonic and neutrino effects are comparable, though with different scale dependence, highlighting that these effects are partly degenerate.
For the power spectra, the suppression induced by baryonic physics is negligible compared to that induced by neutrinos with \(\sum m_{\nu}=0.1~{}\mathrm{eV}\) for angular scales \(\ell<2000\), but becomes dominant on smaller angular scales \(\ell>4000\). In the case of the PDF, peak counts, and minimum counts, we see that even a total neutrino mass of \(\sum m_{\nu}=0.1~{}\mathrm{eV}\) produces distortions that are larger than those induced by baryonic physics, especially in the tails of the distributions.
A crucial test to validate the reliability of results coming from numerical experiments is the comparison between independent simulation studies. In Figure 10, results from previous studies are indicated by the thinner lines, with yellow, brown, and orange referring to the baryonic effects computed for IllustrisTNG (Osato et al., 2021), HorizonAGN (Gouin et al., 2019), and Bahamas (Coulton et al., 2020), respectively. This last simulation project considered three different models with increasingly strong AGN feedback: these are indicated by dashed (_low_), solid (_fiducial_), and dotted (_high_) lines. In purple, we include the result reported for massive neutrinos with \(\sum m_{\nu}=0.1~{}\mathrm{eV}\) for the MassiveNuS simulations, where the angular power spectrum effects were computed by Liu and Madhavacheril (2019), and the peak and minimum counts were computed in Coulton et al. (2020).
Focusing first on the comparison with previous results for \(\sum m_{\nu}=0.1~{}\mathrm{eV}\) neutrinos, we see that our angular power spectrum modification agrees to \(\approx 1\%\) with that of MassiveNuS at all angular scales. For the peak and minimum counts, we also find a reassuring \(\approx 1-2\%\) agreement for all values of \(\kappa\) other than the high-\(\kappa\) tails, where some statistical fluctuations are present. This exemplifies the robustness of such neutrino predictions, provided a sufficiently accurate simulation methodology is employed. This agrees with conclusions from the recent neutrino simulation code comparison project of Adamek et al. (2022).
Next, considering the impact of baryons as predicted by different studies, we can directly compare the power spectrum modification we measure to results from the three independent projects included in Figure 10. We find a \(\approx 1-2\%\) agreement up to \(\ell\approx 4000\) between MTNG and IllustrisTNG, HorizonAGN, and the 'low AGN' variant of Bahamas. At smaller angular scales, we see that MTNG and TNG predict a suppression that is of the same order as that in the high AGN variant of Bahamas; for \(\ell\approx 10^{4}\), this is stronger by roughly a factor of two than the predictions of HorizonAGN and of the low and fiducial AGN versions of Bahamas. Clearly, the predictions are strongly affected by the specific implementation of feedback for \(\ell>10^{3}\). Even at \(\ell=10^{3}\), the fiducial version of Bahamas predicts a 3% effect, which is more than twice that found in the other models.
Moving to the PDF, we find \(\approx 2\%\) agreement between MTNG, TNG, and Bahamas's low-AGN model, the only exception being for \(\kappa\lesssim 0.01\), where our results predict a milder suppression of
Figure 9: Lensing convergence power spectrum of full-sky convergence maps computed with our code considering \(z_{s}=1.0\). The blue and red lines indicate respectively the A and B series with fixed and paired initial conditions of MTNG630-DM-0.1\(\nu\), the dashed green line refers to the mean of these two, while the solid green line to the run with the biggest box size, i.e. MTNG3000-DM-0.1\(\nu\). In the lower sub-panel, we show the ratio w.r.t. the MTNG3000-DM-0.1\(\nu\) run (noted with the subscript “ref”).
the tails. The fiducial and high AGN versions of Bahamas deviate substantially, predicting a more extreme narrowing of the PDF. It is interesting to observe that these two cases become quite close to the effects seen for massive neutrinos with \(\sum m_{\nu}=0.1~{}{\rm eV}\). Similar conclusions can be drawn concerning the peak and minimum distributions, where we agree at \(\approx 5\%\) with TNG and the low-AGN version of Bahamas. Here also, the fiducial and (especially) high AGN models of Bahamas differ substantially, and are quantitatively closer to the results for the \(\sum m_{\nu}=0.1~{}{\rm eV}\) neutrino case.
Overall, our results reaffirm the strong sensitivity of WL observables to AGN feedback physics, and they also point out an important partial degeneracy between the impacts of massive neutrinos and baryonic physics. At the same time, the comparatively good agreement between different simulation methodologies can be seen as encouraging, underlining the predictive power of the simulations far into the non-linear regime.
## 5 Conclusions and Outlook
In this paper, we introduced our methodology for computing full-sky maps of weak lensing convergence, starting from the mass-shell outputs of Gadget-4. We applied our code to a selection of
Figure 10: Top left: WL convergence power spectrum; top right: WL convergence PDF; bottom left: WL convergence peak counts; bottom right: WL convergence minimum counts. All the panels show the ratio between a simulation and its DM-only version. Here we compare our findings with those of similar recent studies. The results concerning baryonic effects are from MTNG (green), TNG (yellow), HorizonAGN (brown), and Bahamas (orange); Bahamas comes in three different AGN intensities: low (dashed line), fiducial (solid line), and high (dotted line). The results concerning massive neutrinos are from MTNG with \(\sum m_{\nu}=0.1~{}{\rm eV}\) (blue), MTNG with \(\sum m_{\nu}=0.3~{}{\rm eV}\) (dark blue), and MassiveNuS (purple).
simulations from the MillenniumTNG suite, presenting predictions for four observables, namely the angular power spectrum, the one-point PDF, and counts of peaks and minima as a function of convergence.
After assessing the internal consistency of our code by comparing our results to theoretical predictions, we investigated the impact of mass and angular resolution on the weak lensing convergence, finding low angular resolution to be particularly problematic. Even if the underlying simulation has high mass resolution, insufficient angular resolution causes significant suppression of the angular power spectrum at small scales, as well as a narrowing of the one-point PDF and an underprediction of the numbers of peaks and minima at all values of the WL convergence. Creating convergence maps featuring high angular resolution is therefore of critical importance, arguably even more important than the underlying mass resolution. We also tested whether the "fixed and paired" variance-suppression technique proposed by Angulo and Pontzen (2016) remains beneficial when applied to continuous lightcone output over a wide redshift range, rather than to individual simulation snapshots. We found that it does indeed significantly reduce cosmic variance uncertainties in the angular power spectrum of WL convergence at medium to large \(\ell\)-values.
We investigated the impact of baryonic physics on WL measurements by comparing convergence maps from DM-only and full-hydro simulations run from identical initial conditions. We found that including the baryons results in a redshift-dependent suppression of angular power which can reach \(\approx 15\%\) for \(\ell\gtrsim 10^{3}\). The PDF in turn becomes narrower, increasingly so for higher source redshift, while the counts of peaks and minima are suppressed in a \(\kappa\)-dependent fashion by up to \(\approx 15\%\). This emphasises the need to include the impact of baryons in any attempt to model WL observables with high precision.
We also studied the effect of massive neutrinos on WL observables by comparing simulations with different total neutrino masses, viz. \(\sum m_{\nu}=[0,100,300]\,\mathrm{meV}\). The impact is significant, and is especially drastic for \(\sum m_{\nu}=300\,\mathrm{meV}\), producing a suppression of the angular power spectrum by up to \(\approx 20\%\), and inducing a significant distortion of the PDF and of the distributions of peak and minimum counts, primarily a suppression of the tails and an enhancement of the central parts of the distributions.
In summary, weak lensing predictions of the precision needed to interpret stage IV surveys _require_ appropriate modeling of the impact _both_ of baryonic physics and of massive neutrinos. Furthermore, these must be implemented in simulations which simultaneously have _both_ sufficiently high angular and mass resolution _and_ large enough periodic box size. In the present study we adopted a purely theoretical perspective, focusing exclusively on the mass distributions predicted by the MTNG simulations. However, this simulation suite also predicts the properties of the galaxies themselves, either directly in the large hydrodynamical simulation MTNG740, or through semi-analytic modelling throughout the extremely large-volume DM-only simulation MTNG3000 (which also includes massive neutrinos). Forthcoming work will thus consider realistic forward modelling of weak lensing observations in order to study various correlations between the WL signals and the galaxy distribution.
## Acknowledgements
VS and LH acknowledge support by the Simons Collaboration on "Learning the Universe". SB is supported by the UK Research and Innovation (UKRI) Future Leaders Fellowship [grant number MR/V023381/1]. LH is supported by NSF grant AST-1815978. CH-A acknowledges support from the Excellence Cluster ORIGINS which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311. The authors gratefully acknowledge the Gauss Centre for Supercomputing (GCS) for providing computing time on the GCS Supercomputer SuperMUC-NG at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany, under project pn34mo. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility, with equipment funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1.
## Data Availability
The simulations of the MillenniumTNG project will be made fully publicly available in 2024 at the following address: [https://www.mtng-project.org](https://www.mtng-project.org). The data underlying this article will be shared upon reasonable request to the corresponding authors.
|
2301.09878 | ODOR: The ICPR2022 ODeuropa Challenge on Olfactory Object Recognition | The Odeuropa Challenge on Olfactory Object Recognition aims to foster the
development of object detection in the visual arts and to promote an olfactory
perspective on digital heritage. Object detection in historical artworks is
particularly challenging due to varying styles and artistic periods. Moreover,
the task is complicated due to the particularity and historical variance of
predefined target objects, which exhibit a large intra-class variance, and the
long tail distribution of the dataset labels, with some objects having only
very few training examples. These challenges should encourage participants to
create innovative approaches using domain adaptation or few-shot learning. We
provide a dataset of 2647 artworks annotated with 20 120 tightly fit bounding
boxes that are split into a training and validation set (public). A test set
containing 1140 artworks and 15 480 annotations is kept private for the
challenge evaluation. | Mathias Zinnen, Prathmesh Madhu, Ronak Kosti, Peter Bell, Andreas Maier, Vincent Christlein | 2023-01-24T09:35:43Z | http://arxiv.org/abs/2301.09878v1 | # ODOR: The ICPR2022 _O_Deuropa Challenge on Olfactory _O_bject _R_ecognition
###### Abstract
The Odeuropa Challenge on Olfactory Object Recognition aims to foster the development of object detection in the visual arts and to promote an olfactory perspective on digital heritage. Object detection in historical artworks is particularly challenging due to varying styles and artistic periods. Moreover, the task is complicated due to the particularity and historical variance of predefined target objects, which exhibit a large intra-class variance, and the long tail distribution of the dataset labels, with some objects having only very few training examples. These challenges should encourage participants to create innovative approaches using domain adaptation or few-shot learning. We provide a dataset of 2647 artworks annotated with 20 120 tightly fit bounding boxes that are split into a training and validation set (public). A test set containing 1140 artworks and 15 480 annotations is kept private for the challenge evaluation.
## I Introduction
Cultural heritage has so far been largely blind to the sense of smell. Olfaction is a crucial element of human experience but has not received much attention in the context of cultural heritage yet. The Odeuropa project1 aims to remedy this shortcoming by promoting, preserving, and recreating the olfactory heritage of Europe. It is possible to make traces of past smells accessible by automatically analyzing large corpora of visual and textual data from 16th to 20th-century European history. However, finding smell references in historical artworks is a very challenging task. These references can be implicit in a painting's narrative, the actions of depicted characters, or the depicted spaces. We try to approximate the recognition of complex and implicit smell references by first detecting objects with olfactory relevance, based on which more complex smell references might be recognized. The detection of olfactory objects in historical artworks is challenging in multiple aspects:
Footnote 1: www.odeuropa.eu
1. Object detection in the artistic domain requires algorithms to cope with varying degrees of abstraction and artistic styles, which leads to a considerably higher intra-class variance than photographic depictions.
2. In contrast to the famous COCO [2] and ImageNet [1] datasets, where images usually contain few, relatively large object instances per sample, historical artworks usually contain many object instances of diverse sizes, which are often partially occluded (cf. Fig. 1).
3. Smell-relevant objects can be particular, leading to a fine-grained classification of target objects. Different types of flowers, for example, might have a different smell although looking very similar.
4. Since the dataset covers a period over multiple centuries, the appearance of some target objects is subject to historical change. Particularly, man-made objects like cigars or beverages might have changed their look over the years, whereas others like flowers or animals remained mostly invariant.
The category and domain gap between photographic datasets and our target domain poses a challenge that encourages new approaches to increase object detection models' robustness and transfer capability. In posing the double challenge of overcoming a domain and category gap, we want to foster the development of domain adaptation techniques in object detection and promote a multisensory cultural heritage perspective on computer vision that acknowledges the importance of olfaction.
We allow and encourage the use of different kinds of pre-training on photographic data to enable various domain adaptation methods, e. g., transfer learning or style transfer. Along with our annotated dataset, we provide a hierarchy of object categories, which facilitates the implementation of hierarchical approaches to object detection.
## II Dataset
We provide the first dataset of olfactory objects within artworks for the challenge. This section describes the collection,
Fig. 1: Example image from the challenge dataset exhibiting a large number of small, partially occluded objects. Image credit: _Laid Table with Cheese and Fruit_. 1610. Floris van Dyck. Public Domain, via Wikimedia Commons.
annotation, and a brief description of class distribution.
### _Image Collection & Annotation_
As a prerequisite for the assembly of the dataset, we queried multiple digitized museum collections using a list of search terms (cf. Table I) that allegedly lead to images with olfactory relevance. Our image collection strategy was two-fold: In a first step, we defined an initial list of terms reflecting our expectations at the start of the Odeuropa project work, which led to a collection of 30 134 artworks. As our knowledge about contexts in which smell-active objects might appear evolved during the annotation process, we iteratively extended the image base with new search terms that had become relevant.
The objects were annotated manually using cvat2 and Amazon mechanical turk (only flower subcategories).
Footnote 2: [https://openvinotoolkit.github.io/cvat/](https://openvinotoolkit.github.io/cvat/)
We predefined a set of categories that was then iteratively extended, resulting in a list of 222 classes to date. The high number of object categories, including objects that are very rare and particular, suggests the use of a hierarchical class structure, which has multiple advantages: (1) It makes it easier to find specific object categories, simplifying the annotation process. (2) Detection systems can resort to a fallback solution in cases where the exact object category cannot be determined but a broader classification can be made (e. g., detecting a flower instead of a flower species). In contrast to a WordNet-based concept hierarchy as applied by Redmon _et al._[2], we incorporate only two levels of abstraction, since a more complex hierarchy remains mostly unused and complicates annotation and detection architectures without adding much extra value. From the leaf nodes, the complete WordNet hierarchy can, however, still be created. The selection of the supercategories is based on pragmatic considerations such as visual similarity, assumed familiarity with concepts, and simplicity.
Finally, we filtered out supercategories that had less than ten samples for creating the challenge dataset, resulting in a list of 87 categories.
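The fallback behaviour enabled by the two-level hierarchy can be illustrated with a minimal sketch; the category names, the 0.5 confidence threshold, and the prediction structure below are our own illustrative assumptions, not part of the released tooling:

```python
# Minimal sketch: a two-level class hierarchy with a fallback to the
# supercategory when the fine-grained prediction is uncertain.
HIERARCHY = {
    "rose": "flower",
    "tulip": "flower",
    "apple": "fruit",
    "pear": "fruit",
}

def resolve_label(fine_label: str, confidence: float, threshold: float = 0.5) -> str:
    """Return the fine-grained label if confident enough, else its supercategory."""
    if confidence >= threshold:
        return fine_label
    return HIERARCHY.get(fine_label, fine_label)

print(resolve_label("tulip", 0.9))   # 'tulip'
print(resolve_label("tulip", 0.3))   # 'flower'
```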
### _Label Distribution_
Table II lists the supercategories that have been used in the annotation scheme and how many subcategories have been defined for each as well as the number of samples in each supercategory.
Figures 2a and 2b show the exemplary subcategory distributions of the _mammal_ and _seafood_ supercategories, respectively.
### _Distribution Format_
For the sake of license compliance, we cannot publish the images directly. Instead, we provide a CSV file with links pointing to the image sources and a script to conveniently download them. The annotations are provided in COCO JSON format3 which defines a bounding box as \([x,y,w,h]\), with \(x\) and \(y\) denoting the coordinates of the upper left corner of a box, and \(w\), \(h\) the box width and height, respectively. Additionally, each bounding box is assigned to one of the predefined categories via the _category_id_ attribute. Apart from the publication via codalab, the challenge training set has also been published on zenodo [3] including additional metadata.
Footnote 3: [https://cocodataset.org/#format-data](https://cocodataset.org/#format-data)
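For illustration, a single annotation entry in this format might look as follows (all IDs and values here are invented):

```python
# A COCO-style annotation entry as described above.
annotation = {
    "id": 1,
    "image_id": 42,
    "category_id": 7,                    # index into the predefined category list
    "bbox": [130.5, 210.0, 55.0, 80.0],  # [x, y, w, h], upper-left corner
    "area": 55.0 * 80.0,
    "iscrowd": 0,
}
```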
## III Challenge Overview
### _Challenge Protocol and Duration_
The aim of the ODOR challenge is to locate and classify a diverse range of odor-active objects on historical artworks. The participants are provided with a training set of artwork images along with the bounding box annotations of the target objects. Additionally, they are also provided with a validation set of images without annotations. These images can be used for the algorithm development or model training. The competition started with a preliminary warm-up phase,
TABLE II: Supercategories of the annotation scheme. The middle column gives the number of subcategories that have been defined for each of the supercategories. The right column reports the number of samples that have been annotated for the supercategory including its subtypes. _Other_ subsumes all top-level categories that do not have further subcategories.

| supercategory | # subcategories | # samples |
| --- | --- | --- |
| flower | 20 | 8,484 |
| fruit | 28 | 5,196 |
| mammal | 38 | 2,126 |
| bird | 13 | 1,185 |
| vegetable | 26 | 1,088 |
| smoking equipment | 16 | 958 |
| insect | 17 | 708 |
| beverage | 5 | 553 |
| jewellery | 11 | 433 |
| seafood | 10 | 321 |
| reptile/amphibia | 3 | 105 |
| nut | 3 | 78 |
| other | 14 | 1,094 |

TABLE I: Overview of search terms with the number of images collected for each.

| search term | # images |
| --- | --- |
| Smell | 618 |
| Senses | 2,217 |
| Lazarus | 4,215 |
| Still Life | 21,074 |
| Gloves | 901 |
| Donkey | 2,483 |
| Goat | 5,177 |
| Cheese | 365 |
| Pomander | 146 |
| Tobacco | 1,922 |
| Whale | 229 |
| Censer | 195 |
| Total | 41,552 |
where the participants were provided with the training data and a starter kit, enabling them to perform exploratory data analysis, build initial prototypes, and set up their code. The challenge was conducted in two main phases: (1) a development phase and (2) a final phase. For both the _development_ and the _final_ phase, submissions were expected as a zip file containing the predictions in COCO JSON format.
#### III-A1 Development phase.
For the development phase, the bounding box annotations for the validation set were not provided to the participants. During this phase, participants were allowed to upload their predictions on the validation set. The validation set bounding boxes were used to evaluate each participant's submission and provide feedback as per the COCO evaluation metric. Each participant was allowed to upload one submission per day.
#### III-A2 Final phase.
During the final phase, the validation annotations and the test set (without annotations) were provided to the teams to further fine-tune their models and present robust and generic algorithms on the test set. Similar to the development phase, they were required to submit their results on the test set. For this phase, each participant was allowed a total of six submissions.
### _Evaluation Metrics_
We use the _COCO metric_ as the evaluation metric, which determines the participants' final ranking. To understand any object detection metric, we need to understand Intersection over Union (IoU). IoU decides whether a predicted bounding box is correct with respect to the ground-truth object bounding box. IoU is defined as the ratio of the intersection and the union of the predicted and actual bounding boxes. A prediction is considered correct (True Positive) if the IoU is greater than a predefined threshold value, and a False Positive otherwise. For the COCO evaluation, the predefined IoU thresholds range from 0.5 to 0.95 with a step size of 0.05. We evaluate the _COCO metric_ by calculating the mean average precision (mAP), averaged over all classes and over all threshold values (IoU 0.5:0.05:0.95). Since our dataset contains many small objects that are particularly difficult to detect, we additionally report the mAP for small, medium, and large objects separately.
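As a concrete illustration of the metric described above, a minimal sketch of the IoU computation and of the COCO threshold grid might look as follows (box format is the COCO [x, y, w, h] convention; the function names are ours):

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given in COCO [x, y, w, h] format."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    inter_w = max(0.0, min(xa + wa, xb + wb) - max(xa, xb))
    inter_h = max(0.0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = inter_w * inter_h
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

# COCO averages the AP over the IoU thresholds 0.50, 0.55, ..., 0.95
COCO_IOU_THRESHOLDS = np.arange(0.5, 1.0, 0.05)
```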
### _Participation_
A total of 36 teams registered for the challenge, out of which 6 teams submitted during the development phase and 4 teams submitted their predictions for the final phase. Although we are happy with the contributions of the existing participants, we initially expected more submissions. One reason might be the challenging nature of the dataset, which might discourage some scholars. When skimming through the available codalab challenges, some scholars might also have misinterpreted the challenge name, which, in its abbreviated form, does not explicitly link to object detection. We plan to create a follow-up where we consider these findings and attract more participants.
Team _Thousandwords_ consists of Ten Long (University of Amsterdam), Sadaf Gulshad (University of Amsterdam), Stuart James (Istituto Italiano di Tecnologia), Noa Garcia (Osaka University), and Nanne van Noord (University of Amsterdam). They proposed the use of a strong object detector network called PPYOLO-E [4] with a CSP-Resnet [5] backbone. The final results were obtained by training the network for 150 epochs using a batch size of 10 and a base learning rate (LR) of 2.5e-3. They used stochastic gradient descent with momentum as the optimizer for the final model. The final model training used an LR scheduler with 5 epochs of _LinearWarmup_ and a maximum of 360 epochs of _CosineDecay_. For augmentations, they used _BatchRandomResize_ with target random sizes of [320, 352, 384, 416, 448, 480, 512, 544, 576, 608, 640, 672, 704, 736, 768] and random interpolation. They normalized the images with a mean of [0.485, 0.456, 0.406] and a standard deviation of [0.229, 0.224, 0.225]. They experimented with various training schemes, such as using grayscale images as augmentation, excluding small bounding boxes for robust learning, and style transfer as augmentation for domain adaptation. Interestingly, however, they report that none of these techniques works better than simply using a strong object detection model.
Team _None_ proposed the use of a YoloV5 [6] model
Fig. 2: Distribution of subcategory annotations of ((a)) mammals and ((b)) seafood supercategories.
pre-trained on COCO. They fine-tuned the model using the Ultralytics platform5 for 50 epochs with a learning rate of 1e-3 and a batch size of 16. For training, the team applied mild data augmentation as given by the _aug_ffms_ function of the albumentations [7] library.
Footnote 5: [https://github.com/ultralytics/](https://github.com/ultralytics/)
Team _DeadlyDL_, with the single member Badhan Kumar Das (Siemens Healthineers), used a Faster RCNN [8] model for this task. The model was trained for 80 epochs with a learning rate of c. 5e-4 (0.000478), determined by the learning rate finder [9], a batch size of 2, and the ADAM optimizer. For preprocessing, the team used padding and data normalization before passing the images to the neural network.
Team _angelvillar96_ (Angel Villar-Corrales, University of Bonn) used a single-shot object detection network called RetinaNet [10] with a Resnet50-FPN [11] backbone pretrained on COCO-2017 dataset. The team used the Adam optimizer with an initial learning rate of 3e-4 with a decay factor of 10 (3e-5, 3e-6). The batch size was set to 32 due to hardware limitations and the network was trained for 50 epochs, with the best performance at 45th epoch. The final model was trained on a machine with an NVIDIA RTX A6000 with 48GB. Training for 50 epochs took about 1.5 hours.
To simplify participation, we provided a simple _baseline method_ that was published on GitHub6. For the baseline, we used an ImageNet pre-trained Faster-RCNN with a Resnet-50 FPN backbone. First, we fine-tuned only the head for 10 epochs using a learning rate of 1e-3, followed by 50 epochs of training the whole network with the same learning rate of 1e-3 before using a lower learning rate of 1e-4 for another 50 epochs. Similar to team None, we used mild data augmentation as provided by the albumentation library and normalized the input using ImageNet-based mean and standard deviation.
Footnote 6: [https://github.com/Odeuropa/ICPR-ODOR-starting-kits/](https://github.com/Odeuropa/ICPR-ODOR-starting-kits/)
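A minimal sketch of this staged fine-tuning schedule follows, assuming a torchvision-style detection API (pre-0.13 keyword arguments) and a `train_loader` yielding image/target pairs; the class count of 88 (87 categories plus a background slot) is our assumption about the label encoding, not the exact baseline configuration:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# ImageNet-pretrained backbone; `train_loader` is assumed to yield lists
# of image tensors and target dicts as expected by torchvision detectors.
model = fasterrcnn_resnet50_fpn(pretrained=False, pretrained_backbone=True,
                                num_classes=88)

def fit(model, loader, epochs, lr):
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            loss = sum(model(images, targets).values())  # dict of detection losses
            opt.zero_grad(); loss.backward(); opt.step()

# Stage 1: train only the heads for 10 epochs at 1e-3
for p in model.backbone.parameters():
    p.requires_grad = False
fit(model, train_loader, epochs=10, lr=1e-3)

# Stage 2: unfreeze and train the whole network, 50 epochs at 1e-3, then 50 at 1e-4
for p in model.parameters():
    p.requires_grad = True
fit(model, train_loader, epochs=50, lr=1e-3)
fit(model, train_loader, epochs=50, lr=1e-4)
```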
## IV Challenge Results
The submissions are ranked according to the _COCO metric_. The winner is team _Thousandwords_ with members from the University of Amsterdam, Istituto Italiano di Tecnologia, and the Osaka University.
Second place goes to team _None_. _DeadlyDL_ from Siemens Healthineers achieves the 3rd place. _Angelvillar96_ from the University of Bonn scores the 4th place.
In order to comprehensively evaluate the submissions, we also report the mean average precision (mAP) for small, medium, and large bounding boxes. As expected, all submissions struggled with small boxes. We can see that _Thousandwords_ achieved the highest mAP for all three types of bounding boxes. Compared with medium-sized boxes, the small-box mAP is lower by a factor of more than two for the first and second ranked teams, and by a factor of c. 4.5 for the other participants.
## V Discussion
The major challenges within this competition were detecting objects that were less represented in the training data; small objects; objects whose appearance changes over time and varies with artistic style; and objects of the same class obstructing and overlapping each
TABLE III: Results on the final test set in terms of COCO mAP, Pascal VOC mAP ([email protected]), and strict evaluation ([email protected]).

| | COCO mAP (%) | [email protected] (%) | [email protected] (%) |
| --- | --- | --- | --- |
| baseline | 3.99 | 8.92 | 2.95 |
| Thousandwords | 11.49 | 18.93 | 12.00 |
| None | 7.52 | 12.16 | 8.29 |
| DeadlyDL | 4.58 | 10.00 | 3.77 |
| angelvillar96 | 3.82 | 8.41 | 2.65 |
Fig. 3: Qualitative comparison of small-object prediction results of the four finalists. The first row shows predictions for rings in a portrait, whereas the second row shows predictions of partially occluded nuts.
Image credits: _(top)_ _Portrait of an 18-year old woman_. Attributed to Pieter Pourbus. 1574. Oil on panel. RKD – Netherlands Institute for Art History, RKDimages (280945). _(bottom)_ Detail from _Stilleven met een mand met kazen_. Pieter Claesz. 1645 – 1661. Oil on panel. RKD – Netherlands Institute for Art History, RKDimages (108716).
other. Figure 3 gives examples for some challenging categories. The two rows visualize detections of _small objects_: a portrait with three rings in the first row, and a still-life containing a large number of (partially occluded) nuts in the second row. Considering the object size, both nuts and rings are reasonably well detected by the teams Thousandwords and None. While the models of the teams DeadlyDL and angelvillar96 seem to largely overestimate the number of instances, the confidence score is below 0.5 for all of these instances, meaning that the false predictions do not decrease the COCO metric. However, the large number of overlapping predictions suggests that the usage or modification of non-maximum suppression might improve the results. What surprised us was the detection performance for the allegedly challenging categories of smoke and fire. We expected both categories to be very challenging to detect since, especially in the case of smoke, they lack clear boundaries and their localisation is ambiguous. As Table V shows, our expectation was met for the teams None and angelvillar96, who both achieved a precision of 0.0 for these categories. Surprisingly, however, the teams Thousandwords and DeadlyDL achieved precision values considerably higher than their average over all categories. Figure 4, where the Thousandwords and DeadlyDL models both detect instances of smoke with blurry boundaries, emphasizes this finding.
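As a sketch of how such duplicate predictions could be suppressed, standard non-maximum suppression as shipped with torchvision can be applied per image; the tensors below are toy values, not actual challenge detections:

```python
import torch
from torchvision.ops import nms

# Toy detections for one image: boxes in [x1, y1, x2, y2] format with scores.
boxes = torch.tensor([[10., 10., 50., 50.],
                      [12., 12., 52., 52.],     # near-duplicate of the first box
                      [100., 100., 140., 140.]])
scores = torch.tensor([0.90, 0.80, 0.75])
keep = nms(boxes, scores, iou_threshold=0.5)    # tensor([0, 2]) here
boxes, scores = boxes[keep], scores[keep]
```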
Another positive surprise was the robustness of the participants' models towards deviations in the stylistic representation of the target objects. Figure 5 shows detections of the Thousandwords method for three different representations of pipes. Although the right image exhibits a completely different artistic style, the pipe is still detected successfully. Furthermore, the variations of the pipe object exhibited by the leftmost and the middle image do not seem to prevent a successful detection.
Challenging as expected was the detection of large numbers of objects partially occluding each other. Figure 6 shows detections of a heap of apples for three participants. None of the participant models managed to find the majority of the apples in the heap. This motivates an evaluation approach similar to the OpenImages [12] evaluation protocol, where groups of objects with at least five overlapping instances are counted as successful detections if at least one instance in the bounding box around the group is detected. We might adapt this evaluation protocol in a possible future challenge. Interestingly, we do not observe a confusion between the visually relatively similar categories of apples, peaches, and pears, which is reflected in the confusion matrix between those categories (cf. Table VI).
TABLE V: Average precision of the smoke and fire categories for all finalists. All precision values are reported according to the COCO evaluation.

| | smoke AP | fire AP |
| --- | --- | --- |
| Thousandwords | 0.44 | 0.33 |
| None | 0.00 | 0.00 |
| DeadlyDL | 0.12 | 0.20 |
| angelvillar96 | 0.00 | 0.00 |
Fig. 4: Qualitative comparison of prediction results for the challenging smoke and fire categories.
Image credit: Detail from _Solomon’s idolatry (1 Kings 11:7–8)_. Circle of Claude Vignon. 1650–1674. Oil on canvas. RKD – Netherlands Institute for Art History, RKDimages (114441).
TABLE IV: Evaluation of COCO mAP for different object sizes.

| | mAP-small (%) | mAP-medium (%) | mAP-large (%) |
| --- | --- | --- | --- |
| baseline | 1.07 | 3.50 | 10.25 |
| Thousandwords | 4.19 | 11.71 | 25.24 |
| None | 3.03 | 7.36 | 15.74 |
| DeadlyDL | 1.00 | 4.50 | 10.43 |
| angelvillar96 | 0.84 | 3.76 | 9.19 |
Fig. 5: Exemplary pipe detections of the winning model over different stylistic representations.
Image credits: _(l)_ Detail from _Self portrait in the studio_. Jan Toorop. 1883. Oil on panel. RKD – Netherlands Institute for Art History, RKDimages (128870). _(m)_ Detail from _Portrait of a man smoking_. Anonymous. 1800–1850. Oil on panel. RKD – Netherlands Institute for Art History, RKDimages (294941). _(r)_ Detail from _Peasant seated with pipe_. Adriaen van Ostade. 1625–1685. Graphite on paper. RKD – Netherlands Institute for Art History, RKDimages (198724).
## VI Conclusion
We held the Odeuropa Challenge on Olfactory Object Recognition to promote object detection in the challenging domain of digital heritage. A total of 36 teams participated in the challenge, of which 6 submitted during the development phase and 4 submitted their final predictions. By raising the attention of digital humanities and computer vision researchers alike, the challenge increased the visibility of and cooperation between both fields. Particularly in the emerging discipline of olfactory heritage studies, we hope to promote an interdisciplinary approach that considers computational methods.
We briefly introduced the four final submissions and analyzed their results qualitatively and quantitatively. Especially the winning team shows some promising results in terms of small object detection and robustness towards different styles. To further monitor the progress and enable easy benchmarking of newly developed algorithms, we will reopen the challenge for new submissions.
## Acknowledgment
For feedback, guidance, professional and moral support we would like to thank Lizzie Marx, Sofia Ehrich, William Tullett, Hang Tran, Inger Leemans, Arno Bosse, Marieke van Erp, the whole Odeuropa Team, and of course all participants. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the two Quadro RTX 8000 used for this research. The paper has received funding by Odeuropa EU H2020 project under grant agreement No. 101004469.
|
2304.05644 | Generative Adversarial Networks-Driven Cyber Threat Intelligence
Detection Framework for Securing Internet of Things | While the benefits of 6G-enabled Internet of Things (IoT) are numerous,
providing high-speed, low-latency communication that brings new opportunities
for innovation and forms the foundation for continued growth in the IoT
industry, it is also important to consider the security challenges and risks
associated with the technology. In this paper, we propose a two-stage intrusion
detection framework for securing IoTs, which is based on two detectors. In the
first stage, we propose an adversarial training approach using generative
adversarial networks (GAN) to help the first detector train on robust features
by supplying it with adversarial examples as validation sets. Consequently, the
classifier would perform very well against adversarial attacks. Then, we
propose a deep learning (DL) model for the second detector to identify
intrusions. We evaluated the proposed approach's efficiency in terms of
detection accuracy and robustness against adversarial attacks. Experiment
results with a new cyber security dataset demonstrate the effectiveness of the
proposed methodology in detecting both intrusions and persistent adversarial
examples with a weighted avg of 96%, 95%, 95%, and 95% for precision, recall,
f1-score, and accuracy, respectively. | Mohamed Amine Ferrag, Djallel Hamouda, Merouane Debbah, Leandros Maglaras, Abderrahmane Lakas | 2023-04-12T06:47:27Z | http://arxiv.org/abs/2304.05644v1 | Generative Adversarial Networks-Driven Cyber Threat Intelligence Detection Framework for Securing Internet of Things
###### Abstract
While the benefits of 6G-enabled Internet of Things (IoT) are numerous, providing high-speed, low-latency communication that brings new opportunities for innovation and forms the foundation for continued growth in the IoT industry, it is also important to consider the security challenges and risks associated with the technology. In this paper, we propose a two-stage intrusion detection framework for securing IoTs, which is based on two detectors. In the first stage, we propose an adversarial training approach using generative adversarial networks (GAN) to help the first detector train on robust features by supplying it with adversarial examples as validation sets. Consequently, the classifier would perform very well against adversarial attacks. Then, we propose a deep learning (DL) model for the second detector to identify intrusions. We evaluated the proposed approach's efficiency in terms of detection accuracy and robustness against adversarial attacks. Experiment results with a new cyber security dataset demonstrate the effectiveness of the proposed methodology in detecting both intrusions and persistent adversarial examples with a weighted avg of 96%, 95%, 95 %, and 95% for precision, recall, f1-score, and accuracy, respectively.
IoT, Generative AI, GAN, Adversarial deep learning, Adversarial attacks.
## I Introduction
The advent of the Internet of Things (IoT) has transformed our daily lives, our work, and our interactions with the environment. As IoT devices continue to evolve and demand more sophisticated features, the next iteration of wireless communication, 6G, has become a critical issue in the field of wireless technology. 6G is the next generation of wireless communication that will offer high-speed and low-latency connectivity to support a diverse range of IoT devices [1]. It is expected to offer even higher performance than 5G, providing a bandwidth of up to 10 Gbps, a latency of 100 \(\upmu\)s, and a data rate of up to 100 Gbps. Its energy efficiency is expected to be very high, which means that it can support a very large number of IoT devices without consuming excessive power. The network density for 6G IoT is expected to be 10 million or more devices per square kilometer. The architecture of 6G-enabled IoT devices will have a hierarchical structure consisting of the following components: Device Layer, Network Layer, Application Layer, and Cloud Layer. The Device Layer encompasses the physical device and will house the necessary hardware and software components to facilitate 6G communication [2, 3].
Our motivation is to improve the robustness of ML/DL-based cyber threat intelligence against adversarial evasion attacks. Several defense methods have been proposed in this context [4]; the most promising solution is adversarial training, where the cyber threat intelligence model is trained on adversarial examples as well as the original examples (i.e., real data) in order to make it more resilient to small perturbations in the input data [5]. However, adversarial training can lead to overfitting on the adversarial examples and decrease the generalization performance of the model. In addition, some generative methods may be more appropriate for certain types of data or models than others. Using GANs (Generative Adversarial Networks) to generate adversarial examples is one way to address these issues. GANs can generate more diverse and complex adversarial examples that are harder for the model to overfit on, compared to simpler methods such as the Carlini-Wagner (CW) attack, DeepFool, or the Fast Gradient Sign Method (FGSM).
Our contributions to this paper are as follows :
* We investigate the impact of FGSM adversarial attacks on the intrusion detection model.
* We propose a two-stage cyber threat intelligence using two detectors: the first detector uses GAN to detect adversarial examples, and the second detector is for
intrusion detection.
* We evaluate the proposed GAN-based intrusion detection framework's performance in terms of detection accuracy as well as its resistance to adversarial evasion attacks.
```
Require: input data, pre-trained deep neural network, loss function, epsilon
1: Load the pre-trained deep neural network model
2: Choose a sample of IoT data as input to the network
3: Compute the gradient of the loss function with respect to the input IoT data:
   $\nabla_{x}J(\theta,x,y)$   (1)
4: Determine the sign of the gradient:
   $sign(\nabla_{x}J(\theta,x,y))$   (2)
5: Multiply the sign of the gradient by a small constant $\epsilon$ to determine the magnitude of the perturbation:
   $\epsilon \cdot sign(\nabla_{x}J(\theta,x,y))$   (3)
6: Add the perturbation to the original IoT data to create an adversarial example:
   $x_{adv} = x + \epsilon \cdot sign(\nabla_{x}J(\theta,x,y))$   (4)
7: Feed the adversarial example into the deep neural network (CNN)
8: Compare the output of the network with the true label; if the output is incorrect, the attack was successful
9: Use the first-stage detector, the GAN discriminator model, to detect successfully generated adversarial attacks
10: Use the second-stage detector, the CNN model, to identify both normal traffic and cyber-attack types
```
**Algorithm 1** Proposed methodology for GAN-based adversarial attack detection and intrusion detection
## II Proposed methodology
The proposed GAN-based model for detecting adversarial attacks is illustrated in Figure 1. To generate attack samples that mimic the real attack distribution, the generator is employed, while the discriminator is tasked with identifying genuine samples. The generator and discriminator are trained through a dynamic min-max game, resulting in the production of artificial attack behaviors that closely resemble real attacks. The discriminator, on the other hand, can effectively differentiate between genuine and generated attacks.
Algorithm 1 and Figure 2 present the structure of the proposed cyber threat intelligence framework using two detectors. First, input data is forwarded to the first GAN discriminator to detect evasion attacks before being forwarded to the DL classifier to identify real intrusions. GANs comprise two models, the generator and the discriminator. The generator creates new IoT data, while the discriminator determines if the generated IoT data is legitimate or not. The formula for the generator can be expressed as:
\[G(z)=f(z,\theta_{g}) \tag{5}\]
Where \(G(z)\) represents the generated IoT data, \(z\) is the noise factor, and \(\theta_{g}\) are the parameters for the generator. The formula for the discriminator can be depicted as:
\[D(x)=f(x,\theta_{d}) \tag{6}\]
Where \(D(x)\) stands for the discriminator's prediction, \(x\) is the input data, and \(\theta_{d}\) is the discriminator parameters.
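A minimal training-loop sketch for this min-max game follows; the layer sizes, noise dimension, feature count, and the synthetic `loader` of real attack features are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

noise_dim, n_features = 32, 61                    # assumed dimensions
G = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, n_features))
D = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# Stand-in for batches of real attack features.
loader = [torch.randn(64, n_features) for _ in range(100)]

for x_real in loader:
    ones = torch.ones(x_real.size(0), 1)
    zeros = torch.zeros(x_real.size(0), 1)
    x_fake = G(torch.randn(x_real.size(0), noise_dim))
    # Discriminator step: push real samples towards 1, generated towards 0
    d_loss = (F.binary_cross_entropy(D(x_real), ones)
              + F.binary_cross_entropy(D(x_fake.detach()), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: try to make the discriminator output 1 for fakes
    g_loss = F.binary_cross_entropy(D(x_fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```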
To assess our detection approach, we utilize the Fast Gradient Sign Method (FGSM) for creating adversarial instances [6]. This method does not require any knowledge of the target model's architecture or training data, which makes it a "model-agnostic" attack. As long as the attacker can compute the gradients of the loss function for a given input, they can use FGSM to generate persistent adversarial examples. The aim is to bound the maximum distortion added to any individual feature while still causing misclassification. The FGSM procedure for IoT security is outlined in the subsequent steps, with an illustrative code sketch after the list:
* Step 1: Begin by loading the pre-trained deep neural network model.
* Step 2: Select a sample IoT data to be used as input for the network.
* Step 3: Compute the loss function gradient in relation to the input IoT data: \[\nabla_{x}J(\theta,x,y)\] (7) where \(J(\theta,x,y)\) is the loss function, \(x\) is the input IoT data, and \(y\) is the true label.
* Step 4: Determine the sign of the gradient: \[sign(\nabla_{x}J(\theta,x,y))\] (8)
* Step 5: Multiply the sign of the gradient by a small constant \(\epsilon\) to determine the magnitude of the perturbation: \[\epsilon\cdot sign(\nabla_{x}J(\theta,x,y))\] (9)
Fig. 1: The proposed GAN model training for adversarial examples detection
Fig. 2: A flow chart of the proposed cyber threat intelligence detection framework using two detectors.
* Step 6: Add the perturbation to the original IoT data to create an adversarial example: \[x_{adv}=x+\epsilon\cdot sign(\nabla_{x}J(\theta,x,y))\] (10)
* Step 7: Feed the adversarial example into the deep neural network and observe the output.
* Step 8: If the output doesn't match the true label, then the deep neural network is considered to be vulnerable to FGSM attacks, and the attack can be deemed successful.
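A compact PyTorch sketch of Steps 1-6 follows, assuming a trained `model` and a suitable `loss_fn` (both placeholders); the default \(\epsilon\) matches the minimum perturbation used later in our experiments:

```python
import torch

def fgsm_example(model, loss_fn, x, y, eps=0.01):
    """Generate an FGSM adversarial example following Eqs. (7)-(10)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)   # J(theta, x, y)
    loss.backward()                   # gradient w.r.t. the input
    return (x_adv + eps * x_adv.grad.sign()).detach()
```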
The CNN model adopted by the Discriminator at first-stage detection for detecting adversarial attacks as well as at second-stage detection for detecting IoT attacks uses the following steps:
* Step 1: The initial stage of the CNN model involves the convolutional layer, which utilizes a collection of filters to obtain noteworthy features from the input IoT data. To perform this process, the convolution operation is applied as follows: \[(f*g)(n)=\sum_{m=-\infty}^{\infty}f(m)g(n-m)\] (11) Where \(f\) is the input IoT data. \(g\) represents the filter, which is a smaller set of weights used to extract features from the input data. \(n\) is the current index being evaluated in the output feature map. \(m\) represents the index of the input data that is multiplied by the corresponding weight in the filter \(g\). The summation over \(m\) indicates that the filter is shifted across all possible positions in the input data to extract relevant features. The convolution operation calculates the dot product between the filter and the input data at every possible position, producing a feature map that represents the extracted features.
* Step 2: Following the convolution operation, the subsequent step is to employ a non-linear activation function to the convolutional layer's output. In CNNs, the ReLU (rectified linear unit) activation function is widely used, which can be defined as: \[f(x)=\max(0,x)\] (12) Where \(x\) refers to the input value.
* Step 3: We utilize the pooling layer to decrease the spatial dimensions of the feature maps, resulting in reduced computational complexity and the elimination of noise in the feature maps. To achieve this, we apply the max pooling technique, which can be defined as: \[\max(x)=\max_{i=1}^{k}x_{i}\] (13) Where \(x\) is a set of values and \(k\) is the size of the pooling window.
* Step 4: To make predictions, the pooling layer's output undergoes processing via a multi-layer perceptron (MLP) network, also known as a fully connected layer. The formula for the fully connected layer is established as: \[y=Wx+b\] (14) Where \(W\) is the weight matrix, \(x\) is the input IoT data vector, and \(b\) is the bias vector.
* Step 5: Following the fully connected layer, the output is subjected to a sigmoid or softmax activation function to produce final predictions for binary adversarial attack detection or multi-class attack detection, respectively.
features were extracted from the dataset's 1176 features and have high correlations. In order to assess the robustness of the proposed GAN model against evasion attacks, we first processed the network traffic dataset: we cleaned duplicate and corrupted samples, performed one-hot encoding of categorical features, performed feature scaling, and finally analyzed network traffic criteria such as binary values, value ranges, and class membership for categorical features. A series of adversarial examples was generated using the Fast Gradient Sign Method (FGSM) to produce the adversarial validation set. The generated data were carefully selected using the minimum perturbation (\(\epsilon=0.01\)) to respect network traffic boundaries and then evaluated using the GAN discriminator. The results of evaluating the quality of the generated adversarial examples, grouped by application layer features and network traffic features, are shown in Table I. The results demonstrate that the generated adversarial examples are similar to the original data in terms of Euclidean distance. The number of perturbed features and the maximum perturbation indicate that the generated attack examples were of high quality, with low levels of perturbation and small differences between the original and generated data, but there is a relatively high percentage of invalid data, particularly for application layer features. This may have implications for the practicality of evasion attacks. However, these attacks can still largely affect the accuracy and reliability of the models.
Table III presents the distribution of the different attack classes in the train and test data sets. The train data distribution comprises 14 different attack classes and their respective counts, with a total of 1,046,926 normal instances, the rest being various types of cyber attacks. The test data distribution has the same 14 attack classes with their respective counts, with a total of 323,129 normal instances, the rest being various types of cyber attacks.

TABLE V: Classification report (per-class precision, recall, F1-score, and support) of the multi-class discriminator.
Table IV presents the parameters of the Convolutional Neural Networks (CNN) model used by the discriminator. The architecture consists of three convolutional layers, one fully connected layer, and an output layer. The convolutional layers use the ReLU activation function, which has been shown to be effective in many deep-learning applications. The first layer is a 1-dimensional convolutional layer with 64 output channels, a kernel size of 3, and padding of 1. The second and third layers are similarly designed with 32 and 16 output channels, respectively. The output of the third convolutional layer is passed through a max-pooling layer to reduce the spatial resolution. After the convolutional and pooling layers, the feature map is flattened and passed through a fully connected layer with 30 output units and ReLU activation. The final layer is a linear layer with 15 output units, and the output is passed through a log-softmax activation function to produce class probabilities.
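A PyTorch sketch matching this description is given below; the input channel count and sequence length depend on the dataset's feature layout and are assumptions on our part, while the layer widths follow the text:

```python
import torch.nn as nn

class DiscriminatorCNN(nn.Module):
    def __init__(self, in_channels=1, seq_len=61, num_classes=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),                       # halves the spatial resolution
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (seq_len // 2), 30), nn.ReLU(),
            nn.Linear(30, num_classes),
            nn.LogSoftmax(dim=1),                  # class log-probabilities
        )

    def forward(self, x):                          # x: (batch, in_channels, seq_len)
        return self.classifier(self.features(x))
```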
### _Results_
In order to demonstrate the impact of adversarial evasion attacks, we first trained a CNN classifier to use for FGSM attack generation. Figure 6 presents the loss and accuracy of the CNN during training and evaluation. The training and testing accuracies are reported for 15 epochs. The training accuracy starts at 93.707% and increases to 95.442% by the end of the 15 epochs. The testing accuracy likewise starts at 94.073% and increases to 95.435%. These results indicate that the model is able to generalize well to unseen data. The decreasing trend in the loss values indicates that the model is learning the underlying patterns in the data and improving its performance over time. The model performs well and can be considered a good solution for cyber threat detection. However, the corresponding figure demonstrates the impact of adversarial threats on a well-trained CNN classifier, where the model's accuracy dropped from 95.44% to 2.55%. We can see that the normal class is identified as malicious traffic, while the attack classes were largely identified as legitimate traffic.
Using our proposed first-stage detection strategy, Figure 3 depicts the training loss of both GAN models, which reflects the discriminator's predictions compared to the ground truth of the input examples. In the beginning, the discriminator has a high error rate (i.e., loss), which decreases through training, unlike the generator's loss, which begins low and rapidly increases. This demonstrates that the discriminator has beaten the generator and has efficiently learned the representation of the real input data. We further evaluate the classification performance using the confusion matrix, which is a representation of the true labels versus the predicted labels. Figure 4 depicts the obtained results. The GAN discriminator was able to identify FGSM adversarial attacks with a recall of 96% and real data with a recall of 100%. These results demonstrate that the proposed GAN method is efficient in detecting high-quality adversarial threats.
Table V presents the classification report for the multi-classification of the discriminator. The precision column shows the accuracy of the positive predictions made by the algorithm. The confusion matrix of the proposed cyber threat intelligence detection framework is presented in Figure 7. For example, the precision for the Normal category is 1.00, which means that all the instances classified as Normal by the algorithm were actually Normal. The Generative Adversarial Networks-Driven Cyber Threat Intelligence Detection Framework has demonstrated impressive results in classifying different types of cyber threats with a high level of accuracy. The model achieved an overall accuracy of 95%, correctly identifying 419,302 out of 441,371 instances. The model showed a perfect precision and recall score for Normal activity and DDoS_ICMP attacks, which had support values of 323,129 and 23,287, respectively. However, for some types of attacks, such as SQL_injection and Port_Scanning, the model showed lower recall scores of 0.23 and 0.52, respectively. Nevertheless, the model's overall performance was excellent, with a weighted average precision and recall score of 0.96 and 0.95, respectively. These results suggest that the Generative Adversarial Networks-Driven Cyber Threat Intelligence Detection Framework has great potential in identifying and preventing various types of cyber threats, making it a valuable tool for cyber security professionals.
## IV Conclusion
In this paper, we proposed a two-stage intrusion detection framework by employing generative adversarial networks (GANs). Specifically, we introduced a GAN model to improve robustness against adversarial attacks and a DL-based intrusion detection approach. We demonstrated the effectiveness of these methods in detecting persistent adversarial examples, generated using the FGSM method. In real-world scenarios, these adversarial examples may be generated intentionally or unintentionally as a result of software or hardware errors, resulting in poor cyber threat intelligence performance.
|
2305.12011 | Boosting Crop Classification by Hierarchically Fusing Satellite,
Rotational, and Contextual Data | Accurate in-season crop type classification is crucial for the crop
production estimation and monitoring of agricultural parcels. However, the
complexity of the plant growth patterns and their spatio-temporal variability
present significant challenges. While current deep learning-based methods show
promise in crop type classification from single- and multi-modal time series,
most existing methods rely on a single modality, such as satellite optical
remote sensing data or crop rotation patterns. We propose a novel approach to
fuse multimodal information into a model for improved accuracy and robustness
across multiple years and countries. The approach relies on three modalities
used: remote sensing time series from Sentinel-2 and Landsat 8 observations,
parcel crop rotation and local crop distribution. To evaluate our approach, we
release a new annotated dataset of 7.4 million agricultural parcels in France
and Netherlands. We associate each parcel with time-series of surface
reflectance (Red and NIR) and biophysical variables (LAI, FAPAR). Additionally,
we propose a new approach to automatically aggregate crop types into a
hierarchical class structure for meaningful model evaluation and a novel
data-augmentation technique for early-season classification. Performance of the
multimodal approach was assessed at different aggregation level in the semantic
domain spanning from 151 to 8 crop types or groups. It resulted in accuracy
ranging from 91\% to 95\% for NL dataset and from 85\% to 89\% for FR dataset.
Pre-training on a dataset improves domain adaptation between countries,
allowing for cross-domain zero-shot learning, and robustness of the
performances in a few-shot setting from France to Netherlands. Our proposed
approach outperforms comparable methods by enabling learning methods to use the
often overlooked spatio-temporal context of parcels, resulting in increased
preci... | Valentin Barriere, Martin Claverie, Maja Schneider, Guido Lemoine, Raphaël d'Andrimont | 2023-05-19T21:42:53Z | http://arxiv.org/abs/2305.12011v3 | # Boosting Crop Classification by Hierarchically Fusing Satellite, Rotational, and Contextual Data
###### Abstract
Accurate in-season crop type classification is crucial for the crop production estimation and monitoring of agricultural parcels. However, the complexity of the plant growth patterns and their spatio-temporal variability present significant challenges. While current deep learning-based methods show promise in crop type classification from single- and multi-modal time series, most existing methods rely on a single modality, such as satellite optical remote sensing data or crop rotation patterns. We propose a novel approach to fuse multimodal information into a model for improved accuracy and robustness across multiple years and countries. The approach relies on three modalities used: remote sensing time series from Sentinel-2 and Landsat 8 observations, parcel crop rotation and local crop distribution. To evaluate our approach, we release a new annotated dataset of 7.4 million agricultural parcels in France (FR) and Netherlands (NL). We associate each parcel with time-series of surface reflectance (Red and NIR) and biophysical variables (LAI, FAPAR). Additionally, we propose a new approach to automatically aggregate crop types into a hierarchical class structure for meaningful model evaluation and a novel data-augmentation technique for early-season classification. Performance of the multimodal approach was assessed at different aggregation level in the semantic domain spanning from 151 to 8 crop types or groups. It resulted in accuracy ranging from 91% to 95% for NL dataset and from 85% to 89% for FR dataset. Pre-training on a dataset improves domain adaptation between countries, allowing for cross-domain zero-shot learning, and robustness of the performances in a few-shot setting from France to Netherlands. Our proposed approach outperforms comparable methods by enabling learning methods to use the often overlooked spatio-temporal context of parcels, resulting in increased precision and generalization capacity.
keywords: agriculture, deep learning, remote sensing, Earth Observation, Hierarchical model, Multimodal, Time series, Crop rotation, Long-Short-Term-Memory, satellite, Sentinel-2, Copernicus, Common Agriculture Policy, parcel, Crop type, Crop yield forecasting, Crop production, Classification +
Footnote †: journal: Remote Sensing of Environment
## 1 Introduction
Crop-type maps are an essential element used in crop production monitoring that feed into global food security assessments (Porter et al., 2014). Satellite Earth Observation (EO) systems offer a valuable data source for crop classification due to the synoptic, repeated, consistent, and timely availability of observations (Weiss et al., 2020). Since 2015, the data from the European Union (EU)'s Copernicus program, in particular those of the Sentinel-1 (S1) and Sentinel-2 (S2) sensors, provide systematic and consistent EO data at a spatial resolution generally higher than the size of most agricultural parcels. Over the last decade, many crop-type mapping studies and operational systems based on EO have been carried out, leveraging the abundant data available.
### Related Works
The related works are organized into three subsections: EO-based-only models, models using crop rotations, and works on early-season prediction. We restrict this section to learning-based methods, as they are the focus of this work and have proven to yield better results at large scale.
#### 1.1.1 EO-based Models
Rußwurm et al. (2019) classify 13 crop types at the parcel level using all 13 available spectral bands of Sentinel-2 data from French Brittany during the 2017 growing season. They compare a Transformer-Encoder (Vaswani et al., 2017) and a Recurrent Neural Network of Long-Short-Term-Memory (LSTM) type (Hochreiter & Schmidhuber, 1997) and find that both models perform similarly, with the Transformer-Encoder and LSTM achieving comparable accuracy and macro-F1, close to 0.69 and 0.59, respectively. Rußwurm & Körner (2020) design a crop classifier at the parcel level using S2 data from three regions of Germany and compare different approaches to model the signal, including a Transformer and an LSTM. They achieve overall accuracies between 0.85 and 0.92 using the LSTM, depending on the number of classes considered.
They conclude that data processing was useful for those kinds of models. A similar approach was taken by Rußwurm et al. (2019c) on 40k parcels in Central Europe using S2 data, for which they proposed a new early classification mechanism that enhances a classical model with an additional stopping probability based on previously seen information. Furthermore, Rußwurm & Körner (2018) tackled the task of crop classification at the pixel level, accounting for spatial variation to detect parcel boundaries, using a Convolutional Neural Network-Long-Short-Term-Memory (CNN-LSTM) on S2 images to classify 17 types of crops in a single German region. Sainte Fare Garnot et al. (2019) proposed to use a Convolutional Neural Network (CNN) before a Recurrent Neural Network (RNN) to learn the aggregation of the parcel pixels instead of classically averaging them, and applied their system to 200k parcels in the south-west of France (FR). Finally, Sainte Fare Garnot et al. (2020) proposed a smart method to tackle parcel-level crop classification by randomly sampling pixels of the parcels to learn expressive descriptors that are processed by a transformer. They applied their models to 191k parcels located in the south of FR, encompassing 20 crop classes.
Finally, only a few works attempt few-shot classification (i.e., learning a classifier for a new dataset given only a few examples; zero-shot when no examples are available (Peng et al., 2018)) with EO data, because it has long been a difficult task, given that a majority of systems work poorly without in-domain data. Nevertheless, Rußwurm et al. (2019d) and Tseng et al. (2021a) both propose to use the MAML meta-learning algorithm in order to tackle few-shot crop or land cover classification at the pixel level, using EO data only. The former works on the _CropHarvest_ dataset from Tseng et al. (2021b), which is an aggregation of satellite datasets for crop type classification containing annotations at the pixel level, without a harmonized label taxonomy between the examples of different domains. The latter works on the Sen12MS (Schmitt et al., 2019) and DeepGlobe Challenge (Demir et al., 2018) datasets for land cover classification.
#### 1.1.2 Crop-rotation-based Models
Crop rotation is an essential agronomic practice for sustainable farming and preserving long-term soil quality. A good understanding and design of crop rotations is vital for sustainability and for mitigating the variability of agricultural productivity induced by climate change (Bohan et al., 2021). Crop rotation patterns are complex and not stable in time, often dependent on farmer management decisions and subject to changes due to economic considerations and administrative regulations (Dogliotti et al., 2003). As a result, expert knowledge-based models have limitations in terms of accuracy and applicability over large areas and long periods. Alternative approaches, such as the estimation of crop sequence probabilities using survey data and hidden Markov models, have been demonstrated in FR (Xiao et al., 2014), but these methods are not always feasible at large scale due to the extended size of the required sample.
Past research has focused on using machine learning techniques to predict crop rotations. In Osman et al. (2015), a Markov Logic model is used to predict the following year's crop in FR, achieving an accuracy of 60%. Other studies have utilized deep neural networks, such as Yaramasu et al. (2020), which reaches a maximum accuracy of 88% on a 6-class portion of the US Cropland Data Layer (CDL) dataset over 12 years.
Only three studies (Johnson & Mueller, 2021; Giordano et al., 2020; Quinton & Landrieu, 2021) have been identified that combine the use of crop rotations and satellite time-series data with deep learning. Johnson & Mueller (2021) applied this method over several years to derive near real-time CDL. However, this methodology is constrained to a small number of crop types and the use of a Random Forest classifier, while recent advancements in deep learning have shown significant improvements in such high-data regime problems. Giordano et al. (2020) used Conditional Random Fields to model the temporal dynamics of crop rotations. They focused on two French regions with very different climate conditions and agricultural practices, using around 9,230 and 1,902 parcels with 2 years of data. Quinton & Landrieu (2021) propose to use a Pixel-Set Encoder with a Lightweight Temporal Attention Encoder (PSE-LTAE) (Sainte Fare Garnot et al., 2020) combined with a multi-year classification method. They represent the past crops with a one-hot encoder that they sum, without modeling the dynamics of the sequence. In our work, we not only focus on modeling the sequential aspects of crop rotations, but also incorporate the Remote Sensing (RS) signals from previous years.
#### 1.1.3 Early Season Classification
While end-of-season crop type maps are of great interest for agricultural land monitoring (Weiss et al., 2020), in-season crop production monitoring requires a more rapid response, including before-harvest crop type map releases. Some works have also focused on tackling early-season classification. Rußwurm et al. (2019a, c) proposed to solve the problem in an elegant way, with an adapted cost function that only rewards the classifier for an early classification if the right class has been predicted with a respectable degree of accuracy. They extend this work in Rußwurm et al. (2023) by presenting end-to-end Learned Early Classification of Time Series, also classifying crops at the parcel level in FR, Germany, Ghana, and South Sudan.
Without using a special cost function, Weilandt et al. (2023) use a PSE-LTAE together with a data-augmentation technique initially proposed by Barriere & Claverie (2022) on hierarchical LSTMs, for crop-type classification at the parcel level. The augmentation consists of randomly cropping the end of the EO time series during training, which boosts the performance of early-season classification. They also compared separate models, each trained on data cropped up to one specific period of the year (i.e., one model per period), which is not efficient in terms of computation and yielded similar results.
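A minimal sketch of this random end-cropping augmentation follows; the function is our own illustration, operating on a (T, F) array of T time steps and F features:

```python
import numpy as np

def random_end_crop(ts, min_steps, rng=np.random.default_rng()):
    """Randomly drop observations from the end of a (T, F) time series."""
    keep = int(rng.integers(min_steps, ts.shape[0] + 1))  # keep >= min_steps steps
    return ts[:keep]

year = np.random.rand(36, 4)          # e.g., 36 observations of 4 variables
augmented = random_end_crop(year, min_steps=10)
```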
### Positioning and Objectives
To the best of the authors' knowledge, there are some gaps in the existing literature.
A significant amount of research has focused on using remote sensing to predict crop types at the pixel or parcel level using
only EO and in-situ observations of the current year, treating the signal as independent from one year to another. Other studies have used the crop rotations of parcels to address pre-season prediction of crop types (Osman et al., 2015; Yaramasu et al., 2020), but the lack of sufficient information in the signal (i.e., the short length of the time series) limits their performance, especially when targeting minor classes. While the integration of RS data with crop rotations has been investigated in certain studies (Giordano et al., 2020; Quinton and Landrieu, 2021), no one has yet taken this analysis a step further by incorporating dynamic modeling of the rotation sequence.
The dataset we have released to support our development is significantly larger than the similar one introduced above Quinton and Landrieu (2021). We have collected data for a minimum of 5 years and have gathered information on approximately 6.8 million (in FR) and 600 thousand parcels (in the Netherlands (NL)), which is more than three orders of magnitude greater than the typical dataset size used in other studies. Because of its diversity, we propose a method to aggregate the crops at the regional-level.
The contributions of this study are 5-fold:
1. We release a new dataset of more than 7.4 million parcels with their associated crops, and RS signals for the last five years in FR and NL. This allows the integration of crop rotation patterns with the remote sensing signal.
2. We create a dataset-agnostic technique to automatically group crops together, leveraging expert knowledge from the EuroCrops Schneider et al. (2021) taxonomy and derive local crop distribution. This new method allows us to detect the principal crop types from any dataset and to group them in a meaningful way.
3. We construct a novel approach for crop type mapping from crop rotations and Sentinel-2 optical time series in a multimodal way using a hierarchical LSTM network. This approach is unique in its conception as it fuses large amounts of temporally fine-grained EO data with crop rotation analysis in an advanced deep learning method. The crop rotations and the S2 time series are enhanced by previous-year crop distributions of the neighbouring parcels.
4. We develop a data-augmentation technique for in-season classification, by randomly cropping the end of the RS time-series data. This allows our model to classify parcels before the end of the season, a crucial feature for real-life applications of crop monitoring.
5. We assess the cross-domain generalization potential of the framework based on a modified nomenclature of EuroCrops.
## 2 Materials
This section presents a description of the study area and the EO data processing procedure.
### Crop reference data, study area, and harmonization of parcel data
The Geospatial Aid Application (GSA) corresponds to the annual crop declarations made by EU farmers for Common Agricultural Policy (CAP) area-aid support measures. The electronic GSA records include a spatial delineation of the parcels. A GSA element is always a polygon of an agricultural parcel with one crop (or a single crop group with the same payment eligibility). The GSA is operated at the region or country level in the EU 27 member states, resulting in about 60 different designs and implementation schemes over the EU. Since these infrastructures are set up in each region, data are not interoperable at the moment, and the legends are not semantically harmonised. Furthermore, only few EU member states release GSA data as open data, although the overall trend is towards increasingly opening up the data for public use.
Some efforts have been made to provide harmonised GSA dataset over the EU. AI4boundaries d'Andrimont et al. (2023) provides harmonized parcel geometries over 7 countries in the EU to benchmark method for parcel delineation. EuroCrops Schneider et al. (2021) proposed a semantic harmonisation framework to harmonise the legend of GSA across different countries. This harmonisation is open source and is maintained by the community 1. While EuroCrops provides a unique effort so far, this work is still in progress especially with regards to the time dimension. A recent European Commission Implementing Regulation (EU) 2023/1382 identifies a list of specific high-value datasets and the arrangements for their publication. This should be a game changer in the opening of the GSA for public access in the future and thus foster their use for research.
Footnote 1: [https://github.com/maja601/EuroCrops](https://github.com/maja601/EuroCrops)
Footnote 2: [https://eur-lex.europa.eu/eli/reg_impl/2023/138](https://eur-lex.europa.eu/eli/reg_impl/2023/138)
For this study, FR and the NL were selected because of i) their open parcel data availability, ii) their EU representativeness in covering a latitude range from 40\({}^{\circ}\) to 55\({}^{\circ}\) Northern latitude as well as four biogeographic regions (i.e. Atlantic, Continental, Alpine and Mediterranean) and iii) the countries have different size and landscape. parcel GSA data from 2015 to 2021 over FR and from 2009 to 2021 over the NL were collected (Table 1).
### Geometric minimum common parcel extraction through time
GSA are delivered yearly as a set of polygons outlining agricultural parcels. From year-to-year, the parcel boundary may change. We intersected GSA data (i.e. 2013-2020 for NL and 2015-2020 for FR) in order to extract minimum common area, each with a distinct multi-annual crop sequence, named hereafter Feature Of Interest (FOI). Since FOI are the cross-section of varying parcel bounds, their overall size is smaller than the annual GSA parcels. We discarded any FOI with an area of less than 0.1 ha and 0.5 ha for NL and FR, respectively. The total FOI area cover 85% and 93% of the average GSA area for NL and FR respectively (the "stack" entries in Table 1). For each FOI, a crop type sequence was extracted, as well as the remote sensing time series (Fig. 1).
### Earth Observation processing
Remote sensing data were extracted from S2 MSI and L8 OLI sensors. While the GSA data spans from 2013 and 2015, for NL3 and FR respectively, the remote sensing data were used starting 2016 cropping season (i.e., from October-2015), corresponding to the first cropping season with both sensors in-orbit.
Footnote 3: It starts from 2009 but we only took data from 2013
S2 MSI products with Level-2A surface reflectance data were downloaded from the Copernicus Open Access Hub. L8 OLI surface reflectance data were downloaded from the L30 products of the Harmonized Landsat Sentinel-2 (HLS) data set. For both products, L2A and L30 Quality Assessment (QA) layers were used to mask non-surface-related information. We masked all pixels flagged as cloud, cloud-shadow, cirrus and snow.
Leaf Area Index (LAI) and Fraction of Absorbed Photosynthetically Active Radiation (FAPAR) Biophysical variables (BV) maps were derived from the S2 L2A (20m spatial resolution) and L8 L30 (30m spatial resolution) products, using the BV-NET algorithm developed by Weiss & Baret (1999). It aims to retrieve the two BV from multispectral reflectance using the inversion of the radiative transfer model PROSAIL and a back-propagation Artificial Neural Network (ANN). Following the configuration of Delloye et al. (2018), the architecture of the ANN consists of two layers: (i) one layer with five tangent sigmoid transfer functions neurons and (ii) one layer with one linear transfer functions neuron. This configuration allows for
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Country & Year & RS & \# distinct crop types & Number of & Total area \\ & & & original & aggregated & polygons & (1000 ha) \\ \hline NL & 2013 & ✗ & 76 & 41 & 762,725 & 1,855 \\ NL & 2014 & ✗ & 75 & 41 & 765,006 & 1,859 \\ NL & 2015 & ✗ & 260 & 117 & 790,930 & 1,873 \\ NL & 2016 & ✓ & 296 & 133 & 786,572 & 1,871 \\ NL & 2017 & ✓ & 300 & 136 & 785,710 & 1,882 \\ NL & 2018 & ✓ & 312 & 135 & 774,822 & 1,871 \\ NL & 2019 & ✓ & 317 & 139 & 772,565 & 1,868 \\ NL & 2020 & ✓ & 326 & 141 & 767,034 & 1,872 \\ \hline NL & stack & & 401 & 148 & 596,762 & 1,407 \\ \hline FR & 2015 & ✗ & 261 & 150 & 9,434,672 & 27,856 \\ FR & 2016 & ✓ & 261 & 147 & 933,043 & 27,876 \\ FR & 2017 & ✓ & 280 & 148 & 9,393,747 & 27,889 \\ FR & 2018 & ✓ & 282 & 149 & 9,517,878 & 27,917 \\ FR & 2019 & ✓ & 241 & 149 & 9,604,463 & 27,960 \\ FR & 2020 & ✓ & 239 & 148 & 9,778,397 & 27,998 \\ \hline FR & stack & & 319 & 151 & 7,051,683 & 25,495 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Original Geospatial Aid Application (GSA) parcel numbers and area per year used in the study for France (FR) and Netherlands (NL). The number of distinct crop types are provided using original GSA and aggregated using EuroCrops. The ”stack” lines correspond to the Feature Of Interest (FOI, see section 2.2).
Figure 1: Feature Of Interest (FOI) extraction, time series extraction and smoothing. a) the map shows an overlap of the six GSA layers; b) Resulting blocks corresponding to the intersection of the six GSA layers, reduced by an inner buffer; c) rasterized version of the blocks used for extracting the S2 data; d) full S2 and crop types time series of of two selected FOI (shown in panels a-c). Yearly crop types are displayed on the top sub-panel. Input variables time series are displayed using daily observations (circles and squares correspond to S2 and Landsat 8 (L8) data, respectively) and smoothed signal (used as LSTM inputs).
greater dynamics in the output variables (Claverie et al., 2013). The HLS products are normalised using the Bidirectional Reflection Distribution Function (BRDF) with a nadir view zenith angle and a variable sun angle (Claverie et al., 2018), while the S2 L2A products are unadjusted with BRDF. We retained these data specifications and configured two BV-NET models to account for them. For both product types, the cosine of the solar zenith angle was included in the BV-NET input set; for S2 L2A, the view zenith and relative azimuth angles were also included.
Only the Red and Near Infrared bands (NIR) were kept for further analysis; the remaining spectral bands were discarded. Four variables (LAI, FAPAR, Red band and NIR band) pixel-based maps were thus used to derive time series per FOI. Pixels whose centres fell within the FOI boundaries, reduced by a 15 m inner buffer, were averaged using a zonal statistics technique; flagged values (cloud, cloud shadow, cirrus or snow) from QA layers were not included in the averaging. FOI values were only considered valid if more than 75% of the LAI pixels were valid.
Despite filtering the data using relevant QA layers, the resulting FOI-based time series are still contaminated by missed cloud, cloud shadow, haze or dense atmosphere. To remove these remaining outliers, we applied a Hampel filter using red and NIR bands to discard cloud and cloud-shadow in the time series respectively; the parameters of the filter follow Claverie et al. (2018).
Finally, filtered time series of four variables aggregated at FOI level were smoothed individually using a Whittaker filter.4 The time series are first gap-filled in time using a linear interpolation and a time step of 2 days. We applied the Whittaker configuration with V-curve optimization of the smoothing parameter and expectile smoothing using asymmetric weight, with an "Envelope" value of 0.9 and a tested lambda range between -1 and 1 (Eilers et al., 2017). This results in a smoothed time series with a time step of 4 days and no interruption between the seasons.
Footnote 4: as developed by [https://github.com/WFP-VAM/vam](https://github.com/WFP-VAM/vam). Whittaker
## 3 Methods
The features extraction and the model architecture are first described in section 3.1), followed by a description of the learning model and the integration of features as observations (section 3.2). In Section 3.3, we delve into the early-season data augmentation technique. Then in Section 3.4, we explore the training process and the application of models in various countries. Finally, section 3.6 outlines the processing facilities utilized in the study and includes links to the data and code.
### Models description
A series of models were developed involving various configuration and input modalities. The three modalities are RS, Crop Rotation (CR) and Crop Distribution (CD) (see the conceptual framework in Fig. 2). This section describes the model and the integration of the data as features. All the models presented in this section are summarized in Table 2.
Crop rotations have been modeled in a manner similar to how words are structured within a language model (Mikolov et al., 2010). This modeling process is further enhanced by adding S2 time-series data, which is treated as analogous to the prosody of a speaker (Schuller et al., 2016), i.e. the pattern of intonation, stress and rhythm in a speech. Ultimately, the high-level spatial crop distribution features we add on the last layer of the network can be seen as the distribution of the words generally used by the speaker.
#### 3.1.1 Features extraction
Crop RotationFor each parcel, we extracted the crop sequence which corresponds to the CR feature.
#### Remote Sensing temporal integration
We integrated the RS time series into features using a sliding window of size \(W\) months with a step of size \(s_{w}=0.5W\), obtaining a sequence of \(t_{w}=\frac{12}{k_{w}}\) inputs windows for 12 months. On every time window, each signal was integrated using statistical functionals following an approach commonly used for speech data (Schuller et al., 2016). For each window, we used a set of \(F\) statistical functionals to represent the signal as a fixed vector.
#### Crop Distribution
For each FOI, we computed the area of each crops in a circle of radius \(r\) and turned it to percentage of the total cropped area. The rationale to use spatial crop type distribution around the parcel is that, at the scale of a country, agricultural practices vary geographically due to the type of soil and its suitability for crop production, local agro-meteorological conditions, economic and historical factors. In the absence of major shocks, the distribution of the crop types in a region is expected to be stable over the years (Merlos and Hijmans, 2020), which determines the _a priori_ probability of local crop occurrence. We integrate this local information by adding a vector representing the CD over the surrounding crop types centered around each parcel.
#### 3.1.2 Architecture of the models
Baselines using Year-Independent models
We used two baselines models that treat the RS time series signal in a classical way without using hierarchical networks and without modeling the dynamics between the years. One unimodal model is using only RS data and another one is multimodal using RS and CR. **These models are referred to as IntraYE\({}_{RS}\) and IntraYE\({}_{MM}\), respectively**.
As stated in Russwurm et al. (2019), they only consider the time series of a single year, without incorporating a multi-year modeling approach for the RS data which is a key aspect of our proposed approach. This unimodal network is the identical component utilized for encoding the RS signal at the year-level (one green Intra-Year Encoder (IntraYE) in Fig. 3). This provides a strong RS unimodal baseline. The second baseline that
we add comes from the work of Quinton and Landrieu (2021), which integrates the CR modality by using a one-hot encoder vector of the past crop sequence. In our case, as we see the crops as words, this would mean a simple Bag-of-Words Harris (1954), hence we will call it Bag-of-Crops (BoC). Although this type of representation is known not to work well for short texts like tweets or speech turns (Benamara et al., 2016; Barriere et al., 2018; Barriere, 2017; Barriere et al., 2017), we can expect better results with crop sequences which are considerably less complex than natural language.
### Multi-year non-hierarchical models
Following the introduction, a set of novel model architectures is suggested hereafter, and their performance is evaluated in comparison to the existing baseline models. We first aimed to model the sequence of years with a recurrent encoder. These models use year-level features and are called Inter-Year Encoder (InterYE). This corresponds to the orange top Encoder in Fig. 3, modeling the sequence of years.
We modeled the multi-annual crop rotations in a language model fashion by representing the crops as tokens and learning to predict the next one. This model takes the past sequence of crops \((c_{1},...,c_{s})\) as inputs and output the new crop \(c_{r+1}\), modeling the crop rotation dynamics through the years. This corresponds to the orange InterYE in Fig. 3, if only using crop embeddings. It does not use the blue local crop distribution vector. **This unimodal model is denoted InterYE\({}_{Crop}\)**, corresponding to a unimodal Crop Rotations model.
Using solely previous rotations to forecast future crop yields results in inadequate performance due to the limited amount of information provided. Therefore, we decided to enhance the model's robustness by incorporating satellite data, leveraging either the consensus principle or the complementary principle (Xu et al., 2013). We enhanced the unimodal model InterYE\({}_{Crop}\) by adding year-level information from RS. This corresponds to the orange InterYE in Fig. 3, with a green vector being the year-level concatenation of the RS signal (without being processed by the green IntraYE). It does not include the blue local crop distribution vector. **This model is denoted as InterYE\({}_{MM}\)**, corresponding to a non-hierarchical multimodal model with RS. If the model only uses the RS modality, then **it is denoted as InterYE\({}_{RS}\)**.
### Multi-year hierarchical models
We chose to model jointly the inter-year and intra-year dynamics with a hierarchical model composed of one network modeling the RS dynamics within a year underneath another network modeling the rotation dynamics between the years. We processed the RS signal beforehand using another RNN, before concatenating this unimodal RS vector obtained with the crop embedding, in a hierarchical way. This corresponds to the orange color top InterYE and the green color IntraYE in Fig. 3, modeling between the years as well as within a year.
We incorporated the sequential aspect of the RS time-series by processing the RS features at the year level with a first sequential encoder before adding their yearly representation into the second neural network modeling the crop types, leading to a hierarchical network (Serban et al., 2015). This corresponds to
\begin{table}
\begin{tabular}{l|c c c|c c|c} \hline \hline \multirow{2}{*}{**Models**} & \multirow{2}{*}{**CR**} & \multirow{2}{*}{**RS**} & \multirow{2}{*}{**CD**} & \multicolumn{3}{c|}{**Modelisation-level**} & \multirow{2}{*}{**Hierarchical**} \\ & & & & **Within year** & & **Between years** \\ \hline IntraYE\({}_{RS}\) & ✗ & ✓ & ✗ & ✓ & ✗ & ✗ \\ IntraYE\({}_{MM}\) & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ \\ \hline InterYE\({}_{Crop}\) & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ \\ InterYE\({}_{RS}\) & ✗ & ✓ & ✗ & ✗ & ✓ & ✗ \\ InterYE\({}_{MM}\) & ✓ & ✓ & ✗ & ✗ & ✓ & ✗ \\ \hline InterE\({}_{RS}\) & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ \\ \(\text{Hier}{}_{MM}\) & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ \\ \(\text{Hier}{}_{Pand}\) & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the different models used in this paper, using Crop Rotations (CR), Remote Sensing (RS), and Crop Distribution (CD).
Figure 2: Overview of the conceptual data framework for crop classification to leverage satellite optical time series, yearly crop rotation history, and spatial local crop distributions.
the orange Inter-year Encoder in Fig. 3, processing the crop embeddings and the RS embeddings coming from the green IntraYE. It does not use the blue local crop distribution vector. **This model is denoted as \(\textbf{HierE}_{MM}\)**, corresponding thus to a hierarchical IntraYE and InterYE to model both crop rotation and RS time-series.
We enhanced the model by adding the CD vector after the IntraYE because it is a high-level feature regarding the task we are tackling and the deeper you go into the layers the higher-level the representations are w.r.t. the task (Sanh et al., 2018). This corresponds to the full network presented in Fig. 3, including the blue local crop distribution vector. **This model is denoted as \(\textbf{HierE}_{final}\)**, corresponding thus to a hierarchical IntraYE and InterYE to model the three modalities.
### Automatic Hierarchical Label Aggregation
#### 3.2.1 Rationale
Training and evaluating a model at large scale, on regions that contain different agro-climatic zones is complex due to the heterogeneity of the temporal and spectral representations of the crops and variability of the climate and agricultural practices. Indeed, if the labels distribution is highly variable between two datasets, one label that was representative of one domain would become not representative in the other. In this work we propose an aggregation of the labels, that would be on the one hand representative of the dataset, and on the other hand thematically pertinent. In this way, it should be possible to evaluate the performances of the classification model at different scales: using all the labels from the region, even the ones with very few examples, then using an aggregation of labels that is representative of the region. This also offers the advantage to evaluate a model on two different datasets with a relevant evaluation on each dataset.
#### 3.2.2 EuroCrops
The Hierarchical Crop and Agriculture Taxonomy version 2 (HCATv2) from EuroCrops offers a knowledge graph re-grouping crops together in a hierarchical way that is coherent with agricultural practices. It contains 393 classes, which are defined at six hierarchical levels of which the first two are fixed due to compatibility with other taxonomies. For example 33-01-01-05-01 corresponds to the class _Summer Oats_, which is included in its parent class 33-01-01-05-00 (_Oats_) and its grand-parent class 33-01-01-00-00 (_Cereals_). Nevertheless, it is not possible to merge the labels only using the hierarchy because some branches go to a deeper level than others. For example, the class _Capsicum_ is level-4 and represent 0.004% of FR, which is the same level than the class _Cereal_ representing 32%. HCATv2 (Schneider et al., 2021) was used to represent a Sankey diagram linking the French GSA (left) and the Dutch GSA (right), using HCATv2 (Schneider et al., 2021, centre) is represented in Fig. A.1 using 40 main crop types for
Fig. 3: Hierarchical Multimodal Model Conceptual Diagram.
each country5.
Footnote 5: An interactive version of the diagram without class limitation is available on [https://jeodpp.jrc.ec.europa.eu/ftp/jrc-opendata/DRLL/CropDeepTrans/data/sankey_All_crops.html](https://jeodpp.jrc.ec.europa.eu/ftp/jrc-opendata/DRLL/CropDeepTrans/data/sankey_All_crops.html).
#### 3.2.3 A Dataset-Agnostic Method to Generate Labels
EC labels have heterogeneous distribution and level of interest because of geographically constrained occurrence. We propose a method to merge non-representative crops together to only keep the most relevant. The method is applied for the evaluation only. By using a dataset-agnostic algorithm, we take the best of both worlds by fusing expert knowledge and data-driven method. The method is applicable at any geographic scale and fully automatic.
We create a graph using EuroCrops and the distribution of each crops in the dataset. The crops are merged together, with the other sibling crops in the EuroCrops graph using their parent-class, and so-on. Each node \(n_{ijkl}\) is a class, with the number of FOI being its weight \(w_{ijkl}\). Each class is link with its parent-class \(n_{ijkl}\). At the beginning, each weights are initialized to zero, except the ones of the end nodes.
With \(i,j,k,l\in\mathbb{N}^{*}\) being the indices of the nodes in the 4 levels of hierarchy, for each node \(n_{ijkl}\) starting by deepest level \(l\): if its weight \(w_{ijkl}\) is above a threshold of representativeness \(th\) then it becomes a class of interest, otherwise \(w_{ijkl}\) is transferred to the weight \(w_{ijk}\) of its parent-class \(n_{ijkl}\). When all the child nodes of \(n_{ijk}\) have been seen, pass to the next node \(n_{ijkl}\).
### Early-Season data augmentation
The end-of-season models that are trained with the data from the whole season are known not suited to classify at early-season. We propose a data-augmentation technique in order to help the model to classify a sample even without getting the whole time series of the season. We follow the approach of Barriere and Claverie (2022) by randomly cropping the end of the vector feature of the RS data \(RS=(RS_{1},...,RS_{\iota})\). To proceed, we sampled an integer \(t<t_{w}\) (the maximum length of the time series) from a discrete probability distribution \(\mathcal{D}\), and used this number to crop the remote sensing data, as if it was ending before:
\[RS_{cropped}=RS[:t]=(RS_{1},...,RS_{\iota}),\quad t\sim\mathcal{D}\]
### Transfer learning between countries
We ran several experiments in order to take advantage of the normalized taxonomy that we used for both countries, by investigating the potential of transferring knowledge between different domains. For these purposes, we compared the performances of a model trained from scratch (i.e. Vanilla) and a model pre-trained over a country before being fine-tuned over another one. This pre-training allows to transfer knowledge from a source task and domain to a target task and domain. We tested this approach in zero-shot and few-shot settings. For the few-shot, we selected \(2^{N}\) (with \(N\in\{4,6,8,10\}\)) examples of every aggregated class (see Section 3.2), in a random way. We added more and more data increasingly so that all the examples from \(2^{n_{1}}\) are comprised in \(2^{n_{2}}\), with \(n_{1}<n_{2}\). A summary of the different experiments can be seen in Table 3.
### Implementation
#### RS features
For each FOI and each year, the RS time series of the four variables are integrated, using a sliding window of size \(W=1\) month; this leads to \(t_{w}=24\) sliding windows. By utilizing this setup, we obtain some overlap between the windows, which should prevent loss of information by breaking the signal dynamics, albeit with a slight trade-off of redundancy in the features. The \(F=7\) statistical functional used were: average mean, standard deviation, min, max, median, first quartile, third quartile. As a total, we obtained \(4\times 24\times 7=672\) features per FOI per year.
#### Encoders
We compared several type of models using different architectures and different modalities. Because our work mainly focuses on how to integrate multimodal data, we opted to use Recurrent Neural Network-Long-Short-Term-Memory (RNN-LSTM) backbones, proven competitive for this kind of task (RuSwurm and Korner, 2020). Our method is also applicable using other encoders such as transformers (Vaswani et al., 2017) or Gated Recurrent Units (Chung et al., 2015).
The Inter-year encoder, we first add an embedding layer to transform the crop type \(c_{t}\) at year \(t\) into a vector \(\mathbf{emb}_{t}=f_{c}(c_{t})\). This embedding vector \(\mathbf{emb}_{t}\) is used as input of the LSTM to produce a hidden state \(h_{t}\) at year \(t\) as seen in Equation 1, which will be used to predict the next crop \(c_{t+1}\) in Equation 2.
\[\mathbf{h}_{t}=\mathrm{LSTM}_{y}(\mathbf{emb}_{t},\mathbf{h}_{t-1}) \tag{1}\]
\[P(c_{t+1}|c_{t},...,c_{1})=f_{c}(\mathbf{h}_{t}) \tag{2}\]
The RS features were integrated at the year-level into a feature vector \(\mathbf{RS}_{t}\) before the modeling of the crop types by the LSTM. We feed the year \(t\) feature vector \(\mathbf{RS}_{t}\) into a neural network layer \(f_{rs}\) to reduce its size and then concatenate it with the crop embeddings before the LSTM (see Equation 3), using \(\mathbf{emb}_{MM_{t}}\) instead of \(\mathbf{emb}_{t}\) in Equation 1.
\[\mathbf{emb}_{MM_{t}}=[\mathbf{emb}_{t},f_{rs}(\mathbf{RS}_{t})] \tag{3}\]
For the IntraYE, we chose to use a bidirectional LSTM (biLSTM) with a self-attention mechanism (Bahdanau et al., 2016) following the assumption that some parts of the year are more important than others to discriminate the crop type. The biLSTM is composed of two LSTM, one of which reads the sequence forward and the other reads it backward. The final hidden states are a concatenation of the forward and backward hidden states. For a sequence of inputs \([\mathbf{RS}_{t_{1}},...,\mathbf{RS}_{\iota}]\) it outputs \(w\) hidden states \([\mathbf{h}_{RS_{t_{1}}},...,\mathbf{h}_{RS_{\iota}}]\). The attention layer will
compute the scalar weights \(u_{t_{v}}\) for each of the \(\mathbf{h}_{RS_{u_{v}}}\) (see Equation 4) in order to aggregate them to obtain the final state \(\mathbf{h}_{RS_{s}}\) (see Equation 5).
\[u_{t_{w}}=att(\mathbf{h}_{RS_{u_{v}}}) \tag{4}\]
\[\mathbf{h}_{RS_{s}}=\sum_{w}u_{t_{w}}\mathbf{h}_{RS_{u_{w}}} \tag{5}\]
For the crop distribution, we concatenated the hidden state \(\mathbf{h}_{t}\) of the LSTM with the crop distribution vector \(\mathbf{d}\) and mixed them using two fully connected layers \(f_{fc1}\) and \(f_{fc2}\) (see Equation 6). Hence, we obtain \(\mathbf{h}_{d_{t}}\) instead of \(\mathbf{h}_{t}\) before the final fully connected layer \(f_{fc}\) from Equation 2.
\[\mathbf{h}_{d_{t}}=f_{fc2}(f_{fc1}([\mathbf{h}_{t},\mathbf{d}])) \tag{6}\]
#### 3.2.2 In-country Early-season experiments
For the early-season experiments, we reduce the size of the time-series by sampling its maximum length from a discrete uniform distribution between 10 and 24: \(\mathcal{D}=\mathcal{U}(10,24)\). By setting the minimum number of steps to 10, we ensure that each sample contains sufficient information (at least 5 months) to facilitate the training phase. Knowing the start of the time-series is October, it means we do not crop the end of the time series up to 1st of March. We used the same cropping size \(t\) for all the samples of the same mini-batch.
#### 3.2.3 Zero-shot, Few-shot and Cross-country Experiments
In order to investigate the potential of transferring knowledge between two different domains, we pre-trained a network on one country, before fine-tuning it over another country. For this, we simply initialized the network weights with the ones of a model already trained on the other domain. We did not freeze any layer during the fine-tuning.
For the zero-shot setting, we used a network that was trained over one country on the other country, without fine-tuning it. For the few-shot setting, we fine-tuned the network only on a subset of the target dataset, taking few examples representative of this dataset. We generated the few-shot subsets by randomly sampling \(2^{N}\) of each of the aggregated class (see Section 4.1). For this, we sampled using the crop of the 2019 distribution, which was used as validation year. We think that this setup is realistic as we only sample from the aggregated classes that are the prominent ones in each of the datasets, and the ones to calculate the metrics to validate the models.
#### 3.2.4 Technicalities
We trained all the networks via mini-batch stochastic gradient descent using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of \(10^{-3}\) and a cross-entropy loss function. The number of neurons for the crop embedding layer, both the RNN internal layers, and the fully connected RS layer \(f_{rs}\) as well as the number of stacked LSTM were chosen using hyperparameters search. The sizes of the layers \(f_{c1}\) and \(f_{c2}\) are the same as the one from the second RNN state \(\mathbf{h}_{t}\).
We trained our networks as for a sequence classification task, always with several years of data. The labels up to 2018 were used as training set, while the labels from 2019 were used as validation set and the labels from 2020 as test set. All results presented hereafter refer to the analysis of 2020 crop types, which are based on models trained with the period 2013-2019 for NL and 2015-2019 for FR, thus independent from the 2020 crop types observations. We zero-padded when no RS data was available (before 2016).
### Processing facility, data and code
The EO extraction and processing, the classification and the benchmarking were performed on the JRC Big Data Analytics Platform (BDAP) using an HTCondor environment (Soille et al., 2018). The platform6 has been built upon the near-data processing concept, which prescribes placing the computing facility close to the storage units to avoid the bottleneck of delaying or degrading interconnection. Experiments with the neural networks were run using PyTorch 1.4.0 (Paszke et al., 2019) on a GPU Nvidia RTX-8000 using CUDA 12.0. The training phase allows to process between roughly 5k and 30k examples per seconds, with each example containing 4 years of data.
Footnote 6: [https://jeodpp.jrc.ec.europa.eu/bdap/](https://jeodpp.jrc.ec.europa.eu/bdap/)
Footnote 7: The data can be downloaded on [https://jeodpp.jrc.ec.europa.eu/ftp/jrc-opendata/DRLL/CropDeepTrans/](https://jeodpp.jrc.ec.europa.eu/ftp/jrc-opendata/DRLL/CropDeepTrans/).
The data extracted and used for this study are openly available on the public FTP7. The code for the data processing, the labels aggregation, and deep learning experiments will be freely available after publication.8
Footnote 8: Add URL after publication
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Name** & **Pre-Training** & **Training** & **Testing** & **\# data from target** & **Models** \\ \hline few-shot-NL & \(\varnothing\) & NL & NL & \(2^{N}\) & \(\text{Hier}\text{E}_{final}\) \\ few-shot-FR & \(\varnothing\) & FR & FR & \(2^{N}\) & \(\text{Hier}\text{E}_{final}\) \\ Vanilla-FR & \(\varnothing\) & FR & FR & \(100\%\) & All models \\ Vanilla-NL & \(\varnothing\) & NL & NL & \(100\%\) & All models \\ \hline
0-shot-NL & \(\varnothing\) & FR & NL & \(0\) & \(\text{Hier}\text{E}_{final}\) \\ Transfer-few-shot-NL & FR & NL & NL & \(2^{N}\) & \(\text{Hier}\text{E}_{final}\) \\ transfer-NL & FR & NL & NL & \(100\%\) & \(\text{Hier}\text{E}_{final}\) \\ Transfer-FR & NL & FR & FR & \(100\%\) & \(\text{Hier}\text{E}_{final}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Transfer learning summary. \(N\in\{4,6,8,10\}\) for the few-shot experiments.
## 4 Experiments and Results
### Feature Extractions and Label Aggregation
#### Crop Types
The crop types labels were extracted from the respective GSA and remapped using EuroCrops. This yields to a total of \(V_{NL}=141\) and \(V_{FR}=151\) classes, for NL and FR respectively, corresponding to \(V=225\) unique classes. We modeled the crop by a one-hot vector of size \(V\) and used it as an input to an embedding layer. The sizes of the different vocabularies determines the sizes of the respective Bag-of-Crops vectors.
The crop label categories for the 2020 test set correspond to a long-tailed class distributions, as shown for the 32-class and 24-class aggregations for the French and the Dutch data sets in Fig. 5 and Fig. 4, respectively. The models are finally validated on a set of crops of interest from the 32-class and 24-class aggregation. Those 12 and 8 respectively for FR and the NL were selected by an expert from Food Security and identified as essential. They are shown in green in Fig. 5 and Fig. 4.
#### RS-based Features
First, the RS data retrieved as described in 2.3 were obtained. The final dataset which consists in the full time series of more than 7M FOI, for a total of more than 35M FOI-year (NL : 5 years x 596k FOI; FR: 5 years x 6,49M) are available for download, as well as the extracted features used in the experiments9.
Figure 4: Distributions of the crop types in the NL dataset for the test year (2020), after aggregation. Green bars are the selected crops for the 8-class evaluation; Red bars refer to the grasslands, and blue bars to the remaining crops.
Figure 5: Distributions of the crop types in the France dataset for the test year (2020), after aggregation. Green bars are the selected crops for the 12-class evaluation; Red bars refer to the grasslands, and blue bars to the remaining crops.
### Spatial Crop Distribution
The spatial CD was derived for the 2019 validation set year, i.e. not for the test year 2020. We used \(r\)=10km as the radius of the circle. We rounded the probability at \(10^{-4}\), leading to some values being 0 when not null. We harmonized the crop list of the two datasets (FR and NL) by gap-filling the missing classes with a value of zero.
### Hierarchical Label Aggregation
For each country, we set the threshold \(th\) at 0.3% of the dataset size, which roughly corresponds to 2k samples for the NL and 20k samples for FR. We classified the class _Permanent Crops_ as one in both cases, as they are the simplest examples to classify when considering rotations. This was done to mitigate its impact by creating multiple labels for permanent crops. The automatic aggregation over FR and NL is shown inFig. 6.
each sliding window. Notably, the unimodal IntraYE\({}_{RS}\) outperformed InterYE\({}_{RS}\), while the multimodal IntraYE\({}_{MM}\) and \(\text{Hier}\text{E}_{MM}\) outperform InterYE\({}_{MM}\).
Several similarities and notable differences can be observed when comparing the models between FR (Table 5) and the NL (Table 4). First, the general performance metrics were lower for FR than for the NL. This difference may be attributed to FR having more classes than the NL. Second, the performances of the RS model's performance using only one year of context (InterYE\({}_{RS}\)) were significantly lower than for the same one on NL, relatively to the other models. The higher variance of the RS data in FR, resulting from its larger size and greater diversity compared to the NL, may contribute to the lower performances observed in this scenario.
### Performances per crops
The utilization of various modalities and their combinations for each of the primary crops in both countries is depicted in
Fig. 6: Aggregated classes selected for validation in each country for FR (A) and NL (B) along with the distribution. The colour highlight the crop type or crop group that were assessed for both country in red and only for one respective country in cyan.
Fig. 9 and Fig. 8. It demonstrated an upward trend in the level of improvement with an increase in the number of modalities employed. The benefits of crop rotation were more significant for certain crops such as pasture, while others such as beetroot are harder to predict without RS signal. The suggested that the crop rotation for FR, limited to only starting from 2015, might not offer enough information to accurately predict crops in complex or irregular crop rotation sequences.
Fig. 10 provides more detailed information on the performance of the best model in the NL and FR. It displays the F1 scores for eight crops of interest in the NL and twelve crops of interest in FR, based on the time of the year used for prediction. Furthermore, for the same crops, we analysed the time series data for the 2020 cropping season (i.e., the test year) averaged at the country level for each of the four remotely sensed variables on Fig. 20, accompanied by the standard error shown on Fig. 21. These visualizations highlight both the variability between crops and the potential confusion that can arise between different crop types.
The end-of-season performances of the models in the NL and FR, denoted as \(\text{HierE}_{final}\), are presented using 5km grid cell maps in Fig. 11 and Fig. 12 respectively. The maps reveal notable regional effects, particularly in FR. For instance, in Brittany (located in the north-west of FR), lower accuracy was observed for most crops, especially _winter barley_. It worth noting that despite labeling the crop types uniformly by country, variations in crop varieties, climates, and agricultural practices among farmers have an impact on phenology, consequently affecting the RS signal. Consequently, using a single dataset for training the model resulted in heterogeneous performances over a large country like FR. The development of regional models is thus highly recommended. Furthermore, the model lacks in
\begin{table}
\begin{tabular}{l|c|c c c c|c c c|c c c|c c c|c c c} \multicolumn{1}{l}{**Labels**} & \multicolumn{3}{c}{**\multicolumn{3}{c}{**141-class**} & \multicolumn{3}{c}{**24-class**} & \multicolumn{3}{c}{**10-class**} & \multicolumn{3}{c}{**8-class**} \\
**Model** & \# Modalities** & P & R & F1 & Acc & P & R & F1 & Acc & P & R & F1 & Acc & P & R & F1 & m-F1 \\ \hline \hline \(\text{Iter}\text{YE}_{Crop}\) & 1 (C) & 36.0 & 25.5 & 27.4 & 76.2 & 53.3 & 37.2 & 39.1 & 76.5 & 51.8 & 43.0 & 43.5 & 77.7 & 43.3 & 35.5 & 34.9 & 53.6 \\ \hline \(\text{IntraYE}_{\text{ES}}\) 2019b & 1 (RS) & 27.4 & 20.9 & 20.4 & 89.8 & 64.0 & 60.9 & 60.4 & 90.3 & 78.8 & 75.9 & 74.5 & 92.9 & 76.1 & 72.6 & 70.8 & 87.8 \\ \(\text{Iter}\text{YE}_{\text{ES}}\) & 1 (RS) & 22.8 & 17.7 & 17.1 & 89.1 & 59.2 & 58.5 & 57.3 & 89.6 & 71.2 & 73.4 & 72.0 & 92.1 & 67.0 & 69.6 & 68.0 & 85.6 \\ \(\text{HierE}_{\text{ES}}\) & 1 (RS) & 0.4 & 0.8 & 0.6 & 56.9 & 2.4 & 4.2 & 3.0 & 56.9 & 5.7 & 10.0 & 7.3 & 57.4 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \(\text{IntraYE}_{\text{BM}}\) 2021 & 2 (RS+BoC) & 55.6 & 39.7 & 43.2 & 92.8 & 76.6 & 69.8 & 72.1 & 93.1 & 83.0 & 80.5 & 80.9 & 94.7 & 80.2 & 77.9 & 78.0 & 90.0 \\ \(\text{InterYE}_{\text{BM}}\) & 2 (RS+C) & 41.1 & 33.0 & 33.6 & 92.2 & 70.8 & 70.5 & 69.9 & 92.6 & 82.2 & 79.7 & 80.4 & 94.5 & 80.2 & 76.3 & 77.5 & 89.5 \\ \(\text{HierE}_{\text{BM}}\) & 2 (RS+C) & 47.3 & 38.7 & 39.7 & 93.3 & 74.7 & 75.5 & 74.7 & 93.7 & 85.2 & 81.9 & 83.1 & 95.2 & 83.6 & 78.8 & 80.6 & 91.1 \\ \(\text{HierE}_{final}\) & 3 (All) & 47.1 & 39.3 & 40.2 & **93.6** & 76.6 & 75.8 & 75.7 & **94.0** & 86.7 & 81.9 & 83.6 & **95.5** & 85.3 & 78.7 & 81.1 & **91.6** \\ \end{tabular}
\end{table}
Table 4: Results over Netherlands of the end-of-season classification models with different modalities: Remote Sensing (RS), Crop Rotations as embeddings (C) or BoC, and Spatial Crop Distribution. The metrics shown are macro Precision (P), Recall (R) and F1 score, as well as accuracy and micro-F1 score (m-F1).
\begin{table}
\begin{tabular}{l|c|c c c|c c c|c c c|c c c|c c c} \multicolumn{1}{l}{**Labels**} & \multicolumn{3}{c}{**151-class**} & \multicolumn{3}{c}{**32-class**} & \multicolumn{3}{c}{**14-class**} & \multicolumn{3}{c}{**12-class**} \\
**Model** & P & R & F1 & Acc & P & R & F1 & Acc & P & R & F1 & Acc & P & R & F1 & Acc & P & R & F1 & m-F1 \\ \hline \hline \(\text{IterYE}_{Crop}\) & 1 (C) & 35.6 & 31.0 & 31.7 & 66.0 & 43.7 & 38.8 & 38.7 & 66.2 & 38.9 & 34.3 & 31.7 & 69.1 & 30.9 & 26.4 & 23.0 & 42.7 \\ \(\text{IntraYE}_{\text{ES}}\) 2019b & 1 (RS) & 22.9 & 15.7 & 15.2 & 64.0 & 51.1 & 46.0 & 44.6 & 64.5 & 69.8 & 62.2 & 64.7 & 75.7 & 69.3 & 59.7 & 63.1 & 74.6 \\ \(\text{InterYE}_{\text{ES}}\) & 1 (RS) & 21.3 & 13.2 & 12.6 & 54.9 & 46.5 & 41.5 & 39.2 & 55.5 & 63.9 & 59.6 & 60.2 & 72.2 & 62.7 & 57.4 & 58.5 & 71.2 \\ \(\text{HierE}_{\text{ES}}\) & 1 (RS) & 0.1 & 0.7 & 0.1 & 12.4 & 0.4 & 3.1 & 0.7 & 12.4 & 1.4 & 7.1 & 2.4 & 19.9 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \(\text{IntraYE}_{\text{BM}}\) 2021 & 2 (RS+BoC) & 52.7 & 32.4 & 35.9 & 82.7 & 70.1 & 59.3 & 61.8 & 82.8 & 78.1 & 68.7 & 71.0 & 86.6 & 76.2 & 65.6 & 68.0 & 80.3 \\ \(\text{InterYE}_{\text{BM}}\) & 2 (RS+C) & 45.9 & 35.2 & 36.4 & 82.4 & 67.7 & 60.5 & 62.4 & 82.7 & 72.7 & 67.4 & 69.2 & 86.1 & 70.0 & 63.6 & 65.8 & 77.5 \\ \(\text{HierE}_{\text{BM}}\) & 2 (RS+C) & 50.2 & 41.9 & 43.2 & 84.8 & 70.7 & 67.6 & 68.2 & 85.0 & 77.0 & 73.4 & 74.9 & 88.4 & 75.0 & 70.2 & 72.3 & 81.8 \\ \(\text{HierE}_{final}\) & 3 (All) & 45.1 & 37.3 & 38.1 & **85.4** & 72.1 & 68.8 & 69.2 & **85.7** & 79.8 & 76.1 & 77.6 & **89.1** & 78.1 & 73.5 & 75.4 & **83.6** \\ \end{tabular}
\end{table}
Table 5: Results over France of the end-of-season classification models with different modalities: Remote Sensing (RS), Crop Rotations as embeddings (C) or Bag-of-Crops (BoC), and Spatial Crop Distribution. The metrics shown are macro precision, recall and F1 score, as well as accuracy and micro-F1 score (m-F1).
Figure 7: Comparison of early classification using different modalities, with/out data augmentation (macro-F1 with 10 classes) on Netherlands.
formation regarding the specific locations, except through the proxy of crop distribution. Incorporating geographic coordinates and/or weather variables as model inputs could contribute to accounting for spatial variations over large areas, such as FR.
The confusion matrices for both countries can be seen in Fig. 13. These matrices allow for the clustering of crops based on the observed confusions. In general, for both countries, there are instances of crop confusions, indicating misclassifications or difficulties in distinguishing between certain crop types: (i) between _green silo maize_ and _grain maize corn popcorn_, (ii) between winter cereals (_winter common soft wheat_, _winter barley_, _winter tritical_), (iii) between spring cereals (_spring barley_, _spring common soft wheat_). These confusions are anticipated as they occur with synchronous phenologies of the crops that could differs significantly from one region to the other in Europe (d'Andrimont et al., 2020, 2021; Meroni et al., 2021).
### Early-season models
The use of sub-setting technique, intended for in-season classification, was found to be ineffective in improving the performance of a model solely trained on remote sensing data, as shown in Fig. 7. However, when the multimodal model including crop rotations was applied, it resulted in improved performances as early as May. It is worth noting that the overall performances of the multimodal model was observed to be inferior to that of a unimodal crop-only model. This was due to the model overemphasizing the RS modality as the season progressed. To address this issue, a gate mechanism could be incorporated, as proposed in Arevalo et al. (2017) and Chen et al. (2017), which selectively discards noisy modalities. By utilizing the multimodal hierarchical configuration, the model achieved around mid-July 95% of the end-of-season overall accuracy. This corresponds to the period when the winter crops are harvested, and the summer crops reach their peak vegeta
\begin{table}
\begin{tabular}{c|c|c c c c|c c c c|c c c|c c c c|c c} \multicolumn{1}{c|}{**Labels**} & \multicolumn{4}{c}{**N**} & \multicolumn{4}{c}{**141-class**} & \multicolumn{4}{c}{**24-class**} & \multicolumn{4}{c}{**10-class**} & \multicolumn{4}{c}{**8-class**} \\
**Pre-train.** & P & R & F1 & Acc & P & R & F1 & Acc & P & R & F1 & Acc & P & R & F1 & m-F1 \\ \hline \hline & 0 & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) & \(\varnothing\) \\ & 16 & 5.8 & 5.1 & 4.8 & 70.8 & 23.7 & 21.4 & 20.4 & 71.1 & 38.5 & 37.4 & 36.3 & 73.6 & 38.5 & 37.4 & 36.3 & 45.3 \\ ✗ & 64 & 2.7 & 2.5 & 2.2 & 69.2 & 17.1 & 13.1 & 12.5 & 69.4 & 27.3 & 25.7 & 23.3 & 69.6 & 27.3 & 25.7 & 23.3 & 34.7 \\ & 256 & 4.2 & 4.8 & 2.9 & 66.5 & 18.2 & 16.9 & 14.1 & 66.8 & 25.0 & 23.2 & 20.5 & 68.1 & 25.0 & 23.2 & 20.5 & 20.4 \\ & 1024 & 19.6 & 13.3 & 12.4 & 80.8 & 53.6 & 39.8 & 37.2 & 80.3 & 69.7 & 60.4 & 61.5 & 84.0 & 69.7 & 60.4 & 61.5 & 76.3 \\ \hline \hline & 0 & 5.7 & 4.8 & 4.2 & 47.3 & 14.7 & 15.1 & 11.1 & 46.6 & 20.6 & 19.7 & 16.6 & 46.9 & 12.3 & 7.4 & 8.4 & 24.5 \\ & 16 & 12.2 & 7.8 & 7.6 & 70.3 & 30.5 & 23.8 & 24.5 & 70.4 & 37.9 & 33.9 & 34.0 & 72.3 & 37.9 & 33.9 & 34.0 & 45.2 \\ ✓ & 64 & 16.7 & 13.6 & 13.5 & 74.7 & 41.9 & 38.7 & 38.1 & 75.0 & 51.6 & 45.4 & 46.6 & 76.4 & 51.6 & 45.4 & 46.6 & 54.4 \\ & 256 & 25.8 & 21.4 & 20.8 & 82.5 & 55.6 & 51.1 & 50.6 & 82.7 & 67.3 & 58.0 & 60.1 & 84.6 & 67.3 & 58.0 & 60.1 & 69.2 \\ & 1024 & 32.7 & 27.3 & 26.0 & 84.9 & 61.3 & 57.3 & 54.3 & 84.9 & 73.8 & 72.0 & 71.6 & 87.0 & 73.8 & 72.0 & 71.6 & 80.9 \\ \hline \hline ✗ & All & 47.1 & 39.2 & 40.2 & 93.7 & 76.6 & 75.8 & 75.8 & 94.0 & 86.7 & 81.9 & 83.6 & 95.5 & 85.3 & 78.7 & 81.1 & 91.6 \\ ✓ & All & 42.5 & 35.3 & 36.0 & 92.8 & 67.3 & 53.4 & 55.9 & 94.2 & 89.9 & 82.2 & 85.3 & 95.7 & 88.8 & 77.6 & 82.3 & 91.8 \\ \end{tabular}
\end{table}
Table 6: Results over Netherlands of the few-shot final classification models, with or without pre-training over France. N represent the number of examples shown per aggregated class on the target dataset. The metrics shown are macro precision, recall and F1 score, as well as accuracy and micro-F1 score (m-F1). N stands for the Few-Shot size.
Figure 8: Comparison of the F1-scores by crops of the best hierarchical multimodal model and the model using different modalities (Crop rotation, Remote sensing only and all) on the Netherlands. We used the InterYE\({}_{Crop}\) (in blue), IntraYE\({}_{RS}\) (in orange) and HierE\({}_{final}\) (in green) models.
tion.
In Fig. 10, the \(\text{HierE}_{MM}\) model data-augmented model was assessed through time. F-score are provided for the 10 classes of interest for NL, as defined in Section 4. Overall, there was an offset of the curve rise-up. The shift is earlier (from mid-April to mid-June), which was consistent with other studies (Russwurm et al., 2023).
### Cross-country transfer learning results
According to Table 6, there were two notable distinctions between the models pre-trained on FR and the ones trained from scratch. The first distinction pertained to few-shot setting, where the pre-training enables not only superior but also
Fig. 10: F1-score per crop group along the season for NL. When crop have winter and spring varieties, the spring varieties are represented as dashed lines.
Fig. 9: Comparison of the F1-scores by crops of the best hierarchical multimodal model and the model using different modalities (Crop rotation, Remote sensing only and all) on France We used the InterYE\({}_{crop}\), IntraYE\({}_{\text{ES}}\) and \(\text{HierE}_{final}\) models.
more consistent results in few-shot classification, reducing dependence on the specific examples used during few-shot training. The second distinction lies in the full-data setting, where we observed that the from-scratch model performs better when validated with 141 and 24 classes (with an m-F1 score of 40.2 and 75.8, respectively, compared to 36.0 and 55.9), while the pre-trained model was better when using fewer classes. This results highlighted an intriguing behaviour, suggesting that the pre-trained neural network was more general and less prone to over-fitted on the prominent classes of the NL dataset. We can conclude that transfer learning proves beneficial when limited labeled examples are available, while in the full-dataset training mode, it enhances the model's performance for general classes at the expense of specific classes in the dataset.
### Limitations
The primary constraint of this study, particularly in developing countries and numerous other nations, is the requirement for digitized parcel boundary data. Another limitation is the necessity to obtain information regarding the crops cultivated in the previous year. The impact of noisy input data, such as past crop information derived from a prediction system rather than ground-truth data, remains unknown in terms of the system's response. Exploring this aspect constitutes an intriguing avenue for future research.
For encoding the RS signal, we utilized a backbone consisting of a mean aggregation of the FOI's pixels, followed by the application of a temporal context window with statistical functionals. Studies proved that other methods were more efficient in terms of performances (Sainte Fare Garnot et al., 2020). An inherent enhancement to encode the RS signal in a more effective manner would involve employing an end-to-end approach. This approach entails learning the aggregation of RS data and integrating its representation into a neural network, similar to architectures such as CNN-RNN (Pelletier et al., 2019; Sainte Fare Garnot et al., 2019) or more advanced structures like PSE-LTAE (Sainte Fare Garnot et al., 2020; Quinton and Landrieu, 2021; Weilandt et al., 2023). By adopting these powerful architectures, the encoding of RS signals can be optimized, thereby potentially improving the overall performance and accuracy of the system.
Fig. 11: Map of F1 for the six most important crops over The Netherlands. The F1-score is computed for each crop and for each 5km grid cell. Grid cells with less than 50ha (i.e., 2% of the land) of the given crops are not plotted. Map projection is EPSG:3035.
Figure 12: Map of F1 for the six most import crops over Metropolitan France. See legend of Fig. 11.
Figure 13: Confusion matrices for the 10- and 14-class settings for Netherlands and France, respectively.
### Recommendations
Regarding future research directions, there are several avenues for further exploration. One potential direction is to integrate knowledge from the EuroCrop ontology graph inside the learning model, for example by creating multi-level embeddings of each crop. This could improve the ability of the model to capture the complex spatiotemporal variability of crops.
A more complex way to fuse the modalities together could also be explored, such as using a Gated Multimodal Unit (Arevalo et al., 2017). This could lead to better integration of the different data modalities and improved performance of the model.
It would also be valuable to investigate the results at a more regional/local level, especially for FR with its large landmass, crop diversity, and meteorological conditions. Local hierarchical clustering and the performance of the model at the regional level could be examined to gain a deeper understanding of how the model performs in different regions. We saw that the results for FR were lower, possibly due to the diversity of crops and the distribution vector used. Investigating the effect of a regionalized model, for example by fine-tuning using Adapter layers (Poth et al., 2020), could be a potential solution. In addition, adding meteorological features and investigating their impact could be worthwhile, especially in the case of extreme events. Methods such as Tseng et al. (2021) or using learned embeddings that represent the time of the thermals (Nyborg et al., 2022) could be explored.
Another area to investigate is the potential of a specific loss function for the early season model, like the one proposed in RuBswurm et al. (2023), as per our simple data-augmentation technique. This could lead to better performance in the early season, which is an ongoing challenge for crop classification.
Other potential avenues for future work include adding more countries to the experiments, but also testing the system with different backbones, allowing ingestion of the EO raw time series as they are. This last step would prevent reliance on man-made filters like Hampel or Whittaker and man-made features like FAPAR and LAI, as they contain filtered information, filtering some that may be useful for the final task (Trigeorgis et al., 2016).
Overall, these future directions could further improve the accuracy and generalization of the proposed multimodal approach for crop classification.
## 6 Conclusions
In conclusion, we proposed a multimodal hierarchical approach for crop classification that leverages crop rotation history, optical remote sensing signals, and local crop distributions. We released a large harmonized time series dataset of 7M Feature Of Interest (FOI) for a total of around 35M FOI-year. We introduced a new dataset-agnostic method relying on data and expert knowledge for aggregating crops, allowing to evaluate a classifier on a specific region in a meaningful way. Finally, we propose a data-augmentation method to boost the results in early-season setting. Our approach achieved high accuracy without in-situ data from the test year and showed promising results for cross-domain generalization through transfer learning and few-shot learning experiments. Pre-training on a dataset improves domain adaptation between countries, allowing for cross-domain zero-shot learning and stabilization of the learning in a few-shot setting. Our approach can contribute significantly to agriculture management and policy monitoring.
## Author contributions
V.B. and M.C. conceptualized the study. V.B., M.C. and R.D. designed the methodology: M.C. extracted the RS time-series and the original cropcodes on every Feature Of Interest, and proposed to use neural nets on rotations. V.B. proposed hierarchical multimodal models, the hierarchical aggregation, the data-augmentation, the few-shot and transfer learning experiments, extracted the features and ran the experiments. M.S. provided the EuroCrops dataset, harmonisations and support. R.D. helped to formalize all the research. V.B., M.C. and R.D. wrote the draft of the paper. All the authors analyzed the results and wrote the final paper.
## Acknowledgements
The authors would like to thank Momtchil Iordanov for his support for visuals and Loic Landrieu for the useful comments on the manuscript. They also would like to thank the Big Data Analytics project for their continuous support. V.B. has been funded by the grant National Center for Artificial Intelligence CENIA FB210017, Basal ANID.
|
2310.11962 | Machine Learning for Staggered Difference-in-Differences and Dynamic
Treatment Effect Heterogeneity | We combine two recently proposed nonparametric difference-in-differences
methods, extending them to enable the examination of treatment effect
heterogeneity in the staggered adoption setting using machine learning. The
proposed method, machine learning difference-in-differences (MLDID), allows for
estimation of time-varying conditional average treatment effects on the
treated, which can be used to conduct detailed inference on drivers of
treatment effect heterogeneity. We perform simulations to evaluate the
performance of MLDID and find that it accurately identifies the true predictors
of treatment effect heterogeneity. We then use MLDID to evaluate the
heterogeneous impacts of Brazil's Family Health Program on infant mortality,
and find those in poverty and urban locations experienced the impact of the
policy more quickly than other subgroups. | Julia Hatamyar, Noemi Kreif, Rudi Rocha, Martin Huber | 2023-10-18T13:41:41Z | http://arxiv.org/abs/2310.11962v1 | # Machine learning for staggered difference-in-differences and dynamic treatment effect heterogeneity
###### Abstract.
We combine two recently proposed nonparametric difference-in-differences methods, extending them to enable the examination of treatment effect heterogeneity in the staggered adoption setting using machine learning. The proposed method, machine learning difference-in-differences (MLDID), allows for estimation of time-varying conditional average treatment effects on the treated, which can be used to conduct detailed inference on drivers of treatment effect heterogeneity. We perform simulations to evaluate the performance of MLDID and find that it accurately identifies the true predictors of treatment effect heterogeneity. We then use MLDID to evaluate the heterogeneous impacts of Brazil's Family Health Program on infant mortality, and find those in poverty and urban locations experienced the impact of the policy more quickly than other subgroups.
Julia Hatamyar, Corresponding Author e-mail: [email protected]. All code used in this paper is available at [https://github.com/jhatamyar/MLDID](https://github.com/jhatamyar/MLDID).
This work was funded by the UK Medical Research Council (Grant #: MR/T04487X/1).
1
Footnote 1: See A. C. Baker, Larcker, and Wang (2022) for an overview of various estimators for use with staggered DID |
2304.03221 | Degrees of interior polynomials and parking function enumerators | The interior polynomial of a directed graph is defined as the
$h^*$-polynomial of the graph's (extended) root polytope, and it displays
several attractive properties. Here we express its degree in terms of the
minimum cardinality of a directed join, and give a formula for the leading
coefficient. We present natural generalizations of these results to oriented
regular matroids; in the process we also give a facet description for the
extended root polytope of a regular oriented matroid.
By duality, our expression for the degree of the interior polynomial implies
a formula for the degree of the parking function enumerator of an Eulerian
directed graph (which is equivalent to the greedoid polynomial of the
corresponding branching greedoid). We extend that result to obtain the degree
of the parking function enumerator of an arbitrary rooted directed graph in
terms of the minimum cardinality of a certain type of feedback arc set. | Tamás Kálmán, Lilla Tóthmérész | 2023-04-06T16:53:26Z | http://arxiv.org/abs/2304.03221v2 | # Degrees of interior polynomials and parking function enumerators
###### Abstract.
The interior polynomial of a directed graph is defined as the \(h^{*}\)-polynomial of the graph's (extended) root polytope, and it displays several attractive properties. Here we express its degree in terms of the minimum cardinality of a directed join. We present a natural generalization of this result to oriented regular matroids; in the process we also give a facet description for the extended root polytope of a regular oriented matroid.
By duality, our expression for the degree of the interior polynomial implies a formula for the degree of the parking function enumerator of an Eulerian directed graph (which is equivalent to the greedoid polynomial of the corresponding branching greedoid). We extend that result further to obtain the degree of the parking function enumerator of an arbitrary rooted directed graph in terms of the minimum cardinality of a certain type of feedback arc set.
## 1. Introduction
### Summary of results
In this paper we compute the degrees of two interrelated (in fact, in an appropriate sense, dual) graph and matroid polynomials. In particular, we show that they can be expressed using common graph/matroid theoretic concepts.
The first type of polynomial we deal with is the interior polynomial of a directed graph [12]. This notion generalizes some well-studied graph invariants, most notably the specialization \(T(x,1)\) of the Tutte polynomial, as well as its extension to hypergraphs [9]. It is defined as the \(h^{*}\)-vector of the (extended) root polytope associated to the digraph; see Section 2 for detailed definitions. In this paper we point out that the degree of the interior polynomial has a meaningful connection to the graph structure.
In a directed graph, we call a cut _directed_ if all of its edges point toward the same shore. An edge set in a digraph is called a _directed join_, or _dijoin_ for short, if it intersects every directed cut.
**Theorem 1.1**.: _Let \(G\) be a connected digraph. Then the degree of the interior polynomial of \(G\) is equal to \(|V(G)|-1-\nu(G)\), where \(\nu(G)=\min\{|K|\mid K\subseteq E\text{ is a dijoin of }G\}\)._
Recall that by a theorem of Lucchesi and Younger [15], the quantity \(\nu(G)\) above is also the maximal number of edge-disjoint directed cuts in \(G\). Furthermore, if the underlying undirected graph of \(G\) is \(2\)-edge connected, then \(\nu(G)\) is the minimal number of edges whose reversal yields a strongly connected orientation of \(G\)[7, Proposition 9.7.1].
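For small examples, \(\nu(G)\) can be computed directly from the definitions. The following brute-force sketch (ours, purely illustrative; it assumes a simple digraph whose edges are distinct ordered pairs) lists the directed cuts and searches for a minimum hitting set:

```python
from itertools import combinations

def directed_cuts(vertices, edges):
    # a cut is directed if every crossing edge has its head in the shore V1
    cuts = []
    vs = list(vertices)
    for r in range(1, len(vs)):
        for V1 in map(set, combinations(vs, r)):
            cross = [(t, h) for (t, h) in edges if (t in V1) != (h in V1)]
            if cross and all(h in V1 for (t, h) in cross):
                cuts.append(set(cross))
    return cuts

def nu(vertices, edges):
    # minimum dijoin = smallest edge set meeting every directed cut
    cuts = directed_cuts(vertices, edges)
    return next(r for r in range(len(edges) + 1)
                for K in combinations(edges, r)
                if all(set(K) & c for c in cuts))

# directed triangle: v1 -> v2, v2 -> v3, v1 -> v3
print(nu({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]))   # 1
```

For the directed triangle used in Example 2.5 below, the two directed cuts are \(\{\overrightarrow{v_{1}v_{2}},\overrightarrow{v_{1}v_{3}}\}\) and \(\{\overrightarrow{v_{2}v_{3}},\overrightarrow{v_{1}v_{3}}\}\), and the single edge \(\overrightarrow{v_{1}v_{3}}\) is a minimum dijoin.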
**Remark 1.2**.: Let us explain a simple case that inspired Theorem 1.1. The interior polynomial was first introduced by Kalman [9], for hypergraphs. By the main
result of [12], that invariant coincides with the interior polynomial, in the sense of the present paper, of a semi-balanced orientation of the bipartite graph associated to the hypergraph; hence Theorem 1.1 in particular determines its degree.

We also generalize Theorem 1.1 to regular oriented matroids. Here directed cocircuits play the role of directed cuts, and a dijoin is an element set intersecting every directed cocircuit; see Section 4 for the precise definitions.

**Theorem 1.3**.: _Let \(M\) be a regular oriented matroid of rank \(r\) on the ground set \([m]\). Then the degree of the interior polynomial \(I_{M}\) is equal to \(r-\nu(M)\), where \(\nu(M)=\min\{|K|\mid K\subseteq[m]\text{ is a dijoin of }M\}\)._

The second polynomial that we investigate is the greedoid polynomial of Bjorner, Korte, and Lovasz [4], a one-variable generating function of the bases of a greedoid. (The degree of its leading
term depends only weakly on the greedoid, namely it is the size of the ground set minus the rank. For exact definitions, see Section 5.) In the special case of branching greedoids of Eulerian digraphs, the greedoid polynomial is related to the interior polynomial via duality. Let us sketch this connection. For rooted directed graphs, the greedoid polynomial of the induced branching greedoid is equivalent to the enumerator of graph parking functions [5] (also commonly called \(G\)-parking functions or generalized parking functions [18]) via a reversal of the sequence of the coefficients. Now, for Eulerian digraphs, the parking function enumerator agrees with the interior polynomial of the cographic matroid of the digraph [19]. Hence in this case the degree of the interior polynomial corresponds to the degree of the lowest term in the greedoid polynomial.
An edge set of a digraph is called a _feedback arc set_ if it intersects each directed cycle. Using this dual notion of a dijoin, Theorem 1.3 implies the following.
**Theorem 1.4**.: _The degree of the parking function enumerator of a connected Eulerian digraph \(G\) (with any root) is equal to \(|E(G)|-|V(G)|+1-\mathrm{minfas}(G)\), where \(\mathrm{minfas}(G)\) denotes the minimal cardinality of a feedback arc set in \(G\)._
_Equivalently, for the branching greedoid of \(G\) (with any root), the coefficient of \(x^{i}\) in the greedoid polynomial is zero for \(i=0,\ldots,\mathrm{minfas}(G)-1\), and nonzero for \(i=\mathrm{minfas}(G)\)._
We remark that, in connected Eulerian digraphs, not just the degree but each coefficient of the parking function enumerator/greedoid polynomial is independent of the root [17, 5]. None of this holds for general digraphs.
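Theorem 1.4 is easy to check by brute force on small Eulerian digraphs. The sketch below (ours; it takes as given the parking-function condition from [5, 18] — every nonempty set \(S\) of non-root vertices must contain a vertex \(v\) with \(f(v)\) strictly less than the number of edges entering \(v\) from outside \(S\) — and grades parking functions by \(\sum_{v}f(v)\)) verifies the degree formula on the bidirected triangle:

```python
from itertools import combinations, product

V = [0, 1, 2]                                          # root: vertex 0
E = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]   # bidirected triangle

def is_parking(f):
    # every nonempty S of non-root vertices needs some v in S with
    # f[v] < #(edges entering v from outside S)
    nonroot = [v for v in V if v != 0]
    for r in range(1, len(nonroot) + 1):
        for S in map(set, combinations(nonroot, r)):
            if not any(f[v] < sum(1 for t, h in E if h == v and t not in S)
                       for v in S):
                return False
    return True

parking = [f for f in (dict(zip((1, 2), vals)) for vals in product(range(3), repeat=2))
           if is_parking(f)]
degree = max(sum(f.values()) for f in parking)         # top degree of the enumerator

def acyclic(edge_idxs):
    # DFS cycle test on the subgraph spanned by the given edge indices
    adj = {v: [E[i][1] for i in edge_idxs if E[i][0] == v] for v in V}
    color = {v: 0 for v in V}                          # 0 new, 1 on stack, 2 done
    def dfs(v):
        color[v] = 1
        for w in adj[v]:
            if color[w] == 1 or (color[w] == 0 and dfs(w)):
                return True
        color[v] = 2
        return False
    return not any(color[v] == 0 and dfs(v) for v in V)

all_idx = set(range(len(E)))
minfas = next(r for r in range(len(E) + 1)
              for F in combinations(all_idx, r) if acyclic(all_idx - set(F)))

print(degree, len(E) - len(V) + 1 - minfas)            # 1 1, as Theorem 1.4 predicts
```

Here there are three parking functions, \((0,0)\), \((0,1)\) and \((1,0)\), so the enumerator is \(1+2q\) and its degree is \(1=6-3+1-3\), with \(\mathrm{minfas}=3\) forced by the three disjoint digons.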
We generalize Theorem 1.4 to branching greedoids of all rooted digraphs and, in a certain sense, to all greedoids. (These settings do not correspond to interior polynomials anymore.) To state our result on general directed branching greedoids, we define a rooted variant of a feedback arc set. For a graph \(G=(V,E)\) and edge set \(F\subset E\), we put \(G[F]=(V,F)\).
**Definition 1.5**.: Let \(G\) be a root-connected digraph with root \(s\) and edge set \(E\). We say that a set of edges \(F\subset E\) is an _\(s\)-connected feedback arc set_ if \(G[E-F]\) is an \(s\)-connected acyclic digraph. (Such a set \(F\) always exists.) We denote by \(\mathrm{minfas}(G,s)\) the minimum cardinality of an \(s\)-connected feedback arc set of \(G\).
With that, our formula is as follows.
**Theorem 1.6**.: _Let \(G=(V,E)\) be a root-connected digraph with root \(r\). Then the degree of the parking function enumerator of \(G\), rooted at \(r\), is equal to \(|E|-|V|+1-\mathrm{minfas}(G,r)\)._
_Equivalently, in the greedoid polynomial of the branching greedoid of \(G\) rooted at \(r\), the coefficients of \(x^{0},\ldots,x^{\mathrm{minfas}(G,r)-1}\) are zero, and the coefficient of \(x^{\mathrm{minfas}(G,r)}\) is nonzero._
**Remark 1.7**.: By [4, Theorem 6.10], the constant term of the greedoid polynomial of a (root-connected) rooted digraph \(G\) is zero if and only if \(G\) contains a directed cycle. Theorem 1.6 strengthens this statement. Indeed, (for an \(s\)-root-connected digraph \(G\)) we have \(\mathrm{minfas}(G,s)>0\) if and only if \(G\) contains a directed cycle.
For general greedoids, we show that in order to determine the degree of the lowest term of the greedoid polynomial, it is in fact enough to understand when a greedoid has nonzero constant term in its greedoid polynomial.
**Theorem 1.8**.: _Let \(X=\{E,\mathcal{F}\}\) be a greedoid of rank \(r\), and for a subset \(S\subset E\) let \(X|_{S}\) denote the greedoid obtained by restricting \(X\) to \(S\). Let_
\[k=\min\{|S|\mid S\subset E\text{ such that }\operatorname{rank}(X|_{E-S})=r\text{ and the constant term of the greedoid polynomial of }X|_{E-S}\text{ is nonzero}\}.\]
_Then in the greedoid polynomial of \(X\), the coefficient of \(x^{i}\) is zero for \(i=0,\ldots,k-1\), and the coefficient of \(x^{k}\) is nonzero._
In fact, Theorem 1.6 follows directly from Theorem 1.8 and the characterization of Bjorner, Korte, and Lovasz mentioned in Remark 1.7.
The paper is structured as follows. Section 2 contains our results in the most 'classical' case of directed graphs, including the proof of Theorem 1.1. Section 3 discusses the comparison of different orientations of the same graph. In Section 4 we generalize Section 2 to regular oriented matroids and prove Theorem 1.3. Finally, in Section 5 we turn to parking function enumerators and greedoid polynomials, and prove Theorems 1.4, 1.8, and 1.6.
### Acknowledgements
We are grateful to Andras Frank for stimulating conversations and for pointing out to us Theorem 9.6.12 of his book [7].
TK was supported by a Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research C (no. 17K05244).
LT was supported by the National Research, Development and Innovation Office of Hungary - NKFIH, grant no. 132488, by the Janos Bolyai Research Scholarship of the Hungarian Academy of Sciences, and by the UNKP-22-5 New National Excellence Program of the Ministry for Innovation and Technology, Hungary. This work was also partially supported by the Counting in Sparse Graphs Lendulet Research Group of the Alfred Renyi Institute of Mathematics.
### Graph notation
Let \(G=(V,E)\) be a directed graph (abbreviated to digraph throughout). A non-empty set of edges \(C^{*}\subseteq E(G)\) is called a _cut_ if there is a partition \(V_{0}\sqcup V_{1}=V(G)\) of the vertex set such that \(C^{*}\) is the set of edges connecting a vertex of \(V_{0}\) and a vertex of \(V_{1}\). The sets \(V_{0}\) and \(V_{1}\) are called the _shores_ of the cut. Note that in a connected graph, a cut uniquely determines its two shores. A cut \(C^{*}\) is called _elementary_ if it is minimal among cuts with respect to containment; this is equivalent to the condition that \(V_{0}\) and \(V_{1}\) both span (weakly) connected subgraphs. A cut \(C^{*}\) is _directed_ if either each edge of \(C^{*}\) leads from \(V_{0}\) to \(V_{1}\), or each edge of \(C^{*}\) leads from \(V_{1}\) to \(V_{0}\).
In a directed graph, we call a subgraph a _spanning tree_ if it is a spanning tree (i.e., connected and cycle-free) when forgetting the orientations. For a spanning tree \(T\) and an edge \(e\notin T\), the _fundamental cycle_ of \(e\) with respect to \(T\), denoted by \(C(T,e)\), is the unique cycle in \(T\cup e\). Similarly, for an edge \(e\in T\), the _fundamental cut_ of \(e\) with respect to \(T\) is the unique (elementary) cut in the complement of \(T-e\). We denote it by \(C^{*}(T,e)\).
A set of edges \(K\subseteq E(G)\) is called a _dijoin_ if for any directed cut \(C^{*}\) of \(G\) we have \(C^{*}\cap K\neq\emptyset\). We denote the cardinality of a minimal dijoin of \(G\) by \(\nu(G)\). A set of edges \(F\subseteq E(G)\) is called a _feedback arc set_ if for any directed cycle \(C\) we have \(C\cap F\neq\emptyset\). We denote the minimal cardinality of a feedback arc set of \(G\) by \(\operatorname{minfas}(G)\).
Let \(G\) be a digraph with a fixed root vertex \(s\). The graph \(G\) is said to be _root-connected_ if each vertex is reachable along a directed path from \(s\). We call a
subgraph \(A\) an _arborescence_ rooted at \(s\) if its underlying undirected graph is a tree and each vertex of \(A\) is reachable from \(s\) along a directed path in \(A\). A _spanning arborescence_ is an arborescence that contains each vertex of \(G\). We will denote the set of spanning arborescences of \(G\), rooted at \(s\), by \(\operatorname{Arb}(G,s)\). We note that a digraph \(G\), rooted at \(s\), has a spanning arborescence (rooted at \(s\)) if and only if it is \(s\)-root-connected.
For a digraph \(G=(V,E)\), and an edge set \(S\subseteq E\), we denote by \(G[S]\) the digraph with vertex set \(V\) and edge set \(S\).
## 2. The degree of the interior polynomial of a digraph
Let \(G=(V,E)\) be a directed graph. To an edge \(e=\overrightarrow{th}\in E\), let us associate the vector \(\mathbf{x}_{e}=\mathbf{1}_{h}-\mathbf{1}_{t}\in\mathbb{R}^{V}\). (Here \(t,h\in V\) and \(\mathbf{1}_{t},\mathbf{1}_{h}\in\mathbb{R}^{V}\) are the corresponding generators.)
**Definition 2.1**.: The _root polytope_ of a directed graph \(G=(V,E)\) is the convex hull
\[\mathcal{Q}_{G}=\operatorname{Conv}\{\,\mathbf{x}_{e}\mid e\in E\,\}\subset \mathbb{R}^{V}.\]
The _extended root polytope_ of \(G\) is
\[\tilde{\mathcal{Q}}_{G}=\operatorname{Conv}(\{\mathbf{0}\}\cup\{\,\mathbf{x} _{e}\mid e\in E\,\})\subset\mathbb{R}^{V}.\]
In the paper [12] we have already used this latter notation, although not the name, in the special case when \(G\) is a forest. Let us also recall the well known facts that for \(G=T\) a tree (or forest), both \(\mathcal{Q}_{T}\) and \(\tilde{\mathcal{Q}}_{T}\) are simplices (moreover, they are unimodular with respect to the lattice \(\mathbb{Z}^{V}\)).
Since we are about to define the interior polynomial of a digraph as the \(h^{*}\)-polynomial of its extended root polytope, let us recall that for any \(d\)-dimensional polytope \(Q\subset\mathbb{R}^{n}\) with vertices in \(\mathbb{Z}^{n}\), its _\(h^{*}\)-polynomial_\(\sum_{i=0}^{d}h_{i}^{*}t^{i}\), also commonly called the _\(h^{*}\)-vector_ of \(Q\), is defined by Ehrhart's identity
\[\sum_{i=0}^{d}h_{i}^{*}t^{i}=(1-t)^{d+1}\mathrm{Ehr}_{Q}(t),\quad\text{where} \quad\mathrm{Ehr}_{Q}(t)=\sum_{k=0}^{\infty}|(k\cdot Q)\cap\mathbb{Z}^{n}|\,t ^{k} \tag{2.1}\]
is the so called _Ehrhart series_ of \(Q\). We note that \(h_{0}^{*}=1\) whenever \(d\geq 0\), i.e., whenever \(Q\) is non-empty.
Intuitively, the \(h^{*}\)-polynomial can be thought of as a refinement of volume. Indeed, \(h^{*}(1)\) (that is, the sum of the coefficients) is equal to the normalized volume of the polytope, where by normalized we mean that the volume of a \(d\)-dimensional unimodular simplex is \(1\).
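Expanding \((1-t)^{d+1}\) in (2.1) and comparing coefficients gives the elementary inversion
\[h_{i}^{*}=\sum_{j=0}^{i}(-1)^{i-j}\binom{d+1}{i-j}\,|(j\cdot Q)\cap\mathbb{Z}^{n}|,\qquad 0\leq i\leq d,\]
so the \(h^{*}\)-vector is determined by the first \(d+1\) lattice-point counts; we record this because it makes small examples directly computable.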
Now we are in a position to introduce our object of study for this section.
**Definition 2.2** (Interior polynomial).: Let \(G\) be a directed graph. We call the \(h^{*}\)-polynomial of the extended root polytope \(\tilde{\mathcal{Q}}_{G}\) the _interior polynomial_ of \(G\), and denote it with \(I_{G}\).
**Remark 2.3**.: In the earlier papers [9, 10, 12], the interior polynomial was defined only for so-called semi-balanced digraphs, and as the \(h^{*}\)-polynomial of \(\mathcal{Q}_{G}\) instead of \(\tilde{\mathcal{Q}}_{G}\). As we will soon point out, for these graphs, the \(h^{*}\)-polynomials of \(\mathcal{Q}_{G}\) and \(\tilde{\mathcal{Q}}_{G}\) agree. Thus our current definition contains the previous one.
Let us repeat our main claim about the interior polynomial.
**Theorem 1.1**.: Let \(G\) be a connected digraph. The degree of the interior polynomial of \(G\) is equal to \(|V(G)|-1-\nu(G)\), where
\[\nu(G)=\min\{|K|\mid K\subseteq E\text{ is a dijoin of }G\}.\]
Before proving Theorem 1.1, we examine root polytopes in more detail. If \(G\) is connected, then the dimension of \(\mathcal{Q}_{G}\) is either \(|V|-1\) or \(|V|-2\). It is \(|V|-2\) if and only if \(G\) satisfies the following condition [12].
**Definition 2.4**.: [12] A directed graph \(G\) is _semi-balanced_ if there is a function \(\ell\colon V\to\mathbb{Z}\) such that we have \(\ell(h)-\ell(t)=1\) for each edge \(\overrightarrow{th}\) of \(G\). We call such a function \(\ell\) a _layering_ of \(G\).
We note that an alternative characterization of semi-balanced digraphs is that each cycle has the same number of edges going in the two directions around the cycle. This description was given as the definition in [12].
Now let us turn to the extended root polytope \(\tilde{\mathcal{Q}}_{G}\). This is the same as the root polytope of \(G\cup f\), where \(f\) is a loop edge attached to any vertex of \(G\). Indeed, in this case \(\mathbf{x}_{f}=\mathbf{0}\). Furthermore, if \(G\) has a directed cycle \(C\) then \(\frac{1}{|C|}\sum_{e\in C}\mathbf{x}_{e}=\mathbf{0}\), that is, \(\mathbf{0}\) is a point of \(\mathcal{Q}_{G}\). Hence if \(G\) is not acyclic, then \(\mathcal{Q}_{G}=\tilde{\mathcal{Q}}_{G}\).
On the other hand if \(G\) is semi-balanced, then \(\mathcal{Q}_{G}\) lies in the affine hyperplane \(\{\mathbf{x}\in\mathbb{R}^{V}\mid\ell\cdot\mathbf{x}=1\}\). (Here we view the layering \(\ell\) as a vector in \(\mathbb{R}^{V}\) and use the standard dot product.) Thus, \(\tilde{\mathcal{Q}}_{G}\) and \(\mathcal{Q}_{G}\) do not coincide but as \(\tilde{\mathcal{Q}}_{G}\) is just a cone (of the minimum possible height) over \(\mathcal{Q}_{G}\), their \(h^{*}\)-vectors still agree.
Hence if we defined the interior polynomial using \(\mathcal{Q}_{G}\) instead of the extended root polytope, that would only make a difference for acyclic, but not semi-balanced graphs. However, Theorem 1.1 would not hold if the \(h^{*}\)-polynomial of \(\mathcal{Q}_{G}\) replaced that of \(\tilde{\mathcal{Q}}_{G}\).
**Example 2.5**.: Consider the triangle \(G=(\{v_{1},v_{2},v_{3}\},\{\overrightarrow{v_{1}v_{2}},\overrightarrow{v_{2} v_{3}},\overrightarrow{v_{1}v_{3}}\})\). It is easy to check that \(h^{*}_{\mathcal{Q}_{G}}(x)=1\), while \(h^{*}_{\tilde{\mathcal{Q}}_{G}}(x)=x+1\). Moreover, we have \(\nu(G)=1\), whence \(|V(G)|-1-\nu(G)=1\). Thus indeed, \(h^{*}_{\mathcal{Q}_{G}}\) does not satisfy the degree formula, only \(h^{*}_{\tilde{\mathcal{Q}}_{G}}\) does.
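The computation in Example 2.5 can be replicated mechanically. The following brute-force sketch (ours, only for tiny polytopes; it assumes NumPy and SciPy are available and tests membership in a dilate via linear-programming feasibility) evaluates the inversion formula recorded after (2.1); one supplies \(d=\dim\tilde{\mathcal{Q}}_{G}=2\) by hand:

```python
import itertools
from math import comb

import numpy as np
from scipy.optimize import linprog

def in_dilate(V, p, k):
    # p lies in k*conv(rows of V) iff p = sum mu_i v_i, mu_i >= 0, sum mu_i = k
    m = V.shape[0]
    A_eq = np.vstack([V.T, np.ones((1, m))])
    b_eq = np.append(np.asarray(p, dtype=float), k)
    res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.status == 0

def ehrhart(V, k):
    # lattice points of the k-th dilate, counted over an integer bounding box
    if k == 0:
        return 1
    lo = np.floor(k * V.min(axis=0)).astype(int)
    hi = np.ceil(k * V.max(axis=0)).astype(int)
    box = itertools.product(*(range(a, b + 1) for a, b in zip(lo, hi)))
    return sum(in_dilate(V, p, k) for p in box)

def h_star(vertices, d):
    # h*_i = sum_j (-1)^(i-j) C(d+1, i-j) L(j), the inversion of (2.1)
    V = np.array(vertices, dtype=float)
    L = [ehrhart(V, k) for k in range(d + 1)]
    return [sum((-1) ** (i - j) * comb(d + 1, i - j) * L[j] for j in range(i + 1))
            for i in range(d + 1)]

# extended root polytope of the directed triangle: the three x_e plus the origin
verts = [(-1, 1, 0), (0, -1, 1), (-1, 0, 1), (0, 0, 0)]
print(h_star(verts, d=2))   # [1, 1, 0], i.e. h*(x) = x + 1
```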
We will need a description of \(\tilde{\mathcal{Q}}_{G}\) by linear inequalities (half-spaces). It requires the following notions.
**Definition 2.6**.: Let \(C^{*}\) be a cut in the graph \(G\) with shores \(V_{0}\) and \(V_{1}\). Let \(f_{C^{*}}\) be the linear functional with \(f_{C^{*}}(\mathbf{1}_{v})=1\) when \(v\in V_{1}\) and \(f_{C^{*}}(\mathbf{1}_{v})=0\) when \(v\in V_{0}\). If \(G\) is directed and \(C^{*}\) is a directed cut, we will always suppose that \(V_{1}\) is the shore containing the heads of the edges in the cut. We will refer to \(f_{C^{*}}\) as the _functional induced by the cut_\(C^{*}\).
**Definition 2.7**.: Let \(G=(V,E)\) be a directed graph. A function \(\ell\colon V\to\mathbb{Z}\) is called an _admissible layering_ of \(G\) if \(\ell(h)-\ell(t)\leq 1\) holds for each \(\overrightarrow{th}\in E\), and the edges with \(\ell(h)-\ell(t)=1\) form a (weakly) connected subgraph of \(G\).
We remark that any admissible layering of a semi-balanced graph is actually a layering in the sense of Definition 2.4, so we have not introduced anything new into that context.
Then, our description is as follows.
**Proposition 2.8**.: _The extended root polytope of any connected digraph \(G\) satisfies_
\[\tilde{\mathcal{Q}}_{G}=\left\{\mathbf{x}\in\mathbb{R}^{V}\;\Bigg{|}\;\begin{array}{ll}f_{C^{*}}(\mathbf{x})\geq 0&\text{for all elementary directed cuts $C^{*}$ of $G$}\\ \ell\cdot\mathbf{x}\leq 1&\text{for all admissible layerings $\ell$ of $G$}\\ \mathbf{1}\cdot\mathbf{x}=0&\end{array}\right\}.\]
Here \(\mathbf{1}=\sum_{v\in V}\mathbf{1}_{v}\in\mathbb{R}^{V}\). Note that if \(\ell\) is an admissible layering then so is \(\ell+m\cdot\mathbf{1}\) for all \(m\in\mathbb{Z}\). As is obvious from the Proposition, members of such equivalence classes of admissible layerings describe the same constraint for \(\tilde{\mathcal{Q}}_{G}\). If we chose a representative of each class, for example by assuming that \(\ell(v_{0})=0\) for some fixed vertex \(v_{0}\), then the number of constraints would become formally finite. (Indeed, there are only finitely many ways to choose the connected subgraph whose edges \(\overleftarrow{th}\) satisfy \(\ell(h)-\ell(t)=1\), and with \(\ell(v_{0})=0\) that subgraph already determines \(\ell\).)
Proposition 2.8 contains some previously published special cases, such as [10, Proposition 3.6] and [8, Theorem 3.1]. A very similar facet description of \(\mathcal{Q}_{G}\) was recently obtained by Numata et al. [16].
Proof of Proposition 2.8.: First note that each vector \(\mathbf{x}\in\tilde{\mathcal{Q}}_{G}\) satisfies the conditions of the right hand side. Clearly \(\mathbf{1}\cdot\mathbf{x}_{e}=0\) for each \(e\in E(G)\) and \(\mathbf{1}\cdot\mathbf{0}=0\), whence \(\mathbf{1}\cdot\mathbf{x}=0\) for each \(\mathbf{x}\in\tilde{\mathcal{Q}}_{G}\). Similarly for any admissible layering \(\ell\), by definition we have \(\ell\cdot\mathbf{x}_{e}\leq 1\) for each \(e\in E\), which in addition to the obvious \(\ell\cdot\mathbf{0}=0\) implies \(\ell\cdot\mathbf{x}\leq 1\) for each \(\mathbf{x}\in\tilde{\mathcal{Q}}_{G}\). Finally let \(C^{*}\) be any directed cut. Then by definition, the induced functional \(f_{C^{*}}\) is such that \(f_{C^{*}}(\mathbf{x}_{e})=0\) for \(e\notin C^{*}\) and \(f_{C^{*}}(\mathbf{x}_{e})=1\) for \(e\in C^{*}\); moreover \(f_{C^{*}}(\mathbf{0})=0\). Hence each \(\mathbf{x}\in\tilde{\mathcal{Q}}_{G}\) satisfies \(f_{C^{*}}(\mathbf{x})\geq 0\).
Conversely, we show that any element \(\mathbf{x}\) of the right hand side belongs to \(\tilde{\mathcal{Q}}_{G}\). From the first part of the proof, and the fact that \(\tilde{\mathcal{Q}}_{T}\) is a \((|V|-1)\)-dimensional simplex for any spanning tree \(T\subset G\), it is already clear that \(\dim\tilde{\mathcal{Q}}_{G}=|V|-1\). The condition \(\mathbf{1}\cdot\mathbf{x}=0\) says that \(\mathbf{x}\) is in the same hyperplane of \(\mathbb{R}^{V}\) as \(\tilde{\mathcal{Q}}_{G}\); in the rest of the proof we work relative to this hyperplane.
Consider an arbitrary facet \(F\) of \(\tilde{\mathcal{Q}}_{G}\). It suffices to show that \(\mathbf{x}\) lies either in the hyperplane of \(F\) or on the same side of the hyperplane as the interior of \(\tilde{\mathcal{Q}}_{G}\).
The facet \(F\) needs to contain \(|V|-1\) affine independent vertices of \(\tilde{\mathcal{Q}}_{G}\). If \(F\) does not contain \(\mathbf{0}\), then those are \(|V|-1\) affine independent vertices of type \(\mathbf{x}_{e}\), which means that the corresponding edges \(e\) form a spanning tree \(T\). The tree \(T\), along with the orientations of its edges, determines a layering \(\ell\colon V\to\mathbb{Z}\) so that \(\ell\cdot\mathbf{x}_{e}=1\) for all \(e\in T\). This layering is necessarily admissible, for otherwise there would be an edge \(f\) with \(\ell\cdot\mathbf{x}_{f}\geq 2\), meaning that the vertices \(\mathbf{0}\) and \(\mathbf{x}_{f}\) lie on opposite sides of the hyperplane of \(F\). Therefore the hyperplane of the facet is \(\{\mathbf{x}\mid\ell\cdot\mathbf{x}=1\}\) for the admissible layering \(\ell\), and as \(\ell\cdot\mathbf{0}=0\), the polytope \(\tilde{\mathcal{Q}}_{G}\) lies in the half-space \(\{\mathbf{p}\mid\ell\cdot\mathbf{p}\leq 1\}\), whence indeed the hyperplane of \(F\) does not separate \(\mathbf{x}\) from \(\tilde{\mathcal{Q}}_{G}\).
If \(F\) contains \(\mathbf{0}\), then it must additionally contain \(|V|-2\) affine independent vectors of the form \(\mathbf{x}_{e}\). Here the corresponding edges \(e\) form a two-component forest in \(G\), whose components give rise to an elementary cut \(C^{*}\). That in turn induces the functional \(f_{C^{*}}\) that vanishes along \(F\). This and \(F\) being a facet imply that \(C^{*}\) is a directed cut, for otherwise there would exist vectors \(\mathbf{x}_{e}\) on both sides of its kernel. As the vectors \(\mathbf{x}_{e}\) for \(e\in C^{*}\) have \(f_{C^{*}}(\mathbf{x}_{e})=1\), clearly \(\tilde{\mathcal{Q}}_{G}\) lies in the
half-space \(\{\mathbf{p}\mid f_{C^{*}}(\mathbf{p})\geq 0\}\). Thus again, the hyperplane of \(F\) does not separate \(\mathbf{x}\) from \(\tilde{\mathcal{Q}}_{G}\).
Note that we can also conclude from the previous proof that (within the subspace \(\{\mathbf{x}\mid\mathbf{1}\cdot\mathbf{x}=0\}\)), the hyperplanes \(\{\mathbf{x}\mid f_{C^{*}}(\mathbf{x})=0\}\) for elementary directed cuts \(C^{*}\) and \(\{\mathbf{x}\mid\ell\cdot\mathbf{x}=1\}\) for admissible layerings \(\ell\) all contain facets of \(\tilde{\mathcal{Q}}_{G}\), and collectively these are all the facets of \(\tilde{\mathcal{Q}}_{G}\).
A key ingredient in proving Theorem 1.1 will be the following corollary of Ehrhart-Macdonald reciprocity.
**Theorem 2.9**.: _[_1_, Theorem 4.5]_ _Let \(P\subset\mathbb{R}^{n}\) be a \(d\)-dimensional (\(d\geq 0\)) lattice polytope with \(h^{*}\)-polynomial \(h_{d}x^{d}+\cdots+h_{1}x+1\). Then \(h_{d}=\cdots=h_{k+1}=0\) and \(h_{k}\neq 0\) if and only if \((d-k+1)P\) is the smallest integer dilate of \(P\) that contains a lattice point in its relative interior._
Notice here that \((d+1)P\) certainly contains an interior lattice point. The degree of the \(h^{*}\)-polynomial of \(P\) tells us exactly 'how much sooner' such a point occurs.
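By way of illustration, consider again the directed triangle of Example 2.5 (a worked instance we include for concreteness). There \(d=2\) and \(h^{*}_{\tilde{\mathcal{Q}}_{G}}(x)=x+1\), so \(k=1\), and Theorem 2.9 predicts that the \((d-k+1)=2\)-nd dilate is the first one with a lattice point in its relative interior. Indeed, \(\tilde{\mathcal{Q}}_{G}\) is the parallelogram with vertices \(\mathbf{0}\), \(\mathbf{x}_{\overrightarrow{v_{1}v_{2}}}\), \(\mathbf{x}_{\overrightarrow{v_{2}v_{3}}}\) and \(\mathbf{x}_{\overrightarrow{v_{1}v_{3}}}=\mathbf{x}_{\overrightarrow{v_{1}v_{2}}}+\mathbf{x}_{\overrightarrow{v_{2}v_{3}}}\), whose only lattice points are its four vertices, while
\[\mathbf{x}_{\overrightarrow{v_{1}v_{3}}}=(-1,0,1)\]
is the unique lattice point in the relative interior of \(2\cdot\tilde{\mathcal{Q}}_{G}\).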
In our cases, \(\tilde{\mathcal{Q}}_{G}\) is a \((|V|-1)\)-dimensional polytope. Thus if we show that
\[(\nu(G)+1)\cdot\tilde{\mathcal{Q}}_{G}\]
is the smallest integer dilate of the extended root polytope that contains a lattice point in its interior, then by Theorem 2.9, it follows that the degree of \(I_{G}\) is indeed \(|V|-1-\nu(G)\). For this, we need the following basic connection between dijoins of \(G\) and interior points of \(\tilde{\mathcal{Q}}_{G}\).
**Proposition 2.10**.: _If a point \(\mathbf{p}\in\tilde{\mathcal{Q}}_{G}\) is in the relative interior of \(\tilde{\mathcal{Q}}_{G}\) then there exists a cycle-free dijoin \(K\) of \(G\) such that \(\mathbf{p}=\sum_{e\in K}\lambda_{e}\mathbf{x}_{e}\), where \(\lambda_{e}>0\) for each \(e\in K\) and \(\sum_{e\in K}\lambda_{e}\leq 1\). Moreover, if \(K\) is a minimal cardinality dijoin, then \(\sum_{e\in K}\lambda_{e}<1\)._
_Conversely, if \(\mathbf{p}=\sum_{e\in K}\lambda_{e}\mathbf{x}_{e}\), for a dijoin \(K\) where \(\lambda_{e}>0\) for each \(e\in K\) and \(\sum_{e\in K}\lambda_{e}<1\), then \(\mathbf{p}\) is in the relative interior of \(\tilde{\mathcal{Q}}_{G}\)._
Proof.: By Proposition 2.8, a point \(\mathbf{p}\in\tilde{\mathcal{Q}}_{G}\) is in the interior of \(\tilde{\mathcal{Q}}_{G}\) if and only if \(f_{C^{*}}(\mathbf{p})>0\) for each directed cut \(C^{*}\), and \(\ell\cdot\mathbf{p}<1\) for each admissible layering \(\ell\). Recall that the functional induced by \(C^{*}\) satisfies \(f_{C^{*}}(\mathbf{x}_{e})=0\) whenever \(e\notin C^{*}\), and \(f_{C^{*}}(\mathbf{x}_{e})=1\) for each \(e\in C^{*}\).
Suppose that \(\mathbf{p}=\sum_{e\in K}\lambda_{e}\mathbf{x}_{e}\) with \(\sum_{e\in K}\lambda_{e}<1\) and \(\lambda_{e}>0\) for each \(e\in K\), where \(K\) is a dijoin. Then \(\mathbf{p}=\sum_{e\in K}\lambda_{e}\mathbf{x}_{e}+(1-\sum_{e\in K}\lambda_{e}) \mathbf{0}\), in particular \(\mathbf{p}\in\tilde{\mathcal{Q}}_{G}\). For any directed cut \(C^{*}\), we have \(f_{C^{*}}(\mathbf{p})=\sum_{e\in C^{*}\cap K}\lambda_{e}>0\), since the intersection is nonempty (by the definition of a dijoin), and the summands are all positive. Similarly, for any admissible layering \(\ell\) we have \(\ell\cdot\mathbf{x}_{e}\leq 1\) for any \(e\in E\), and \(\ell\cdot\mathbf{0}=0\). Hence \(\ell\cdot\mathbf{p}<1\). In other words, in this case \(\mathbf{p}\) is in the interior of \(\tilde{\mathcal{Q}}_{G}\).
In the other direction, we show that if \(\mathbf{p}\) is in the interior, then we can find a convex combination \(\mathbf{p}=\sum_{e\in K}\lambda_{e}\mathbf{x}_{e}+\mu\cdot\mathbf{0}\) where \(K\) is a cycle-free dijoin, \(\lambda_{e}>0\) for each \(e\in K\), and \(\mu\geq 0\). Take any convex combination \(\mathbf{p}=\sum_{e\in S}\lambda_{e}\mathbf{x}_{e}+\mu\cdot\mathbf{0}\) with \(\lambda_{e}>0\) for all \(e\in S\), where \(S\subset E\) is some subset. For a point of \(\tilde{\mathcal{Q}}_{G}\), we can always find such a formula (usually more than one). We claim that if \(\mathbf{p}\) is in the interior, then \(S\) is necessarily a dijoin (for any \(S\) that arises this way, that is, for any \(S\subset E\) so that the convex hull of the corresponding vectors plus possibly the origin contains an interior point of \(\tilde{\mathcal{Q}}_{G}\)). Indeed, suppose that \(S\) is disjoint from
a directed cut \(C^{*}\). Then \(f_{C^{*}}(\mathbf{p})=\sum_{e\in S}\lambda_{e}\cdot f_{C^{*}}(\mathbf{x}_{e})+\mu \cdot f(\mathbf{0})=0\), which would contradict \(\mathbf{p}\) being an interior point of \(\tilde{\mathcal{Q}}_{G}\).
We may also arrange that \(S\) be cycle-free. Assume that \(S\) contains a cycle \(C\). Denote the set edges of \(C\) oriented in one cyclic direction by \(C^{+}\) and the rest of the edges by \(C^{-}\), in such a way that \(|C^{-}|\geq|C^{+}|\). Let \(\delta=\min\{\lambda_{e}\mid e\in C^{-}\}\) and let \(S^{\prime}=\{e\in C^{-}\mid\lambda_{e}=\delta\}\). Then \(\mathbf{p}=\sum_{e\in S}\lambda_{e}^{\prime}\mathbf{x}_{e}+\mu^{\prime}\cdot \mathbf{0}\) where
\[\lambda_{e}^{\prime}=\left\{\begin{array}{ll}\lambda_{e}&\text{if }e\notin C,\\ \lambda_{e}-\delta&\text{if }e\in C^{-},\\ \lambda_{e}+\delta&\text{if }e\in C^{+},\end{array}\right.\]
and \(\mu^{\prime}=\mu+(|C^{-}|-|C^{+}|)\cdot\delta\). This is a new convex combination for \(\mathbf{p}\), where the coefficients are only positive for edges in \(S-S^{\prime}\), and potentially for \(\mathbf{0}\). Moreover, \(S-S^{\prime}\) is also a dijoin because any directed cut that intersects \(C\) necessarily intersects both \(C^{+}\) and \(C^{-}\). As \(S^{\prime}\subseteq C^{-}\), the set \(C^{+}\) is still contained within \(S-S^{\prime}\).
Finally, let \(S\) be a minimal cardinality dijoin. We show that in this case \(\mu>0\). By a theorem of Lucchesi and Younger [15], there exist \(|S|\) edge-disjoint directed cuts in \(G\). Let us denote them with \(C_{1}^{*},\ldots,C_{|S|}^{*}\). Notice that necessarily each of these cuts contains exactly one edge from \(S\). Indeed, as \(S\) is a dijoin, each cut \(C_{i}^{*}\) needs to contain at least one element of \(S\), but as the cuts are disjoint, and there are \(|S|\) of them, each needs to contain exactly one. This also means that each edge of \(S\) is contained by a cut \(C_{i}^{*}\).
Let \(f=f_{C_{1}^{*}}+\cdots+f_{C_{|S|}^{*}}\). Then, by the definition of the functional induced by a cut, and since our cuts are all directed, \(f(\mathbf{x_{e}})\) equals the number of cuts among \(C_{1}^{*},\ldots,C_{|S|}^{*}\) that contain \(e\). Since the cuts are edge-disjoint, this means that each edge has either \(f(\mathbf{x}_{e})=0\) or \(f(\mathbf{x}_{e})=1\), and of course \(f(\mathbf{0})=0\). Thus \(L=\{\mathbf{x}\mid f(\mathbf{x})=1\}\) is a supporting hyperplane for \(\tilde{\mathcal{Q}}_{G}\). As we argued that each element of \(S\) is in one of the cuts, we see that \(f(\mathbf{x}_{e})=1\) for each \(e\in S\). If we had \(\mu=0\), then \(\mathbf{p}\) would be a convex combination of some points, all of which lie in \(L\). But in such a case \(\mathbf{p}\) could not be an interior point.
The following Lemma is equivalent to the well-known fact that \(\tilde{\mathcal{Q}}_{F}\) is a unimodular simplex for any forest \(F\). See, e.g., [12, Corollary 3.6] for a proof.
**Lemma 2.11**.: _For a forest \(F\) and any positive integer \(s\), a point \(\mathbf{p}\in s\cdot\tilde{\mathcal{Q}}_{F}\) is a lattice point if and only if \(\mathbf{p}=\sum_{e\in F}\mu_{e}\mathbf{x}_{e}+\mu\cdot\mathbf{0}\), where \(\mu\) and each \(\mu_{e}\) are integer._
Proof of Theorem 1.1.: Let \(K\) be a dijoin of \(G\) with cardinality \(\nu(G)\). Then \(\mathbf{p}=\sum_{e\in K}\mathbf{x}_{e}+\mathbf{0}\) is a point of \((\nu(G)+1)\cdot\tilde{\mathcal{Q}}_{G}\), moreover, it clearly has integer coordinates. Now by Proposition 2.10 we have that \(\mathbf{q}=\frac{1}{\nu(G)+1}\mathbf{p}=\sum_{e\in K}\frac{1}{\nu(G)+1}\mathbf{x}_{e}\) is an interior point of \(\tilde{\mathcal{Q}}_{G}\), which implies that \(\mathbf{p}\) is also an interior point of \((\nu(G)+1)\cdot\tilde{\mathcal{Q}}_{G}\).

We also need to prove that for \(s<\nu(G)+1\), there is no interior lattice point in \(s\cdot\tilde{\mathcal{Q}}_{G}\). Suppose that there is an interior lattice point \(\mathbf{p}\in s\cdot\tilde{\mathcal{Q}}_{G}\) for some \(s\in\mathbb{Z}_{>0}\) and consider \(\mathbf{q}=\frac{1}{s}\mathbf{p}\), which is an interior point of \(\tilde{\mathcal{Q}}_{G}\). Then by Proposition 2.10 there is a cycle-free dijoin \(K\) such that \(\mathbf{q}=\sum_{e\in K}\lambda_{e}\mathbf{x}_{e}+\mu\cdot\mathbf{0}\), where \(\mu>0\), \(\lambda_{e}>0\) for each \(e\in K\), and \(\sum_{e\in K}\lambda_{e}+\mu=1\).

Now we may apply Lemma 2.11 to \(s\), \(K\), and \(\mathbf{p}=\sum_{e\in K}s\lambda_{e}\mathbf{x}_{e}+s\mu\mathbf{0}\). This tells us that for \(\mathbf{p}\) to be an integer vector, \(s\lambda_{e}\) needs to be a positive integer for each \(e\in K\), and \(s\mu\) is also a positive integer. Hence altogether, we have \(s=\sum_{e\in K}s\lambda_{e}+s\mu\geq|K|+1\geq\nu(G)+1\).
## 3. Degree-minimizing orientations
Let \(\mathcal{G}\) be an undirected graph. It is natural to ask about the relationship between interior polynomials of different orientations of \(\mathcal{G}\).
In [11] we looked at a special case of this problem, and considered the semi-balanced orientations of a bipartite graph \(\mathcal{G}\) with partite classes \(U\) and \(W\). Any bipartite graph \(\mathcal{G}\) has some (typically, many) semi-balanced orientations, but there are two special ones among them: The one where each edge is oriented from \(U\) to \(W\), and the one where each edge is oriented from \(W\) to \(U\). It is easy to see that these orientations are indeed semi-balanced. We call them the _standard orientations_ of \(\mathcal{G}\). The root polytopes of the two standard orientations are isometric, as they are reflections of each other. In particular, their interior polynomials coincide.
After looking at several examples, in [11] we conjectured that among all semi-balanced orientations of \(\mathcal{G}\), the standard orientations minimize every coefficient of the interior polynomial. See [12, Example 6.5] for some concrete instances of this phenomenon. With Theorem 1.1 in hand, we are able to prove a weakened version of the conjecture, showing that the degree of the interior polynomial is minimized for the standard orientation. In fact, we can prove this among _all_ orientations.
**Theorem 3.1**.: _Let \(\mathcal{G}\) be a connected, undirected bipartite graph with partite classes \(U\) and \(W\). Then among all orientations of \(\mathcal{G}\), the degree of the interior polynomial is minimized by the standard orientations._
Proof.: Let \(G\) be an arbitrary orientation of \(\mathcal{G}\). By the Lucchesi-Younger theorem, the maximal number of disjoint directed cuts in \(G\) is equal to \(\nu(G)\), the minimal cardinality of a dijoin in \(G\).
As the degree of \(I_{G}\) is equal to \(|V(G)|-1-\nu(G)\), the degree of \(I_{G}\) is minimized for those orientations of \(\mathcal{G}\) that maximize the number of disjoint directed cuts.
Let \(c(\mathcal{G})\) denote the maximal number of disjoint cuts in \(\mathcal{G}\). Clearly, for any orientation \(G\) of \(\mathcal{G}\), we have \(\nu(G)\leq c(\mathcal{G})\), since if we take \(\nu(G)\) disjoint directed cuts in \(G\), those correspond to disjoint cuts in \(\mathcal{G}\).
On the other hand, [7, Theorem 9.6.12] claims that the standard orientations have \(c(\mathcal{G})\) disjoint directed cuts. Hence they do maximize \(\nu\) among all orientations of \(\mathcal{G}\), as claimed.
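This optimization can be carried out exhaustively on tiny examples. The sketch below (ours, for intuition only; it re-implements the brute-force \(\nu\) computation sketched in Section 1) scans all \(2^{4}\) orientations of the \(4\)-cycle with partite classes \(\{u_{1},u_{2}\}\) and \(\{w_{1},w_{2}\}\) and confirms that the maximum of \(\nu\), namely \(2\), is attained by the standard orientations (among others):

```python
from itertools import combinations, product

und = [(0, 2), (2, 1), (1, 3), (3, 0)]   # 4-cycle u1 w1 u2 w2 (u's: 0, 1; w's: 2, 3)
Vset = {0, 1, 2, 3}

def nu(edges):
    # minimum dijoin size of the orientation given by `edges`
    cuts = []
    for r in range(1, len(Vset)):
        for V1 in map(set, combinations(Vset, r)):
            cross = [(t, h) for (t, h) in edges if (t in V1) != (h in V1)]
            if cross and all(h in V1 for (t, h) in cross):
                cuts.append(set(cross))
    return next(r for r in range(len(edges) + 1)
                for K in combinations(edges, r)
                if all(set(K) & c for c in cuts))

best = max(nu([(a, b) if flip else (b, a) for (a, b), flip in zip(und, o)])
           for o in product([0, 1], repeat=4))
print(best)   # 2: attained by the standard orientations (and some others)
```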
Concrete examples (e.g., [12, Example 6.5]) show that typically, there are some non-standard orientations that also minimize the degree of the interior polynomial. It would be interesting to give a characterization for all the other orientations that attain the minimal degree.
For non-bipartite graphs, we have no candidates for the degree-minimizing orientations. Yet, based on computer calculations, we make the following conjecture.
**Conjecture 3.2**.: For any undirected graph, there exists an orientation that coefficientwise minimizes the \(h^{*}\)-polynomial of the extended root polytope among all orientations. Also, there exists an orientation that coefficientwise maximizes the \(h^{*}\)-polynomial of the extended root polytope among all orientations.
## 4. A generalization to regular matroids
It turns out that many properties of the root polytopes of digraphs extend word-by-word to regular oriented matroids, moreover, in some applications, one needs this more general case (see, for example, Section 5 or [19]). Here we briefly recall
the notion of a regular matroid, along with ways of orienting such a structure, and then generalize the results of the last section to this context.
From among the many equivalent characterizations of the class of regular matroids, we will use the following: A matroid is _regular_ if it can be represented by the column vectors of a totally unimodular matrix. Here a matrix is _totally unimodular_ if each of its subdeterminants is either \(0\), \(-1\), or \(1\).
More concretely, let \(A\) be an \(n\times m\) totally unimodular matrix with columns \(\mathbf{a}_{1},\ldots,\mathbf{a}_{m}\). The ground set of our matroid is going to be \([m]\). A set of elements \(C=\{i_{1},\ldots,i_{s}\}\subset[m]\) is a _circuit_ if the corresponding columns \(\mathbf{a}_{i_{1}},\ldots,\mathbf{a}_{i_{s}}\) are minimally linearly dependent (over \(\mathbb{R}\)). Total unimodularity implies that in this case, the coefficients of the linear relation can be chosen from \(\{-1,1\}\). This fact gives rise to an oriented matroid structure, as follows. For a circuit \(C\), we set \(C^{+}\) to be the subset of \(C\) with positive coefficients, and \(C^{-}\) to be the subset of \(C\) with negative coefficients. (Obviously, \(C^{+}\) and \(C^{-}\) might switch roles but the partition is well-defined because of the minimality of \(C\).) By a _regular oriented matroid_, we mean a pair \(([m],\mathcal{C})\), where \(\mathcal{C}\) is the system of pairs \((C^{+},C^{-})\) defined as above from a totally unimodular matrix \(A\).
Let us stress that a given regular oriented matroid may have several different totally unimodular representing matrices. (For example, row elimination steps performed on \(A\) do not change the oriented matroid.)
One important example of regular oriented matroids is that of directed graphs. To a directed graph, one can associate a regular oriented matroid using its (directed) vertex-edge incidence matrix. The circuits of this matroid correspond to the cycles of the graph, and the partition \(C=C^{+}\sqcup C^{-}\) of a circuit is to edges of the cycle pointing in the two cyclic directions.
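For intuition, total unimodularity can be verified naively on small matrices by checking all square subdeterminants (an exponential-time sketch of ours, fine for toy cases):

```python
from itertools import combinations
import numpy as np

def totally_unimodular(A):
    # brute force: every square subdeterminant must lie in {-1, 0, 1}
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    return all(round(np.linalg.det(A[np.ix_(rows, cols)])) in (-1, 0, 1)
               for k in range(1, min(n, m) + 1)
               for rows in combinations(range(n), k)
               for cols in combinations(range(m), k))

# incidence matrix of the directed triangle of Example 2.5
# (rows v1, v2, v3; columns x_e for e = v1v2, v2v3, v1v3)
A = [[-1,  0, -1],
     [ 1, -1,  0],
     [ 0,  1,  1]]
print(totally_unimodular(A))   # True
```

Incidence matrices of digraphs, such as the one above, are totally unimodular in general.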
We call a subset \(C^{*}\subset[m]\) a _cocircuit_ if there is a linear functional \(h\colon\mathbb{R}^{n}\to\mathbb{R}\), with kernel the hyperplane \(H\subset\mathbb{R}^{n}\), such that \(\mathbf{a}_{k}\in H\) if \(k\notin C^{*}\) and \(\mathbf{a}_{k}\notin H\) if \(k\in C^{*}\); moreover, \(C^{*}\) is minimal with respect to this property. Cocircuits generalize elementary cuts of graphs. By the total unimodularity of \(A\), we may suppose that \(h(\mathbf{a}_{k})\in\{0,1,-1\}\) for each \(k\in[m]\). We say that \(k\in(C^{*})^{+}\) if \(h(\mathbf{a}_{k})=1\) and \(k\in(C^{*})^{-}\) if \(h(\mathbf{a}_{k})=-1\). Again, \((C^{*})^{+}\) and \((C^{*})^{-}\) can switch roles, but the partition \(C^{*}=(C^{*})^{+}\sqcup(C^{*})^{-}\) is well-defined.
A cocircuit is called _directed_ if either \((C^{*})^{+}\) or \((C^{*})^{-}\) is empty. In this case we will always suppose that \((C^{*})^{-}\) is the empty part. A set \(K\subset[m]\) is called a _dijoin_ if \(K\) intersects each directed cocircuit.
We call a regular oriented matroid _co-Eulerian_ if \(|C^{+}|=|C^{-}|\) for each circuit \(C\). For graphic oriented matroids, being co-Eulerian is equivalent to the orientation of the graph being semi-balanced.
If \(A\) is a totally unimodular matrix with columns \(\mathbf{a}_{1},\ldots,\mathbf{a}_{m}\), then the _root polytope_\(\mathcal{Q}_{A}\) is defined as \(\mathcal{Q}_{A}=\operatorname{Conv}\{\mathbf{a}_{1},\ldots,\mathbf{a}_{m}\}\), and the _extended root polytope_ is
\[\tilde{\mathcal{Q}}_{A}=\operatorname{Conv}\{\mathbf{0},\mathbf{a}_{1},\ldots, \mathbf{a}_{m}\}.\]
It turns out that if \(A\) and \(A^{\prime}\) are two totally unimodular matrices representing the same oriented matroid \(M\), then the \(h^{*}\)-polynomials of \(\tilde{\mathcal{Q}}_{A}\) and \(\tilde{\mathcal{Q}}_{A^{\prime}}\) are the same [19]. (We note that [19] proves this for \(\mathcal{Q}_{A}\) and \(\mathcal{Q}_{A^{\prime}}\), but a straightforward modification of the argument yields the result also for \(\tilde{\mathcal{Q}}_{A}\) and \(\tilde{\mathcal{Q}}_{A^{\prime}}\).) Hence the \(h^{*}\)-polynomial of \(\tilde{\mathcal{Q}}_{A}\) is an invariant of the regular oriented matroid \(M\), which we call the _interior polynomial_ and denote by \(I_{M}\). Note that the orientation of the matroid
matters: if we keep the (unoriented) matroid structure, but change the orientation, then the interior polynomial might change. (This is true even for graphs, cf. [12, Example 6.5].)
**Theorem 1.3**.: Let \(M\) be a regular oriented matroid of rank \(r\). Then the degree of \(I_{M}\) is equal to \(r-\nu(M)\), where
\[\nu(M)=\min\{|K|\mid K\subseteq[m]\text{ is a dijoin of }M\}.\]
The proof proceeds through the same steps as in the graph case. First, we again need a facet description for \(\tilde{\mathcal{Q}}_{A}\).
Let us first examine how facets of \(\tilde{\mathcal{Q}}_{A}\) not containing the origin look. Take a maximal affine independent set of vectors \(\mathbf{a}_{i_{1}},\dots,\mathbf{a}_{i_{s}}\) along such a facet. Then together with \(\mathbf{0}\) they form a maximal affine independent set among the generators of \(\tilde{\mathcal{Q}}_{A}\). This happens if and only if \(\mathbf{a}_{i_{1}},\dots,\mathbf{a}_{i_{s}}\) are a maximal linearly independent set, that is, the corresponding elements form a basis in the matroid (and thus \(s=r\)). As the hyperplane of our facet does not pass through \(\mathbf{0}\), we can choose a normal vector \(\ell\) such that \(\ell\cdot\mathbf{a}_{i_{j}}=1\) for each \(j=1,\dots,r\). Since \(\ell\cdot\mathbf{0}=0\), this implies \(\ell\cdot\mathbf{a}_{i}\leq 1\) for each \(i\in[m]\).
Referring to the above, let us call a vector \(\ell\) an _admissible vector_ if \(\ell\cdot\mathbf{a}_{i}\leq 1\) for each \(i\in[m]\) and the elements for which equality holds form a full rank set in the matroid.
**Proposition 4.1**.: _Let \(A\in\mathbb{R}^{n\times m}\) be a totally unimodular matrix, representing a co-Eulerian regular oriented matroid \(M\). Let \(R\) be the linear span of the columns \(\mathbf{a}_{1},\dots,\mathbf{a}_{m}\) of \(A\). Then the extended root polytope satisfies_
\[\tilde{\mathcal{Q}}_{A}=\left\{\mathbf{x}\in R\;\Bigg{|}\;\begin{array}{ll}h_{C^{*}}(\mathbf{x})\geq 0&\text{for all directed cocircuits $C^{*}$ of $M$}\\ \ell\cdot\mathbf{x}\leq 1&\text{for all admissible vectors $\ell$ of $M$}\end{array}\right\}. \tag{4.1}\]
Here \(h_{C^{*}}\) is the functional that appears in the definition of a cocircuit and we remark that our conventions guarantee that the restriction of \(h_{C^{*}}\) to \(R\) is determined by \(C^{*}\). Similarly, even though the number of admissible vectors is typically infinite, there are only finitely many possibilities for the restriction of \(\mathbf{x}\mapsto\ell\cdot\mathbf{x}\) to \(R\).
Proof.: First note that each vector \(\mathbf{x}\in\tilde{\mathcal{Q}}_{A}\) also belongs to the right hand side. It suffices to check this for the generators of the convex hull: For any admissible vector \(\ell\), by definition \(\ell\cdot\mathbf{a}_{i}\leq 1\) for each \(i\in[m]\), and \(\ell\cdot\mathbf{0}=0\leq 1\), whence \(\ell\cdot\mathbf{x}\leq 1\) for each \(\mathbf{x}\in\tilde{\mathcal{Q}}_{A}\). Let \(C^{*}\) be a directed cocircuit. Then by definition, the corresponding functional \(h_{C^{*}}\) is such that \(h_{C^{*}}(\mathbf{a}_{i})=0\) for \(i\notin C^{*}\) and \(h_{C^{*}}(\mathbf{a}_{i})=1>0\) for \(i\in C^{*}\), moreover, \(h_{C^{*}}(\mathbf{0})=0\). Hence each \(\mathbf{x}\in\tilde{\mathcal{Q}}_{A}\) satisfies \(h_{C^{*}}(\mathbf{x})\geq 0\).
Conversely, we show that any vector \(\mathbf{y}\), satisfying the conditions of the right hand side, belongs to \(\tilde{\mathcal{Q}}_{A}\). Take any facet \(F\) of \(\tilde{\mathcal{Q}}_{A}\). If it does not contain \(\mathbf{0}\), then by the argument before the statement of the proposition, \(F\) lies in the hyperplane \(\{\mathbf{x}\mid\ell\cdot\mathbf{x}=1\}\) for an admissible vector \(\ell\), moreover, \(\tilde{\mathcal{Q}}_{A}\) is a subset of \(\{\mathbf{x}\in\mathbb{R}^{n}\mid\ell\cdot\mathbf{x}\leq 1\}\), wherefore the hyperplane of \(F\) does not separate \(\mathbf{y}\) from \(\tilde{\mathcal{Q}}_{A}\).
If \(F\) contains \(\mathbf{0}\), then it must additionally contain a set of \(r-1\) linearly independent vectors \(S=\{\mathbf{a}_{i_{1}},\dots,\mathbf{a}_{i_{r-1}}\}\), where \(r=\dim\tilde{\mathcal{Q}}_{A}=\operatorname{rank}(M)\). Let \(U\subset\{\mathbf{a}_{1},\dots,\mathbf{a}_{m}\}-S\) be the set of vectors that do not lie along \(F\). Then \(C^{*}=\{i\in[m]\mid\mathbf{a}_{i}\in U\}\) is a cocircuit, with \(\{\mathbf{x}\mid h_{C^{*}}(\mathbf{x})=0\}\) being the linear span of \(S\). As \(F\) is a facet, all vectors \(\mathbf{a}_{i}\) must lie on one side of the span of \(S\). Thus \(C^{*}\) must be a directed cocircuit, and \(\{\mathbf{x}\mid h_{C^{*}}(\mathbf{x})=0\}\) supports \(\tilde{\mathcal{Q}}_{A}\) along \(F\). Again, we conclude that the hyperplane of \(F\) does not separate \(\mathbf{y}\) from \(\tilde{\mathcal{Q}}_{A}\).
Altogether, if a vector \({\bf y}\) belongs to the right hand side of (4.1), then it is on the appropriate side of each facet-defining hyperplane, whence \({\bf y}\in\tilde{\mathcal{Q}}_{A}\).
Next, we elucidate the connection of dijoins to the geometry of the extended root polytope.
**Proposition 4.2**.: _If a point \({\bf p}\in\tilde{\mathcal{Q}}_{A}\) is in the relative interior of \(\tilde{\mathcal{Q}}_{A}\), then there exists a circuit-free dijoin \(K\) of \(M\) (the oriented regular matroid induced by the totally unimodular matrix \(A=[{\bf a}_{1}\cdots{\bf a}_{m}]\)) such that \({\bf p}=\sum_{k\in K}\lambda_{k}{\bf a}_{k}\), where \(\lambda_{k}>0\) for each \(k\in K\) and \(\sum_{k\in K}\lambda_{k}\leq 1\). Moreover, if \(K\) is a minimal cardinality dijoin, then \(\sum_{k\in K}\lambda_{k}<1\)._
_Conversely, if \({\bf p}=\sum_{k\in K}\lambda_{k}{\bf a}_{k}\) for a dijoin \(K\) where \(\lambda_{k}>0\) for each \(k\in K\) and \(\sum_{k\in K}\lambda_{k}<1\), then \({\bf p}\) is in the relative interior of \(\tilde{\mathcal{Q}}_{A}\)._
Proof.: By Proposition 4.1, a point \({\bf p}\in\tilde{\mathcal{Q}}_{A}\) is in the interior of \(\tilde{\mathcal{Q}}_{A}\) if and only if \(h_{C^{*}}({\bf p})>0\) for each directed cocircuit \(C^{*}\), and \(\ell\cdot{\bf p}<1\) for each admissible vector \(\ell\). Recall that the functional induced by \(C^{*}\) satisfies \(h_{C^{*}}({\bf a}_{i})=0\) whenever \(i\notin C^{*}\), and \(h_{C^{*}}({\bf a}_{i})=1\) for each \(i\in C^{*}\). We start by showing the last claim of the Proposition.

Suppose that \({\bf p}=\sum_{k\in K}\lambda_{k}{\bf a}_{k}\) with \(\sum_{k\in K}\lambda_{k}<1\) and \(\lambda_{k}>0\) for each \(k\in K\), where \(K\) is a dijoin. Then \({\bf p}=\sum_{k\in K}\lambda_{k}{\bf a}_{k}+(1-\sum_{k\in K}\lambda_{k}){\bf 0}\), in particular \({\bf p}\in\tilde{\mathcal{Q}}_{A}\). For any directed cocircuit \(C^{*}\), we have \(h_{C^{*}}({\bf p})=\sum_{k\in C^{*}\cap K}\lambda_{k}>0\), since the intersection is nonempty (by the definition of a dijoin), and the summands are all positive. Moreover, for any admissible vector \(\ell\), we have \(\ell\cdot{\bf a}_{i}\leq 1\) for any \(i\in[m]\) and \(\ell\cdot{\bf 0}=0\). Hence \(\ell\cdot{\bf p}<1\). In other words, in this case \({\bf p}\) is in the interior of \(\tilde{\mathcal{Q}}_{A}\).
In the other direction, we show that if \({\bf p}\) is in the interior, then we can find a convex combination
\[{\bf p}=\sum_{k\in K}\lambda_{k}{\bf a}_{k}+\mu\cdot{\bf 0}, \tag{4.2}\]
where \(K\) is a circuit-free dijoin, \(\lambda_{k}>0\) for each \(k\in K\), and \(\mu\geq 0\). Take any convex combination \({\bf p}=\sum_{i\in S}\lambda_{i}{\bf a}_{i}+\mu\cdot{\bf 0}\), where \(S\subset[m]\) and \(\lambda_{i}>0\) for all \(i\in S\). For a point of \(\tilde{\mathcal{Q}}_{A}\), we can always find such a formula (usually more than one). We claim that if \({\bf p}\) is in the interior, then \(S\) is necessarily a dijoin (for any \(S\) that arises this way). Indeed, suppose that \(S\) is disjoint from a directed cocircuit \(C^{*}\). Then \(h_{C^{*}}({\bf p})=\sum_{i\in S}\lambda_{i}\cdot h_{C^{*}}({\bf a}_{i})=0\), which would contradict \({\bf p}\) being an interior point of \(\tilde{\mathcal{Q}}_{A}\).
We may also assume that \(S\) contains no circuit. Indeed, suppose that \(S\) contains the circuit \(C\), and suppose (without loss of generality) that \(|C^{-}|\geq|C^{+}|\). Let \(\delta=\min\{\lambda_{i}\mid i\in C^{-}\}\) and let \(S^{\prime}=\{i\in C^{-}\mid\lambda_{i}=\delta\}\). Then \({\bf p}=\sum_{i\in S}\lambda_{i}^{\prime}{\bf a}_{i}+\mu^{\prime}\cdot{\bf 0}\), where
\[\lambda_{i}^{\prime}=\left\{\begin{array}{ll}\lambda_{i}&\mbox{if $i\notin C$,}\\ \lambda_{i}-\delta&\mbox{if $i\in C^{-}$,}\\ \lambda_{i}+\delta&\mbox{if $i\in C^{+}$,}\end{array}\right.\]
and \(\mu^{\prime}=\mu+(|C^{-}|-|C^{+}|)\cdot\delta\). This is a new convex combination for \({\bf p}\) where the coefficients are only positive for elements of \(S-S^{\prime}\), and possibly \({\bf 0}\). Moreover, \(S-S^{\prime}\) is also a dijoin because any directed cocircuit that intersects \(C\) necessarily intersects both \(C^{+}\) and \(C^{-}\). Here as \(S^{\prime}\subseteq C^{-}\), we know that \(C^{+}\) is still contained within \(S-S^{\prime}\).
It remains to show that if \(S\) is a minimal cardinality dijoin (and therefore circuit-free, by the previous paragraph), then \(\mu>0\) in (4.2). For graphs, the corresponding statement (cf. Proposition 2.10) was proved via the Lucchesi-Younger theorem. However, the analogue of that theorem is not true for matroids, which necessitates a longer argument.
Notice the following property of minimal cardinality dijoins: If the circuit \(C\) has \(|C^{+}|>|C^{-}|\), and \(K\) is a minimal cardinality dijoin, then \(C^{+}\not\subseteq K\). Indeed, we claim that if \(C^{+}\subseteq K\), then \((K\setminus C^{+})\cup C^{-}\) is also a dijoin, and if \(|C^{+}|>|C^{-}|\), then it has smaller cardinality than \(K\), contradicting the assumption. To see that \((K\setminus C^{+})\cup C^{-}\) is a dijoin, notice that if a directed cocircuit intersects \(C^{+}\), then it also intersects \(C^{-}\)[3, p. 115].
Suppose that \(S\) is a minimal cardinality dijoin. Our goal is to find an admissible vector \(\ell\) such that \(\ell\cdot\mathbf{a}_{i}=1\) for each \(i\in S\). That implies \(\mu>0\), for otherwise \(\mathbf{p}\) would be on the facet determined by \(\ell\).
Since \(S\) is circuit-free, we can take a basis \(B\supseteq S\) in the matroid \(M\). The vectors \(\{\mathbf{a}_{i}\mid i\in B\}\) form an affine independent set of codimension one, not passing through the origin, whereby there exists a vector \(\ell\) such that \(\ell\cdot\mathbf{a}_{i}=1\) for each \(i\in B\). If this \(\ell\) is admissible (that is, \(\ell\cdot\mathbf{a}_{i}\leq 1\) for each \(i\in[m]\)), then we are done. Suppose (without loss of generality) that for any \(i\notin B\), we have \(i\in C(B,i)^{+}\). Then, for any \(i\notin B\), we have \(\mathbf{a}_{i}=\sum_{j\in C(B,i)^{-}}\mathbf{a}_{j}-\sum_{j\in C(B,i)^{+}-\{i \}}\mathbf{a}_{j}\). Hence we have \(\ell\cdot\mathbf{a}_{i}\leq 1\) if and only if \(|C(B,i)^{+}|\geq|C(B,i)^{-}|\). In other words, we need to prove that
\[\text{there exists a basis }B\supseteq S\text{ such that for each }i\notin B\text{, we have }|C(B,i)^{+}|\geq|C(B,i)^{-}|, \tag{4.3}\]
where again, \(C(B,i)^{+}\) is the part of the fundamental circuit \(C(B,i)\) which contains \(i\). The rest of the proof is concerned with establishing this statement.
Let \(t\) be the number of bases of \(M\) and let \(\eta=\min\{\lambda_{i}\mid i\in S\}\). Let also \(\varepsilon=(1/\mathrm{tower}(2,t+1))\cdot\eta\), where \(\mathrm{tower}(2,t+1)\) is obtained from \(2\) by squaring it \(t+1\) times.
Take an arbitrary basis \(B\supseteq S\). If \(|C(B,i)^{+}|\geq|C(B,i)^{-}|\) for each \(i\notin B\), then we are done. Otherwise, consider the vector
\[\mathbf{q}=\sum_{i\in B}\nu_{i}\cdot\mathbf{a}_{i}+\mu\cdot\mathbf{0},\]
where for \(i\in S\) we have \(\nu_{i}=\lambda_{i}-\frac{1}{|S|}\varepsilon\), and for \(i\in B-S\) we have \(\nu_{i}>0\) in such a way that \(\sum_{i\in B-S}\nu_{i}=\varepsilon\) and no nontrivial integer linear combination of the values \(\{\nu_{i}\mid i\in B-S\}\) is \(0\).
For an arbitrary \(i\notin B\) such that \(|C(B,i)^{+}|<|C(B,i)^{-}|\), let \(\delta=\min\{\nu_{j}\mid j\in C(B,i)^{-}\}\) and let \(k\) be the unique element of \(C(B,i)^{-}\) where that minimum is achieved. Consider the convex combination \(\mathbf{q}=\sum_{j\in B\cup\{i\}}\nu_{j}^{\prime}\mathbf{a}_{j}+\mu^{\prime} \cdot\mathbf{0}\) where
\[\nu_{j}^{\prime}=\left\{\begin{array}{ll}\nu_{j}&\mbox{if }j\notin C(B,i),\\ \nu_{j}-\delta&\mbox{if }j\in C(B,i)^{-},\\ \nu_{j}+\delta&\mbox{if }j\in C(B,i)^{+},\end{array}\right.\]
and \(\mu^{\prime}=\mu+(|C(B,i)^{-}|-|C(B,i)^{+}|)\cdot\delta\). This is now a convex combination where summands with nonzero coefficients belong to the basis \(B^{\prime}=B\cup i-k\), and note that \(\mu^{\prime}>\mu\). Since \(S\) is a minimal cardinality dijoin, \(C(B,i)^{-}\not\subseteq S\). As the coefficients of the elements of \(B-S\) are smaller than the coefficients of the elements of \(S\), we
have \(k\notin S\) and \(\delta<\varepsilon\). Hence the new basis \(B^{\prime}\) still satisfies \(S\subseteq B^{\prime}\). Moreover, the coefficients \(\nu^{\prime}\) for elements of \(B^{\prime}-S\) are at most \(2\varepsilon\). If \(|C(B^{\prime},j)^{+}|\geq|C(B^{\prime},j)^{-}|\) for each \(j\notin B^{\prime}\), then we are done. If not, continue in a similar fashion. We claim that the process cannot go on indefinitely. Indeed, if it did, then by the finiteness of the number of bases, there would be a basis \(B\) that occurs at least twice. As in each step, we write \(\mathbf{q}\) as the convex combination of linearly independent vectors plus \(\mathbf{0}\), for a given basis, the coefficients are uniquely determined. However, in each step, the coefficient of \(\mathbf{0}\) increases, which precludes returning to a basis already seen. In other words, the process stops in at most \(t\) steps. During this time, the minimal nonzero coefficient can at most double in each step. Hence even at the last stage, the minimum is obtained at an element outside of \(S\). I.e., in each step, our basis contains \(S\). In the end, we necessarily end up with a basis satisfying (4.3).
**Lemma 4.3**.: _Let \(F\) be an independent set in the regular oriented matroid \(M\), represented by the totally unimodular matrix \(A\), and let \(s\in\mathbb{Z}_{>0}\). A point \(\mathbf{p}\in s\cdot\tilde{\mathcal{Q}}_{F}\) is a lattice point if and only if \(\mathbf{p}=\sum_{i\in F}\mu_{i}\mathbf{a}_{i}\), where each \(\mu_{i}\) is integer._
Proof.: This follows from Cramer's rule and the total unimodularity of \(A\).
Proof of Theorem 1.3.: Let \(K\) be a dijoin of cardinality \(\nu(M)\). Then \(\mathbf{p}=\sum_{k\in K}\mathbf{a}_{k}+\mathbf{0}\) is a point of \((\nu(M)+1)\cdot\tilde{\mathcal{Q}}_{A}\), moreover, it has integer coordinates. By Proposition 4.2, \(\mathbf{q}=\frac{1}{\nu(M)+1}\mathbf{p}=\sum_{k\in K}\frac{1}{\nu(M)+1} \mathbf{a}_{k}\) is an interior point of \(\tilde{\mathcal{Q}}_{A}\), because the coordinates belonging to the elements of a dijoin are all positive, and their sum is less than \(1\). Hence \(\mathbf{p}\) is also an interior point of \((\nu(M)+1)\cdot\tilde{\mathcal{Q}}_{A}\).
We need to prove that for \(s<\nu(M)+1\), there is no interior lattice point in \(s\cdot\tilde{\mathcal{Q}}_{A}\). Suppose that \(\mathbf{p}\in s\cdot\tilde{\mathcal{Q}}_{A}\) is such a point. Take \(\mathbf{q}=\frac{1}{s}\mathbf{p}\in\tilde{\mathcal{Q}}_{A}\), which is then an interior point of \(\tilde{\mathcal{Q}}_{A}\). By Proposition 4.2 there is a circuit-free dijoin \(K\) such that \(\mathbf{q}=\sum_{k\in K}\lambda_{k}\mathbf{a}_{k}+\mu\cdot\mathbf{0}\), where \(\lambda_{k}>0\) for each \(k\in K\), \(\mu\geq 0\), and \(\sum_{k\in K}\lambda_{k}+\mu=1\); moreover, if \(K\) has minimal cardinality, then \(\mu>0\). We can apply Lemma 4.3 to \(s\), \(K\), and \(\mathbf{p}\). This tells us that for \(\mathbf{p}\) to be an integer vector, \(s\lambda_{k}\) needs to be an integer for each \(k\in K\). As \(\sum_{k\in K}s\cdot\lambda_{k}+s\cdot\mu=s\), in this case \(s\cdot\mu\) is also an integer. Hence altogether, \(s=\sum_{k\in K}s\lambda_{k}+s\mu\geq|K|\), and if \(K\) is a minimal cardinality dijoin, then \(\mu\) is also positive, so in fact in that case \(s=\sum_{k\in K}s\lambda_{k}+s\mu\geq|K|+1\). In both cases we obtain that \(s\geq\nu(M)+1\): if \(K\) is minimal, this is the last inequality, while otherwise already \(|K|\geq\nu(M)+1\).
One may wonder if an analogue of Theorem 3.1 holds.
**Problem 4.4**.: Given a regular matroid, which orientation has the interior polynomial of smallest degree? Is there any orientation whose interior polynomial is coefficientwise minimal?
## 5. Parking function enumerators and greedoid polynomials
In [19] it is proved that the parking function enumerator of an Eulerian digraph can be expressed as the interior polynomial of the cographic matroid. (Prior to that, [12] settled the planar case, which in turn extended [10, Corollary 5.9].) Hence the results of the previous section give us information on the degree of the parking function enumerator of an Eulerian digraph and, equivalently, on the degree of the lowest term of the greedoid polynomial. It turns out that similar results hold for all directed graphs.
In this section we recall the definition of a greedoid and the relationship between parking function enumerators and interior polynomials, then show how to generalize the result on the degree of the parking function enumerator to all directed graphs.
### Preliminaries on greedoids and parking functions
Greedoids were introduced by Korte and Lovász as a structure in which the greedy algorithm works. Matroids are a special class of greedoids, but greedoids are able to express connectivity properties that matroids cannot.
**Definition 5.1** (greedoid [13]).: A set system \(\mathcal{F}\) on a finite ground set \(E\) is called a _greedoid_ if it satisfies the following axioms.
1. \(\emptyset\in\mathcal{F}\);
2. for all \(X\in\mathcal{F}-\{\emptyset\}\) there exists \(x\in X\) such that \(X-x\in\mathcal{F}\);
3. if \(X,Y\in\mathcal{F}\) and \(|X|=|Y|+1\), then there exists an \(x\in X-Y\) such that \(Y\cup x\in\mathcal{F}\).
Elements of \(\mathcal{F}\) are called _accessible sets_, and maximal accessible sets are called _bases_.
It follows from the axioms that bases have the same cardinality, which is called the _rank_ of the greedoid.
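Since the axioms are finitely checkable, one can verify them mechanically for a small set system. The following Python sketch is ours (not from [13]); it is a brute-force check meant only for illustration.

```python
def is_greedoid(feasible_sets):
    """Brute-force check of the three greedoid axioms for a collection of sets."""
    F = {frozenset(X) for X in feasible_sets}
    # (1) the empty set is accessible
    if frozenset() not in F:
        return False
    # (2) every nonempty accessible set remains accessible after deleting some element
    for X in F:
        if X and not any(X - {x} in F for x in X):
            return False
    # (3) exchange: |X| = |Y| + 1 implies Y can be augmented from X - Y
    for X in F:
        for Y in F:
            if len(X) == len(Y) + 1 and not any(Y | {x} in F for x in X - Y):
                return False
    return True

# e.g. the branching greedoid of the one-arc digraph s -> v, rooted at s
print(is_greedoid([set(), {"sv"}]))  # True
```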
An interesting subclass of greedoids is that of directed branching greedoids: For a digraph \(G\) and its vertex \(s\), the _branching greedoid of \(G\) rooted at \(s\)_ is the set system consisting of the arborescences of \(G\) rooted at \(s\). The bases of this greedoid are the maximal arborescences.
The greedoid polynomial was introduced by Björner, Korte, and Lovász [4] in several equivalent ways. Here we recall the definition using activities with respect to a fixed ordering of the edges.
Let \(B=\{b_{1},\ldots,b_{r}\}\) be a basis of the greedoid. We can form words by concatenating the elements of \(B\) in some order: \(b_{i_{1}}b_{i_{2}}\ldots b_{i_{r}}\). Such a word is called _feasible_ if \(\{b_{i_{1}},\ldots,b_{i_{j}}\}\in\mathcal{F}\) for each \(j=1,\ldots,r\). Note that the axioms guarantee the existence of at least one feasible word for each basis. Let us fix an ordering of the ground set \(E\). Now to any basis \(B\) of the greedoid, one can associate its lexicographically minimal feasible word.
Figure 1. Eulerian digraph with root \(s\). The non-dashed arcs form a spanning arborescence rooted at \(s\).

**Example 5.2**.: Consider the rooted digraph of Figure 1, and take the ordering of the edges indicated by the labelling. The edges \(\{1,3,7,8,9\}\) form a spanning arborescence, i.e., a basis of the branching greedoid. The word \(39187\) is feasible for this basis, but for example the word \(89713\) is not (since \(\{8\}\) is not an arborescence rooted at \(s\)). The lexicographically minimal feasible word is \(13879\).
**Definition 5.3** (external activity for greedoids [4]).: Let \((E,\mathcal{F})\) be a greedoid and fix an ordering of \(E\). For a basis \(B\), an element \(e\notin B\) is _externally active_ for \(B\) if for any \(f\in B\) such that \(B\cup e-f\in\mathcal{F}\), the lexicographically minimal feasible word for \(B\) is lexicographically smaller than the lexicographically minimal feasible word for \(B-f\cup e\). The _external activity_ of a basis \(B\) is the number of externally active elements for \(B\), and it is denoted by \(e(B)\).
**Definition 5.4** (greedoid polynomial, [4]).: Using the above, we associate
\[\lambda(t)=\sum_{B:\text{ basis}}t^{e(B)}\]
to an arbitrary greedoid.
We note that this is indeed well-defined, that is, independent of the ordering of the edges used to define the activities.
We will especially be interested in branching greedoids of digraphs. Swee Hong Chan proves [5] that the greedoid polynomial of a branching greedoid is a simple transformation of the enumerator of graph parking functions. Let us recall these notions, too.
**Definition 5.5** (graph parking function).: For a directed graph \(G\) and a fixed root vertex \(s\), a graph _parking function_ rooted at \(s\) is a function \(p\in\mathbb{Z}_{\geq 0}^{V-s}\) such that for each \(S\subseteq V-s\), there is at least one vertex \(u\in S\) with \(p(u)<d(V-S,u)\), where \(d(V-S,u)\) denotes the number of directed edges leading from \(V-S\) to \(u\).
We denote the set of these functions by \(\operatorname{Park}(G,s)\). For a parking function \(p\in\operatorname{Park}(G,s)\), we put \(|p|=\sum_{v\in V-s}p(v)\).
**Definition 5.6** (parking function enumerator).: For a directed graph \(G=(V,E)\) and a fixed root vertex \(s\), the _parking function enumerator_ is the polynomial
\[\operatorname{park}_{G,s}(x)=\sum_{p\in\operatorname{Park}(G,s)}x^{|p|}.\]
**Example 5.7**.: Figure 2 shows a rooted digraph and each one of its parking functions. Altogether, the parking function enumerator is \(x^{2}+2x+1\).
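Definition 5.5 can be checked by brute force, which gives a direct (if inefficient) way to compute the enumerator. The sketch below is ours and purely illustrative; a multidigraph is given as a list of arcs, possibly repeated.

```python
from itertools import combinations, product

def d_into(edges, sources, u):
    """Number of directed edges leading from the vertex set `sources` to u."""
    return sum(1 for (x, y) in edges if x in sources and y == u)

def is_parking_function(p, vertices, edges, s):
    """Defining condition of Definition 5.5, checked on every nonempty S in V - s."""
    others = [v for v in vertices if v != s]
    for r in range(1, len(others) + 1):
        for S in combinations(others, r):
            complement = set(vertices) - set(S)
            if not any(p[u] < d_into(edges, complement, u) for u in S):
                return False
    return True

def parking_enumerator(vertices, edges, s):
    """Coefficients of park_{G,s}(x) as a dict {degree: coefficient}."""
    others = [v for v in vertices if v != s]
    # taking S = {u} shows p(u) < indegree(u), so a finite search suffices
    bounds = [d_into(edges, set(vertices) - {u}, u) for u in others]
    coeffs = {}
    for values in product(*(range(bd) for bd in bounds)):
        p = dict(zip(others, values))
        if is_parking_function(p, vertices, edges, s):
            coeffs[sum(values)] = coeffs.get(sum(values), 0) + 1
    return coeffs
```

As a sanity check, for the two-vertex digraph of Remark 5.10 below (one arc from \(s\) to \(u\), two arcs from \(u\) to \(s\)), `parking_enumerator(['s','u'], [('s','u'),('u','s'),('u','s')], 's')` returns `{0: 1}`, matching \(\operatorname{park}_{G,s}(x)=1\).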
The relationship of the greedoid polynomial and the parking function enumerator is the following.
**Theorem 5.8**.: _[_5_, Theorem 1.3]__\(\lambda_{G,s}(x)=x^{|E|-|V|+1}\mathrm{park}_{G,s}(x^{-1})\)_
The second named author has previously observed the following connection.
Figure 2. A rooted digraph (with root \(s\)), and its parking functions.
**Theorem 5.9**.: _[_19_]_ _Let \(G\) be a connected Eulerian digraph, and let \(M\) be the directed dual matroid of \(G\). Then_
\[\lambda_{G,s}(x)=x^{|E(G)|-|V(G)|+1}I_{M}(x^{-1})\qquad\text{and}\qquad\text{ \rm park}_{G,s}(x)=I_{M}(x).\]
Hence if \(G\) is Eulerian, we can use Theorem 1.1 to obtain a formula for the degree of the parking function enumerator, or equivalently, a formula for the degree of the lowest term of the greedoid polynomial.
Proof of Theorem 1.4.: For the (directed) cographic matroid \(M\) of \(G\), a dijoin of \(M\) (that is, a set of edges intersecting each directed cocircuit) corresponds to an edge set of \(G\) that intersects each directed cycle. Hence dijoins of \(M\) correspond to feedback arc sets of \(G\). Now Theorem 5.9 implies the statement of the theorem.
**Remark 5.10**.: The analogue of Theorem 5.9 is not true for general (non-Eulerian) digraphs. Indeed, for non-Eulerian digraphs, the parking function enumerator _does_ depend on the root \(r\), while the interior polynomial of the dual does not.
As an example, take the digraph \(G\) that has two vertices \(s\) and \(u\), one edge from \(s\) to \(u\), and two edges from \(u\) to \(s\). As \(G\) is planar, its dual is also a graphic matroid, associated to the digraph \(G^{*}=(\{v_{1},v_{2},v_{3}\},\{\overline{v_{1}v_{2}},\overline{v_{2}v_{3}}, \overline{v_{1}v_{3}}\})\). The unique parking function of \(G\) rooted at \(s\) associates \(0\) to \(u\), whereby the parking function enumerator is \(\text{\rm park}_{G,s}(x)=1\). However, it is easy to compute that \(I_{G^{*}}(x)=1+x\).
**Remark 5.11**.: In [19], Theorem 5.9 is proved by noting that the complements of the arborescences of \(G\) (that are bases in the cographic matroid \(M\) of \(G\)) induce a triangulation of the root polytope of \(M\). If we consider a non-Eulerian rooted digraph, then we can still associate the extended root polytope of \(M|_{E-F}\) to any arborescence \(F\), and this will be a simplex in \(\tilde{\mathcal{Q}}_{M}\). If we take these simplices for each arborescence rooted at \(s\), then they will be mutually disjoint, but they will typically not fill \(\tilde{\mathcal{Q}}_{M}\). In fact, their union is not even necessarily convex. However it is easy to check that the union covers a neighborhood of \(\mathbf{0}\) within \(\tilde{\mathcal{Q}}_{M}\).
Take for example the planar digraph \(G\) on two vertices \(s\) and \(u\), with two edges from \(s\) to \(u\) and three edges from \(u\) to \(s\). Then the dual can be given as \(G^{*}=(\{v_{1},v_{2},v_{3},v_{4},v_{5}\},\{\overline{v_{1}v_{2}},\overline{v_ {2}v_{3}},\overline{v_{3}v_{5}},\overline{v_{1}v_{4}},\overline{v_{4}v_{5}}\})\). In \(G\) there are two arborescences rooted at \(s\), and their respective complements in \(G^{*}\) are \(\{\overline{v_{1}v_{2}},\overline{v_{2}v_{3}},\overline{v_{3}v_{5}},\overline {v_{4}v_{5}}\}\) and \(\{\overline{v_{1}v_{2}},\overline{v_{2}v_{3}},\overline{v_{3}v_{5}},\overline {v_{1}v_{4}}\}\). The resulting two simplices are the convex hulls of \(\mathbf{0}\) and the vectors corresponding to these edges. Now the points \(\mathbf{p}_{1}=\mathbf{1}_{v_{5}}-\mathbf{1}_{v_{4}}\) and \(\mathbf{p}_{2}=\mathbf{1}_{v_{4}}-\mathbf{1}_{v_{1}}\) are in the respective simplices, on the other hand it is easy to check that \((\mathbf{p}_{1}+\mathbf{p}_{2})/2=(1/2)\cdot(\mathbf{1}_{v_{5}}-\mathbf{1}_{v_ {1}})\) is not contained in either of the two simplices. That is, the union of these two simplices is not convex.
Nevertheless, Theorem 1.4 can still be generalized to any rooted digraph. For this, we first give a formula for the degree of the lowest term of the greedoid polynomial for general greedoids.
### General greedoids
To give a formula for the degree of the lowest term of an arbitrary greedoid polynomial, let us make two easy observations.
**Definition 5.12**.: Let \(X=(E,\mathcal{F})\) be a greedoid, and let \(S\subseteq E\). We define \(X|_{S}\) as the pair \((S,\mathcal{F}|_{S})\), where \(\mathcal{F}|_{S}=\{A\in\mathcal{F}\mid A\subseteq S\}\), and call it the _restriction_ of \(X\) to \(S\).
**Claim 5.13**.: \(X|_{S}\) _is a greedoid._
Proof.: It is easy to check that the axioms are true.
**Claim 5.14**.: _Let \(X=(E,\mathcal{F})\) be a greedoid and \(S\subseteq E\). Fix an ordering of the elements of \(E\), and its restriction to \(S\). Suppose that \(F\) is a basis of both \(X\) and \(X|_{S}\). An element \(e\in S-F\) is externally active for \(F\) in \(X|_{S}\) (with respect to the above mentioned ordering) if and only if \(e\) is externally active for \(F\) in \(X\)._
Proof.: The definition of external activity only considers bases that are subsets of \(F\cup e\), and those are the same for \(X|_{S}\) as for \(X\).
**Theorem 1.8**.: Let \(X=(E,\mathcal{F})\) be a greedoid of rank \(r\). Let \(k=\min\{|S|\mid S\subset E\text{ with }\text{rank}(X|_{E-S})=r\text{ and }\lambda_{X|_{E-S}}(0)\neq 0\}\). Then in the greedoid polynomial of \(X\), the coefficient of \(x^{i}\) is zero for \(i=0,\dots,k-1\) and the coefficient of \(x^{k}\) is nonzero.
Proof.: Take an arbitrary basis \(B\) of \(X\), and let \(P\) be the set of elements of \(E-B\) that are externally passive for \(B\). We claim that \(B\cup P\) is a set such that \(\text{rank}(X|_{B\cup P})=r\) and \(\lambda_{X|_{B\cup P}}(0)\neq 0\). The claim \(\text{rank}(X|_{B\cup P})=r\) follows immediately since \(B\) is a basis of \(X\). On the other hand, since elements of \(P\) were all externally passive for \(B\) in \(X\), this remains so for \(X|_{B\cup P}\), thus, in \(X|_{B\cup P}\) there are no externally active elements for \(B\). This shows that \(\lambda_{X|_{B\cup P}}(0)\neq 0\). Hence \(k\leq|E-B-P|\) for any basis \(B\) of \(X\). As the external activity of \(B\) is \(e_{X}(B)=|E-B-P|\), this shows that if the coefficient of \(x^{i}\) is positive in the greedoid polynomial, then \(i\geq k\).
It remains to show that the coefficient of \(x^{k}\) is positive. Take a set \(S\subseteq E\) with \(|S|=k\) such that \(\text{rank}(X|_{E-S})=r\) and \(\lambda_{X|_{E-S}}(0)\neq 0\). We show that there exists a basis \(B\) such that \(e_{X}(B)\leq|S|\). Since we have already proved that we cannot have \(e_{X}(B)<k\), this will complete the proof.
As \(\text{rank}(X|_{E-S})=r\), each basis of \(X|_{E-S}\) is a basis of \(X\). Since \(\lambda_{X|_{E-S}}(0)\neq 0\), the greedoid \(X|_{E-S}\) has a basis \(B\) such that \(e_{X|_{E-S}}(B)=0\). That is, all elements of \(E-S-B\) are externally passive in \(B\). Hence, in \(X\), only the elements of \(S\) can be externally active for \(B\), thus indeed, \(e_{X}(B)\leq|S|=k\).
### Arbitrary directed graphs
In this section we use the result of the previous section to generalize Theorem 1.4 to arbitrary digraphs. Let \(G\) be a digraph, and let \(s\) be an arbitrary fixed vertex of \(G\).
Note that when examining the greedoid polynomial, we can suppose that \(G\) is root-connected, i.e., that each vertex of \(G\) is reachable on a directed path from \(s\). (This property is equivalent to \(G\) having a spanning arborescence rooted at \(s\).) Indeed, if \(G\) is not root-connected, then the bases of the branching greedoid of \(G\) rooted at \(s\) will be the same as the bases of the branching greedoid of \(G^{\prime}\) rooted at \(s\), where \(G^{\prime}\) is the subgraph of \(G\) spanned by the vertices reachable on a directed path from \(s\). The edges of \(G-G^{\prime}\) will be externally semi-active for any basis and any edge ordering; that is, in this case \(\text{park}_{G,s}(x)=\text{park}_{G^{\prime},s}(x)\) and \(\lambda_{G,s}(x)=x^{|E(G)|-|E(G^{\prime})|}\lambda_{G^{\prime},s}(x)\). Hence from now on, we may suppose that \(G\) is root-connected.
Swee Hong Chan points out that in the case of directed branching greedoids, the greedoid activity notion of [4] specializes to the following.
**Definition 5.15** (external semi-activity in digraphs, [5]).: Let \(G\) be a root-connected digraph with a fixed ordering of the edges. Let \(A\) be a spanning arborescence rooted at \(s\) in \(G\). An arc \(e\notin A\) is _externally semi-active_ for \(A\) if in the fundamental cycle
\(C(A,e)\) the maximal edge (with respect to the fixed ordering) stands parallel to \(e\). If \(e\notin A\) is not externally semi-active for \(A\), then we call it _externally semi-passive_.
The name semi-activity comes from Li and Postnikov [14], who introduced this notion independently from the greedoid context.
**Example 5.16**.: For the rooted spanning arborescence in Figure 1, and the indicated edge ordering, edge number 2 is externally semi-active. Indeed, its fundamental cycle is \(\{2,3,8,7\}\), along which the maximal edge, 8, stands parallel to 2. On the other hand, 6 is externally semi-passive, because its fundamental cycle is \(\{6,1,3,8,7\}\), and among these, the maximal edge 8 stands opposite to 6.
**Theorem 1.6**.: Let \(G=(V,E)\) be a root-connected digraph. The degree of the parking function enumerator of \(G\) rooted at \(s\) is equal to \(|E|-|V|+1-\mathrm{minfas}(G,s)\).
Equivalently, for the greedoid polynomial of the branching greedoid of \(G\) rooted at \(s\), the coefficients of \(x^{0},\ldots,x^{k-1}\) are zero, and the coefficient of \(x^{k}\) is nonzero for \(k=\mathrm{minfas}(G,s)\).
Recall that \(\mathrm{minfas}(G,s)\) is the minimal cardinality of an edge set whose removal leaves a root-connected acyclic digraph (Definition 1.5). It follows from Theorems 1.4 and 1.6 that for a connected Eulerian digraph and an arbitrary vertex \(s\), \(\mathrm{minfas}(G)=\mathrm{minfas}(G,s)\). It is also not very hard to prove this directly.
For general digraphs, \(\mathrm{minfas}(G)\) and \(\mathrm{minfas}(G,s)\) might differ. For example if \(G\) has two vertices, \(s\) and \(v\), with one edge from \(s\) to \(v\), and two edges from \(v\) to \(s\), then \(\mathrm{minfas}(G)=1\) but \(\mathrm{minfas}(G,s)=2\).
**Remark 5.17**.: For the interior polynomial (consequently, also for the parking function enumerator of Eulerian digraphs), we had a definition using Ehrhart theory. For the parking function enumerator of general digraphs we are unaware of such a definition. Hence in the proof of Theorem 1.6, we need to use a different, combinatorial argument.
As we remarked in the introduction, Theorem 1.6 strengthens a theorem of Björner, Korte, and Lovász, who showed in [4] that for the branching greedoid of a root-connected digraph, the greedoid polynomial has a nonzero constant term if and only if the digraph is acyclic. They in fact prove more than this statement, also establishing the equivalence of this property to certain topological assumptions. For the sake of self-containedness, we give a short proof of the part of their statement that we will use.
**Lemma 5.18**.: _[_4_]_ _Let \(G\) be a root-connected digraph with respect to the root \(s\). The greedoid polynomial \(\lambda_{G,s}\) has nonzero constant term if and only if \(G\) is acyclic._
Proof.: Suppose that \(G\) is acyclic and fix an arbitrary ordering of its edges. In order to show that \(\lambda_{G,s}(0)\neq 0\), we need to find a spanning arborescence (rooted at \(s\)) with 0 external activity.
Consider the spanning arborescence \(A\) whose lexicographically minimal feasible word is lexicographically maximal. For any edge \(e\notin A\), if there is any arborescence of the form \(A^{\prime}=A\cup e-f\), then the lexicographically minimal feasible word for \(A^{\prime}\) can only be smaller than that of \(A\). Hence it suffices to show that there is always such an arborescence \(A^{\prime}\). As \(A\) is a spanning arborescence, it is a tree with a unique directed path from \(s\) to any vertex. If we add an edge \(e=\overrightarrow{uv}\) to \(A\), then we create a unique cycle. Since we assumed that \(G\) is acyclic, this cycle is not directed, in
particular \(v\neq s\). By removing the unique in-edge \(f\) of \(v\) that \(A\) contains, we once again get a tree with exactly one directed path to each vertex, i.e., \(A^{\prime}=A\cup e-f\) is an arborescence. With this, we have finished proving that \(e(A)=0\).
Now we show that if the root-connected graph \(G\) is not acyclic, then \(\lambda_{G,s}(0)=0\). We do this by proving that for any spanning arborescence \(A\), the externally semi-active edges form a feedback arc set of \(G\), whereby if \(G\) is not acyclic, the set of externally semi-active edges cannot be empty for any spanning arborescence.
Let \(A\) be an arbitrary spanning arborescence (rooted at \(s\)), and let \(C\) be an arbitrary directed cycle of \(G\). We will work in the cycle space within \(\mathbb{R}^{E}\), which is generated by the vectors \(\sum_{e\in C^{+}}\mathbf{1}_{e}-\sum_{e\in C^{-}}\mathbf{1}_{e}\) taken for all cycles \(C\subset G\). As the fundamental cycles of \(A\) form a basis of the cycle space of \(G\) over \(\mathbb{R}\), the cycle \(C\) can be written (with a slight abuse of notation) as a combination of some fundamental cycles: \(C=\lambda_{1}\cdot C(A,e_{1})+\cdots+\lambda_{j}\cdot C(A,e_{j})\), where each coefficient is nonzero. Some edges of these fundamental cycles might cancel out, but notice that \(e_{1},\ldots,e_{j}\) only occur in their respective fundamental cycles, wherefore they are all part of \(C\). We may suppose without loss of generality that each \(e_{i}\) is positive in \(C(A,e_{i})\) and \(C=C^{+}\). This implies \(\lambda_{1}=\cdots=\lambda_{j}=1\).
Let \(e\) be the maximal edge in the union of the cycles \(C(A,e_{1}),\ldots,C(A,e_{j})\). If \(e\in C\), then (since \(C\) is a directed cycle) \(e\) is parallel to \(\{e_{1},\ldots,e_{j}\}\) in \(C\), and hence if \(e\in C(A,e_{i})\), then \(e\) is also parallel to \(e_{i}\) in \(C(A,e_{i})\), and it is the maximal edge in this cycle. Thus, \(e_{i}\in C\) is externally semi-active for \(A\). If \(e\notin C\), then \(e\) needs to occur in at least two fundamental cycles, once with positive, and once with negative sign. If \(e\) occurs with positive sign in \(C(A,e_{i})\), then its maximality in this cycle ensures that \(e_{i}\in C\) is externally semi-active for \(A\). In both cases we found an externally semi-active element in \(C\), in particular the externally semi-active elements cover each directed cycle.
Proof of Theorem 1.6.: We apply Theorem 1.8. If \(G\) is root-connected, then the rank of the branching greedoid rooted at \(s\) is equal to \(|V(G)|-1\). For an edge set \(S\), the rank of the branching greedoid of \(G[E-S]\) remains \(|V(G)|-1\) if and only if \(G[E-S]\) is root-connected. On the other hand, Lemma 5.18 tells us that \(\lambda_{G[E-S],s}(0)=0\) is equivalent to \(G[E-S]\) being acyclic. Hence the condition of Theorem 1.8 indeed gives the condition of Theorem 1.6 for branching greedoids of root-connected digraphs.
|
2305.03108 | The Saltbox-Roof Probability Distribution | The saltbox-roof parametric probability distribution is a special case of the
triangular distribution, where only one side is truncated. Here it is presented
as a single and independent distribution, where the explicit equations are
defined for its probability density--, the cumulative distribution--, and the
inverse of the cumulative distribution (quantile) functions, as well as its
random generator. Four parameters are necessary to define it: the lower and the
upper limits, the mode, and a shape parameter. Also, the saltbox-roof
distribution degenerates into the uniform distribution, into a kind of a
trapezoidal distribution and into other special cases of the general triangular
distribution, all of which are related to the domain of the shape
parameter within the mode. The mean, median, and the variance are also here
expressed by explicit equations. The function equations have been verified with
theorems of truncated distributions. Three application examples are exposed. | Ludger O. Suarez-Burgoa | 2023-05-04T18:50:36Z | http://arxiv.org/abs/2305.03108v1 | # The _Saltbox-Roo_ Probability Distribution
###### Abstract
The saltbox-roof parametric probability distribution is a special case of the triangular distribution, where only one side is truncated. Here it is presented as a single and independent distribution, where the explicit equations are defined for its probability density-, the cumulative distribution-, and the inverse of the cumulative distribution (quantile-) functions as also its random generator. Four parameters are necessary to define it: the lower
and the upper limits, the mode, and a shape parameter. Also, the saltbox-roof distribution degenerates into the uniform distribution, into a kind of a trapezoidal distribution and into other special cases of the general triangular distribution, all of them which are related to the domain of the shape parameter within the mode. The mean, median, and the variance are also here expressed by explicit equations. The function equations have been verified with theorems of truncated distributions. Three application examples are exposed.
**Keywords:** saltbox-roof, probability distribution, triangular distribution
## 1 Introduction
This article deals with a probability distribution that has been called the _Saltbox-Roof_ probability density function (PDF), because its shape resembles the roofs of some houses (Figure 1): in front view, the house appears to have two stories (plane \(BB^{\prime}E^{\prime}E\)) and one short attic (plane \(EE^{\prime}C^{\prime}C\)), but in back view it has one story (plane \(AA^{\prime}D^{\prime}D\)) and a tall attic (plane \(DD^{\prime}C^{\prime}C\)). In lateral view, its shape is not symmetric; indeed, it is a house with two asymmetric hips (lines \(DCE\)) of different slopes.
The PDF dealt with here is not similar to the complete house described above; it is similar to the roof only (without the house's walls). Hence, the distribution has been named the _Saltbox-Roof_ probability distribution.
This probability distribution degenerates into other, simpler PDFs whose shapes can also be classified as roofs. The reader will find throughout this article that the Saltbox-Roof PDF degenerates into six roof-like PDFs, as shown in Figure 2 and described next.
1. The Flat-Roof PDF, which is the known _uniform_ PDF.
2. The Gabled-Roof PDF, which is the known _triangular_ PDF.
3. The Right-sided-- and Left-sided-- Shed-Roofs, which are the triangular PDF in its special case of a right triangle; the first one with its vertical side to the left and the other with its vertical side to the right, respectively. In this case, roof directions are named according to the side towards which they drop the water.
4. The Shed-Flat Roof PDF (also called the Shed-Plateau), which possibly doesn't have an equivalent name in statistics.

5. The Skillion-Roof PDF (also called the Shed-Clerestory), which maybe doesn't have a statistical equivalent name either.

Figure 1: A scheme of a house with a saltbox roof: Plane \(DD^{\prime}C^{\prime}C\) and plane \(EE^{\prime}C^{\prime}C\).
In this article, the explicit equations that define this Saltbox-Roof distribution are presented: the probability density function, the cumulative density function (CDF), the quantile function (the inverse function of the CDF), and the random number generator, together with their respective expectation moments such as the mean and the variance.
The validation of the mentioned explicit equations has been made using the property that this Saltbox-Roof distribution is indeed a _right-sided truncated_ triangular distribution; theorems of conditional probability applied to the truncated triangular distribution have been used for this task.
At the end of this article, three application examples of the potential use of this distribution are presented.
Figure 2: Roof-like probability density functions derived from the Saltbox-Roof PDF (in gray): **a** Flat roof; **b** Shed Flat roof; **c** Left-sided Shed roof; **d** Gabled roof; **e** Right-sided Shed roof; **f** Skillion roof.
## 2 Equations that define the Saltbox-Roof distribution
The Saltbox-Roof shape is depicted in a Cartesian two dimensional coordinate system, where the abscissas \(x\) are the quantiles of a _variate value_\(\mathbf{X}\) and the ordinates the frequencies \(h=f(x)\), as shown in Figure 3. The variate value is a random variable in the interval of the real line \(\mathbb{R}\).
The equation of the ascending slope, from the left point \(a\) starting from the zero frequency (\(h_{a}=0\)) point, \((a,h_{a})\) to point \(c\) in the \(h_{c}\) frequency, \((c,h_{c})\), is
\[f_{1}(x)=\frac{x-a}{c-a}h_{c}. \tag{1}\]
The equation of the descending slope, from point \((c,h_{c})\) to an end frequency \(h_{b}\) at \(b\) point, \((b,h_{b})\), is
\[f_{2}(x)=h_{c}-\frac{h_{c}-h_{b}}{b-c}(x-c). \tag{2}\]
The conditions here are that
\[h_{a}=0,\]
\[h_{c}\geq h_{b},\]
\[a\leq c\leq b.\]
If those conditions are not fulfilled, then the function is not a Saltbox-Roof shape!
Figure 3: Saltbox-Roof shape as a probability density function, between values \(a\) and \(b\) and mode \(c\).
### The probability density function
To convert the Saltbox-roof shape functions, at Equations 1 and 2, to a probability density function (PDF), the area under it should be equal to one, _i.e._
\[A_{p}=\frac{1}{2}(c-a)h_{c}+\frac{1}{2}(h_{c}+h_{b})(b-c)=1.\]
By solving for \(h_{b}\), it is obtained that
\[\frac{1}{2}(c-a)h_{c}+\frac{1}{2}(h_{c}+h_{b})(b-c) = 1,\] \[(c-a)h_{c}+(h_{c}+h_{b})(b-c) = 2,\] \[h_{c}[(c-a)+(b-c)]+h_{b}(b-c) = 2,\] \[h_{b}(b-c) = 2-h_{c}(b-a),\] \[h_{b} = \frac{2-(b-a)h_{c}}{b-c}. \tag{3}\]
By doing some mathematical manipulation in \(f_{1}(x)\) one gets
\[f_{1}(x) = \frac{x-a}{c-a}h_{c},\] \[= -\frac{a\,h_{c}}{c-a}+\frac{h_{c}}{c-a}x.\]
By replacing \(h_{b}\) in \(f_{2}(x)\) and performing more mathematical manipulation, one gets
\[\begin{split} f_{2}(x)&=h_{c}-\frac{h_{c}}{b-c}(x-c)+\frac{h_{b}}{b-c}(x-c),\\ &=h_{c}-\frac{h_{c}}{b-c}(x-c)+\frac{2-(b-a)h_{c}}{(b-c)^{2}}(x-c),\\ &=h_{c}+\frac{2(x-c)}{(b-c)^{2}}-\left[\frac{1}{b-c}+\frac{b-a}{(b-c)^{2}}\right]h_{c}(x-c),\\ &=\frac{h_{c}(b-c)^{2}-2c+h_{c}(2b-c-a)c}{(b-c)^{2}}+\frac{2-h_{c}(2b-c-a)}{(b-c)^{2}}\,x.\end{split}\]
Finally, the PDF of the Saltbox-Roof distribution is defined by the following explicit equation:
\[f(x) = \begin{cases}-\dfrac{a\,h_{c}}{c-a}+\dfrac{h_{c}}{c-a}x,&\text{ for }a\leq x\leq c\\ \dfrac{h_{c}(b-c)^{2}-2\,c+h_{c}(2b-c-a)c}{(b-c)^{2}}+\cdots&\text{.}\\ \cdots+\dfrac{2-h_{c}(2b-c-a)}{(b-c)^{2}}x,&\text{ for }c<x\leq b\end{cases} \tag{4}\]
Note that the frequency \(h_{b}\) at the right extreme \(b\) is not part of the expressions, because it is a function of \(h_{c}\), as shown in Equation 3.
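As a numerical companion to Equations 3 and 4, the following Python sketch (ours, not part of the original derivation) evaluates the density; it assumes \(a<c<b\) and a valid \(h_{c}\).

```python
def residual_frequency(a, b, c, h_c):
    """Frequency h_b at x = b (Equation 3); assumes a < c < b."""
    return (2 - (b - a) * h_c) / (b - c)

def saltbox_pdf(x, a, b, c, h_c):
    """Probability density of the Saltbox-Roof distribution (Equation 4)."""
    if x < a or x > b:
        return 0.0
    if x <= c:
        return h_c * (x - a) / (c - a)          # ascending slope, Equation 1
    h_b = residual_frequency(a, b, c, h_c)      # descending line through (c, h_c), (b, h_b)
    return h_c - (h_c - h_b) * (x - c) / (b - c)
```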
### The cumulative density function
The cumulative density function (CDF) of the above function is
\[F(x)=\begin{cases}F_{1}(x)=\displaystyle\int_{a}^{x}f_{1}(t)\,\mathrm{d}t,&\text{for }a\leq x\leq c\\[2mm] F_{2}(x)=F_{1}(c)+\displaystyle\int_{c}^{x}f_{2}(t)\,\mathrm{d}t,&\text{for }c<x\leq b\end{cases};\]
where by solving the integrals they result in
\[F_{1}(x) = \dfrac{a^{2}h_{c}-2a\,h_{c}x+h_{c}x^{2}}{2(c-a)},\] \[= \dfrac{a^{2}h_{c}}{2(c-a)}-\dfrac{a\,h_{c}}{c-a}x+\dfrac{h_{c}}{2 (c-a)}x^{2};\]
\[F_{2}(x)=\frac{2c^{2}+[2cab-(c+a)b^{2}]h_{c}}{2(c-b)^{2}}-\frac{(ca-b^{2})h_{c}+2c}{(c-b)^{2}}\,x+\frac{(c+a-2b)h_{c}+2}{2(c-b)^{2}}\,x^{2}.\]
In summary, the CDF is
\[F(x) = \begin{cases}\frac{a^{2}h_{c}}{2(c-a)}-\frac{a\,h_{c}}{c-a}x+ \frac{h_{c}}{2(c-a)}x^{2},&\text{for $a\leq x\leq c$}\\ \frac{2\,c^{2}+[2cab-(c+a)b^{2}]h_{c}}{2(c-b)^{2}}-\cdots\\ \cdots-\frac{(ca-b^{2})h_{c}+2c}{(c-b)^{2}}x+\frac{(c+a-2b)h_{c}+2}{2(c-b)^{2} }x^{2},&\text{for $c\leq x\leq b$}\end{cases} \tag{5}\]
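Under the same assumptions, a sketch of Equation 5; a quick sanity check is that it returns \(h_{c}(c-a)/2\) at \(x=c\) and \(1\) at \(x=b\).

```python
def saltbox_cdf(x, a, b, c, h_c):
    """Cumulative probability of the Saltbox-Roof distribution (Equation 5)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    if x <= c:
        return h_c * (x - a) ** 2 / (2 * (c - a))
    # right branch: the quadratic of Equation 5, coefficient by coefficient
    k0 = (2 * c**2 + (2 * c * a * b - (c + a) * b**2) * h_c) / (2 * (c - b) ** 2)
    k1 = -((c * a - b**2) * h_c + 2 * c) / (c - b) ** 2
    k2 = ((c + a - 2 * b) * h_c + 2) / (2 * (c - b) ** 2)
    return k0 + k1 * x + k2 * x**2
```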
### The inverse of the cumulative density function
The inverse of CDF, _i.e._ the quantile function, is
\[F^{-1}(U) = \begin{cases}F_{1}^{-1}(U),&\text{for $F(a)\leq U\leq F(c)$}\\ F_{2}^{-1}(U),&\text{for $F(c)<U\leq F(b)$}\end{cases};\]
where
\[F_{1}^{-1}(U) = \frac{a\,h_{c}+\sqrt{2\,U(c-a)h_{c}}}{h_{c}},\] \[= a+\frac{\sqrt{2\,U(c-a)h_{c}}}{h_{c}};\]
\[F_{2}^{-1}(U)=\frac{(ca-b^{2})h_{c}+2c}{(c+a-2b)h_{c}+2}-\frac{(c-b)}{(c+a-2b)h_{c}+2}\sqrt{(a-b)^{2}h_{c}^{2}+2h_{c}\left[a(U+1)+c(U-1)-2bU\right]+4U}.\]

In conclusion,

\[F^{-1}(U)=\begin{cases}a+\dfrac{\sqrt{2U(c-a)h_{c}}}{h_{c}},&\text{for }F(a)\leq U\leq F(c)\\[3mm] \dfrac{(ca-b^{2})h_{c}+2c}{(c+a-2b)h_{c}+2}-\dfrac{(c-b)\sqrt{(a-b)^{2}h_{c}^{2}+2h_{c}\left[a(U+1)+c(U-1)-2bU\right]+4U}}{(c+a-2b)h_{c}+2},&\text{for }F(c)<U\leq F(b)\end{cases}. \tag{6}\]
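Equation 6 permits inverse-transform sampling directly. The sketch below (ours) assumes the parameters lie strictly inside the domain described in the next subsection, so that the denominator \((c+a-2b)h_{c}+2\) does not vanish.

```python
import math
import random

def saltbox_ppf(u, a, b, c, h_c):
    """Quantile function of the Saltbox-Roof distribution (Equation 6), u in [0, 1]."""
    u_c = h_c * (c - a) / 2                  # F(c): probability mass of the left branch
    if u <= u_c:
        return a + math.sqrt(2 * u * (c - a) / h_c)
    den = (c + a - 2 * b) * h_c + 2          # common denominator of the right branch
    rad = ((a - b) ** 2 * h_c ** 2
           + 2 * h_c * (a * (u + 1) + c * (u - 1) - 2 * b * u) + 4 * u)
    return ((c * a - b ** 2) * h_c + 2 * c) / den - (c - b) / den * math.sqrt(rad)

def saltbox_rvs(n, a, b, c, h_c, rng=random):
    """n pseudo-random variates by inverse-transform sampling."""
    return [saltbox_ppf(rng.random(), a, b, c, h_c) for _ in range(n)]
```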
### Domain
The \(x\)-values of the Saltbox-Roof distribution lie between its corresponding lower and upper limits \([a,b]\). The mode of the distribution is the \(x\)-value with the highest frequency \(h_{c}\); thus \(x=c\) is the mode, where \(a\leq c\leq b\). The _residual frequency_, referred to here as \(h_{b}\) and always located at \(x=b\), can't be an arbitrary value; it depends on \(h_{c}\) and the above \(x\)-limits, as described when the probability function was being defined. This residual frequency is ruled by \(h_{b}=[2-(b-a)h_{c}]/(b-c)\), with the condition that \(h_{c}\geq h_{b}\).
The frequency at the mode, \(h_{c}\), can't be an arbitrary value either because its value depends on its location \(c\); then, the existence of a Saltbox-Roof distribution is also related to the paired values \((c,h_{c})\). Both values should take any values inside a domain that will be determined here.
To obtain the domain of the pair of variables \((c,h_{c})\) it is better to use
_relative parameters_ of the variables that define the Saltbox-Roof PDF; _i.e._ convert it into a similar PDF shape where the lowest and highest values of \(x\) are transformed to be between \([0,1]\) and where the frequency at the mode \(h_{c}\) to be a proportion number (denoted here as \(\rho\), do not confuse the term _proportion_ with the population proportion meaning in statistics which is a parameter that describes a percentage value associated with a population) of an interval where \(h_{c}\) is valid, _i.e._ where \(h_{cm}\leq h_{c}\leq h_{cM}\), meaning the sub-indexes \(m\) and \(M\) be respectively the minimum and the maximum values of \(h_{c}\).
The \(x\)-relative value of \(x\), with a hat and expressed as \(\hat{x}\), is
\[\hat{x}=\frac{x-a}{b-a}.\]
Every variable over a hat will express to be relative to \((b-a)\). Hence
\[\hat{a}=0,\]
\[\hat{b}=1,\]
\[\hat{c}=\frac{c-a}{b-a}.\]
The proportion \(\rho\) according to the above definition is related with the other variables as
\[h_{c}=h_{cm}+\left(h_{cM}-h_{cm}\right)\rho.\]
When \(\rho=0\), then \(h_{c}=h_{cm}\) and when \(\rho=1\), then \(h_{c}=h_{cM}\).
The minimum value of \(h_{c}\) occurs when the PDF degenerates into a uniform distribution, _i.e._ when \(h_{c}=h_{b}\) and \(c\) is anywhere in \([a,b]\). According to the uniform distribution, then
\[h_{c}=h_{b}=h_{r}=\frac{1}{b-a},\]
\[h_{cm}=\frac{1}{b-a};\]
where \(h_{r}\) is the frequency of a uniform distribution between the same limits \([a,b]\). It is better to put
\[h_{cm} = \frac{1}{b-a}d_{1},\] \[= \frac{d_{1}}{b-a}1;\]
where \(d_{1}\) will be the relative x-interval which is an _unitary x-interval_.
The maximum value of \(h_{c}\) exists when the PDF degenerates into a triangular distribution, where \(h_{c}\) can be located at any value of \(c\) between \([a,b]\). It means that \(h_{b}=0\). According to the triangular distribution, it results that
\[h_{c}=\frac{2}{b-a},\]
\[h_{cM}=\frac{2}{b-a}.\]
Also, by taking into account the unitary x-interval, one obtains
\[h_{cM} = \frac{2}{b-a}d_{1},\] \[= \frac{d_{1}}{b-a}2.\]
Calling again the definition of \(\rho\),
\[h_{c} = h_{cm}+\left(h_{cM}-h_{cm}\right)\rho,\] \[= \frac{d_{1}}{b-a}1+\left(\frac{d_{1}}{b-a}2-\frac{d_{1}}{b-a}1 \right)\rho,\] \[= \frac{d_{1}}{b-a}1+\frac{d_{1}}{b-a}(2-1)\,\rho,\] \[= \frac{d_{1}}{b-a}1+\frac{d_{1}}{b-a}1\,\rho,\] \[= 1\,\frac{d_{1}}{b-a}\left(1+\rho\right).\]
If the unit 1 is named \(h_{1}\), as being an _unitary frequency_, it results that
\[h_{c}=h_{1}\,\left[\frac{d_{1}}{b-a}(1+\rho)\right];\]
indeed, clearly a proportion.
Now solving for \(\rho\)
\[h_{c} = h_{1}\,\left[\frac{d_{1}}{b-a}\left(1+\rho\right)\right],\] \[\frac{h_{c}}{h_{1}} = \frac{d_{1}}{b-a}\left(1+\rho\right),\] \[\frac{h_{c}}{h_{1}}\frac{b-a}{d_{1}} = \left(1+\rho\right),\] \[\rho = \frac{h_{c}}{h_{1}}\frac{b-a}{d_{1}}-1;\]
also clearly shows a proportion.
Because \(h_{1}=1\) and \(d_{1}=1\) one can express that
\[\rho=h_{c}(b-a)-1;\]
with the reminder that their corresponding unitary dimensions are dividing \(h_{c}\) and \((b-a)\).
Using the relative values of \(h_{c}\), \(b\) and \(a\), _i.e._ the variable \(\hat{h}_{c}\), \(\hat{b}\gets 1\) and \(\hat{a}\gets 0\), it results that \((\hat{b}-\hat{a})=1\) and that the proportion for the relative values is
\[\hat{\rho} = \hat{h}_{c}(\hat{b}-\hat{a})-1, \tag{7}\] \[= \hat{h}_{c}-1;\]
being this time \(\hat{\rho}\) as \(\rho\) relative to \((b-a)\) as stated above.
Since relative proportions to \((b-a)\) should be equal in the original and
the transformed shapes, then
\[\hat{\rho}=\rho,\]
\[\hat{h}_{c}-1=h_{c}(b-a)-1,\]
\[\hat{h}_{c}=h_{c}(b-a);\]
where \((b-a)\) can be considered a _scaling factor_ to transform from \(h_{c}\) to \(\hat{h}_{c}\).
In summary, to transform the parameters of the Saltbox-Roof distribution into its corresponding _x-interval_ \([0,1]\), assume that
\[\hat{a} = 0,\] \[\hat{b} = 1,\] \[\hat{c} = \frac{c-a}{b-a},\] \[\hat{h_{c}} = h_{c}(b-a).\]
Within the Saltbox-Roof distribution expressed in its \([0,1]\)\(\hat{x}\)-interval, the domain can be expressed in terms of the pair of variables \((\hat{c},\hat{\rho})\). The variable \(\hat{c}\) may vary within the interval \([0,1]\), the variable \(\hat{\rho}\) also between the interval \([0,1]\).
When \(\hat{\rho}=0\), it means that \(\hat{h}_{c}\) is the minimum with a value of 1. If in addition \(\hat{c}=0\), it means that \(\hat{h}_{c}\) is located at the initial values of \(\hat{x}\). With respect to the shape, the saltbox-roof shape degenerates into a flat-roof shape, _i.e._ the uniform distribution as shown in Figure 2a.
If \(\hat{c}=0\) is maintained constant, an increment \(\hat{\rho}\) from \(0\) to \(1\), \(\hat{h}_{c}\) initially being the minimum value of \(1\), it will grown until its maximum possible value, _i.e._ 2; but to balance that growth, \(\hat{h}_{b}\) should shrink from \(1\) until \(0\); then, the shape starts from being a flat-roof to being a skillion-roof (Figure 2f) until being a right-sided shed-roof, _i.e._ the right triangular distribution with its maximum at the beginning (Figure 2e).
When \(\hat{\rho}=1\), it means that \(\hat{h}_{c}\) is the maximum with the value of \(2\) (as mentioned); to be so, \(\hat{h}_{b}\) should be zero and the shape result into a gabled-roof; this if \(0\leq\hat{c}\leq 1\). This gabled-roof shape is the triangular distribution (Figure 2d). By maintaining \(\hat{\rho}=1\); if \(\hat{c}=0\), the gabled-roof shape converts into the mentioned right-sided shed-roof shape, _i.e._ the right-triangular distribution with its maximum at the beginning (Figure 2e); but, if \(\hat{c}=1\) the gabled-roof shape converts to a left-sided shed-roof shape, _i.e._ the right-triangular distribution with its maximum at the end (Figure 2c).
In the interval \(0\leq\hat{c}\leq 1\) but with \(0\leq\hat{\rho}\leq 1\) one has the saltbox-roof shape, but as mentioned above, there is a limit; that limit is when \(\hat{h}_{c}=\hat{h}_{b}\), because one important condition of all the equations defined for this distribution is that \(\hat{h}_{c}\geq\hat{h}_{b}\). Therefore, it is of interest to find the location \(\hat{c}_{L}\) in the interval \(]0,1[\) as a limit where \(\hat{h}_{c}=\hat{h}_{b}\). Under that condition, the saltbox-roof shape degenerates into a shed-flat-roof shape (Figure 2d). This, with the condition that the areas of both shapes should preserve unitary, maintaining invariant also the values of the locations \(\hat{a}\) and \(\hat{b}\).
To solve this, the area of the saltbox-roof shape \(\hat{A}_{ps}\), is evaluated for
\(\hat{h}_{b}=\hat{h}_{c}\leftarrow\hat{h}_{L}\) and \(\hat{c}\leftarrow\hat{c}_{L}\) as following
\[\begin{split}\hat{A}_{ps}&=\frac{1}{2}(\hat{c}-\hat{a})\hat{h}_{c}+\frac{1}{2}(\hat{h}_{c}+\hat{h}_{b})(\hat{b}-\hat{c}),\\ &=\frac{1}{2}(\hat{c}_{L}-\hat{a})\hat{h}_{L}+\frac{1}{2}(\hat{h}_{L}+\hat{h}_{L})(\hat{b}-\hat{c}_{L}),\\ &=\frac{1}{2}(\hat{c}_{L}-\hat{a})\hat{h}_{L}+\hat{h}_{L}(\hat{b}-\hat{c}_{L});\end{split}\]
where \(\hat{h}_{L}\) is the frequency at the limit and \(\hat{c}_{L}\) is the position in the x-axis of this frequency.
Because the shape is representing a PDF, \(\hat{A}_{ps}=1\), then
\[1=\frac{1}{2}(\hat{c}_{L}-\hat{a})\hat{h}_{L}+\hat{h}_{L}(\hat{b}-\hat{c}_{L})\]
and is solved for \(\hat{h}_{L}\) with respect to \(\hat{c}_{L}\), as following
\[2 = (\hat{c}_{L}-\hat{a})\hat{h}_{L}+2\hat{h}_{L}(\hat{b}-\hat{c}_{L }),\] \[2 = \hat{h}_{L}(\hat{c}_{L}-\hat{a}+2\hat{b}-2\hat{c}_{L}),\] \[\hat{h}_{L} = \frac{2}{2\hat{b}-\hat{a}-\hat{c}_{L}},\] \[= \frac{2}{2\hat{b}-(\hat{a}+\hat{c}_{L})}.\]
As being \(\hat{b}=1\) and \(\hat{a}=0\), then
\[\hat{h}_{L}=\frac{2}{2-\hat{c}_{L}}.\]
Finally, to convert it into a \(\rho=\hat{\rho}\) value, it is remembered that
\[\hat{\rho}=\hat{h}_{c}-1\]
and by analogy
\[\hat{\rho}=\hat{h}_{L}-1.\]
Solving for \(\hat{h}_{L}\) is obtained that
\[\hat{h}_{L}=\hat{\rho}+1.\]
Substituting this into the above found expression, it results that
\[\hat{h}_{L} = \frac{2}{2-\hat{c}_{L}},\] \[\hat{\rho}+1 = \frac{2}{2-\hat{c}_{L}},\] \[\hat{\rho} = \frac{2}{2-\hat{c}_{L}}-1. \tag{8}\]
It will be also useful to have the solution for \(\hat{c}_{L}\), then
\[\hat{\rho} = \frac{2}{2-\hat{c}_{L}}-1,\] \[(\hat{\rho}+1)(2-\hat{c}_{L}) = 2,\] \[2-\hat{c}_{L} = \frac{2}{\hat{\rho}+1},\] \[\hat{c}_{L} = 2-\frac{2}{\hat{\rho}+1}. \tag{9}\]
The rightmost limit of the saltbox-roof distribution is defined by the following inequality
\[\hat{c} \leq \hat{c}_{L},\] \[\leq 2-\frac{2}{\hat{\rho}+1}.\]
and the lower limit of the saltbox-roof distribution is defined by the following inequality

\[\hat{\rho}\geq\frac{2}{2-\hat{c}}-1.\]
The equation that limits the values of \(\hat{c}\) and \(\hat{\rho}\), _i.e._ where the shed-flat-roof distribution exist as in Figure 2b, is
\[(\hat{\rho}+1)(2-\hat{c}_{L})-2 = 0,\] \[2\hat{\rho}-\hat{\rho}\hat{c}_{L}+2-\hat{c}_{L}-2 = 0,\] \[2\hat{\rho}-\hat{\rho}\hat{c}_{L}-\hat{c}_{L} = 0. \tag{10}\]
All the above mentioned limits that define the domain of existence of the Saltbox-Roof distribution are summarized in Figure 4.
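These limits are easy to encode. A small helper sketch (ours), mapping the shape parameter \(\hat{\rho}\) to \(h_{c}\) and testing a pair \((\hat{c},\hat{\rho})\) against Equation 9:

```python
def h_c_from_rho(a, b, rho):
    """Modal frequency from the shape parameter: h_c = (1 + rho) / (b - a)."""
    return (1 + rho) / (b - a)

def in_saltbox_domain(a, b, c, rho):
    """True when (c_hat, rho) lies inside the domain of Figure 4."""
    c_hat = (c - a) / (b - a)
    if not (0.0 <= c_hat <= 1.0 and 0.0 <= rho <= 1.0):
        return False
    # Equation 9: the mode must not pass the shed-flat boundary c_L = 2 - 2/(rho + 1)
    return c_hat <= 2 - 2 / (rho + 1)
```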
### Properties
In this section, the explicit equations of the mean, median, mode, and variance of the Saltbox-Roof distribution are presented. These expressions were obtained using a Computer Algebra System (CAS), such as SageMath, and were verified numerically. The expressions are the following.
Figure 4: Domain of the Saltbox-Roof distribution.
#### 2.5.1 Mean
\[\mu = -\frac{1}{6}(a^{2}-2ab+b^{2})h_{c}+\frac{1}{3}c+\frac{2}{3}b, \tag{11}\] \[= -\frac{1}{6}(a-b)^{2}h_{c}+\frac{1}{3}c+\frac{2}{3}b.\]
#### 2.5.2 Median
\[m=\frac{ah_{c}+\sqrt{(c-a)h_{c}}}{h_{c}}. \tag{12}\]

Note that this expression is \(F_{1}^{-1}(1/2)\), so it holds when the median falls in the ascending branch, i.e., when \(F(c)=h_{c}(c-a)/2\geq 1/2\); otherwise the median is given by \(F_{2}^{-1}(1/2)\) of Equation 6.
#### 2.5.3 Mode
\[M=c. \tag{13}\]
#### 2.5.4 Variance
\[\sigma^{2} = -\frac{1}{36}(a^{4}-4a^{3}b+6a^{2}b^{2}-4ab^{3}+b^{4})h_{c}^{2}+ \frac{1}{18}c^{2}-\frac{1}{9}cb+\frac{1}{18}b^{2}+\cdots \tag{14}\] \[\cdots+\frac{1}{36}(ca^{2}-3a^{3}+(c-7a)b^{2}+2b^{3}-2(ca-4a^{2})b )h_{c},\] \[= -\frac{1}{36}(a-b)^{4}h_{c}^{2}+\frac{1}{18}c^{2}-\frac{1}{9}cb+ \frac{1}{18}b^{2}+\frac{1}{36}(c-3a+2b)(a-b)^{2}h_{c},\] \[= -\frac{1}{36}(a-b)^{4}h_{c}^{2}+\frac{1}{18}(c-b)^{2}+\frac{1}{36 }(c-3a+2b)(a-b)^{2}h_{c},\] \[= \frac{1}{36}\left[2(c-b)^{2}+(c-3a+2b)(a-b)^{2}h_{c}-(a-b)^{4}h_{ c}^{2}\right].\]
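The sketch below (ours) evaluates Equations 11 and 14; comparing its output with the sample mean and variance of draws from `saltbox_rvs` gives a quick numerical verification.

```python
def saltbox_mean(a, b, c, h_c):
    """Mean of the distribution (Equation 11)."""
    return -(a - b) ** 2 * h_c / 6 + c / 3 + 2 * b / 3

def saltbox_variance(a, b, c, h_c):
    """Variance of the distribution (Equation 14)."""
    return (2 * (c - b) ** 2
            + (c - 3 * a + 2 * b) * (a - b) ** 2 * h_c
            - (a - b) ** 4 * h_c ** 2) / 36
```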
## 3 Distributions that degenerate from the Saltbox-Roof distribution
As shown in Figure 4, the Saltbox-Roof distribution degenerates into other functions. In this section, the equations of these functions are presented, some of them well known to the mathematical community.
### Uniform distribution (Flat-Roof)
The uniform distribution between the interval \([a,b]\) has the same probability for all its values; it necessarily should have a constant height \(h_{r}\) so that the rectangular area equals unity. Then, the rectangle's area is
\[A_{p}=(b-a)h_{r}=1.\]
Solving for \(h_{r}\) it results that
\[h_{r}=\frac{1}{b-a}.\]
The equation of the probability density function in the interval is
\[f(x) = h_{r}, \tag{15}\] \[= \frac{1}{b-a},\qquad\mbox{ for }a\leq x\leq b.\]
The cumulative density function is
\[F(x)=\int_{a}^{x}f(t)\,\mathrm{d}t=\int_{a}^{x}\frac{1}{b-a}\,\mathrm{d}t=\frac{t}{b-a}\bigg{|}_{a}^{x}=\frac{x-a}{b-a},\qquad\text{for }a\leq x\leq b. \tag{16}\]
The inverse of the CDF is
\[F^{-1}(U)=a+U(b-a); \tag{17}\]
where \(U\) is any random number, _i.e._ the same uniform distribution, but between the interval \([0,1]\).
### Triangular distribution (Gabled-Roof)
The PDF function is
\[f(x)=\left\{\begin{aligned} &\frac{2(x-a)}{(b-a)(c-a)},& \text{for }a\leq x\leq c\\ &\frac{2(b-x)}{(b-a)(b-c)},&\text{for }c\leq x\leq b \end{aligned}\right.. \tag{18}\]
To obtain the CDF, the function \(f(x)\) is integrated over the interval \([a,b]\):
\[F(x) = \int f(x)\mathrm{d}x,\] \[= \begin{cases}\int_{a}^{x}\frac{2(x-a)}{(b-a)(c-a)}\mathrm{d}x,& \text{for }a\leq x\leq c\\ \int_{c}^{x}\frac{2(b-x)}{(b-a)(b-c)}\mathrm{d}x,&\text{for }c\leq x\leq b \end{cases}.\]
After the integration is performed, the CDF is
\[F(x) = \begin{cases}\frac{a^{2}-2\,ax+x^{2}}{(a-b)\,(a-c)},&\text{for }a\leq x \leq c\\ \frac{a^{2}-2\,ac+c^{2}}{(a-b)(a-c)}+\frac{2\,bc-c^{2}-2\,bx+x^{2}}{(a-b)(b-c) },&\text{for }c\leq x\leq b\end{cases}. \tag{19}\]
To generate random numbers for this CDF, the inverse of \(F(x)\) is obtained; this by changing \(F(x)\gets U\) and solving for \(x\); _i.e._\(x\gets F^{-1}\), resulting in the following equations:
\[F_{1}^{-1}(U) = a+\sqrt{Ua^{2}-Uab-(Ua-Ub)c},\] \[= a+\sqrt{U}\sqrt{a^{2}-ab-(a-b)c},\] \[= a+\sqrt{U}\sqrt{a(a-b)-(a-b)c},\] \[= a+\sqrt{U}\sqrt{(a-b)(a-c)}.\]
\[F_{2}^{-1}(U) = b-\sqrt{(U-1)ab-(U-1)b^{2}-((U-1)a-(U-1)b)c},\] \[= b-\sqrt{U-1}\sqrt{ab-b^{2}-(a-b)c},\] \[= b-\sqrt{U-1}\sqrt{b(a-b)-(a-b)c},\] \[= b-\sqrt{U-1}\sqrt{(a-b)(b-c)},\] \[= b-\sqrt{1-U}\sqrt{(b-a)(b-c)}.\]
Finally, is resumed that
\[F^{-1}(U)=\begin{cases}a+\sqrt{U}\sqrt{(b-a)(c-a)},&\text{for }F_{1}(a)\leq U \leq F_{1}(c)\\ b-\sqrt{1-U}\sqrt{(b-a)(b-c)},&\text{for }F_{2}(c)\leq U\leq F_{2}(b)\end{cases}; \tag{20}\]
with \(U\) in the interval \([0,1]\).
### Right-Triangular distribution
The right-triangular distribution is a special case of the triangular distribution. We can distinguish two right-triangular distributions: the _Left-sided Shed Roof_ distribution, whose maximum frequency is located at the end of the interval (the slope points to the left), and the _Right-sided Shed Roof_ distribution, whose slope points to the right and whose maximum frequency is located at the beginning of the interval.
#### 3.3.1 The Left-sided Shed Roof distribution
The PDF of the Left-sided Shed Roof distribution is found by setting \(c\gets b\) in the equation of the same function for the triangular distribution; then
\[f(x)=\frac{2(x-a)}{(b-a)^{2}},\ \ \mbox{for}\ a\leq x\leq b. \tag{21}\]
The corresponding CDF of this function (using the same rule to set \(c\) equal to \(b\)) is
\[F(x)=\frac{(x-a)^{2}}{(b-a)^{2}},\ \ \mbox{for}\ a\leq x\leq b. \tag{22}\]
The inverse of the CDF is therefore
\[F^{-1}(U)=a+\sqrt{U}\,(b-a),\qquad\text{for }F(a)\leq U\leq F(b). \tag{23}\]
#### 3.3.2 The Right-sided Shed Roof distribution
Similar to the above case, the PDF of the Right-sided Shed Roof distribution is found by setting \(c\gets a\), then
\[f(x)=\frac{2(b-x)}{(b-a)^{2}},\ \ \mbox{for}\ a\leq x\leq b. \tag{24}\]
The corresponding CDF of this function (using the same rule to set \(c\) equal to \(a\)) is
\[F(x) = \frac{2ba-a^{2}-2bx+x^{2}}{(a-b)(b-a)}, \tag{25}\] \[= -\frac{2ba-a^{2}-2bx+x^{2}}{(a-b)^{2}},\qquad\mbox{for $a\leq x\leq b$}.\]
The inverse of the CDF is
\[F^{-1}(U)=b-(b-a)\sqrt{1-U},\qquad\text{for }F(a)\leq U\leq F(b). \tag{26}\]
### One special case of the Trapezoidal distribution (Shed-Flat Roof)
The functions that define the _Shed-Flat Roof_ distribution are the following
\[f_{1}(x)=\frac{x-a}{c-a}h_{p},\qquad\mbox{for $a\leq x\leq c$};\]
\[f_{2}(x)=h_{p},\qquad\mbox{for $c\leq x\leq b$}.\]
The height of the plateau (\(h_{p}\)) should be determined as a function of the variables \(a,b,c\) to have a unitary area; therefore
\[A_{p}=\frac{1}{2}(c-a)h_{p}+(b-c)h_{p}=1.\]
Solving for \(h_{p}\) is obtained that
\[\frac{1}{2}(c-a)h_{p}+(b-c)h_{p} = 1,\] \[(c-a)h_{p}+2(b-c)h_{p} = 2,\] \[h_{p}(c-a+2b-2c) = 2,\] \[h_{p} = \frac{2}{2b-a-c},\] \[= \frac{2}{2b-(a+c)}.\]
Once replacing \(h_{p}\) in \(f_{1}(x)\) and \(f_{2}(x)\) and performing some math manipulation, one gets
\[f_{1}(x) = \frac{x-a}{c-a}\frac{2}{2b-(a+c)},\] \[= \frac{2(x-a)}{(c-a)[2b-(c+a)]},\] \[= \frac{2(x-a)}{2b(c-a)-(c^{2}-a^{2})},\] \[= \frac{2(x-a)}{(a^{2}-c^{2})-2b(a-c)}\]
and
\[f_{2}(x) = h_{p},\] \[= \frac{2}{2b-(a+c)}.\]
The PDF of this distribution is
\[f(x)=\begin{cases}\frac{2(x-a)}{(a^{2}-c^{2})-2b(a-c)},&\text{for $a\leq x\leq c$} \\ \frac{2}{2b-(a+c)},&\text{for $c\leq x\leq b$}\end{cases}. \tag{27}\]
The CDF is
\[F(x)=\begin{cases}-\dfrac{a^{2}-2\,ax+x^{2}}{c^{2}-a^{2}-2(c-a)b},&\text{for }a\leq x\leq c\\[2mm] -\dfrac{a^{2}-2\,ac+c^{2}}{c^{2}-a^{2}-2(c-a)b}+2\,\dfrac{c-x}{c+a-2b},&\text{for }c\leq x\leq b\end{cases}. \tag{28}\]
To generate random numbers under this CDF, the inverse function has been determined as follows
\[F^{-1}(x) = \begin{cases}a+\sqrt{-Uc^{2}+Ua^{2}+2(Uc-Ua)b},\\ \text{for $F(a)\leq U\leq F(c)$}\\ -\frac{1}{2}(U-1)c-\frac{1}{2}(U-1)a+Ub,\\ \text{for $F(c)\leq U\leq F(b)$}\end{cases}. \tag{29}\]
### Another trapezoidal distribution (Skillion Roof)
In the literature, _e.g._ Kim (2007), the trapezoidal distribution is defined as one that has an ascending line up to a limit where it becomes a plateau, and then follows a descending line. The trapezoidal distribution dealt with here is not the one mentioned in the literature.
Here it is defined a slightly different trapezoidal distribution which has a vertical ascending line \(h_{c}\) at the starting-point of the interval (point \(a\)) and then a descending line up to a vertical line \(h_{b}\) at the end-point of the interval (point \(b\)); thus \(c=a\). It is a quadrilateral with vertical sides, one side horizontal (the base) and an inclined side opposite to the base (Figure 2f).
The function that defines the _Skillion Roof_ distribution is
\[f(x)=h_{c}-\frac{h_{c}-h_{b}}{b-c}(x-c);\]
with the condition that \(h_{c}\geq h_{b}\) and \(c\leq b\), for \(c\leq x\leq b\).
In order to convert it into a PDF, the area should be unitary; then
\[A_{p}=\frac{1}{2}(h_{c}+h_{b})(b-c)=1.\]
Solving for \(h_{b}\) it results that
\[\frac{1}{2}(h_{c}+h_{b})(b-c) = 1,\] \[h_{c}(b-c)+h_{b}(b-c) = 2,\] \[h_{b} = \frac{2-h_{c}(b-c)}{b-c}.\]
Replacing \(h_{b}\) into \(f(x)\) one gets
\[f(x) = h_{c}-\frac{h_{c}}{b-c}(x-c)+\frac{h_{b}}{b-c}(x-c),\] \[= h_{c}-\frac{h_{c}}{b-c}(x-c)+\frac{2-h_{c}(b-c)}{(b-c)^{2}}(x-c),\] \[= h_{c}+\frac{c\,h_{c}}{b-c}-\frac{h_{c}}{b-c}x-\frac{2c-c\,h_{c}( b-c)}{(b-c)^{2}}+\frac{2-h_{c}(b-c)}{(b-c)^{2}}x,\] \[= \frac{h_{c}(b-c)^{2}+c\,h_{c}(b-c)+c\,h_{c}(b-c)-2c}{(b-c)^{2}}+\cdots\] \[\cdots+\left(\frac{2-h_{c}(b-c)}{(b-c)^{2}}-\frac{h_{c}}{b-c} \right)x.\]
So far, in terms of \(c\), the PDF is
\[f(x) = \frac{h_{c}(b-c)^{2}+2\,c\,h_{c}(b-c)-2c}{(b-c)^{2}}+\cdots\] \[\cdots+\left(\frac{2-h_{c}(b-c)}{(b-c)^{2}}-\frac{h_{c}}{b-c} \right)x,\qquad\mbox{for $c\leq x\leq b$}.\]
The range of the value of \(h_{c}\) is maximum when \(h_{c}\equiv h_{b}\) turning it into a _Flat-Roof_ (uniform) PDF, and minimum when \(h_{b}=0\) turning it into a _Right-sided Shed Roof_ PDF.
When \(h_{c}\equiv h_{b}=h_{r}\)
\[h_{c}=\frac{1}{b-a}.\]
When \(h_{b}=0\)
\[h_{b} = \frac{2-h_{c}(b-c)}{b-c},\] \[0 = \frac{2-h_{c}(b-c)}{b-c},\] \[h_{c}(b-c) = 2,\] \[h_{c} = \frac{2}{b-a}.\]
Therefore, those values are a reference for the range of \(h_{c}\)
\[\frac{1}{b-a}\leq h_{c}\leq\frac{2}{b-a}.\]
In conclusion, the PDF is
\[f(x) = \frac{h_{c}(b-a)^{2}+2\,ah_{c}(b-a)-2\,a}{(b-a)^{2}}+\cdots \tag{30}\] \[\cdots+\left[\frac{2-h_{c}(b-a)}{(b-a)^{2}}-\frac{h_{c}}{(b-a)} \right]x,\qquad\mbox{for $a\leq x\leq b$}.\]
Within that PDF, the CDF has been obtained as following
\[F(x) = \frac{1}{(a-b)^{2}}\{[(a-b)h_{c}+1]x^{2}+a^{2}+(a^{2}b-ab^{2})h_{ c}-\cdots \tag{31}\] \[\cdots-[(a^{2}-b^{2})h_{c}+2\,a]x\},\qquad\mbox{for $a\leq x\leq b$}.\]
Finally, the inverse of the CDF of this function is
\[F^{-1}(U)=\frac{1}{2[(a-b)h_{c}+1]}\left[2a+(a^{2}-b^{2})h_{c}-(a-b)\sqrt{(a-b)^{2}h_{c}^{2}+4U(a-b)h_{c}+4U}\right], \tag{32}\]

for \(F(a)\leq U\leq F(b)\).
## 4 Validation
The validation of the explicit equations of the Saltbox-Roof distribution was made by the use of the concept of the _truncated distribution_, used in _conditional probability_, according to Ushakov (2002) in Hazewinkel (2002).
Ushakov (2002) says: Let \(G(x)\) be a cumulative distribution function (the original CDF) between the interval \([d,e]\). The truncated distribution corresponding to \(G(x)\) between \(a\) and \(b\) is understood to be the distribution function
\[F_{\{a,b\}}(x)=\begin{cases}0,&\mbox{for }x\leq a\\ \frac{G(x)-G(a)}{G(b)-G(a)},&\mbox{for }a<x\leq b\\ 1,&\mbox{for }x>b,\,a\leq b\end{cases}\quad. \tag{33}\]
Similar holds for the truncated probability density function \(f(x)\)
\[f_{\{a,b\}}(x)=\begin{cases}\dfrac{g(x)}{G(b)-G(a)},&\text{for }a\leq x\leq b \\ 0,&\text{for }x\notin]a,b]\end{cases}. \tag{34}\]
Also, for the inverse of the CDF, it holds that it is equivalent to evaluating \(G^{-1}\) at the variable \(G(a)+U[G(b)-G(a)]\), _i.e._
\[F_{\{a,b\}}^{-1}(U)=G^{-1}\big{(}U\left[G(b)-G(a)\right]+G(a)\big{)}; \tag{35}\]
where \(g(x)\) is the probability density function of \(G(x)\) between the interval \([d,e]\).
In Figure 5 is shown this concept graphically for the case of a two-sided truncation. The shape of the original triangle is given by the PDF \(g(x)\), this triangle with base \(\overline{de}\), whose \(G(x)\) and \(G^{-1}(U)\) functions also hold. From this triangle, the _truncated triangle_ is the region whose base is \(\overline{ab}\), whose F-functions hold.
If only one side is truncated at \(x\gets b\), such that \(d\gets a\), as shown in Figure 6, one may say that the triangular distribution is right-sided truncated; this is the Saltbox-Roof distribution dealt with in this article.
Similarity between the right triangles whose heights are \(h_{c}\) and \(h_{b}\) is used to find the value of \(e\) from the known values of \(h_{b}\) and \(h_{c}\), which results in the following relation
\[e=\frac{h_{c}b-h_{b}c}{h_{c}-h_{b}}.\]
Figure 5: Two sided truncated triangular distribution.

According to this, \(G(a)=0\) and the above functions reduce to the following:
\[f_{\{b\}}(x)=\begin{cases}0,&\text{for }x\leq a\\ \dfrac{g(x)}{G(b)},&\text{for }a\leq x\leq b\,;\\ 0,&\text{for }x>b\end{cases}\]
Figure 6: One sided truncated triangular distribution.
\[F_{\{b\}}(x)=\begin{cases}0,&\text{for }x\leq a\\ \dfrac{G(x)}{G(b)},&\text{for }a\leq x\leq b\\ 1,&\text{for }x>b\end{cases};\]
and
\[F_{\{b\}}^{-1}(U)=G^{-1}(U\,G(b)).\]
For the case of the Saltbox-Roof distribution, \(g(x)\), \(G(x)\) and \(G^{-1}(U)\) are respectively the corresponding PDF, CDF, and the inverse of the CDF; all three of the triangular distribution. In this case, within the interval \([d\gets a,e]\) are respectively:
\[g(x)=\begin{cases}\dfrac{2(x-a)}{(e-a)(c-a)},&\text{for $a\leq x\leq c$}\\ \dfrac{2(e-x)}{(e-a)(e-c)},&\text{for $c\leq x\leq e$}\end{cases}; \tag{36}\]
\[G(x)=\begin{cases}\dfrac{a^{2}-2\,ax+x^{2}}{(a-e)(a-c)},&\text{for $a\leq x\leq c$}\\ \dfrac{a^{2}-2\,ac+c^{2}}{(a-e)(a-c)}+\dfrac{2\,ec-c^{2}-2\,ex+x^{2}}{(a-e)(e- c)},&\text{for $c\leq x\leq e$}\end{cases}; \tag{37}\]
and
\[G^{-1}(U)=\begin{cases}a+\sqrt{U}\sqrt{(e-a)(c-a)},&\text{for $G(a)\leq U\leq G (c)$}\\ e-\sqrt{1-U}\sqrt{(e-a)(e-c)},&\text{for $G(c)\leq U\leq G(e)$}\end{cases}. \tag{38}\]
The validation was made numerically by generating 50 random numbers between \([0,1]\). With the same random numbers, the explicit function of the inverse CDF of the Saltbox-Roof distribution was used. In Figure 7 is presented the comparison between both approaches. Clearly, they are equivalent.
Figure 7: Validation of the Saltbox-Roof distribution.
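The comparison of Figure 7 can be restated compactly. The sketch below (ours, assuming the `saltbox_ppf` function given earlier and parameters strictly inside the domain of Section 2.4) builds \(G^{-1}\) of the triangular distribution on \([a,e]\) (Equation 38), right-truncates it at \(b\) following Equation 35 with \(G(a)=0\), and should agree with `saltbox_ppf` for every \(U\in[0,1]\).

```python
import math

def triangular_ppf(u, a, c, e):
    """Inverse CDF of the triangular distribution on [a, e] with mode c (Equation 38)."""
    if u <= (c - a) / (e - a):               # G(c)
        return a + math.sqrt(u * (e - a) * (c - a))
    return e - math.sqrt((1 - u) * (e - a) * (e - c))

def truncated_triangular_ppf(u, a, b, c, h_c):
    """Right-truncation of the triangle at b, inverted via Equation 35 with G(a) = 0."""
    h_b = (2 - (b - a) * h_c) / (b - c)
    e = (h_c * b - h_b * c) / (h_c - h_b)    # apex of the untruncated triangle
    G_b = 1 - (e - b) ** 2 / ((e - a) * (e - c))   # G(b), from Equation 37
    return triangular_ppf(u * G_b, a, c, e)
```

For \(a=0\), \(b=1\), \(c=0.5\), \(h_{c}=1.5\), both `truncated_triangular_ppf` and `saltbox_ppf` return, e.g., \(0.5\) at \(U=0.375\) and \(1\) at \(U=1\).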
## 5 Applications
### Generating random values representing the friction angle of rock surfaces
The _basic friction angle_ (denoted by \(\phi_{b}\)) between two rock surfaces are those angles obtained by the arc-tangent of the _friction coefficient_ at atmospheric Earth pressure, _i.e._ at zero normal-to-the-contact-surface stresses, which is a concept of rock mechanics found for example in Talobre (1957).
Those friction angles can't be negative by physical rules, and they are also not commonly greater than, say, around 55 degrees. Indeed, rocks with basic friction angles around 45 degrees are scarce.
To perform a Monte-Carlo analysis, for example, one can propose a Saltbox-Roof distribution for this basic friction angle, _e.g._ input a minimum of 20 degrees, a maximum of 45 degrees, and a mode of 32 degrees. The shape factor \(\hat{\rho}\) could intuitively be preferred to be equal to 0.8.
In Figure 8 is shown the histogram of 2000 generated numbers according to the Saltbox-Roof distribution representing \(\phi_{b}\).
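A sketch of this experiment (ours), reusing the `h_c_from_rho` and `saltbox_rvs` helpers defined earlier:

```python
a, b, c, rho = 20.0, 45.0, 32.0, 0.8      # degrees, as proposed above
h_c = h_c_from_rho(a, b, rho)             # modal frequency from the shape factor
phi_b = saltbox_rvs(2000, a, b, c, h_c)   # 2000 basic friction angles
# `phi_b` can now feed a Monte-Carlo stability analysis, or a histogram as in Figure 8
```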
Figure 8: The Saltbox-Roof distribution for the basic friction angle.

### Generating spaced points under a rule in the real-line

For some computer calculations there is the need to generate equally spaced points in a one-dimensional space, _i.e._ in \(\mathbb{R}^{1}\). Many programming languages have functions that generate them; for example, MATLAB/Octave provides the function named linspace, and in Python (in the module Numpy) the function is named the same: numpy.linspace. But among those equally spaced functions there are no further alternatives to control the generation of points in \(\mathbb{R}^{1}\), apart from the logarithmic version of the linear space, named logspace.
The Saltbox-Roof distribution, and any other probability distribution, can be used to generate _ruled-spaced points_ in the real line, _i.e._ to generate points under a specified rule that is not linear.

Such points are neither equally spaced nor random; they are generated according to a rule given by some PDF. In this case, the Saltbox-Roof PDF is used for that purpose.
Figure 9 shows the generation of 30 spaced points in \(\mathbb{R}^{1}\) in the interval \([0,1]\) according to the Saltbox-Roof distribution with mode at 0.7 and shape factor of 0.8.

To avoid randomness in the generated points, it is important to take equally spaced points in the interval \([0,1]\) and pass them through the inverse CDF of the PDF; in this case, they are passed through the function in Equation 6.
Figure 9: Points spaced by rule of the Saltbox-Roof distribution.
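A sketch of this deterministic procedure follows, reusing the helpers defined above; again, the height ratio standing in for the shape factor is an assumption.

```python
# Equally spaced probabilities (as linspace would give) pushed through the
# truncated inverse CDF yield deterministic, ruled-spaced points in [0, 1].
n = 30
a, b, c = 0.0, 1.0, 0.7                # interval [0, 1] with mode at 0.7
e = apex_from_heights(b, c, 0.8, 1.0)  # 0.8 stands in for rho-hat = 0.8
us = [i / (n - 1) for i in range(n)]
points = [saltbox_inv_cdf(u, a, c, b, e) for u in us]
```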
### Approximating plane curves by polygons with unequal sides
Curves in the plane (in \(\mathbb{R}^{2}\)) with variable curvature are in practice approximated by polygons, especially in vector plotting. The usual method to plot a curve is: generate an equally spaced sequence of numbers representing the independent variable or the parameter of the curve's function; then pass those values through the curve's function to obtain the values of the dependent variable; finally, plot the curve in a 2D Cartesian coordinate system with the independent variable on the abscissa (horizontal rightward x-axis) and the dependent variable on the ordinate (vertical upward y-axis).
In the intervals of the curve where its _curvature_ is small, the resulting polygon represents the curve smoothly; but in the intervals where the curvature is high, the curve appears as a polygon rather than a smooth curve.

A common practice to solve this is to increase the number of equally spaced points to obtain smoother curves in the intervals of high curvature; but in the intervals of low curvature, many of those points are unnecessary. A non-equally-spaced point generator will therefore save many points in the parts of the curve with small curvature and concentrate points where the curve has high curvature.
An accurate algorithm might concentrate points in direct relation to the varying curvature of the curve, but that implies calculating the curve's curvature function prior to generating the curve itself. Sometimes, curvature functions are cumbersome to obtain. A faster but less accurate solution is the use of a flexible distribution as point generator, which can save many unnecessary points when defining the curve. This can be achieved, for example, with the inverse CDF of the Saltbox-Roof distribution (or another distribution's inverse CDF).
For this application, consider the vertical parabola \(y=x^{2}\) with vertex at the origin, _i.e._ with parameters \(a_{2}=1\) and \(a_{1}=a_{0}=0\) for a general parabola \(y=a_{2}x^{2}+a_{1}x+a_{0}\). The signed curvature \(r^{-1}\) is
\[r^{-1}(x)=\frac{2a_{2}}{[1+(2a_{2}x+a_{1})^{2}]^{\frac{3}{2}}};\]
where substituting the values \(\{a_{0},a_{1},a_{2}\}\) gives the equation of the curvature function with respect to \(x\),
\[r^{-1}(x)=\frac{2}{[1+(2x)^{2}]^{\frac{3}{2}}}=\frac{2}{(1+4x^{2})^{\frac{3}{2}}}.\]
If the parabola is to be plotted in the interval \([-1,\frac{1}{5}]\) as shown in Figure 10a, one will note that its curvature varies in this interval, as shown by the parabola's curvature \(\frac{1}{r}\) in Figure 10b.

Using the plot of Figure 4 of the domain of the Saltbox-Roof distribution, choose the pair of relative parameters \((\hat{c},\hat{\rho})\) that gives a shape similar to the parabola's curvature plot, as shown by the dotted line in Figure 10b.

For example, consider using the relative mode location at \(\frac{5}{6}\), _i.e._ \(\hat{c}=\frac{5}{6}\), and the shape factor \(\hat{\rho}=\frac{3}{4}\). The choice of \(\hat{c}\) is made because the maximum curvature in the plot has an abscissa of \(x=0\), which is located at \(\frac{5}{6}\) of the interval width of length \(\frac{6}{5}\). The choice of \(\hat{\rho}\) is more intuitive, but note that as \(\hat{\rho}\) approaches zero, the _mode_ (at the highest frequency \(h_{c}\)) and the _residual mode_ (at the residual frequency \(h_{b}\)) become nearly equal.
Figure 10: The vertical parabola with vertex at the origin, \(y=x^{2}\), in the interval \([-1,\frac{1}{5}]\): **a** the curve; **b** the curvature \(\frac{1}{r}\), with the Saltbox-Roof distribution shape shown as a dotted line.

With these parameters and using the inverse CDF of the Saltbox-Roof distribution, 20 points are generated and stored in \(\hat{U}\). To transform those values from the interval \([0,1]\) into the interval \([-1,\frac{1}{5}]\), the following equation is used:
\[U=x_{m}+(x_{M}-x_{m})\hat{U};\]
where \(x_{m}\) and \(x_{M}\) are the minimum and maximum values of the interval \([-1,\frac{1}{5}]\), respectively; _i.e._\(x_{m}\leftarrow-1\) and \(x_{M}\leftarrow\frac{1}{5}\).
Those \(U\) values are passed through the function \(y=U^{2}\), taking \(x\gets U\). The resulting plot is shown in Figure 11a. Note that points are concentrated near the parabola's vertex, where the curvature is high. Figure 11b shows the same parabola in the same interval, plotted with the same number of points but generating the x-values with the uniform distribution.
Figure 11: The vertical parabola curve approximated by a 20-point polygon: **a** points generated with the Saltbox-Roof distribution; **b** points generated with the Uniform distribution (Flat-Roof).
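The whole walkthrough can be condensed into the following sketch, once more reusing the helpers defined above; the height ratio standing in for \(\hat{\rho}=\frac{3}{4}\) is an assumption, as before.

```python
n = 20
x_m, x_M = -1.0, 0.2                    # plotting interval [-1, 1/5]
a, b, c = 0.0, 1.0, 5.0 / 6.0           # relative interval with mode c-hat
e = apex_from_heights(b, c, 0.75, 1.0)  # 0.75 stands in for rho-hat = 3/4
us = [i / (n - 1) for i in range(n)]    # equally spaced probabilities
U = [saltbox_inv_cdf(u, a, c, b, e) for u in us]  # ruled points in [0, 1]
xs = [x_m + (x_M - x_m) * u for u in U]  # mapped into [-1, 1/5]
ys = [x * x for x in xs]                 # the 20-point polygon of y = x**2
```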
## 6 Closure
The Saltbox-Roof distribution and its corresponding degenerate cases presented in this document define a set whose members are intimately related according to the pair of values \((\hat{c},\hat{\rho})\) one chooses in each case. Figure 4 is a graphical representation of these relations, where the domain described there holds for any possible Saltbox-Roof distribution. With this graph and the equations of the probability functions presented here, the user can control the shape changes the PDF can take within this set.
## 7 Abbreviations
**PDF**: Probability Density Function.
**CDF**: Cumulative Distribution Function.

**CAS**: Computer Algebra System.
**2D**: Two Dimensions.
## 8 Symbols
\(a\): Inferior limit (minimum value) of \(x\) of the Saltbox-Roof distribution.
\(\hat{a}\): Inferior relative limit of \(x\).
\(\{a_{0},a_{1},a_{2}\}\): Set of polynomial coefficients of a second-order polynomial \(y(x)\).
\(A_{p}\): Area of a PDF.
\(\hat{A}_{ps}\): Area of a PDF in relative variables when \(\hat{h}_{c}=\hat{h}_{b}\).
\(b\): Superior limit (maximum value) of \(x\) of the Saltbox-Roof distribution.
\(\hat{b}\): Superior relative limit of \(x\).
\(c\): Mode at \(x\) of the Saltbox-Roof distribution.
\(\hat{c}_{L}\): Location of \(\hat{c}\) where \(\hat{h}_{c}=\hat{h}_{b}\).
\(\hat{c}\): Relative mode of \(c\).
\(d_{1}\): Unitary length of an interval.
\(d\): Inferior limit of \(x\) in \(g(x)\), \(G(x)\), or \(G^{-1}(x)\).
\(\overline{de}\): Length between \(d\) and \(e\).
\(e\): Superior limit of \(x\) in \(g(x)\), \(G(x)\), or \(G^{-1}(x)\).
\(f_{\{a,b\}}(x)\): PDF of a truncated function between the limits \([a,b]\).
\(f_{\{b\}}(x)\): PDF of a right one-sided truncated function between \([a,b]\).
\(f(x)\): Function of a PDF.
\(F_{\{a,b\}}(x)\): CDF of a truncated function between the limits \([a,b]\).
\(F_{\{b\}}(x)\): CDF of a right one-sided truncated function between \([a,b]\).
\(F(x)\): Function of a CDF.
\(F_{\{b\}}^{-1}(x)\): Inverse CDF of a right one-sided truncated function between \([a,b]\).
\(F^{-1}(x)\): Function of an inverse CDF.
\(F_{\{a,b\}}^{-1}(x)\): Inverse CDF of a truncated function between the limits \([a,b]\).
\(g(a)\): \(g(x)\) evaluated at \(a\).
\(g(b)\): \(g(x)\) evaluated at \(b\).
\(g(x)\): PDF of a non-truncated function evaluated at \(x\).
\(G(a)\): \(G(x)\) evaluated at \(a\).
\(G(b)\): \(G(x)\) evaluated at \(b\).
\(G(x)\): CDF of a non-truncated function evaluated at \(x\).
\(G^{-1}(a)\): \(G^{-1}(x)\) evaluated at \(a\).
\(G^{-1}(b)\): \(G^{-1}(x)\) evaluated at \(b\).
\(G^{-1}(x)\): Inverse function of \(G(x)\) evaluated at \(x\).
\(h\): Frequency.
\(h_{1}\): Unitary value of frequency.
\(h_{a}\): Frequency at \(a\).
\(h_{b}\): Frequency at \(b\), the residual frequency of the Saltbox-Roof distribution.
\(h_{c}\): Frequency at \(c\), the highest frequency of the Saltbox-Roof distribution.
\(\hat{h}_{c}\): Frequency at the relative value \(\hat{c}\).
\(h_{cM}\): Maximum possible value of \(h_{c}\).
\(h_{cm}\): Minimum possible value of \(h_{c}\).
\(\hat{h}_{L}\): Frequency at \(\hat{c}_{L}\).
\(h_{p}\): Plateau frequency in a Shed-flat roof distribution.
\(h_{r}\): Constant frequency in a Uniform distribution.
\(r^{-1}\): Signed curvature of \(y\).
\(\mathbb{R}^{2}\): Set given by the Cartesian product of \(\mathbb{R}\) with itself.
\(\mathbb{R}\) or \(\mathbb{R}^{1}\): Set of real numbers.
\(m\): Median.
\(u\): Uniform random variable in \([0,1]\).
\(U\): Uniform random generator function in \([0,1]\).
\(x\): Quantile.
\(\hat{x}\): Relative quantile.
\(x_{m}\): Minimum value of \(x\).
\(x_{M}\): Maximum value of \(x\).
\(X\): Variate value.
\(y\): Planar curve function, _i.e._\(y(x)\) defined in canonical form.
\(\mu\): Mean.
\(\phi_{b}\): Basic friction angle (a rock-mechanics concept).
\(\rho\): Shape factor of the Saltbox-Roof distribution.
\(\hat{\rho}\): Relative shape factor.
\(\sigma^{2}\): Variance.
|
2306.02746 | On the Split Closure of the Periodic Timetabling Polytope | The Periodic Event Scheduling Problem (PESP) is the central mathematical tool
for periodic timetable optimization in public transport. PESP can be formulated
in several ways as a mixed-integer linear program with typically general
integer variables. We investigate the split closure of these formulations and
show that split inequalities are identical with the recently introduced flip
inequalities. While split inequalities are a general mixed-integer programming
technique, flip inequalities are defined in purely combinatorial terms, namely
cycles and arc sets of the digraph underlying the PESP instance. It is known
that flip inequalities can be separated in pseudo-polynomial time. We prove
that this is best possible unless P $=$ NP, but also observe that the
complexity becomes linear-time if the cycle defining the flip inequality is
fixed. Moreover, introducing mixed-integer-compatible maps, we compare the
split closures of different formulations, and show that reformulation or
binarization by subdivision do not lead to stronger split closures. Finally, we
estimate computationally how much of the optimality gap of the instances of the
benchmark library PESPlib can be closed exclusively by split cuts, and provide
better dual bounds for five instances. | Niels Lindner, Berenike Masing | 2023-06-05T09:58:20Z | http://arxiv.org/abs/2306.02746v1 | # On the Split Closure of the Periodic Timetabling Polytope
###### Abstract
The Periodic Event Scheduling Problem (PESP) is the central mathematical tool for periodic timetable optimization in public transport. PESP can be formulated in several ways as a mixed-integer linear program with typically general integer variables. We investigate the split closure of these formulations and show that split inequalities are identical with the recently introduced flip inequalities. While split inequalities are a general mixed-integer programming technique, flip inequalities are defined in purely combinatorial terms, namely cycles and arc sets of the digraph underlying the PESP instance. It is known that flip inequalities can be separated in pseudo-polynomial time. We prove that this is best possible unless \(P=NP\), but also observe that the complexity becomes linear-time if the cycle defining the flip inequality is fixed. Moreover, introducing mixed-integer-compatible maps, we compare the split closures of different formulations, and show that reformulation or binarization by subdivision do not lead to stronger split closures. Finally, we estimate computationally how much of the optimality gap of the instances of the benchmark library PESPlib can be closed exclusively by split cuts, and provide better dual bounds for five instances.
Keywords: Periodic Event Scheduling Problem, Periodic Timetabling, Split Closure, Mixed-Integer Programming

Mathematics Subject Classification (MSC2020): 90C11, 90C35, 90B35, 90B20
## 1 Introduction
The timetable is the core of a public transportation system. It serves as a basis for cost-sensitive tasks such as vehicle and crew scheduling, and is required for accurate planning of passenger routes. A high-quality timetable is thus of utmost importance for a well-planned transportation system. Particularly in the context of urban traffic, a large number of transportation networks are operated with a periodic pattern, creating the demand to optimize periodic timetables. The standard mathematical model for this task is the _Periodic Event Scheduling Problem_ (PESP) introduced by Serafini and Ukovich (1989). PESP is a combinatorial optimization problem on a digraph with respect to a certain period time, and it is notoriously hard: Deciding whether a feasible periodic timetable exists is NP-complete for any fixed period time \(T\geq 3\)(Odijk, 1994). The feasibility problem remains NP-hard on graphs with bounded treewidth
(N. Lindner & Reisch, 2022). The difficulty of PESP is also reflected in the fact that since its establishment in 2012, none of the instances of the benchmark library PESPlib could be solved to proven optimality up to date (Goerigk, 2022). Nevertheless, many primal heuristics have been developed (Borndorfer, Lindner, & Roth, 2020; Goerigk & Liebchen, 2017; Grossmann et al., 2012; N. Lindner & Liebchen, 2022; Nachtigall & Opitz, 2008; Patzold & Schobel, 2016), and there are success stories concerning the implementation of mathematically optimized timetables in practice (Kroon et al., 2009; Liebchen, 2008).
PESP can be formulated as a mixed-integer linear program (MIP) in a multitude of ways (Liebchen, 2006). Several studies of the _periodic timetabling polytope_ have been conducted, leading to the discovery of families of cutting planes, such as, e.g., _cycle inequalities_ (Odijk, 1994), _change-cycle inequalities_ (Nachtigall, 1996), and more recently, _flip inequalities_(N. Lindner & Liebchen, 2020). The separation of cycle and change-cycle inequalities is known to be NP-hard (Borndorfer, Hoppmann, Karbstein, & Lindner, 2020), and flip inequalities are a superset of both cycle and change-cycle inequalities (N. Lindner & Liebchen, 2020). A common theme that cycle, change-cycle and flip inequalities share as well with other families of cutting planes (T. Lindner, 2000; Nachtigall, 1996) is that they are all described in purely combinatorial terms. For example, flip inequalities are determined by a cycle and a set of arcs of the underlying digraph of the PESP instance.
In this paper, we pursue a somewhat opposite strategy: Rather than starting with a combinatorial analysis, we investigate _split inequalities_, a general-purpose tool for treating MIPs introduced by Cook, Kannan, and Schrijver (1990) as an analogon to the Chvatal closure for pure integer programs. The _split closure_ given by these inequalities has several nice properties: It is a polyhedron (Conforti, Cornuejols, & Zambelli, 2010; Cook et al., 1990), coincides with the closure given by mixed-integer rounding and Gomory mixed-integer cuts (Cornuejols & Li, 2001; Nemhauser & Wolsey, 1990), and leads to finite cutting plane algorithms for binary MIPs (Balas, Ceria, & Cornuejols, 1993).
While the second Chvatal closure for a pure IP formulation has already been investigated by Liebchen and Swarat (2008), we apply split closure techniques to proper mixed-integer formulations of PESP. Our first result is the following correspondence (Theorem 3.1): Every non-trivial split inequality is a non-trivial flip inequality, and vice versa. The split closure of the periodic timetabling polytope is therefore identical with the closure given by the flip inequalities. Moreover, the split inequalities coming from split disjunctions, where one of the two sides of the split is empty, coincide with Odijk's cycle inequalities (Theorem 3.3).
In general, the separation of split inequalities is NP-hard (Caprara & Letchford, 2003). In the periodic timetabling situation, we show in Theorem 4.4 that it is weakly NP-hard to separate maximally violated split/flip inequalities. This is best possible unless \(P=NP\), as N. Lindner and Liebchen (2020) have already outlined a pseudo-polynomial-time algorithm. The separation problem can however be solved by a parametric IP in the spirit of Balas and Saxena (2008) and Bonami (2012), which in the special case of PESP boils down to a sequence of \(\lfloor T/2\rfloor-1\) standard IPs (Theorem 4.5). In the event that the cycle defining a flip inequality is fixed, the separation becomes linear-time (Theorem 4.7).
So far, the results on the split closure of the periodic timetabling polytope apply for the cycle-based MIP formulation of PESP (Liebchen & Peeters, 2009; Nachtigall, 1996). Another popular formulation is the incidence-based formulation that is straightforward from the original problem definition by Serafini and Ukovich (1989). In order to compare the split closures of two different MIP formulations, we introduce _mixed-integer-compatible maps_, i.e., affine maps that map mixed-integer points to mixed-integer points. These maps have the general property that they map split closures into split closures (Theorem 5.3). For PESP, the polytope defined by the cycle-based formulation turns out to be a mixed-integer-compatible projection of the
polytope defined by the incidence-based formulation. However, we show that the restriction of this projection to split closures is surjective, so that there is no gain in information concerning split cuts when switching to a different formulation (Theorem 5.11). More results that can be proven using mixed-integer-compatible maps are the following: The split closure commutes with Cartesian products (Theorem 5.5). This enables us to show that the split closure of PESP instances on cactus graphs is exact (Theorem 5.6).
The behavior of split or lift-and-project closures with respect to binarizations, i.e., MIP reformulations with only binary integer variables, has received some attention lately (Aprile, Conforti, & Di Summa, 2021; Dash, Gunluk, & Hildebrand, 2018). In the context of PESP, the incidence-based MIP formulation can be binarized in a combinatorial manner by subdivision of arcs. Although split closures of binary MIPs are known to be much better behaved (Balas et al., 1993), we prove by another application of mixed-integer-compatible maps that this binarization procedure also does not lead to stronger split closures (Theorem 5.15).
Finally, we evaluate split closures in practice. To this end, we consider the 22 PESPlib instances and some derived subinstances. We devise an algorithmic procedure to optimize over the split closure making use of our theoretical insights. Our separation algorithm consists of a heuristic and an exact part. The outcome is that although the split closure closes a significant part of the primal-dual gap, it is almost never exact. However, our separation method produces incumbent dual bounds for 5 of the PESPlib instances.
The paper is structured as follows: We summarize the relevant definitions and notions for PESP in Section 2. The correspondence between split and flip inequalities follows in Section 3. The subsequent Section 4 is devoted to separation of split/flip inequalities. Mixed-integer compatible maps and the results on comparing split closures of different formulations are presented in Section 5. Our computational results can be found in Section 6. We conclude the paper in Section 7.
## 2 Periodic Event Scheduling
The Periodic Event Scheduling Problem has originally been introduced by Serafini and Ukovich (1989), and has gained much attention ever since. In this section, we establish the basics, formally state the problem, present two equivalent model formulations, and introduce our main object of interest, the periodic timetabling polytope.
### Problem Definition
An instance of the _Periodic Event Scheduling Problem_ (PESP) is given by a 5-tuple \((G,T,\ell,u,w)\), where
* \(G=(V,A)\) is a directed graph,
* \(T\in\mathbb{N}\), \(T\geq 2\), is a _period time_,
* \(\ell\in\mathbb{Z}^{A}\) is a vector of _lower bounds_,
* \(u\in\mathbb{Z}^{A}\) is a vector of _upper bounds_,
* \(w\in\mathbb{R}^{A}\) is a vector of _weights_.
A _periodic tension_ is a vector \(x\in\mathbb{R}^{A}\) with \(\ell\leq x\leq u\) such that
\[\exists\,\pi\in[0,T)^{V}:\quad\forall a=(i,j)\in A:\quad x_{a}\equiv\pi_{j}- \pi_{i}\mod T. \tag{1}\]
In this case, the vector \(\pi\) is called a _periodic timetable_. In the context of periodic timetabling in public transport, the vertices of \(G\) typically correspond to arrival or departure _events_ of vehicles at some station. The arcs of \(G\) are _activities_; they model relations between the events such as, e.g., driving between two stations, dwelling at a station, or passenger transfers (Liebchen & Mohring, 2007). A periodic timetable \(\pi\) thus assigns timings in \([0,T)\) to each event, repeating periodically with period \(T\). The periodic tension \(x\) collects the activity durations, which are supposed to lie within the feasible interval \([\ell,u]\). A typical source for the weight of an arc is the estimated number of passengers using the corresponding activity. A reasonable quality indicator of a periodic timetable is hence \(w^{\top}x\), the total travel time of all passengers.
**Definition 2.1** (Serafini and Ukovich 1989).: Given \((G,T,\ell,u,w)\) as above, the _Periodic Event Scheduling Problem_ is to find a periodic tension \(x\) such that \(w^{\top}x\) is minimum, or to decide that none exists.
**Example 2.2**.: Figure 1 shows a small PESP instance together with an optimal periodic tension and a compatible periodic timetable.
**Remark 2.3**.: As described, e.g., by Liebchen (2006), any PESP instance can be preprocessed in such a way that \(G\) contains no loops and is weakly connected, \(0\leq\ell<T\) and \(\ell\leq u<\ell+T\).
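To make the problem statement concrete, here is a brute-force sketch in Python that enumerates all integral timetables \(\pi\in\{0,\ldots,T-1\}^{V}\) of a small hypothetical toy instance (not the instance of Figure 1, whose data live only in the figure) and keeps the cheapest feasible periodic tension; restricting \(\pi\) to integers is no loss, as noted below for the incidence-based model, and the enumeration is viable only at toy sizes.

```python
from itertools import product

T = 10
V = ["v1", "v2", "v3"]
# Hypothetical arcs: (tail, head, lower bound, upper bound, weight).
A = [("v1", "v2", 2, 5, 3), ("v2", "v3", 1, 4, 2), ("v3", "v1", 3, 9, 1)]

def tension(pi, arc):
    """Periodic tension of arc (i, j): the representative of
    pi_j - pi_i (mod T) in [l, l + T), or None if it misses [l, u]."""
    i, j, l, u, _ = arc
    x = (pi[j] - pi[i] - l) % T + l
    return x if x <= u else None

best_cost, best_pi = None, None
for values in product(range(T), repeat=len(V)):  # all integral timetables
    pi = dict(zip(V, values))
    xs = [tension(pi, arc) for arc in A]
    if None in xs:
        continue                                  # some activity infeasible
    cost = sum(x * arc[4] for x, arc in zip(xs, A))
    if best_cost is None or cost < best_cost:
        best_cost, best_pi = cost, pi
print(best_cost, best_pi)  # weighted tension of an optimal timetable
```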
### Mixed-Integer Programming Formulations
PESP can be formulated as a mixed-integer linear program in several ways (Liebchen, 2006). The _incidence-based_ model is a straightforward interpretation of the problem definition, introducing auxiliary integer _periodic offsets_ to resolve the modulo constraints (1):
\[\begin{array}{llll}\text{Minimize}&w^{\top}x\\ \text{s.t.}&x_{a}=\pi_{j}-\pi_{i}+Tp_{a}&a=(i,j)\in A\\ &\ell_{a}\leq x_{a}\leq u_{a},&a\in A\\ &0\leq\pi_{i}\leq T-1,&i\in V,\\ &p_{a}\text{integer},&a\in A.\end{array} \tag{2}\]
When all periodic offsets \(p_{a}\) are fixed, (2) becomes a linear program with a totally unimodular constraint matrix. It is hence no restriction to assume that \(x\) and \(\pi\) are integral, so that the bound \(\pi<T\) in (1) can safely be replaced with \(\pi\leq T-1\). For the purpose of this paper, we will however not treat (2) as a pure integer program, as was done by Liebchen and Swarat (2008). We will instead investigate proper mixed-integer formulations, where the periodic tension variables \(x\) and the periodic timetable variables \(\pi\) are considered as continuous variables.
Figure 1: A PESP instance on a digraph \(G=(V,A)\) with \(T=10\). The upper label of an arc \(a\in A\) is \([\ell_{a},u_{a}],w_{a}\). The blue lower arc labels indicate a periodic tension \(x\) compatible with the periodic timetable \(\pi\) as given by the vertex labels.
An alternative MIP formulation for PESP is the _cycle-based_ formulation, which has been reported to be computationally beneficial (see, e.g., Borndorfer, Lindner, and Roth 2020; Liebchen 2008; Liebchen and Peeters 2009; Peeters 2003; Schiewe and Schobel 2020):
\[\begin{array}{ll}\text{Minimize}&w^{\top}x\\ \text{s.t.}&\Gamma x=Tz,\\ &\ell\leq x\leq u,\\ &z\text{ integer}.\end{array} \tag{3}\]
In (3), \(x\) represents a periodic tension, and \(z\) is an integral _cycle offset_. A periodic timetable \(\pi\) can be recovered from \(x\) by a graph traversal.
To explain the further ingredients of the formulation (3), we will require more definitions about cycles, cycle spaces and cycle bases, see Kavitha et al. (2009) for an overview. The _cycle space_\(\mathcal{C}\) of \(G\) is the abelian group
\[\mathcal{C}\coloneqq\left\{\gamma\in\mathbb{Z}^{A}\;\middle|\;\forall i\in V: \sum_{a\in\delta^{+}(i)}\gamma_{a}=\sum_{a\in\delta^{-}(i)}\gamma_{a}\right\}.\]
In terms of linear algebra, \(\mathcal{C}\) is the kernel over the integers of the incidence matrix of \(G\); in the language of network flows, \(\mathcal{C}\) is the space of all integer-valued (and arbitrarily signed) circulations in \(G\). The rank of \(\mathcal{C}\) is the _cyclomatic number_\(\mu\) of \(G\). We assume that \(G\) is weakly connected (Remark 2.3), so that \(\mu=|A|-|V|+1\).
A vector \(\gamma\in\mathcal{C}\cap\{-1,0,1\}^{A}\) will be called an _oriented cycle_. When ignoring arc directions, the support \(\{a\in A\;|\;\gamma_{a}\neq 0\}\) makes up a possibly non-simple cycle in \(G\). We call arcs \(a\) with \(\gamma_{a}>0\)_forward_ and those with \(\gamma_{a}<0\)_backward_. Any \(\gamma\in\mathcal{C}\) can be decomposed into its positive resp. negative part \(\gamma_{+}\) resp. \(\gamma_{-}\), i.e., \(\gamma_{+}\coloneqq\max(\gamma,0)\) and \(\gamma_{-}\coloneqq\max(-\gamma,0)\). The _length_ of an oriented cycle \(\gamma\) is \(|\gamma|\coloneqq|\{a\in A\;|\;\gamma_{a}\neq 0\}|\).
A set \(B\) of \(\mu\) oriented cycles is called an _integral cycle basis_ of \(G\) if \(B\) is a basis for \(\mathcal{C}\) as an abelian group, i.e., if every element of the cycle space \(\mathcal{C}\) can be written as a unique integral linear combination of the oriented cycles in \(B\). A particular class of integral cycle bases are the _(strictly) fundamental cycle bases_: Let \(\mathcal{T}\) be some spanning tree of \(G\). Then the fundamental cycle induced by the co-tree arc \(a\) of \(\mathcal{T}\) is the unique cycle \(\gamma\) obtained by adding \(a\) to \(\mathcal{T}\) with the convention that \(\gamma_{a}=1\). A fundamental cycle basis is then given by the collection of \(\mu\) fundamental cycles of \(\mathcal{T}\). Arranging the oriented cycles of an integral cycle basis \(B\) as rows of a matrix, we obtain a _cycle matrix_\(\Gamma\in\{-1,0,1\}^{B\times A}\).
**Example 2.4**.: In the example from Figure 1, we have \(\mu=3\). An integral cycle basis \(B\) is outlined in Figure 2.
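A fundamental cycle basis can be computed mechanically. The following Python sketch (a hypothetical helper, assuming a weakly connected input with vertices \(0,\ldots,n-1\)) builds a BFS spanning tree rooted at vertex \(0\) and returns one oriented cycle per co-tree arc as a sparse \(\{-1,+1\}\)-vector.

```python
from collections import deque

def fundamental_cycle_basis(n, arcs):
    """Fundamental cycle basis of a weakly connected digraph.

    arcs: list of (i, j) with vertices 0..n-1; returns one oriented cycle
    per co-tree arc as a dict arc_index -> +/-1, co-tree arc forward."""
    adj = [[] for _ in range(n)]
    for idx, (i, j) in enumerate(arcs):
        adj[i].append((j, idx, +1))  # traversing i -> j follows the arc
        adj[j].append((i, idx, -1))  # traversing j -> i goes against it
    parent = {0: None}               # BFS tree rooted at vertex 0
    queue, tree = deque([0]), set()
    while queue:
        v = queue.popleft()
        for w, idx, sign in adj[v]:
            if w not in parent:
                parent[w] = (v, idx, sign)  # sign of traversing v -> w
                tree.add(idx)
                queue.append(w)

    def to_root(v):
        """Signed arc sequence traversed when walking from v up to the root."""
        out = []
        while parent[v] is not None:
            pred, idx, sign = parent[v]
            out.append((idx, -sign))  # we walk v -> pred, against the BFS step
            v = pred
        return out

    basis = []
    for idx, (i, j) in enumerate(arcs):
        if idx in tree:
            continue
        gamma = {idx: +1}            # co-tree arc, forward by convention
        for k, s in to_root(j):      # close the cycle: head j to the root...
            gamma[k] = gamma.get(k, 0) + s
        for k, s in to_root(i):      # ...and the root back to tail i
            gamma[k] = gamma.get(k, 0) - s
        basis.append({k: v for k, v in gamma.items() if v != 0})
    return basis
```

Contributions of tree arcs shared by both root paths cancel, so each returned cycle is simple.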
The following theorem shows that the MIP (3) is indeed a valid formulation of PESP.
**Theorem 2.5** (Cycle periodicity property, Liebchen & Peeters, 2009).: _For a vector \(x\in\mathbb{R}^{A}\), the following are equivalent:_
1. \(x\) _satisfies condition (_1_),_
2. \(\gamma^{\top}x\equiv 0\bmod T\) _for all_ \(\gamma\in\mathcal{C}\)_,_
3. \(\Gamma x\equiv 0\bmod T\) _for the cycle matrix_ \(\Gamma\) _of an integral cycle basis of_ \(G\)_._
In the sequel, we will focus on the cycle-based formulation (3), which is justified by the following remark.
**Remark 2.6**.: The incidence-based formulation (2) is a particular incarnation of the cycle-based formulation (3) in the following sense: Let \(I=(G,T,\ell,u,w)\) be a PESP instance. We can augment \(I\) to an instance \(I^{\prime}\) such that the incidence-based MIP formulation (2) for \(I\) coincides with the cycle-based MIP formulation (3) for \(I^{\prime}\) for a certain integral cycle basis \(B\) with cycle matrix \(\Gamma\). To this end, we add a new vertex \(s\) and connect it to every original vertex \(i\in V\). Set \(\ell_{si}\coloneqq 0,u_{si}\coloneqq T-1,w_{si}\coloneqq 0\). The subgraph \(\mathcal{T}\) on the arcs \(\{(s,i)\ |\ i\in V\}\) is a spanning tree of the augmented graph. Each fundamental cycle has the vertex sequence \((s,i,j,s)\) for some arc \(a=(i,j)\in A\); we assume that the arcs \((s,i)\) and \((i,j)\) are forward, and that the arc \((s,j)\) is backward. The constraint in (3) for the cycle \((s,i,j,s)\) is then given by \(x_{si}+x_{a}-x_{sj}=Tz_{a}\). Relabeling \(x_{si}\) as \(\pi_{i}\) for \(i\in V\) and \(z_{a}\) as \(p_{a}\) for \(a\in A\), the formulation (3) for the augmented instance and the cycle matrix \(\Gamma\) given by the fundamental cycle basis with respect to \(\mathcal{T}\) indeed turns out to be the same as the formulation (2) for the original instance \(I\). In particular, the PESP instances \(I\) and \(I^{\prime}\) can be considered equivalent.
**Example 2.7**.: Figure 3 shows the augmented instance \(I^{\prime}\) obtained from the instance \(I\) from Figure 1 according to Remark 2.6.
Figure 3: Augmentation of the instance in Figure 1 according to Remark 2.6. The new vertex \(s\) and the new arcs \((s,i)\) are highlighted in green. The highlighted arcs form a spanning tree of the augmented instance. The periodic tension \(x_{si}\) of a highlighted arc \((s,i)\) can be read off the timetable value \(\pi_{i}\) given as vertex label at the gray vertex \(i\).
Figure 2: In the instance from Figure 1, the oriented cycles \(\gamma_{1},\gamma_{2},\gamma_{3}\) constitute an integral cycle basis, as they are the fundamental cycles of the highlighted spanning tree. The cycle \(\gamma_{2}\) uses only forward arcs, while \(\gamma_{1}\) and \(\gamma_{3}\) have both forward and backward arcs. The tension \(\gamma_{3}^{\top}x\) along \(\gamma_{3}\) is \(1+1+9-1=10\equiv 0\) mod \(10\), and \(\gamma_{1}^{\top}x\) and \(\gamma_{2}^{\top}x\) are integer multiples of \(T=10\) as well.
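With a cycle basis at hand, formulation (3) can be handed directly to an off-the-shelf MIP solver. The following Python sketch assumes the PuLP package is available and reuses the fundamental_cycle_basis helper sketched above; it is an illustration under these assumptions, not the authors' code.

```python
import pulp

def solve_pesp_cycle_based(n, arcs, lo, up, w, T):
    """Solve the cycle-based formulation (3) with PuLP (assumed installed).

    arcs: list of (i, j); lo, up, w: lists indexed like arcs."""
    basis = fundamental_cycle_basis(n, arcs)   # from the sketch above
    prob = pulp.LpProblem("PESP", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{k}", lo[k], up[k]) for k in range(len(arcs))]
    z = [pulp.LpVariable(f"z{r}", cat="Integer") for r in range(len(basis))]
    prob += pulp.lpSum(w[k] * x[k] for k in range(len(arcs)))  # objective
    for r, gamma in enumerate(basis):          # Gamma x = T z, row by row
        prob += pulp.lpSum(s * x[k] for k, s in gamma.items()) == T * z[r]
    prob.solve()
    return [v.value() for v in x]              # an optimal periodic tension
```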
### The Periodic Timetabling Polytope
Before analyzing the split closure, we need to understand the geometric object behind the feasible region of a PESP instance, and also of its natural LP relaxation.
**Definition 2.8**.: For a PESP instance \((G,T,\ell,u,w)\) and a cycle matrix \(\Gamma\) of an integral cycle basis \(B\), define
\[\mathcal{P} \coloneqq\{(x,z)\in\mathbb{R}^{A}\times\mathbb{R}^{B}\mid\Gamma x= Tz,\ell\leq x\leq u\},\] \[\mathcal{P}_{\mathrm{I}} \coloneqq\operatorname{conv}\{(x,z)\in\mathbb{R}^{A}\times \mathbb{Z}^{B}\mid\Gamma x=Tz,\ell\leq x\leq u\}.\]

We will call \(\mathcal{P}\) the _fractional periodic timetabling polytope_ and \(\mathcal{P}_{\mathrm{I}}\) the _integer periodic timetabling polytope_.

\(\mathcal{P}_{\mathrm{I}}\) is the convex hull of the feasible solutions to (3), and the fractional periodic timetabling polytope \(\mathcal{P}\) is the polyhedron associated to the natural linear programming relaxation of (3). Observe that this relaxation is very weak: \(\mathcal{P}\) is combinatorially equivalent to the hyperrectangle \(\prod_{a\in A}[\ell_{a},u_{a}]\), and an optimal vertex of the LP relaxation of (3) is given by \((\ell,\Gamma\ell/T)\).
**Remark 2.9**.: The choice of a cycle basis \(\Gamma\) is not essential for the definition of \(\mathcal{P}\) and \(\mathcal{P}_{1}\): If \(\Gamma^{\prime}\) is the cycle matrix of another integral cycle basis, then there is a unimodular matrix \(U\) such that \(\Gamma^{\prime}=U\Gamma\), and \((x,z)\mapsto(x,Uz)\) is a \(\mathbb{Z}\)-linear isomorphism.
Several classes of valid inequalities for \(\mathcal{P}_{\mathrm{I}}\) are known (N. Lindner & Liebchen, 2020; T. Lindner, 2000; Nachtigall, 1996, 1998; Odijk, 1994). We will focus on those that are defined in terms of elements of the cycle space \(\mathcal{C}\). The cycle periodicity property (Theorem 2.5) immediately shows:

**Theorem 2.10** (Odijk, 1994).: _Let \(\gamma\in\mathcal{C}\). Then the following cycle inequality holds for all \((x,z)\in\mathcal{P}_{\mathrm{I}}\):_
\[\left\lceil\frac{\gamma_{+}^{\top}\ell-\gamma_{-}^{\top}u}{T}\right\rceil\leq \frac{\gamma^{\top}x}{T}\leq\left\lfloor\frac{\gamma_{+}^{\top}u-\gamma_{-}^{ \top}\ell}{T}\right\rfloor. \tag{4}\]
Since the rows of the cycle matrix \(\Gamma\) are oriented cycles, Theorem 2.10 implies bounds on the \(z\)-variables in Definition 2.8 as well, so that \(\mathcal{P}\) and \(\mathcal{P}_{\mathrm{I}}\) are indeed polytopes.
Let \([\cdot]_{T}\) denote the modulo \(T\) operator with values in \([0,T)\). Another well-known class of inequalities is the following:
**Theorem 2.11** (Nachtigall, 1996).: _Let \(\gamma\in\mathcal{C}\) and \(\alpha_{\gamma}\coloneqq[-\gamma^{\top}\ell]_{T}\). Then the following change-cycle inequality holds for all \((x,z)\in\mathcal{P}_{\mathrm{I}}\):_
\[(T-\alpha_{\gamma})\gamma_{+}^{\top}(x-\ell)+\alpha_{\gamma}\gamma_{-}^{\top} (x-\ell)\geq\alpha_{\gamma}(T-\alpha_{\gamma}). \tag{5}\]
A class generalizing both cycle and change-cycle inequalities are the _flip inequalities_ introduced by N. Lindner and Liebchen (2020). Let \(I\) be a PESP instance and let \(F\subseteq A\) be an arbitrary subset of arcs. We construct a new PESP instance \(I_{F}\) from \(I\) by "flipping" the arcs in \(F\): We replace each arc \(a=(i,j)\in F\) by an arc \(\overline{a}=(j,i)\), and set \(\ell_{\overline{a}}\coloneqq-u_{a}\), \(u_{\overline{a}}\coloneqq-\ell_{a}\), and \(w_{\overline{a}}\coloneqq-w_{a}\). From any periodic tension \(x\) for \(I\), we obtain a periodic tension \(x_{F}\) for \(I_{F}\) by defining \(x_{F,a}\coloneqq x_{a}\) for \(a\in A\setminus F\), and \(x_{F,\overline{a}}\coloneqq-x_{a}\) for \(a\in F\). In particular, \(I\) is feasible if and only if \(I_{F}\) is feasible, and in case of feasibility, both \(I\) and \(I_{F}\) have the same optimal objective value. Moreover, for any \(\gamma\in\mathcal{C}\), we obtain an element \(\gamma_{F}\) in the cycle space of \(I_{F}\) by setting \(\gamma_{F,a}\coloneqq\gamma_{a}\) for \(a\in A\setminus F\), and \(\gamma_{F,\overline{a}}\coloneqq-\gamma_{a}\) for \(a\in F\). We can hence consider the change-cycle inequality for \(\gamma_{F}\) on \(I_{F}\) and transform it back to \(I\):
**Theorem 2.12** (N. Lindner & Liebchen, 2020).: _Let \(\gamma\in\mathcal{C}\) and \(F\subseteq A\). Set_
\[\alpha_{\gamma,F}\coloneqq\left[-\sum_{a\in A\setminus F}\gamma_{a}\ell_{a}- \sum_{a\in F}\gamma_{a}u_{a}\right]_{T}.\]
_Then the following flip inequality holds for all \((x,z)\in\mathcal{P}_{\mathrm{I}}\):_
\[\begin{split}&(T-\alpha_{\gamma,F})\sum_{\begin{subarray}{c}a\in A \setminus F:\\ \gamma_{a}>0\end{subarray}}\gamma_{a}(x_{a}-\ell_{a})+\alpha_{\gamma,F}\sum_{ \begin{subarray}{c}a\in A\setminus F:\\ \gamma_{a}<0\end{subarray}}(-\gamma_{a})(x_{a}-\ell_{a})\\ &+\alpha_{\gamma,F}\sum_{\begin{subarray}{c}a\in F:\\ \gamma_{a}>0\end{subarray}}\gamma_{a}(u_{a}-x_{a})+(T-\alpha_{\gamma,F})\sum_{ \begin{subarray}{c}a\in F:\\ \gamma_{a}<0\end{subarray}}(-\gamma_{a})(u_{a}-x_{a})\quad\geq\quad\alpha_{ \gamma,F}(T-\alpha_{\gamma,F}).\end{split} \tag{6}\]
**Remark 2.13**.: The flip inequalities (6) for \(F=\emptyset\) give exactly the change-cycle inequalities (5). Moreover, by flipping all backward resp. all forward arcs of some \(\gamma\in\mathcal{C}\), we obtain Odijk's cycle inequalities (4). Since the left-hand side of (6) is always non-negative for \((x,z)\in\mathcal{P}\), flip inequalities with \(\alpha_{\gamma,F}=0\) are trivial. Due to symmetry reasons, the flip inequalities for \((\gamma,F)\) and \((-\gamma,F)\) coincide, and \(\alpha_{\gamma,F}=T-\alpha_{-\gamma,F}\) when \(\alpha_{\gamma,F}\geq 1\).
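Evaluating a flip inequality at a given point is straightforward. The sketch below (a hypothetical helper) computes \(\alpha_{\gamma,F}\) and returns the violation, i.e., the right-hand side of (6) minus its left-hand side, so a positive return value certifies a cut.

```python
def flip_violation(gamma, F, x, lo, up, T):
    """Violation of flip inequality (6): RHS - LHS (positive means violated).

    gamma: dict arc -> nonzero integer; F: set of arcs; x, lo, up: dicts."""
    alpha = -sum(g * (up[a] if a in F else lo[a])
                 for a, g in gamma.items()) % T
    lhs = 0.0
    for a, g in gamma.items():
        if a in F:
            slack = up[a] - x[a]
            lhs += (alpha if g > 0 else T - alpha) * abs(g) * slack
        else:
            slack = x[a] - lo[a]
            lhs += (T - alpha if g > 0 else alpha) * abs(g) * slack
    return alpha * (T - alpha) - lhs
```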
**Definition 2.14**.: We define the _flip polytope_ as
\[\mathcal{P}_{\mathrm{flip}}\coloneqq\{(x,z)\in\mathcal{P}\mid(x,z)\text{ satisfies the flip inequality for all }\gamma\in\mathcal{C}\text{ and }F\subseteq A\}.\]
Apart from the trivial relation \(\mathcal{P}_{\mathrm{I}}\subseteq\mathcal{P}_{\mathrm{flip}}\subseteq \mathcal{P}\), the flip polytope has some interesting properties (N. Lindner & Liebchen, 2020): Every vertex of \(\mathcal{P}_{\mathrm{I}}\) is a vertex of \(\mathcal{P}_{\mathrm{flip}}\), but in general not a vertex of \(\mathcal{P}\). Moreover, if \(G\) is a cactus graph, i.e., every arc is contained in at most one simple cycle, then \(\mathcal{P}_{\mathrm{flip}}=\mathcal{P}_{\mathrm{I}}\). However, there are PESP instances with \(\mu=2\) and \(\mathcal{P}_{\mathrm{flip}}\neq\mathcal{P}_{\mathrm{I}}\).
## 3 The Split Closure of the Periodic Timetabling Polyhedron
The relation between the periodic timetabling polytope and the flip polytope seems close and deserves more attention. In fact, in this section, we will establish that the flip polytope can be identified with the split closure.
### Preliminaries
We will now recall the definition of split inequalities, split disjunctions, and the split closure, following the treatment by Conforti, Cornuejols, and Zambelli (2014). To two matrices \(A_{C}\in\mathbb{Q}^{m\times n}\), \(A_{I}\in\mathbb{Q}^{m\times p}\) and a vector \(b\in\mathbb{Q}^{m}\), we associate the mixed-integer set
\[S\coloneqq\{(x,z)\in\mathbb{R}^{n}\times\mathbb{Z}^{p}\mid A_{C}x+A_{I}z\leq b\},\]
and the two polyhedra
\[P\coloneqq\{(x,z)\in\mathbb{R}^{n}\times\mathbb{R}^{p}\mid A_{C}x+A_{I}z\leq b\}, P_{\mathrm{I}}\coloneqq\mathrm{conv}(S).\]
A _split_ is a pair \((\beta,\beta_{0})\in\mathbb{Z}^{p}\times\mathbb{Z}\). The disjunction
\[\beta^{\top}z\leq\beta_{0}\quad\vee\quad\beta^{\top}z\geq\beta_{0}+1\]
is satisfied for all \((x,z)\in\mathrm{conv}(S)\) and is called a _split disjunction_. In particular, the polyhedron
\[P^{(\beta,\beta_{0})}\coloneqq\mathrm{conv}(\{(x,z)\in P\mid\beta^{\top}z \leq\beta_{0}\}\cup\{(x,z)\in P\mid\beta^{\top}z\geq\beta_{0}+1\})\]
contains \(\operatorname{conv}(S)\). The _split closure_ is now defined as
\[P_{\text{split}}\coloneqq\bigcap_{(\beta,\beta_{0})\in\mathbb{Z}^{p}\times \mathbb{Z}}P^{(\beta,\beta_{0})}=\bigcap_{\beta\in\mathbb{Z}^{p}}\operatorname{ conv}(\{(x,z)\in P\mid\beta^{\top}z\in\mathbb{Z}\}). \tag{7}\]
The split closure \(P_{\text{split}}\) is a polyhedron (Cook et al., 1990) with the property that \(\operatorname{conv}(S)\subseteq P_{\text{split}}\subseteq P\). It is identical to the closure given by Gomory's mixed-integer (GMI) cuts or mixed-integer rounding (MIR) cuts (Nemhauser & Wolsey, 1990). On the downside, optimization and hence separation are in general NP-hard (Caprara & Letchford, 2003).
The split closure can be described as well in terms of defining inequalities: Define
\[\Lambda\coloneqq\left\{\lambda\in\mathbb{R}^{m}\left|\begin{array}{c} \lambda^{\top}A_{C}=0,\lambda^{\top}A_{I}\in\mathbb{Z}^{p},\lambda^{\top}b \notin\mathbb{Z},\text{and the rows of }(A_{C}\ A_{I})\\ \text{corresponding to non-zero entries of }\lambda\text{ are linearly independent}\end{array}\right.\right\}.\]
Each multiplier vector \(\lambda\in\Lambda\) defines the _split inequality_
\[\frac{\lambda_{+}^{\top}(b-A_{C}x-A_{I}z)}{[\lambda^{\top}b]_{1}}+\frac{ \lambda_{-}^{\top}(b-A_{C}x-A_{I}z)}{1-[\lambda^{\top}b]_{1}}\geq 1. \tag{8}\]
Here, we decompose \(\lambda\) into its positive part \(\lambda_{+}\) and negative part \(\lambda_{-}\), and denote by \([\cdot]_{1}\) the fractional part in \([0,1)\). The split inequality (8) for \(\lambda\in\Lambda\) is valid for \(P^{(\lambda^{\top}A_{I},\lfloor\lambda^{\top}b\rfloor)}\). Conversely, any facet of \(P^{(\beta,\beta_{0})}\) is defined by an inequality of the form (8) for some \(\lambda\in\Lambda\) with \(\lambda^{\top}A_{I}=\beta\) and \(\lfloor\lambda^{\top}b\rfloor=\beta_{0}\). Consequently,
\[P_{\text{split}}=\{(x,z)\in P\mid(x,z)\text{ satisfies the split inequality (8) for all }\lambda\in\Lambda\}.\]
### Flipping is Splitting for Periodic Timetabling
We investigate now the split closure for the cycle-based MIP formulation (3) for the Periodic Event Scheduling Problem. Thus let \((G=(V,A),T,\ell,u,w)\) be a PESP instance, and let \(B\) be an integral cycle basis of \(G\) with cycle matrix \(\Gamma\). Rewriting in the form \(A_{C}x+A_{I}z\leq b\), the fractional periodic timetabling polytope \(\mathcal{P}\) is defined by
\[\left(\begin{array}{cc}\Gamma&-TI\\ -\Gamma&TI\\ I&0\\ -I&0\\ \end{array}\right)\left(\begin{array}{c}x\\ z\\ \end{array}\right)\leq\left(\begin{array}{c}0\\ 0\\ u\\ -\ell\end{array}\right), \tag{9}\]
where \(I\) denotes the identity matrix. We will write a multiplier vector \(\lambda\in\Lambda\) as
\[\lambda=(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})\in\mathbb{R}^{B} \times\mathbb{R}^{B}\times\mathbb{R}^{A}\times\mathbb{R}^{A}\]
corresponding to the four row blocks in (9).
**Theorem 3.1**.: _Every flip inequality with \(\alpha_{\gamma,F}\neq 0\) is a split inequality for the cycle-based MIP formulation of PESP (3) and vice versa. In particular, \(\mathcal{P}_{\text{split}}=\mathcal{P}_{\text{flip}}\)._
Proof.: We analyze the set \(\Lambda\) for the MIP (3). For \(\lambda\in\Lambda\), we have
\[\lambda^{\top}A_{C}=(\lambda_{1}-\lambda_{2})^{\top}\Gamma+(\lambda_{3}- \lambda_{4})^{\top}\quad\text{and}\quad\lambda^{\top}A_{I}=-T(\lambda_{1}- \lambda_{2})^{\top}.\]
As \(\lambda^{\top}A_{I}\) is integral, we find that \(\gamma\coloneqq T(\lambda_{1}-\lambda_{2})^{\top}\Gamma\) is an integer linear combination of the rows of the cycle matrix \(\Gamma\), so that \(\gamma\in\mathcal{C}\). From \(\lambda^{\top}A_{C}=0\) we infer that \(\lambda_{3}-\lambda_{4}=-\gamma/T\). By
the linear independence condition, for an arc \(a\in A\), not both of \(\lambda_{3,a}\) and \(\lambda_{4,a}\) can be non-zero. Hence, when we set \(F\coloneqq\{a\in A\mid\lambda_{3,a}\neq 0\}\), we have
\[\lambda_{3,a}=\begin{cases}-\gamma_{a}/T&\text{if }a\in F,\\ 0&\text{if }a\in A\setminus F,\end{cases}\quad\text{ and }\quad\lambda_{4,a}= \begin{cases}\gamma_{a}/T&\text{if }a\in A\setminus F,\\ 0&\text{if }a\in F.\end{cases} \tag{10}\]
With that, the fractional part \([\lambda^{\top}b]_{1}\) evaluates to
\[[\lambda^{\top}b]_{1}=[\lambda_{3}^{\top}u-\lambda_{4}^{\top}\ell]_{1}=\left[ -\sum_{a\in F}\frac{\gamma_{a}u_{a}}{T}-\sum_{a\in A\setminus F}\frac{\gamma_ {a}\ell_{a}}{T}\right]_{1}.\]
Observe that for any \(y\in\mathbb{R}\), we have
\[T\left[\frac{y}{T}\right]_{1}=T\left(\frac{y}{T}-\left\lfloor\frac{y}{T} \right\rfloor\right)=y-T\left\lfloor\frac{y}{T}\right\rfloor=[y]_{T},\]
so that
\[T[\lambda^{\top}b]_{1}=T[\lambda_{3}^{\top}u-\lambda_{4}^{\top}\ell]_{1}= \alpha_{\gamma,F}, \tag{11}\]
where \(\alpha_{\gamma,F}\) is as in Theorem 2.12, and we have \(\alpha_{\gamma,F}\neq 0\) because \(\lambda^{\top}b\notin\mathbb{Z}\).
We now consider the expressions \(\lambda_{\pm}^{\top}(b-A_{C}x-A_{I}z)\). Since for \((x,z)\in\mathcal{P}\),
\[b-A_{C}x-A_{I}z=(-\Gamma x+Tz,\Gamma x-Tz,u-x,x-\ell)^{\top}=(0,0,u-x,x-\ell) ^{\top},\]
we have
\[\lambda_{+}^{\top}(b-A_{C}x-A_{I}z) =\frac{1}{T}\sum_{\begin{subarray}{c}a\in F\\ \gamma_{a}<0\end{subarray}}(-\gamma_{a})(u_{a}-x_{a})+\frac{1}{T}\sum_{ \begin{subarray}{c}a\in A\setminus F\\ \gamma_{a}>0\end{subarray}}\gamma_{a}(x_{a}-\ell_{a}), \tag{12}\] \[\lambda_{-}^{\top}(b-A_{C}x-A_{I}z) =\frac{1}{T}\sum_{\begin{subarray}{c}a\in F\\ \gamma_{a}>0\end{subarray}}\gamma_{a}(u_{a}-x_{a})+\frac{1}{T}\sum_{ \begin{subarray}{c}a\in A\setminus F\\ \gamma_{a}<0\end{subarray}}(-\gamma_{a})(x_{a}-\ell_{a}).\]
It is now evident from (11) and (12) that multiplying the split inequality (8) for \(\lambda\) with \(\alpha_{\gamma,F}(T-\alpha_{\gamma,F})\) yields the flip inequality (6) for \((\gamma,F)\).
To prove the converse, starting from \(\gamma\in\mathcal{C}\) and \(F\subseteq A\) with \(\alpha_{\gamma,F}\neq 0\), we define \(\lambda_{3}\) and \(\lambda_{4}\) as in (10), so that \(\lambda_{3}-\lambda_{4}=-\gamma/T\). Moreover, as (11) holds, we have that \(\lambda^{\top}b\) is not an integer. Since \(\gamma\in\mathcal{C}\), there is an integral vector \(\eta\in\mathbb{Z}^{B}\) with \(\eta^{\top}\Gamma=\gamma\). Then \(\lambda\coloneqq(\eta/T,0,\lambda_{3},\lambda_{4})\in\Lambda\), and the split inequality for \(\lambda\) is equivalent to the flip inequality for \((\gamma,F)\).
N. Lindner and Liebchen (2020) proved that \(\mathcal{P}_{\text{flip}}=\mathcal{P}_{\text{I}}\) if the cyclomatic number \(\mu\) of \(G\) is at most one by analyzing the combinatorial structure of \(\mathcal{P}_{\text{flip}}\). In terms of the split closure, this result becomes almost trivial:
**Corollary 3.2**.: _Suppose that \(\mu\leq 1\). Then \(\mathcal{P}_{\text{split}}=\mathcal{P}_{\text{I}}\)._
Proof.: This is clear for \(\mu=|B|=0\), as
\[\mathcal{P}=\mathcal{P}_{\text{split}}=\mathcal{P}_{\text{I}}=\{(x,z) \in\mathbb{R}^{A}\mid\ell\leq x\leq u\}.\]
For \(\mu=|B|=1\), there is only a single integer variable \(z\), and by virtue of (7),
\[\mathcal{P}_{\text{split}}=\bigcap_{\beta\in\mathbb{Z}}\text{conv}\,\{ (x,z)\in\mathcal{P}\mid\beta z\in\mathbb{Z}\}=\text{conv}\,\{(x,z)\in\mathcal{ P}\mid z\in\mathbb{Z}\}=\mathcal{P}_{\text{I}}.\qed\]
### Chvatal Closure
For any mixed-integer set \(S\) defined by \((A_{\mathcal{C}},A_{I},b)\) with associated polyhedron \(P\), one can define the _Chvatal closure_ as a "one-side split closure" by
\[P_{\mathrm{Ch}}\coloneqq\bigcap\left\{P^{(\beta,\beta_{0})}\Big{|}(\beta,\beta _{0})\in\mathbb{Z}^{p}\times\mathbb{Z}\text{ s.t. }P\cap\{\beta^{\top}z\leq\beta_{0}\}= \emptyset\text{ or }P\cap\{\beta^{\top}z\geq\beta_{0}+1\}=\emptyset\right\},\]
see, e.g., Conforti et al. (2010, 2014). It is clear that \(P_{\mathrm{split}}\subseteq P_{\mathrm{Ch}}\subseteq P\). For Periodic Event Scheduling, we find:
**Theorem 3.3**.: _The Chvatal closure of the MIP (3) is given by_
\[\mathcal{P}_{\mathrm{Ch}}=\{(x,z)\in\mathcal{P}\mid(x,z)\text{ satisfies the cycle inequality \eqref{eq:chvatal} for all }\gamma\in\mathcal{C}\}.\]
Proof.: We need to determine those \((\beta,\beta_{0})\in\mathbb{Z}^{B}\times\mathbb{Z}\) for which one of \(\mathcal{P}\cap\{\beta^{\top}z\leq\beta_{0}\}\) or \(\mathcal{P}\cap\{\beta^{\top}z\geq\beta_{0}+1\}\) is empty. Since \(\Gamma x=Tz\) holds for all \((x,z)\in\mathcal{P}\), we have \(\beta^{\top}z=\frac{\gamma^{\top}x}{T}\) for \(\gamma\coloneqq\beta^{\top}\Gamma\in\mathcal{C}\) for an arbitrary choice of \(\beta\in\mathbb{Z}^{B}\). Let
\[k_{1} \coloneqq\left\lceil\min\left\{\frac{\gamma^{\top}x}{T}\,\Big{|} \,(x,z)\in\mathcal{P}\right\}\right\rceil =\left\lceil\frac{\gamma_{+}^{\top}\ell-\gamma_{-}^{\top}u}{T} \right\rceil,\] \[k_{2} \coloneqq\left\lfloor\max\left\{\frac{\gamma^{\top}x}{T}\,\Big{|} \,(x,z)\in\mathcal{P}\right\}\right\rfloor =\left\lfloor\frac{\gamma_{+}^{\top}u-\gamma_{-}^{\top}\ell}{T} \right\rfloor.\]
Then \(\mathcal{P}\cap\{\frac{\gamma^{\top}x}{T}\leq\beta_{0}\}=\emptyset\) for all \(\beta_{0}\leq k_{1}-1\) and \(\mathcal{P}\cap\{\frac{\gamma^{\top}x}{T}\geq\beta_{0}+1\}=\emptyset\) for \(\beta_{0}\geq k_{2}\). If \(k_{1}\geq k_{2}+1\), then \(\mathcal{P}_{\mathrm{Ch}}=\emptyset\), and no \((x,z)\in\mathcal{P}\) satisfies the cycle inequality (4) for \(\gamma\). Otherwise, both polyhedra \(\mathcal{P}\cap\{\frac{\gamma^{\top}x}{T}\geq k_{1}\}\) and \(\mathcal{P}\cap\{\frac{\gamma^{\top}x}{T}\leq k_{2}\}\) are non-empty, and they are defined by \(\mathcal{P}\) and Odijk's cycle inequalities (4).
Moreover, since \(\mathcal{P}\cap\{\frac{\gamma^{\top}x}{T}\geq k_{1}\}\subseteq\mathcal{P}\cap \{\frac{\gamma^{\top}x}{T}\geq\beta_{0}\}\) for any \(\beta_{0}\leq k_{1}\) and \(\mathcal{P}\cap\{\frac{\gamma^{\top}x}{T}\leq k_{2}\}\subseteq\mathcal{P}\cap \{\frac{\gamma^{\top}x}{T}\leq\beta_{0}\}\) for \(\beta_{0}\geq k_{2}\), we can conclude that
\[\bigcap_{\beta_{0}\leq k_{1}-1}P^{(\beta,\beta_{0})}=P^{(\beta,k_{1}-1)}\quad \text{and}\quad\bigcap_{\beta_{0}\geq k_{2}}P^{(\beta,\beta_{0})}=P^{(\beta,k _{2})}.\]
We conclude that for each \(\beta\in\mathbb{Z}^{B}\) and \(k_{1}\) and \(k_{2}\) as above,
\[\bigcap\left\{P^{(\beta,\beta_{0})}\Big{|}\beta_{0}\in\mathbb{Z} \text{ s.t. }P\cap\{\beta^{\top}z\leq\beta_{0}\}=\emptyset\text{ or }P\cap\{\beta^{\top}z\geq\beta_{0}+1\}=\emptyset\right\}\] \[=P^{(\beta,k_{1}-1)}\cap P^{(\beta,k_{2})},\]
from which the claim follows.
## 4 Separation of Split Cuts
From a practical point of view, the split closure can be a valuable tool to provide dual bounds for mixed integer programs. Of course, this requires efficient separation methods. As we have established that the split closure is of a specific form in the case of periodic timetabling, we can make use of the combinatorial structure behind flip inequalities to separate cuts.
### Simple Cycles
We show at first that for separating split/flip inequalities, it suffices to consider _simple_ oriented cycles, i.e., oriented cycles \(\gamma\in\mathcal{C}\cap\{-1,0,1\}^{A}\) that yield a simple cycle on the underlying undirected graph of \(G\).
**Lemma 4.1** (Orientation-preserving cycle decomposition).: _Let \(\gamma\in\mathcal{C}\). Then there are simple oriented cycles \(\delta_{1},\ldots,\delta_{r}\in\mathcal{C}\) such that \(\gamma=\sum_{k=1}^{r}\delta_{k}\), \(\gamma_{+}=\sum_{k=1}^{r}\delta_{k,+}\), and \(\gamma_{-}=\sum_{k=1}^{r}\delta_{k,-}\)._
Proof.: Let \(A^{+}\coloneqq\{a\in A\mid\gamma_{a}>0\}\) and \(A^{-}\coloneqq\{a\in A\mid\gamma_{a}<0\}\) be the set of forward and backward arcs of \(\gamma\), respectively. Construct a digraph \(G_{\gamma}\), whose set of arcs \(A_{\gamma}\) is given by
\[A_{\gamma}\coloneqq A^{+}\cup\{(j,i)\mid(i,j)\in A^{-}\}.\]
Define \(g_{ij}\coloneqq\gamma_{ij}\) if \((i,j)\in A^{+}\) and \(g_{ji}\coloneqq-\gamma_{ij}\) if \((i,j)\in A^{-}\). Then \(g\geq 0\) is a circulation in \(G_{\gamma}\), so that it decomposes into simple directed cycles \(d_{1},\ldots,d_{r}\). Finally set \(\delta_{k,ij}\coloneqq d_{k,ij}\) if \((i,j)\in A^{+}\) and \(\delta_{k,ij}\coloneqq-d_{k,ij}\) if \((i,j)\in A^{-}\).
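The proof is constructive, and a direct Python transcription is short. The sketch below (a hypothetical helper, assuming its input is indeed a circulation) flips the backward arcs, then repeatedly walks along arcs with positive residual value until a vertex repeats and peels off the resulting simple cycle.

```python
def decompose_circulation(gamma, arcs):
    """Orientation-preserving decomposition into simple oriented cycles.

    gamma: dict arc_index -> integer value; arcs: list of (i, j) pairs.
    Mirrors the proof of Lemma 4.1."""
    g, direc, adj = {}, {}, {}
    for idx, (i, j) in enumerate(arcs):
        v = gamma.get(idx, 0)
        if v != 0:
            g[idx] = abs(v)                          # residual flow on G_gamma
            direc[idx] = (i, j) if v > 0 else (j, i)  # flip backward arcs
            adj.setdefault(direc[idx][0], []).append(idx)
    cycles, live = [], {idx for idx in g}
    while live:
        path, seen = [], {}
        v = direc[next(iter(live))][0]   # start at the tail of a live arc
        while v not in seen:              # walk until a vertex repeats;
            seen[v] = len(path)           # conservation guarantees an exit arc
            idx = next(k for k in adj[v] if g[k] > 0)
            path.append(idx)
            v = direc[idx][1]
        delta = {}
        for k in path[seen[v]:]:          # extract the simple cycle
            delta[k] = 1 if gamma[k] > 0 else -1  # keep original orientation
            g[k] -= 1
            if g[k] == 0:
                live.discard(k)
        cycles.append(delta)
    return cycles
```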
**Theorem 4.2**.: _Let \(F\subseteq A\) and \((x,z)\in\mathcal{P}\). If \((x,z)\) satisfies all flip inequalities w.r.t. \(F\) and all simple oriented cycles \(\gamma\), then it satisfies all flip inequalities w.r.t. \(F\) and all \(\gamma\in\mathcal{C}\). In particular,_
\[\mathcal{P}_{\mathrm{split}}=\mathcal{P}_{\mathrm{flip}}=\left\{(x,z)\in \mathcal{P}\left|\begin{array}{c}(x,z)\text{ satisfies the flip inequality}\\ \text{ for all simple oriented cycles }\gamma\in\mathcal{C}\text{ and all }F\subseteq A \end{array}\right.\right\}.\]
Proof.: The inclusion \((\subseteq)\) is clear, it remains to show \((\supseteq)\). Suppose that \((x,z)\) satisfies the flip inequalities w.r.t. \(F\) and all simple oriented cycles. Moving to the flipped instance \(I_{F}\) as in Section 2.3, we can assume that \(F=\emptyset\), so that it suffices to consider the change-cycle inequality (5) for an arbitrary \(\gamma\in\mathcal{C}\). Let \(\gamma=\delta_{1}+\cdots+\delta_{r}\) be an orientation-preserving decomposition as in Lemma 4.1. We proceed by induction on \(r\).
If \(r\leq 1\), then \(\gamma=0\) or \(\gamma\) is simple, and there is nothing to show.
Now assume \(r\geq 2\). There is nothing to show if \(\alpha_{\gamma}=0\), as the left-hand side of the change-cycle inequality is always non-negative. We hence assume \(\alpha_{\gamma}>0\).
By induction hypothesis, \((x,z)\) satisfies the change-cycle inequality for the cycles \(\delta\coloneqq\delta_{1}\) and \(\varepsilon\coloneqq\delta_{2}+\cdots+\delta_{r}\). If \(\alpha_{\delta}=0\) resp. \(\alpha_{\varepsilon}=0\), then we have \(\alpha_{\gamma}=\alpha_{\varepsilon}\) resp. \(\alpha_{\gamma}=\alpha_{\delta}\), as \(\alpha_{\gamma}=[\alpha_{\delta}+\alpha_{\varepsilon}]_{T}\). The validity of the change-cycle inequality for \(\gamma\) then follows immediately because the right-hand side equals the one for \(\varepsilon\) resp. \(\delta\), while the left-hand side can only become larger.
We are hence left with the case \(\alpha_{\gamma},\alpha_{\delta},\alpha_{\varepsilon}>0\). In this case we can rewrite the change-cycle inequality (5) so that it is of the form
\[\frac{\gamma_{+}^{\top}(x-\ell)}{\alpha_{\gamma}}+\frac{\gamma_{-}^{\top}(x- \ell)}{T-\alpha_{\gamma}}\geq 1,\]
which will be the key ingredient in our argumentation. Define
\[\kappa_{\delta}\coloneqq\min\left\{\frac{\alpha_{\delta}}{\alpha_{\gamma}}, \frac{T-\alpha_{\delta}}{T-\alpha_{\gamma}}\right\}\quad\text{ and }\quad\kappa_{\varepsilon}\coloneqq\min\left\{\frac{\alpha_{ \varepsilon}}{\alpha_{\gamma}},\frac{T-\alpha_{\varepsilon}}{T-\alpha_{\gamma}} \right\}.\]
With \(y\coloneqq x-\ell\), using that \((x,z)\) satisfies the change-cycle inequality w.r.t. both \(\delta\) and \(\varepsilon\),
\[\frac{\gamma_{+}^{\top}y}{\alpha_{\gamma}}+\frac{\gamma_{-}^{\top}y }{T-\alpha_{\gamma}} =\frac{\delta_{+}^{\top}y}{\alpha_{\gamma}}+\frac{\delta_{-}^{ \top}y}{T-\alpha_{\gamma}}+\frac{\varepsilon_{+}^{\top}y}{\alpha_{\gamma}}+ \frac{\varepsilon_{-}^{\top}y}{T-\alpha_{\gamma}}\] \[=\frac{\alpha_{\delta}}{\alpha_{\gamma}}\frac{\delta_{+}^{\top}y }{\alpha_{\delta}}+\frac{T-\alpha_{\delta}}{T-\alpha_{\gamma}}\frac{\delta_{-} ^{\top}y}{T-\alpha_{\delta}}+\frac{\alpha_{\varepsilon}}{\alpha_{\gamma}} \frac{\varepsilon_{+}^{\top}y}{\alpha_{\varepsilon}}+\frac{T-\alpha_{ \varepsilon}}{T-\alpha_{\gamma}}\frac{\varepsilon_{-}^{\top}y}{T-\alpha_{ \varepsilon}}\] \[\geq\kappa_{\delta}\left(\frac{\delta_{+}^{\top}y}{\alpha_{ \delta}}+\frac{\delta_{-}^{\top}y}{T-\alpha_{\delta}}\right)+\kappa_{ \varepsilon}\left(\frac{\varepsilon_{+}^{\top}y}{\alpha_{\varepsilon}}+\frac{ \varepsilon_{-}^{\top}y}{T-\alpha_{\varepsilon}}\right)\] \[\geq\kappa_{\delta}+\kappa_{\varepsilon},\]
_Claim._\(\kappa_{\delta}+\kappa_{\varepsilon}=1\).
If the claim holds, then the change-cycle inequality w.r.t. \(\gamma\) holds for \((x,z)\), and we are done. Recall that \(\alpha_{\gamma}=[\alpha_{\delta}+\alpha_{\varepsilon}]_{T}\), so that
\[\alpha_{\gamma}=\begin{cases}\alpha_{\delta}+\alpha_{\varepsilon}&\text{if } \alpha_{\delta}+\alpha_{\varepsilon}<T,\\ \alpha_{\delta}+\alpha_{\varepsilon}-T&\text{otherwise}.\end{cases}\]
If \(\alpha_{\delta}+\alpha_{\varepsilon}<T\), then \(\alpha_{\gamma}=\alpha_{\delta}+\alpha_{\varepsilon}\), hence \(\alpha_{\delta}\leq\alpha_{\gamma}\) and \(T-\alpha_{\gamma}=T-\alpha_{\delta}-\alpha_{\varepsilon}\leq T-\alpha_{\delta}\), so that \(\kappa_{\delta}=\alpha_{\delta}/\alpha_{\gamma}\). Analogously, \(\kappa_{\varepsilon}=\alpha_{\varepsilon}/\alpha_{\gamma}\), so that \(\kappa_{\delta}+\kappa_{\varepsilon}=1\).
In the other case, we have \(\alpha_{\gamma}=\alpha_{\delta}+\alpha_{\varepsilon}-T\). From this, we infer \(\alpha_{\delta}=\alpha_{\gamma}+T-\alpha_{\varepsilon}\geq\alpha_{\gamma}\) and \(T-\alpha_{\gamma}=2T-\alpha_{\delta}-\alpha_{\varepsilon}\geq(T-\alpha_{ \delta})+(T-\alpha_{\varepsilon})\geq T-\alpha_{\delta}\), so that \(\kappa_{\delta}=(T-\alpha_{\delta})/(T-\alpha_{\gamma})\). Analogously, \(\kappa_{\varepsilon}=(T-\alpha_{\varepsilon})/(T-\alpha_{\gamma})\), so that again \(\kappa_{\delta}+\kappa_{\varepsilon}=1\).
Using Theorem 3.3 and Remark 2.13, Theorem 4.2 implies the analogous result for the Chvatal split closure and the cycle inequalities:
**Corollary 4.3**.: _Let \((x,z)\in\mathcal{P}\). If \((x,z)\) satisfies all cycle inequalities w.r.t. all simple oriented cycles \(\gamma\), then it satisfies all cycle inequalities for all \(\gamma\in\mathcal{C}\). In particular,_
\[\mathcal{P}_{\mathrm{Ch}}=\left\{(x,z)\in\mathcal{P}\left|\begin{array}{c}(x,z)\text{ satisfies the cycle inequality}\\ \text{for all simple oriented cycles }\gamma\in\mathcal{C}\end{array}\right.\right\}.\]
### Separation Hardness
N. Lindner and Liebchen (2020) outline a pseudo-polynomial time algorithm based on the dynamic program by Borndorfer, Hoppmann, et al. (2020) that finds a maximally violated flip inequality (if there is any), i.e., a simple cycle \(\gamma\) and a set \(F\subseteq A\) such that the difference of the right-hand and left-hand sides of (6) is maximum. We prove here that pseudo-polynomial time is best possible unless \(\mathrm{P}=\mathrm{NP}\):
**Theorem 4.4**.: _Given \((x,z)\in\mathcal{P}\) and \(M\geq 0\), it is weakly NP-hard to decide whether there exist a simple cycle \(\gamma\) and a subset \(F\subseteq A\) such that \((x,z)\) violates the flip inequality for \((\gamma,F)\) by at least \(M\)._
Proof.: We reduce the weakly NP-hard Ternary Partition Problem (Borndorfer, Hoppmann, et al., 2020): Given \(m\in\mathbb{N}\) and \(c\in\mathbb{N}^{m}\), is there \(a\in\{-1,0,1\}^{m}\) such that \(\sum_{i=1}^{m}a_{i}c_{i}=\pm\frac{1}{2}\sum_{i=1}^{m}c_{i}\)? For a Ternary Partition instance \((m,c)\), we define a PESP instance \((G,T,\ell,u,w)\) as follows: The digraph \(G=(V,A)\) is given by a complete directed graph on the vertex set
\[V\coloneqq\{1^{+},1^{-},2^{+},2^{-},\ldots,m^{+},m^{-}\},\]
where we delete the arcs \((1^{-},1^{+}),(2^{-},2^{+}),\ldots,(m^{-},m^{+})\). We set \(T\coloneqq\sum_{i=1}^{m}c_{i}\) and
\[\ell_{i^{+}i^{-}}\coloneqq c_{i},\quad u_{i^{+}i^{-}}\coloneqq T,\quad w_{i^{+ }i^{-}}\coloneqq 1\quad\text{for all }i\in\{1,\ldots,m\}.\]
For all other arcs \(a\), we set \(\ell_{a}\coloneqq u_{a}\coloneqq w_{a}\coloneqq 0\). As for any PESP instance, the optimal solution to the LP relaxation of (3) is given by \(x^{*}=\ell\).
Suppose now that \(x^{*}=\ell\) violates some flip inequality (6) for some simple oriented cycle \(\gamma\) and some \(F\subseteq A\) by at least \(M\coloneqq\frac{T^{2}}{4}\). Since \(x^{*}=\ell\), only arcs in \(F\) contribute non-trivially to the left-hand side of (6); moreover, these arcs are all of the form \((i^{+},i^{-})\). We hence obtain
\[\alpha_{\gamma,F}\sum_{(i^{+},i^{-})\in F,\gamma_{i^{+}i^{-}}=1}(T-c_{i})+(T- \alpha_{\gamma,F})\sum_{(i^{+},i^{-})\in F,\gamma_{i^{+}i^{-}}=-1}(T-c_{i}) \leq\alpha_{\gamma,F}(T-\alpha_{\gamma,F})-M, \tag{13}\]
where \(\alpha_{\gamma,F}=[-\sum_{(i^{+},i^{-})\notin F}\gamma_{i^{+}i^{-}}c_{i}]_{T}\). As the left-hand side of (6) is non-negative, we have that \(\alpha_{\gamma,F}(T-\alpha_{\gamma,F})\geq M=T^{2}/4\), which implies \(\alpha_{\gamma,F}=T/2\). Set \(a_{i}\coloneqq-\gamma_{i^{+}i^{-}}\) for all \(i\in\{1,\ldots,m\}\). Then \([\sum_{i=1}^{m}a_{i}c_{i}]_{T}=\alpha_{\gamma,F}=T/2\), and as \(-T\leq\sum_{i=1}^{m}a_{i}c_{i}\leq T\), we find that \(\sum_{i=1}^{m}a_{i}c_{i}=\pm T/2\). In particular, a violated flip inequality leads to a positive answer to the Ternary Partition instance.
Conversely, suppose that there is \(a\in\{-1,0,1\}^{m}\) such that \(\sum_{i=1}^{m}a_{i}c_{i}=\pm T/2\). Construct a simple oriented cycle \(\gamma\) with \(\gamma_{i^{+}i^{-}}\coloneqq-a_{i}\) for all \(i\in\{1,\ldots,m\}\). Then \(\alpha_{\gamma,\emptyset}=T/2\), and the flip inequality for \(\gamma\) and \(F=\emptyset\) (i.e., the change-cycle inequality for \(\gamma\)) is violated by at least \(T^{2}/4=M\), because the left-hand side of (13) vanishes.
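For concreteness, a sketch of this reduction (function name and data layout are ours):

```python
def ternary_partition_to_pesp(c):
    """Build the PESP instance (G, T, l, u, w) from a Ternary Partition
    instance c as in the proof of Theorem 4.4."""
    m, T = len(c), sum(c)
    V = [f"{i}{s}" for i in range(1, m + 1) for s in "+-"]
    # complete digraph minus the arcs (i-, i+)
    A = [(v, w) for v in V for w in V
         if v != w and not (v[-1] == "-" and w[-1] == "+" and v[:-1] == w[:-1])]
    ell = {a: 0 for a in A}
    u = {a: 0 for a in A}
    wt = {a: 0 for a in A}
    for i in range(1, m + 1):
        a = (f"{i}+", f"{i}-")
        ell[a], u[a], wt[a] = c[i - 1], T, 1
    return V, A, T, ell, u, wt

V, A, T, ell, u, wt = ternary_partition_to_pesp([3, 5, 2, 4])  # hypothetical data
print(T, ell[("1+", "1-")], u[("1+", "1-")])                   # 14 3 14
```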
In practice, the dynamic program indicated in (N. Lindner and Liebchen, 2020) consumes too much memory. It is therefore advantageous to switch to a cut-generating MIP. Balas and Saxena (2008) describe a parametric MIP with a single parameter \(\theta\in[0,1]\) for this purpose. We are however in a better situation: Translating to periodic timetabling via Theorem 3.1, the parameter \(\theta\) essentially corresponds to \(\alpha_{\gamma,F}\), which is always an integer between \(0\) and \(T-1\). This means that the parametric MIP can be replaced by a finite sequence of standard IPs, one for each such integer \(\alpha_{\gamma,F}\). The formulation of the IP (14) is straightforward from the definition (6) of flip inequalities:
**Theorem 4.5**.: _Let \((x,z)\in\mathcal{P}\setminus\mathcal{P}_{\mathrm{flip}}\) and \(\alpha\in\{1,\ldots,T\}\). Then a maximally violated flip inequality w.r.t. \((x,z)\) with \(\alpha_{\gamma,F}=\alpha\) among all oriented cycles \(\gamma\) and all \(F\subseteq A\) is found by the following integer program:_
\[\begin{split}\text{Minimize}\quad&(T-\alpha)\sum_{a\in A}(x_{a}-\ell_{a})y_{a}^{+}+\alpha\sum_{a\in A}(x_{a}-\ell_{a})y_{a}^{-}\\ &+\alpha\sum_{a\in A}(u_{a}-x_{a})f_{a}^{+}+(T-\alpha)\sum_{a\in A}(u_{a}-x_{a})f_{a}^{-}\\ \text{s.t.}\quad&\sum_{a\in A}\ell_{a}(y_{a}^{-}-y_{a}^{+})+\sum_{a\in A}u_{a}(f_{a}^{-}-f_{a}^{+})+kT=\alpha,\\ &\sum_{a\in\delta^{+}(v)}\gamma_{a}-\sum_{a\in\delta^{-}(v)}\gamma_{a}=0,\qquad v\in V,\\ &f^{+}-f^{-}+y^{+}-y^{-}=\gamma,\\ &0\leq f^{+}+f^{-}+y^{+}+y^{-}\leq 1,\\ &f^{+},f^{-},y^{+},y^{-}\in\{0,1\}^{A},\\ &\gamma\in\{-1,0,1\}^{A},\\ &k\in\mathbb{Z}.\end{split} \tag{14}\]
Any feasible solution of (14) with objective value less than \(\alpha(T-\alpha)\) will produce a violated flip inequality. Recall from Remark 2.13 that flip inequalities with \(\alpha_{\gamma,F}=0\) are trivial and cannot be violated, and that due to symmetry, it is not necessary to consider the IP (14) for \(\alpha>T/2\).
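For concreteness, here is a minimal sketch of (14) for a fixed \(\alpha\) in Gurobi's Python API (the solver also used in Section 6); the instance data `nodes`, `arcs`, `ell`, `u` and the LP point `x` are placeholders:

```python
import gurobipy as gp
from gurobipy import GRB

def flip_separation_ip(nodes, arcs, ell, u, x, T, alpha):
    """Sketch of the IP (14); arcs are (tail, head) pairs, ell and u are the
    bounds, and x is the periodic tension of the current LP point."""
    m = gp.Model("flip-separation")
    yp = m.addVars(arcs, vtype=GRB.BINARY, name="yp")  # a not in F, gamma_a = +1
    ym = m.addVars(arcs, vtype=GRB.BINARY, name="ym")  # a not in F, gamma_a = -1
    fp = m.addVars(arcs, vtype=GRB.BINARY, name="fp")  # a in F, gamma_a = +1
    fm = m.addVars(arcs, vtype=GRB.BINARY, name="fm")  # a in F, gamma_a = -1
    gam = m.addVars(arcs, lb=-1, ub=1, vtype=GRB.INTEGER, name="gamma")
    k = m.addVar(lb=-GRB.INFINITY, vtype=GRB.INTEGER, name="k")

    m.setObjective(gp.quicksum(
        (T - alpha) * (x[a] - ell[a]) * yp[a] + alpha * (x[a] - ell[a]) * ym[a]
        + alpha * (u[a] - x[a]) * fp[a] + (T - alpha) * (u[a] - x[a]) * fm[a]
        for a in arcs), GRB.MINIMIZE)

    m.addConstr(gp.quicksum(ell[a] * (ym[a] - yp[a]) + u[a] * (fm[a] - fp[a])
                            for a in arcs) + T * k == alpha)
    for v in nodes:  # gamma must be a circulation
        m.addConstr(gp.quicksum(gam[a] for a in arcs if a[0] == v)
                    == gp.quicksum(gam[a] for a in arcs if a[1] == v))
    m.addConstrs(fp[a] - fm[a] + yp[a] - ym[a] == gam[a] for a in arcs)
    m.addConstrs(fp[a] + fm[a] + yp[a] + ym[a] <= 1 for a in arcs)

    m.optimize()
    return m  # objective < alpha * (T - alpha) yields a violated flip inequality
```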
### Separation for a Fixed Cycle
We discuss now how to find a maximally violated flip inequality in linear time when the cycle \(\gamma\) is already fixed. To this end, we take the perspective of split cuts. Consider again a mixed-integer set defined by \((A_{C},A_{I},b)\) and the associated polyhedron \(P=\{A_{C}x+A_{I}z\leq b\}\). When a split \((\beta,\beta_{0})\) is fixed, then the separation problem on \(P^{(\beta,\beta_{0})}\) can be solved as follows (Balas et al., 1993; Bonami, 2012; Conforti et al., 2014): Given \((x,z)\in P\), check whether \(\beta^{\top}z\leq\beta_{0}\) or \(\beta^{\top}z\geq\beta_{0}+1\). If yes, then \((x,z)\in P^{(\beta,\beta_{0})}\). Otherwise, solve the linear program
\[\begin{split}\text{Minimize}\quad(s-t)^{\top}b+\frac{1}{\beta^{ \top}z-\beta_{0}}\cdot t^{\top}(b-A_{C}x-A_{I}z)\\ \text{s.t.}\hskip 113.811024pt(s-t)^{\top}A_{C}&=0\\ (s-t)^{\top}A_{I}&=\beta^{\top}\\ s,t&\geq 0.\end{split} \tag{15}\]
If the value of (15) is at least \(\beta_{0}+1\), then \((x,z)\in P^{(\beta,\beta_{0})}\), otherwise it is not. In the latter case, if we take a basic optimal solution \((s^{*},t^{*})\), then \((x,z)\) is separated by the split inequality w.r.t. \(s^{*}-t^{*}\). This cut-generating LP (15) finds a maximally violated split inequality in the following sense:
**Lemma 4.6**.: _Suppose that \((x,z)\in P\setminus P^{(\beta,\beta_{0})}\). Let \((s^{*},t^{*})\) be an optimal basic solution of (15), \(\lambda^{*}\coloneqq s^{*}-t^{*}\). Then_
\[[\lambda^{*\top}b]_{1}(1-[\lambda^{*\top}b]_{1})-(1-[\lambda^{*\top}b]_{1}) \lambda^{*\top}_{+}(b-A_{C}x-A_{I}z)-[\lambda^{*\top}b]_{1}\lambda^{*\top}_{-} (b-A_{C}x-A_{I}z) \tag{16}\]
_is maximum among all \(\lambda=s-t\) such that \((s,t)\) is feasible for (15) and \(\lambda^{\top}b\in[\beta_{0},\beta_{0}+1)\)._
Proof.: Since \((s^{*},t^{*})\) is basic, we have \(\lambda^{*}_{+}=s^{*}\) and \(\lambda^{*}_{-}=t^{*}\). As \((x,z)\in P\setminus P^{(\beta,\beta_{0})}\), \(\beta^{\top}z-\beta_{0}>0\). Then \(\lambda^{*}\) maximizes
\[-(\beta^{\top}z-\beta_{0})\lambda^{\top}b-\lambda^{\top}_{-}(b-A_{C}x-A_{I}z).\]
Adding a constant term, \(\lambda^{*}\) also maximizes
\[(\beta^{\top}z-\beta_{0})(\beta_{0}+1)-(\beta^{\top}z-\beta_{0}) \lambda^{\top}b-\lambda^{\top}_{-}(b-A_{C}x-A_{I}z)\] \[=(\beta^{\top}z-\beta_{0})(\beta_{0}+1-\lambda^{\top}b)-\lambda^{ \top}_{-}(b-A_{C}x-A_{I}z).\]
Observing that \(\lambda^{\top}(b-A_{C}x-A_{I}z)=\lambda^{\top}b-\beta^{\top}z\), this is the same as
\[=(\lambda^{\top}b-\lambda^{\top}(b-A_{C}x-A_{I}z)-\beta_{0})( \beta_{0}+1-\lambda^{\top}b)-\lambda^{\top}_{-}(b-A_{C}x-A_{I}z)\] \[=(\lambda^{\top}b-\beta_{0})(\beta_{0}+1-\lambda^{\top}b)-(\beta _{0}+1-\lambda^{\top}b)\lambda^{\top}(b-A_{C}x-A_{I}z)-\lambda^{\top}_{-}(b-A_ {C}x-A_{I}z)\] \[=(\lambda^{\top}b-\beta_{0})(\beta_{0}+1-\lambda^{\top}b)-(\beta _{0}+1-\lambda^{\top}b)\lambda^{\top}_{+}(b-A_{C}x-A_{I}z)\] \[\quad-(\lambda^{\top}b-\beta_{0})\lambda^{\top}_{-}(b-A_{C}x-A_{I} z).\]
Since \(\lambda^{\top}b\in[\beta_{0},\beta_{0}+1)\), \([\lambda^{\top}b]_{1}=\lambda^{\top}b-\beta_{0}\) and \(1-[\lambda^{\top}b]_{1}=\beta_{0}+1-\lambda^{\top}b\), and we arrive at (16).
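The LP (15) is straightforward to set up; a minimal sketch with SciPy (our choice of solver, purely for illustration), stacking the variables as \((s,t)\):

```python
import numpy as np
from scipy.optimize import linprog

def split_cut_lp(A_C, A_I, b, beta, beta0, x, z):
    """Sketch of the cut-generating LP (15); variables stacked as (s, t)."""
    m = len(b)
    r = b - A_C @ x - A_I @ z                   # slack of (x, z) in P
    theta = 1.0 / (beta @ z - beta0)            # requires beta^T z > beta0
    cost = np.concatenate([b, -b + theta * r])  # (s - t)^T b + theta * t^T r
    A_eq = np.block([[A_C.T, -A_C.T], [A_I.T, -A_I.T]])
    b_eq = np.concatenate([np.zeros(A_C.shape[1]), beta])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * m))
    # res.fun >= beta0 + 1 certifies (x, z) in P^(beta, beta0); otherwise the
    # split inequality w.r.t. lambda = s* - t* separates (x, z).
    return res
```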
Note that the condition \(\lambda^{\top}b\in[\beta_{0},\beta_{0}+1)\) in Lemma 4.6 is no restriction, since it suffices to consider \(\lambda\) for which \(\lfloor\lambda^{\top}b\rfloor=\beta_{0}\) (cf. Section 3). We obtain the following in the context of periodic timetabling:
**Theorem 4.7**.: _Let \(\mathcal{P}\) be a fractional periodic timetabling polytope. Let \((x,z)\in\mathcal{P}\), \(\gamma\in\mathcal{C}\) with \(\gamma^{\top}x\notin T\mathbb{Z}\), and set \(g\coloneqq T/\left[-\gamma^{\top}x\right]_{T}\). Then the flip inequality w.r.t. \(\gamma\) and_
\[F\coloneqq\{a\in A\mid\gamma_{a}>0\text{ and }u_{a}-\ell_{a}\geq g(u_{a}-x_{a}) \}\cup\{a\in A\mid\gamma_{a}<0\text{ and }u_{a}-\ell_{a}\leq g(x_{a}-\ell_{a})\}\]
_is maximally violated by \((x,z)\) among the flip inequalities w.r.t. \(\gamma\). In particular, a maximally violated flip inequality w.r.t. \(\gamma\) can be found in \(O(|\gamma|)\) time._
Proof.: We first write down the cut-generating LP (15) for the PESP situation (9):
Minimize \[s_{3}^{\top}u-t_{3}^{\top}u-s_{4}^{\top}\ell+t_{4}^{\top}\ell+ \frac{1}{\beta^{\top}z-\beta_{0}}\cdot(t_{3}^{\top}(u-x)+t_{4}^{\top}(x-\ell))\] s.t. \[(s_{1}-t_{1}-s_{2}+t_{2})^{\top}\Gamma+s_{3}^{\top}-t_{3}^{\top} -s_{4}^{\top}+t_{4}^{\top} =0,\] \[-s_{1}+t_{1}+s_{2}-t_{2} =\frac{\beta}{T},\] \[s_{1},s_{2},s_{3},s_{4},t_{1},t_{2},t_{3},t_{4} \geq 0.\]
Recall from Theorem 3.1 that a flip inequality w.r.t. \(\gamma\) corresponds to a split inequality derived from \(P^{(\beta,\beta_{0})}\) with \(\beta^{\top}\Gamma=-\gamma\). Since \((x,z)\in\mathcal{P}\), we have \(\beta_{0}=\lfloor\beta^{\top}z\rfloor=\lfloor-\gamma^{\top}x/T\rfloor\). Eliminating the variables \(s_{1},s_{2},t_{1},t_{2}\), and setting
\[g\coloneqq\frac{1}{\beta^{\top}z-\beta_{0}}=\frac{1}{[\beta^{\top}z]_{1}}= \frac{1}{[\beta^{\top}\Gamma x/T]_{1}}=\frac{1}{[-\gamma^{\top}x/T]_{1}}= \frac{T}{[-\gamma^{\top}x]_{T}},\]
this becomes
Minimize \[s_{3}^{\top}u-t_{3}^{\top}u-s_{4}^{\top}\ell+t_{4}^{\top}\ell+g \cdot(t_{3}^{\top}(u-x)+t_{4}^{\top}(x-\ell))\] s.t. \[s_{3}-t_{3}-s_{4}+t_{4} =-\frac{\gamma}{T},\] \[s_{3},s_{4},t_{3},t_{4} \geq 0.\]
This linear program is trivial to solve: In each basic solution, for each arc \(a\in A\) at most one of \(s_{3,a},s_{4,a},t_{3,a},t_{4,a}\) will be non-zero, and \(s_{3,a}=s_{4,a}=t_{3,a}=t_{4,a}=0\) for all \(a\in A\) with \(\gamma_{a}=0\). We examine the contribution to the objective for each arc \(a\) in \(\gamma\) in such a basic solution:
If \(\gamma_{a}>0\), then either \(t_{3,a}>0\) or \(s_{4,a}>0\); the contribution to the objective is \(\gamma_{a}(g(u_{a}-x_{a})-u_{a})/T\) in the first case and \(-\gamma_{a}\ell_{a}/T\) in the second. If \(\gamma_{a}<0\), then either \(s_{3,a}>0\) or \(t_{4,a}>0\), the contribution being \(-\gamma_{a}u_{a}/T\) resp. \(-\gamma_{a}(\ell_{a}+g(x_{a}-\ell_{a}))/T\). In particular, an optimal solution is given by
\[t_{3,a} \coloneqq\frac{\gamma_{a}}{T} \text{for all $a$ s.t. $\gamma_{a}>0$ and }-\ell_{a}\geq g(u_{a}-x_{a})-u_{a},\] \[s_{4,a} \coloneqq\frac{\gamma_{a}}{T} \text{for all $a$ s.t. $\gamma_{a}>0$ and }-\ell_{a}<g(u_{a}-x_{a})-u_{a},\] \[s_{3,a} \coloneqq-\frac{\gamma_{a}}{T} \text{for all $a$ s.t. $\gamma_{a}<0$ and }u_{a}\leq g(x_{a}-\ell_{a})+\ell_{a},\] \[t_{4,a} \coloneqq-\frac{\gamma_{a}}{T} \text{for all $a$ s.t. $\gamma_{a}<0$ and }u_{a}>g(x_{a}-\ell_{a})+\ell_{a},\]
and \(s_{3,a}\coloneqq s_{4,a}\coloneqq t_{3,a}\coloneqq t_{4,a}\coloneqq 0\) otherwise. The cut derived from this solution is the split inequality for \(\lambda=s-t\), which by Theorem 3.1 corresponds to the flip inequality for \(\gamma\) and
\[F =\{a\in A\mid\lambda_{3,a}\neq 0\}\] \[=\{a\in A\mid\gamma_{a}>0\text{ and }u_{a}-\ell_{a}\geq g(u_{a}-x_{a}) \}\cup\{a\in A\mid\gamma_{a}<0\text{ and }u_{a}-\ell_{a}\leq g(x_{a}-\ell_{a})\}.\]
Observe that by (11), \(T[\lambda^{\top}b]_{1}=\alpha_{\gamma,F}\). Using (12) and multiplying (16) with \(T^{2}\) therefore yields the violation of the flip inequality w.r.t. \((\gamma,F)\). By Lemma 4.6, we conclude that the violation is indeed maximal.
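Theorem 4.7 translates into a few lines of code; a sketch (naming is ours), where `gamma` maps the arcs of the fixed simple oriented cycle to \(\pm 1\):

```python
def separate_fixed_cycle(gamma, ell, u, x, T):
    """Return the flip set F of Theorem 4.7 for a fixed simple oriented cycle,
    or None if gamma^T x is a multiple of T (no violated flip inequality)."""
    alpha0 = (-sum(gamma[a] * x[a] for a in gamma)) % T   # [-gamma^T x]_T
    if alpha0 == 0:
        return None
    g = T / alpha0
    return {a for a in gamma
            if (gamma[a] > 0 and u[a] - ell[a] >= g * (u[a] - x[a]))
            or (gamma[a] < 0 and u[a] - ell[a] <= g * (x[a] - ell[a]))}
```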
## 5 Comparing Split Closures
Recall that the Periodic Event Scheduling Problem can be formulated in two ways as a MIP, where the incidence-based formulation (2) is essentially a special case of the cycle-based formulation (3) by virtue of Remark 2.6. The methods of Section 3 therefore apply to both formulations, and the question arises whether one of the two split closures is stronger. We will show that both closures are in fact of the same strength in Section 5.3.
Typically, the integer variables in both formulations are general. However, under certain circumstances, the periodic offset variables \(p_{a}\) in (2) can be assumed to be binary (Liebchen, 2006). We will discuss in Section 5.4 how to achieve binary variables by a subdivision procedure. We will show that this binarization approach does not lead to a stronger split closure.
We show in Section 5.2 that split closures commute with Cartesian products, which means in the PESP situation that the split closures can be considered on blocks of \(G\) individually.
However, to be able to compare split closures of different polyhedra, we need to develop a few technicalities first in Section 5.1.
### Mixed-Integer-Compatible Maps
We begin with two mixed-integer sets
\[S_{i}:=\{(x,z)\in\mathbb{R}^{n_{i}}\times\mathbb{Z}^{p_{i}}\mid A^{i}_{C}x+A^ {i}_{I}z\leq b^{i}\},\quad i\in\{1,2\},\]
and the associated polyhedra
\[P_{i}:=\{(x,z)\in\mathbb{R}^{n_{i}}\times\mathbb{R}^{p_{i}}\mid A^{i}_{C}x+A^ {i}_{I}z\leq b^{i}\},\qquad(P_{i})_{\mathrm{I}}:=\mathrm{conv}(S_{i}),\qquad \quad i\in\{1,2\}.\]
**Definition 5.1**.: A map \(\varphi:\mathbb{R}^{n_{1}}\times\mathbb{R}^{p_{1}}\to\mathbb{R}^{n_{2}}\times \mathbb{R}^{p_{2}}\) is _mixed-integer-compatible_ if \(\varphi\) is affine and \(\varphi(\mathbb{R}^{n_{1}}\times\mathbb{Z}^{p_{1}})\subseteq\mathbb{R}^{n_{2} }\times\mathbb{Z}^{p_{2}}\).
In particular, if \(\varphi(P_{1})\subseteq P_{2}\) for a mixed-integer-compatible map \(\varphi\), then \(\varphi(S_{1})\subseteq S_{2}\) and \(\varphi((P_{1})_{\mathrm{I}})\subseteq(P_{2})_{\mathrm{I}}\).
**Lemma 5.2**.: _Let \(\psi:\mathbb{R}^{n_{1}}\times\mathbb{R}^{p_{1}}\to\mathbb{R}^{n_{2}}\times \mathbb{R}^{p_{2}}\) be a linear map and let \(\psi^{*}:\mathbb{R}^{n_{2}}\times\mathbb{R}^{p_{2}}\to\mathbb{R}^{n_{1}}\times \mathbb{R}^{p_{1}}\) be the corresponding dual linear map, identifying dual vector spaces choosing standard bases. Then the following are equivalent:_
1. \(\psi\) _is mixed-integer-compatible._
2. \(\psi(\mathbb{R}^{n_{1}}\times\{0\})\subseteq\mathbb{R}^{n_{2}}\times\{0\}\) _and_ \(\psi^{*}(\{0\}\times\mathbb{Z}^{p_{2}})\subseteq\{0\}\times\mathbb{Z}^{p_{1}}\)_._
Proof.: \((1)\Rightarrow(2)\): For the first statement consider for \(i\in[n_{1}]\) the \(i\)-th standard basis vector \(e_{i}\in\mathbb{R}^{n_{1}}\). Then \(\psi(e_{i},0)=(x,z)\) for some \(x\in\mathbb{R}^{n_{2}}\) and \(z\in\mathbb{R}^{p_{2}}\). But as \(\psi\) is linear and mixed-integer-compatible, \(\psi(\lambda e_{i},0)=(\lambda x,\lambda z)\) with \(\lambda z\in\mathbb{Z}^{p_{2}}\) for all \(\lambda\in\mathbb{R}\), so that \(z=0\).
For the second statement, consider for \(j\in[p_{2}]\) the \(j\)-th standard basis vector \(e_{j}\). Then for \(i\in[n_{1}]\), the \(i\)-th coordinate of \(\psi^{*}(0,e_{j})\) is given by \((0,e_{j})^{\top}\psi(e_{i},0)=0\) by the first statement. For \(i\in[p_{1}]\), the \((n_{1}+i)\)-th coordinate of \(\psi^{*}(0,e_{j})\) is given by \((0,e_{j})^{\top}\psi(0,e_{i})\), which is integral as \(\psi\) is mixed-integer-compatible.
\((2)\Rightarrow(1)\): Let \((x,z)\in\mathbb{R}^{n_{1}}\times\mathbb{Z}^{p_{1}}\). Then \(\psi(x,z)=\psi(x,0)+\psi(0,z)\), so using linearity and the first statement in (2), it suffices to consider \(\psi(0,e_{i})\) for \(i\in[p_{1}]\). But now for \(j\in[p_{2}]\), the \((n_{1}+i)\)-th coordinate of \(\psi^{*}(0,e_{j})\) is integral by the second statement in (2), and since it is given by \((0,e_{j})^{\top}\psi(0,e_{i})\), we conclude that the \((n_{2}+j)\)-th coordinate of \(\psi(0,e_{i})\) is integer. Consequently, \(\psi\) must be mixed-integer-compatible.
The following is a generalization of Theorem 1 in (Dash et al., 2018).
**Theorem 5.3**.: _Let \(\varphi\) be a mixed-integer-compatible map with \(\varphi(P_{1})\subseteq P_{2}\). Then \(\varphi((P_{1})_{\mathrm{split}})\subseteq(P_{2})_{\mathrm{split}}\)._
Proof.: Consider \((x_{1},z_{1})\in(P_{1})_{\mathrm{split}}\) and \(\beta_{2}\in\mathbb{Z}^{p_{2}}\). We need to show that \(\varphi(x_{1},z_{1})\) is a convex combination of points \((x_{2}^{i},z_{2}^{i})\in P_{2}\) with \(\beta_{2}^{\top}z_{2}^{i}\) integral. Since \(\varphi\) is mixed-integer-compatible, the last \(p_{2}\) entries of \(\varphi(0,0)\) are integral, and so the linear map \(\psi:=\varphi-\varphi(0,0)\) is mixed-integer-compatible as well. By Lemma 5.2, \(\psi^{*}(0,\beta_{2})=(0,\beta_{1})\) for some \(\beta_{1}\in\mathbb{Z}^{p_{1}}\). Since \((x_{1},z_{1})\in(P_{1})_{\mathrm{split}}\), it is a convex combination of \((x_{1}^{i},z_{1}^{i})\in P_{1}\) with \(\beta_{1}^{\top}z_{1}^{i}\in\mathbb{Z}\). Write
\[(x_{2}^{i},z_{2}^{i}):=\varphi(x_{1}^{i},z_{1}^{i})=\psi(x_{1}^{i},z_{1}^{i})+ \varphi(0,0)\in P_{2}.\]
Then
\[\beta_{2}^{\top}z_{2}^{i}=(0,\beta_{2})^{\top}(x_{2}^{i},z_{2}^{i}) =(0,\beta_{2})^{\top}\psi(x_{1}^{i},z_{1}^{i})+(0,\beta_{2})^{\top }\varphi(0,0)\] \[=\psi^{*}(0,\beta_{2})^{\top}(x_{1}^{i},z_{1}^{i})+(0,\beta_{2})^ {\top}\varphi(0,0)\] \[=(0,\beta_{1})^{\top}(x_{1}^{i},z_{1}^{i})+(0,\beta_{2})^{\top} \varphi(0,0)\] \[=\beta_{1}^{\top}z_{1}^{i}+(0,\beta_{2})^{\top}\varphi(0,0)\] \[\in\mathbb{Z}.\]
As \(\varphi\) is affine and hence preserves convex combinations, \(\varphi(x_{1},z_{1})\) is a convex combination of the \((x_{2}^{i},z_{2}^{i})\in P_{2}\).
**Example 5.4**.: An example for a mixed-integer-compatible map is provided by the change of the cycle basis in the context of periodic timetabling. Let \(I=(G,T,\ell,u,w)\) be a PESP instance and let \(\Gamma,\Gamma^{\prime}\) be two cycle matrices of integral cycle bases of \(G\). As in Remark 2.9, there is a unimodular matrix \(U\) such that \(\Gamma^{\prime}=U\Gamma\). The map \(\varphi:(x,z)\mapsto(x,Uz)\) maps the fractional periodic timetabling polytope \(\mathcal{P}_{1}\) defined by \(\Gamma\) to the fractional periodic timetabling polytope \(\mathcal{P}_{2}\) defined by \(\Gamma^{\prime}\). The map \(\varphi\) is clearly linear and maps mixed-integer points to mixed-integer points, so that \(\varphi\) is mixed-integer-compatible by definition. We conclude that \(\varphi((\mathcal{P}_{1})_{\mathrm{split}})\subseteq(\mathcal{P}_{2})_{\mathrm{split}}\). Since \(U\) is unimodular, \(\varphi\) has a mixed-integer-compatible inverse, so that \(\varphi\) provides a "mixed-integer" isomorphism of \((\mathcal{P}_{1})_{\mathrm{split}}\) with \((\mathcal{P}_{2})_{\mathrm{split}}\).
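A toy numerical illustration (the \(2\times 2\) matrix \(U\) is a hypothetical example):

```python
import numpy as np

# A unimodular U (integral, det = +-1) and its inverse both map integral
# cycle offsets to integral ones, so (x, z) -> (x, Uz) is
# mixed-integer-compatible in both directions.
U = np.array([[1, 1], [0, 1]])
assert round(abs(np.linalg.det(U))) == 1           # unimodularity
U_inv = np.round(np.linalg.inv(U)).astype(int)     # again integral
z = np.array([3, -2])
print(U @ z, U_inv @ (U @ z))                      # [ 1 -2] [ 3 -2]
```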
### Split Closure of Cartesian Products
As a first application of mixed-integer-compatible maps, we prove that split closures are compatible with Cartesian products.
**Theorem 5.5**.: _Consider two mixed-integer sets_
\[S_{i}=\{(x,z)\in\mathbb{R}^{n_{i}}\times\mathbb{Z}^{p_{i}}\mid A_{C}^{i}x+A_{ I}^{i}z\leq b^{i}\},\quad i\in\{1,2\},\]
_and the associated polyhedra_
\[P_{i}:=\{(x,z)\in\mathbb{R}^{n_{i}}\times\mathbb{R}^{p_{i}}\mid A_{C}^{i}x+A_{ I}^{i}z\leq b^{i}\},\quad i\in\{1,2\}.\]
_Then \((P_{1}\times P_{2})_{\mathrm{split}}=(P_{1})_{\mathrm{split}}\times(P_{2})_ {\mathrm{split}}\)._
Proof.: We first prove \((P_{1})_{\text{split}}\times(P_{2})_{\text{split}}\subseteq(P_{1}\times P_{2})_{ \text{split}}\) using the characterization (7):
\[(P_{1})_{\text{split}}\times(P_{2})_{\text{split}}\] \[=\left(\bigcap_{\beta_{1}\in\mathbb{Z}^{p_{1}}}\operatorname{conv}(\{(x_{1},z_{1})\in P_{1}\mid\beta_{1}^{\top}z_{1}\in\mathbb{Z}\})\right)\times\left(\bigcap_{\beta_{2}\in\mathbb{Z}^{p_{2}}}\operatorname{conv}(\{(x_{2},z_{2})\in P_{2}\mid\beta_{2}^{\top}z_{2}\in\mathbb{Z}\})\right)\] \[=\bigcap_{\beta_{1}\in\mathbb{Z}^{p_{1}}}\bigcap_{\beta_{2}\in\mathbb{Z}^{p_{2}}}\left(\operatorname{conv}(\{(x_{1},z_{1})\in P_{1}\mid\beta_{1}^{\top}z_{1}\in\mathbb{Z}\})\times\operatorname{conv}(\{(x_{2},z_{2})\in P_{2}\mid\beta_{2}^{\top}z_{2}\in\mathbb{Z}\})\right)\] \[=\bigcap_{(\beta_{1},\beta_{2})\in\mathbb{Z}^{p_{1}}\times\mathbb{Z}^{p_{2}}}\operatorname{conv}(\{(x_{1},z_{1})\in P_{1}\mid\beta_{1}^{\top}z_{1}\in\mathbb{Z}\}\times\{(x_{2},z_{2})\in P_{2}\mid\beta_{2}^{\top}z_{2}\in\mathbb{Z}\})\] \[=\bigcap_{(\beta_{1},\beta_{2})\in\mathbb{Z}^{p_{1}}\times\mathbb{Z}^{p_{2}}}\operatorname{conv}(\{(x_{1},z_{1},x_{2},z_{2})\in P_{1}\times P_{2}\mid\beta_{1}^{\top}z_{1}\in\mathbb{Z},\beta_{2}^{\top}z_{2}\in\mathbb{Z}\})\] \[\subseteq\bigcap_{(\beta_{1},\beta_{2})\in\mathbb{Z}^{p_{1}}\times\mathbb{Z}^{p_{2}}}\operatorname{conv}(\{(x_{1},z_{1},x_{2},z_{2})\in P_{1}\times P_{2}\mid(\beta_{1},\beta_{2})^{\top}(z_{1},z_{2})\in\mathbb{Z}\})\] \[=(P_{1}\times P_{2})_{\text{split}}.\]
To show the reverse inclusion, we consider the natural projections \(\varphi_{i}:P_{1}\times P_{2}\to P_{i}\) for \(i\in\{1,2\}\). Both \(\varphi_{i}\) are mixed-integer-compatible, so that by Theorem 5.3, \(\varphi_{i}((P_{1}\times P_{2})_{\text{split}})\subseteq(P_{i})_{\text{split}}\). In particular, the map \((\varphi_{1},\varphi_{2})\), which is the identity map, maps \((P_{1}\times P_{2})_{\text{split}}\) into \((P_{1})_{\text{split}}\times(P_{2})_{\text{split}}\).
We now apply Theorem 5.5 to periodic timetabling. Consider for an arbitrary digraph \(G\) its decomposition into blocks. Since each cycle is part of a unique block, the cycle space of \(G\) decomposes into the direct sum of the cycle spaces of its blocks. This has the consequence that any cycle matrix \(\Gamma\) of \(G\) has a block structure as well, so that the fractional periodic timetabling polytope \(\mathcal{P}\) is the Cartesian product of the fractional periodic timetabling polytopes associated to the subinstances of each block.
**Theorem 5.6** (cf. N. Lindner & Liebchen, 2020).: _If \(G_{1},\ldots,G_{k}\) are the blocks of \(G\) and \(\mathcal{P}_{1},\ldots,\mathcal{P}_{k}\) are the fractional periodic timetabling polytopes of the subinstances of \(G_{1},\ldots,G_{k}\), respectively, then_
\[\mathcal{P}_{\text{split}}=(\mathcal{P}_{1})_{\text{split}}\times\cdots \times(\mathcal{P}_{k})_{\text{split}}.\]
_In particular, if \(G\) is a cactus graph, then \(\mathcal{P}_{\text{split}}=\mathcal{P}_{\text{I}}\)._
Proof.: By the above discussion, this is a direct consequence of Theorem 5.5. If \(G\) is a cactus graph, then each block satisfies \(\mu\leq 1\). It remains to apply Corollary 3.2.
### Incidence-Based vs. Cycle-Based Formulation
Recall from Remark 2.6 that the incidence-based formulation (2) of a PESP instance is identical to a particular cycle-based formulation (3) of an augmented instance, where the augmentation consists in successively adding arcs \(a\) with \(\ell_{a}=0\) and \(u_{a}=T-1\). Such arcs with \(u_{a}-\ell_{a}=T-1\) are sometimes called _free_ (e.g., Goerigk and Liebchen, 2017), as they do not impact the feasibility of a PESP instance. The augmentation procedure in Remark 2.6 hence decomposes as a sequence of \(|V|\) _simple free augmentations_, which we formally define as follows:
**Definition 5.7**.: Let \(I=(G,T,\ell,u,w)\) be a PESP instance. Let \(I^{\prime}=(G^{\prime},T,\ell^{\prime},u^{\prime},w^{\prime})\) be a PESP instance such that \(I\) arises from \(I^{\prime}\) by deleting a _free_ arc \(\overline{a}\), i.e., \(u^{\prime}_{\overline{a}}-\ell^{\prime}_{\overline{a}}=T-1\). We say that \(I^{\prime}\) is a _simple free augmentation_ of \(I\) by \(\overline{a}\).
We will first investigate a trivial case of a simple free augmentation \(I^{\prime}\) of \(I\) by \(\overline{a}\): If \(\overline{a}\) is a bridge, then \(\overline{a}\) constitutes a block of \(G^{\prime}\), so that we conclude by Theorem 5.6 that
\[\mathcal{P}^{\prime}_{\text{split}}=\mathcal{P}_{\text{split}}\times[\ell^{ \prime}_{\overline{a}},u^{\prime}_{\overline{a}}]_{\text{split}}=\mathcal{P} _{\text{split}}\times[\ell^{\prime}_{\overline{a}},u^{\prime}_{\overline{a}}], \tag{17}\]
where \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\) are the fractional periodic tension polytopes of \(I\) and \(I^{\prime}\), respectively. Since \(\overline{a}\) is a bridge, any cycle basis for \(G\) is a cycle basis for \(G^{\prime}\), so that the choice of any integral cycle basis yields a natural projection \(\mathcal{P}^{\prime}_{\text{split}}\to\mathcal{P}_{\text{split}},(x,x_{ \overline{a}},z)\mapsto(x,z)\), which is well-defined and surjective by (17). Thus any split inequality for \(I^{\prime}\) is trivially a split inequality for \(I\) and vice versa.
We will hence turn our interest to the more interesting case that \(\overline{a}\) is not a bridge. We start with an observation about cycle bases:
**Lemma 5.8**.: _Let \(I^{\prime}\) be a simple free augmentation of \(I\) by \(\overline{a}\) such that \(\overline{a}\) is not a bridge of \(G^{\prime}\). Then there is an integral cycle basis \(B\) of \(G\) and an oriented cycle \(\overline{\gamma}\) such that \(B^{\prime}:=B\cup\{\overline{\gamma}\}\) is an integral cycle basis of \(G^{\prime}\) and \(\overline{a}\in\overline{\gamma}\)._
Proof.: Since \(\overline{a}\) is not a bridge, \(G\) and \(G^{\prime}\) have the same set of nodes, so that any spanning tree of \(G\) is a spanning tree of \(G^{\prime}\). Hence, if \(B\) is any fundamental cycle basis of \(G\), we can augment \(B\) by the fundamental cycle \(\overline{\gamma}\) induced by \(\overline{a}\) in \(G^{\prime}\).
Choose cycle bases \(B\), \(B^{\prime}\) and an oriented cycle \(\overline{\gamma}\) as in Lemma 5.8. We assume that \(\mathcal{P}\) is defined using the cycle matrix \(\Gamma\) of \(B\), and that \(\mathcal{P}^{\prime}\) is defined using the cycle matrix \(\Gamma^{\prime}\) of \(B^{\prime}\), so that \(\Gamma^{\prime}\) arises from \(\Gamma\) by appending the row \(\overline{\gamma}^{\top}\).
**Lemma 5.9**.: _Let \(I^{\prime}\) be a simple free augmentation of \(I\) by \(\overline{a}\) such that \(\overline{a}\) is not a bridge of \(G^{\prime}\). The natural projection \(\varphi:\mathcal{P}^{\prime}\to\mathcal{P},(x,x_{\overline{a}},z,z_{ \overline{\gamma}})\mapsto(x,z)\) is mixed-integer-compatible. In particular, \(\varphi(\mathcal{P}^{\prime}_{\text{split}})\subseteq\mathcal{P}_{\text{ split}}\)._
Proof.: The map \(\varphi\) is linear and maps mixed-integer points to mixed-integer points. That \(\varphi\) descends to split closures follows from Theorem 5.3.
In view of Lemma 5.9, the split closure of the simple free augmentation is hence never worse, but could provide a potentially tighter relaxation by additional "projected split inequalities". We now show that this is not the case.
**Lemma 5.10**.: _Let \(\varphi:\mathcal{P}^{\prime}\to\mathcal{P}\) denote the natural projection as in Lemma 5.9. Then \(\varphi(\mathcal{P}^{\prime}_{\text{split}})=\mathcal{P}_{\text{split}}\)._
Proof.: Since \(I^{\prime}\) is an augmentation of \(I\) by \(\overline{a}\), we note at first, using the interpretation of split inequalities as flip inequalities from Theorem 3.1, that the set of defining inequalities of \(\mathcal{P}^{\prime}_{\text{split}}\) can be partitioned into the set of defining inequalities of \(\mathcal{P}_{\text{split}}\), which cannot contain the variable \(x_{\overline{a}}\), and a remaining set of inequalities, which do all contain \(x_{\overline{a}}\). The image of \(\varphi(\mathcal{P}^{\prime}_{\text{split}})\) can be described by Fourier-Motzkin elimination of the variable \(x_{\overline{a}}\). It is therefore sufficient to show that all inequalities generated by the Fourier-Motzkin procedure are redundant for \(\mathcal{P}_{\text{split}}\). Since the redundancy is clear for those inequalities that do not contain \(x_{\overline{a}}\), we will hence consider only the remaining inequalities where \(x_{\overline{a}}\) has a non-zero coefficient.
Among the defining inequalities of \(\mathcal{P}^{\prime}_{\text{split}}\), \(x_{\overline{a}}\) occurs precisely in the bound inequalities \(x_{\overline{a}}\geq\ell^{\prime}_{\overline{a}}\) and \(x_{\overline{a}}\leq u^{\prime}_{\overline{a}}\), and in the flip inequalities of simple cycles containing \(\overline{a}\). Fourier-Motzkin considers pairs of these inequalities, one of them giving a lower bound, and the other an upper bound on \(x_{\overline{a}}\). That is, the following types of pairs have to be considered:
1. \(x_{\overline{a}}\geq\ell^{\prime}_{\overline{a}}\) and \(x_{\overline{a}}\leq u^{\prime}_{\overline{a}}\),
2. \(x_{\overline{a}}\geq\ell_{\overline{a}}^{\prime}\) and a flip inequality for \((\gamma,F)\) with \(\overline{a}\in\gamma\) and \(\overline{a}\in F\),
3. \(x_{\overline{a}}\leq u_{\overline{a}}^{\prime}\) and a flip inequality for \((\gamma,F)\) with \(\overline{a}\in\gamma\) and \(\overline{a}\notin F\),
4. two flip inequalities for \((\gamma,F_{\gamma})\) and \((\delta,F_{\delta})\) with \(\overline{a}\in\gamma\), \(\overline{a}\notin F_{\gamma}\), \(\overline{a}\in\delta\), \(\overline{a}\in F_{\delta}\).
In all those flip inequalities, we can assume that the cycles are simple and that the parameter \(\alpha\) is at least \(1\). Moreover, using the symmetry in Remark 2.13, we can without loss of generality fix the direction of \(\overline{a}\) as forward or backward, replacing \(\gamma\) by \(-\gamma\) if necessary. Let us proceed with Fourier-Motzkin:
1. Elimination yields \(\ell_{\overline{a}}^{\prime}\leq u_{\overline{a}}^{\prime}\), which is trivially true.
2. Assume that \(\overline{a}\) is forward in \(\gamma\). Then we can write the flip inequality (6) for \((\gamma,F)\) with \(\alpha:=\alpha_{\gamma,F}\geq 1\) as \[\alpha(u_{\overline{a}}^{\prime}-x_{\overline{a}})+f(x)\geq\alpha(T-\alpha),\] where \(f(x)\geq 0\) for all \((x,z)\in\mathcal{P}\). Fourier-Motzkin elimination with \(x_{\overline{a}}\geq\ell_{\overline{a}}^{\prime}\) yields \[\alpha u_{\overline{a}}^{\prime}+f(x)\geq\alpha(T-\alpha)+\alpha\ell_{ \overline{a}}^{\prime},\] or equivalently, recalling that \(u_{\overline{a}}^{\prime}-\ell_{\overline{a}}^{\prime}=T-1\), \[f(x)\geq\alpha(T-\alpha-u_{\overline{a}}^{\prime}+\ell_{\overline{a}}^{\prime })=\alpha(1-\alpha),\] but this is redundant for \(\mathcal{P}_{\mathrm{split}}\), since \((x,z)\in\mathcal{P}\) and \(\alpha\geq 1\) imply \(f(x)\geq 0\geq\alpha(1-\alpha)\).
3. is analogous to (2).
4. This is the most tedious part. We assume without loss of generality that \(\overline{a}\) is backward in \(\gamma\) and forward in \(\delta\). We will show that the Fourier-Motzkin inequality is valid for all points \((x,z)\in\mathcal{P}\) with \((\gamma+\delta)^{\top}x\in T\mathbb{Z}\). Since \(\gamma+\delta\) is an element of the cycle space \(\mathcal{C}\) of \(G\), the Fourier-Motzkin inequality is hence valid for the convex hull of those points and in particular for \(\mathcal{P}_{\mathrm{split}}\) by virtue of (7). We first write down the flip inequalities, omitting \(F_{\gamma}\) and \(F_{\delta}\) in the subscripts of \(\alpha\): \[\alpha_{\gamma}(x_{\overline{a}}-\ell_{\overline{a}}^{\prime})+f (x) \geq\alpha_{\gamma}(T-\alpha_{\gamma}),\] \[\alpha_{\delta}(u_{\overline{a}}^{\prime}-x_{\overline{a}})+g(x) \geq\alpha_{\delta}(T-\alpha_{\delta}),\] where \(f(x),g(x)\geq 0\) for all \((x,z)\in\mathcal{P}\). Elimination produces \[\alpha_{\delta}f(x)+\alpha_{\gamma}g(x)\geq\alpha_{\gamma}\alpha_{\delta}(2T- \alpha_{\gamma}-\alpha_{\delta}-u_{\overline{a}}^{\prime}+\ell_{\overline{a} }^{\prime})=\alpha_{\gamma}\alpha_{\delta}(T+1-\alpha_{\gamma}-\alpha_{\delta }).\] (18) The inequality (18) is trivially redundant if \(\alpha_{\gamma}+\alpha_{\delta}\geq T+1\). We hence assume from now on \(\alpha_{\gamma}+\alpha_{\delta}\leq T\). Let \((x,z)\in\mathcal{P}\) with \((\gamma+\delta)^{\top}x\in T\mathbb{Z}\). Then \[\sum_{a\in A\setminus F_{\gamma}}\gamma_{a}x_{a}+\sum_{a\in F_{\gamma}}\gamma_ {a}x_{a}+\sum_{a\in A\setminus F_{\delta}}\delta_{a}x_{a}+\sum_{a\in F_{ \delta}}\delta_{a}x_{a}\equiv 0\mod T.\] Since \(\gamma_{\overline{a}}+\delta_{\overline{a}}=0\), \(\overline{a}\notin F_{\gamma}\), \(\overline{a}\in F_{\delta}\), this implies \[\sum_{a\in A\setminus(F_{\gamma}\cup\{\overline{a}\})}\gamma_{a}x_{a}+\sum_{a \in F_{\gamma}}\gamma_{a}x_{a}+\sum_{a\in A\setminus F_{\delta}}\delta_{a}x_{a }+\sum_{a\in F_{\delta}\setminus\{\overline{a}\}}\delta_{a}x_{a}\equiv 0\mod T,\]
so that, using the definition of \(\alpha_{\gamma},\alpha_{\delta}\) (cf. Theorem 2.12),
\[\sum_{a\in A\setminus(F_{\gamma}\cup\{\overline{a}\})}\gamma_{a}(x_{a}-\ell_{a})-\sum_{a\in F_{\gamma}}\gamma_{a}(u_{a}-x_{a})+\sum_{a\in A\setminus F_{\delta}}\delta_{a}(x_{a}-\ell_{a})-\sum_{a\in F_{\delta}\setminus\{\overline{a}\}}\delta_{a}(u_{a}-x_{a})\] \[\equiv-\sum_{a\in A\setminus(F_{\gamma}\cup\{\overline{a}\})}\gamma_{a}\ell_{a}-\sum_{a\in F_{\gamma}}\gamma_{a}u_{a}-\sum_{a\in A\setminus F_{\delta}}\delta_{a}\ell_{a}-\sum_{a\in F_{\delta}\setminus\{\overline{a}\}}\delta_{a}u_{a}\mod T\] \[\equiv\alpha_{\gamma}+\alpha_{\delta}+u_{\overline{a}}^{\prime}-\ell_{\overline{a}}^{\prime}\mod T\] \[\equiv\alpha_{\gamma}+\alpha_{\delta}-1\mod T.\]
As we can assume \(\alpha_{\gamma},\alpha_{\delta}\geq 1\), we have that \(\alpha:=\alpha_{\gamma}+\alpha_{\delta}-1\geq 0\). This implies that
\[D:=\sum_{a\in A\setminus(F_{\gamma}\cup\{\overline{a}\})}\gamma_{a}(x_{a}-\ell_{a})-\sum_{a\in F_{\gamma}}\gamma_{a}(u_{a}-x_{a})+\sum_{a\in A\setminus F_{\delta}}\delta_{a}(x_{a}-\ell_{a})-\sum_{a\in F_{\delta}\setminus\{\overline{a}\}}\delta_{a}(u_{a}-x_{a})\]
is either \(\leq\alpha-T\) (a) or \(\geq\alpha\) (b). Before showing that (18) is redundant in both cases, we write down the left-hand side of (18) explicitly:
\[\begin{split}&\alpha_{\delta}f(x)+\alpha_{\gamma}g(x)\\ &=\alpha_{\delta}(T-\alpha_{\gamma})\sum_{a\in A\setminus(F_{\gamma}\cup\{\overline{a}\}),\gamma_{a}=1}(x_{a}-\ell_{a})+\alpha_{\delta}\alpha_{\gamma}\sum_{a\in A\setminus(F_{\gamma}\cup\{\overline{a}\}),\gamma_{a}=-1}(x_{a}-\ell_{a})\\ &+\alpha_{\delta}\alpha_{\gamma}\sum_{a\in F_{\gamma},\gamma_{a}=1}(u_{a}-x_{a})+\alpha_{\delta}(T-\alpha_{\gamma})\sum_{a\in F_{\gamma},\gamma_{a}=-1}(u_{a}-x_{a})\\ &+\alpha_{\gamma}(T-\alpha_{\delta})\sum_{a\in A\setminus F_{\delta},\delta_{a}=1}(x_{a}-\ell_{a})+\alpha_{\gamma}\alpha_{\delta}\sum_{a\in A\setminus F_{\delta},\delta_{a}=-1}(x_{a}-\ell_{a})\\ &+\alpha_{\gamma}\alpha_{\delta}\sum_{a\in F_{\delta}\setminus\{\overline{a}\},\delta_{a}=1}(u_{a}-x_{a})+\alpha_{\gamma}(T-\alpha_{\delta})\sum_{a\in F_{\delta}\setminus\{\overline{a}\},\delta_{a}=-1}(u_{a}-x_{a}).\end{split} \tag{19}\]
1. Expanding \(\alpha_{\delta}(T-\alpha_{\gamma})\) and \(\alpha_{\gamma}(T-\alpha_{\delta})\) in (19), and then bounding all summands with \(T\) as a factor by \(0\) from below, we obtain \[\alpha_{\delta}f(x)+\alpha_{\gamma}g(x)\geq-\alpha_{\gamma}\alpha_{\delta}D\geq-\alpha_{\gamma}\alpha_{\delta}(\alpha-T)=\alpha_{\gamma}\alpha_{\delta}(T+1-\alpha_{\gamma}-\alpha_{\delta}).\] Hence (18) holds in the case that \(D\leq\alpha-T\).
2. Let \(\nu:=\min(\alpha_{\gamma},\alpha_{\delta})\). Expanding \(\alpha_{\delta}(T-\alpha_{\gamma})\) and \(\alpha_{\gamma}(T-\alpha_{\delta})\) in (19), bounding \(T\alpha_{\gamma}\), \(T\alpha_{\delta}\) from below by \(T\nu\), we find \[\alpha_{\delta}f(x)+\alpha_{\gamma}g(x)\geq(T\nu-\alpha_{\gamma}\alpha_{\delta})D.\] Since \(\nu\) is one of \(\alpha_{\gamma},\alpha_{\delta}\) and \(\alpha_{\gamma},\alpha_{\delta}\leq T-1\), we have \(T\nu-\alpha_{\gamma}\alpha_{\delta}\geq 0\). This implies with \(D\geq\alpha\) that \[\alpha_{\delta}f(x)+\alpha_{\gamma}g(x)\] \[\geq(T\nu-\alpha_{\gamma}\alpha_{\delta})\alpha\] \[=(T\nu-\alpha_{\gamma}\alpha_{\delta})(\alpha_{\gamma}+\alpha_{\delta}-1)\] \[=T\nu(\alpha_{\gamma}+\alpha_{\delta}-1)+\alpha_{\gamma}\alpha_{\delta}(1-\alpha_{\gamma}-\alpha_{\delta})\] \[=T\nu(\alpha_{\gamma}+\alpha_{\delta}-1)-T\alpha_{\gamma}\alpha_{\delta}+\alpha_{\gamma}\alpha_{\delta}(T+1-\alpha_{\gamma}-\alpha_{\delta}).\] It remains to show that \(T\nu(\alpha_{\gamma}+\alpha_{\delta}-1)-T\alpha_{\gamma}\alpha_{\delta}\geq 0\). This is true since \(\nu\) is one of \(\alpha_{\gamma},\alpha_{\delta}\geq 1\): if \(\nu=\alpha_{\gamma}\), then \(\nu(\alpha_{\gamma}+\alpha_{\delta}-1)-\alpha_{\gamma}\alpha_{\delta}=\alpha_{\gamma}(\alpha_{\gamma}-1)\geq 0\), and analogously for \(\nu=\alpha_{\delta}\).
We conclude that the image \(\varphi(\mathcal{P}^{\prime}_{\text{split}})\) is fully described by the flip inequalities of cycles not containing \(\overline{a}\) and the variable bounds for all arcs except \(\overline{a}\). Hence \(\varphi(\mathcal{P}^{\prime}_{\text{split}})=\mathcal{P}_{\text{split}}\).
As a corollary to Lemma 5.10, we obtain the following result.
**Theorem 5.11**.: _Let \(I\) and \(I^{\prime}\) be PESP instances with fractional periodic timetabling polyhedra \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\), respectively. Suppose that \(I^{\prime}\) arises from \(I\) by a sequence of simple free augmentations. If \(\varphi:\mathcal{P}^{\prime}\to\mathcal{P}\) denotes the natural projection, then \(\varphi(\mathcal{P}^{\prime}_{\text{split}})=\mathcal{P}_{\text{split}}\)._
In particular, recalling Remark 2.6, the incidence-based formulation (2) is not stronger than the cycle-based formulation (3) in terms of split closures. Consequently, it is of no use to develop methods which augment an instance by a free arc, obtain a flip/split inequality and project down again, as this will not lead to information which cannot already be obtained from the split closure of the original instance.
### Binarization by Subdivision
A reformulation of a MIP with general integer variables into one with binary variables can exhibit stronger split closures (Dash et al., 2018) or lift-and-project closures (Aprile et al., 2021). For the application of periodic timetabling, there is a combinatorial binarization method: Let \(I=(G,T,\ell,u,w)\) be a PESP instance, \(G=(V,A)\). We assume that \(0\leq\ell<T\) and \(\ell\leq u<\ell+T\) by preprocessing (see Remark 2.3), so that the integer periodic offset variables \(p_{a}\) in the incidence-based formulation (2) of PESP can only take values in \(\{0,1,2\}\). Moreover, if \(u_{a}\leq T\) for some \(a\in A\), then \(p_{a}\in\{0,1\}\) for any integer feasible solution \((x,\pi,p)\) (Liebchen, 2006).
**Definition 5.12**.: Let \(I^{\prime}=(G^{\prime},T,\ell^{\prime},u^{\prime},w^{\prime})\) be a PESP instance that arises from \(I\) by subdividing an arc \(\overline{a}\in A\) with \(\ell_{\overline{a}}<u_{\overline{a}}\) into two new arcs \(a_{1},a_{2}\) such that:
\[0\leq\ell^{\prime}_{a_{1}} \leq u^{\prime}_{a_{1}},\] \[0\leq\ell^{\prime}_{a_{2}} \leq u^{\prime}_{a_{2}},\] \[\ell^{\prime}_{a_{1}}+\ell^{\prime}_{a_{2}} =\ell_{\overline{a}},\] \[u^{\prime}_{a_{1}}+u^{\prime}_{a_{2}} =u_{\overline{a}},\] \[w^{\prime}_{a_{1}}=w^{\prime}_{a_{2}} =w_{\overline{a}}.\]
We call \(I^{\prime}\) a _simple subdivision_ of \(I\) at \(\overline{a}\).
Observe that if the bounds on the arc \(\overline{a}\) are such that \(u_{\overline{a}}>T\), one can always construct a simple subdivision \(I^{\prime}\) of \(I\) at \(\overline{a}\) such that \(u^{\prime}_{a_{i}}-\ell^{\prime}_{a_{i}}>0\) and \(u^{\prime}_{a_{i}}\leq T\) for \(i\in\{1,2\}\), due to the assumption that \(\ell<T\) and \(u<\ell+T\). As a result, for the instance \(I^{\prime}\) arising from subdividing each arc \(\overline{a}\) with \(u_{\overline{a}}>T\) as above, the incidence-based MIP formulation (2) will then have exclusively binary variables.
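One concrete choice of bounds realizing such a subdivision (ours; any choice satisfying Definition 5.12 works):

```python
def subdivide(ell_a, u_a, T):
    """One valid choice of bounds for a simple subdivision of an arc with
    u_a > T: both new arcs get positive span and upper bound at most T."""
    assert 0 <= ell_a < T and T < u_a < ell_a + T
    bounds_a1 = (ell_a, T)       # lower bounds sum to ell_a, upper bounds to u_a
    bounds_a2 = (0, u_a - T)
    return bounds_a1, bounds_a2

print(subdivide(5, 25, 20))      # ((5, 20), (0, 5))
```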
**Example 5.13**.: Figure 4 shows the instance obtained from the instance \(I\) from Figure 1 by subdividing every arc with \(u_{a}>T\).
Let \(I^{\prime}\) be a simple subdivision of a PESP instance \(I\) at \(\overline{a}\), introducing new arcs \(a_{1}\) and \(a_{2}\). The cycle spaces of \(G\) and \(G^{\prime}\) are isomorphic: If \(\gamma\) is an element of the cycle space of \(G\), then \(\gamma^{\prime}\) with
\[\gamma^{\prime}_{a}:=\begin{cases}\gamma_{\overline{a}}&\text{if }a\in\{a_{1},a_{2}\},\\ \gamma_{a}&\text{if }a\notin\{a_{1},a_{2}\},\end{cases}\]
defines an element of the cycle space of \(G^{\prime}\), and the whole cycle space of \(G^{\prime}\) arises this way. We can therefore associate to an integral cycle basis \(B=\{\gamma_{1},\ldots,\gamma_{\mu}\}\) of \(G\) the integral cycle basis \(B^{\prime}=\{\gamma^{\prime}_{1},\ldots,\gamma^{\prime}_{\mu}\}\). Then any cycle offset \(z\) in (3) w.r.t. \(B\) defines a cycle offset \(z^{\prime}\) w.r.t. \(B^{\prime}\) by \(z^{\prime}_{\gamma^{\prime}}:=z_{\gamma}\), so that cycle offsets are essentially the same. We will use \(\Gamma\) and \(\Gamma^{\prime}\) to define \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\), the fractional periodic tension polytopes of \(I\) and \(I^{\prime}\), respectively.
**Lemma 5.14**.: _Consider a simple subdivision \(I^{\prime}\) of \(I\) at an arc \(\overline{a}\) with notation as above._
1. _The map_ \(\rho:\mathcal{P}^{\prime}\to\mathcal{P},(x,x_{a_{1}},x_{a_{2}},z)\mapsto(x,x _{a_{1}}+x_{a_{2}},z)\) _is well-defined and mixed-integer-compatible._
2. _The map_ \(s:\mathcal{P}\to\mathcal{P}^{\prime},(x,x_{\overline{a}},z)\mapsto\left(x, \ell^{\prime}_{a_{1}}+\frac{u^{\prime}_{a_{1}}-\ell^{\prime}_{a_{1}}}{u_{ \overline{a}}-\ell_{\overline{a}}}(x_{\overline{a}}-\ell_{\overline{a}}), \ell^{\prime}_{a_{2}}+\frac{u^{\prime}_{a_{2}}-\ell^{\prime}_{a_{2}}}{u_{ \overline{a}}-\ell_{\overline{a}}}(x_{\overline{a}}-\ell_{\overline{a}}),z\right)\) _is well-defined and mixed-integer-compatible._
3. \(\rho\circ s:\mathcal{P}\to\mathcal{P}\) _is the identity map._
4. \(\rho(\mathcal{P}^{\prime}_{\text{split}})=\mathcal{P}_{\text{split}}\)._
Proof.:
1. The map is well-defined: The hypothesis \(\ell^{\prime}_{a_{1}}+\ell^{\prime}_{a_{2}}=\ell_{\overline{a}}\) and \(u^{\prime}_{a_{1}}+u^{\prime}_{a_{2}}=u_{\overline{a}}\) implies that \(\ell_{\overline{a}}\leq x_{a_{1}}+x_{a_{2}}\leq u_{\overline{a}}\) holds for all \((x,x_{a_{1}},x_{a_{2}},z)\in\mathcal{P}^{\prime}\). As \(\rho\) is linear and does not affect the integrality of \(z\), it is mixed-integer-compatible.
2. The map is well defined: Due to the assumption of subdividing arcs with \(u_{\overline{a}}>T\) only, we have \(u_{\overline{a}}-\ell_{\overline{a}}>0\) and \(u^{\prime}_{a_{1}}-\ell^{\prime}_{a_{1}}\geq 0\). Since \(x_{\overline{a}}-\ell_{\overline{a}}\geq 0\) for all \((x,x_{\overline{a}},z)\in\mathcal{P}\), we conclude \[\ell^{\prime}_{a_{1}}=\ell^{\prime}_{a_{1}}+\frac{u^{\prime}_{a_{1}}-\ell^{ \prime}_{a_{1}}}{u_{\overline{a}}-\ell_{\overline{a}}}(\ell_{\overline{a}}- \ell_{\overline{a}})\leq\ell^{\prime}_{a_{1}}+\frac{u^{\prime}_{a_{1}}-\ell^{ \prime}_{a_{1}}}{u_{\overline{a}}-\ell_{\overline{a}}}(x_{\overline{a}}- \ell_{\overline{a}})\leq\ell^{\prime}_{a_{1}}+\frac{u^{\prime}_{a_{1}}-\ell^{ \prime}_{a_{1}}}{u_{\overline{a}}-\ell_{\overline{a}}}(u_{\overline{a}}-\ell _{\overline{a}})=u^{\prime}_{a_{1}}.\] The argument for the \(x_{a_{2}}\) entry is analogous. We note that \(s\) is affine and maps point with integral \(z\) to points with integral \(z\), so that \(s\) is mixed-integer-compatible.
3. This follows since \[\ell^{\prime}_{a_{1}}+\frac{u^{\prime}_{a_{1}}-\ell^{\prime}_{a_{1}}}{u_{\overline{a}}-\ell_{\overline{a}}}(x_{\overline{a}}-\ell_{\overline{a}})+\ell^{\prime}_{a_{2}}+\frac{u^{\prime}_{a_{2}}-\ell^{\prime}_{a_{2}}}{u_{\overline{a}}-\ell_{\overline{a}}}(x_{\overline{a}}-\ell_{\overline{a}})=\ell_{\overline{a}}+\frac{u_{\overline{a}}-\ell_{\overline{a}}}{u_{\overline{a}}-\ell_{\overline{a}}}(x_{\overline{a}}-\ell_{\overline{a}})=x_{\overline{a}}.\]
4. Since \(\rho\) and \(s\) are mixed-integer-compatible, \(\rho(\mathcal{P}^{\prime}_{\text{split}})\subseteq\mathcal{P}_{\text{split}}\) and \(s(\mathcal{P}_{\text{split}})\subseteq\mathcal{P}^{\prime}_{\text{split}}\). The composition \(\rho|_{\mathcal{P}^{\prime}_{\text{split}}}\circ s|_{\mathcal{P}_{\text{split}}}\) of the restrictions to split closures is hence well-defined, and by (3), it is the identity map on \(\mathcal{P}_{\text{split}}\). We conclude that \(\rho|_{\mathcal{P}^{\prime}_{\text{split}}}\) is surjective.

Figure 4: Subdivision of the instance in Figure 1 obtained from two simple subdivisions such that \(u_{a}\leq T\) for all arcs \(a\).
A repeated application of Lemma 5.14 together with Theorem 5.11 yields:
**Theorem 5.15**.: _Let \(I\) and \(I^{\prime}\) be PESP instances with fractional periodic timetabling polyhedra \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\), respectively. Suppose that \(I^{\prime}\) arises from \(I\) by a sequence of simple subdivisions and simple free augmentations. If \(\psi:\mathcal{P}^{\prime}\to\mathcal{P}\) denotes the composition of the summation maps \(\rho\) in Lemma 5.14 (1) for the subdivisions and the projection maps \(\varphi\) in Lemma 5.9 for the free augmentations, then \(\psi(\mathcal{P}^{\prime}_{\text{split}})=\mathcal{P}_{\text{split}}\)._
In particular, when we binarize the MIP (3) by first performing simple subdivisions and then move to the formulation (2), we gain no further insight about split inequalities.
## 6 Computational Experiments
We want to assess how useful the split closure is for obtaining dual bounds for PESP in practice. To that end, we introduce a procedure that first exploits Theorem 4.7 in a heuristic way and then proceeds to find cuts systematically once the heuristic fails, so that we optimize over the entire split closure by means of Theorem 4.5. We will also examine the performance of the heuristic in comparison to the systematic exploration.
### Separation Procedure
Our goal is to optimize over the entire split closure. We do so with our custom separator, which proceeds as illustrated by the flowchart in Figure 5: At first, it tries to heuristically generate violated flip inequalities (highlighted in blue in the chart): We compute a minimum spanning tree with respect to the periodic slack \(x-\ell\) of the current LP solution \((x,z)\in\mathcal{P}\), and determine a most violated flip inequality for the fundamental cycles of that tree by Theorem 4.7. When no more heuristic cuts are found, the parametric IP (14) as in Theorem 4.5 is solved. During the solution process of the IP, a callback retrieves intermediate incumbent solutions and generates the corresponding cuts. The procedure terminates when no more violated cuts can be found, or the time limit is hit. Since the number of cuts found by the heuristic is rather large in the beginning, we apply the filtering mechanisms of SCIP to detect effective cuts. However, cuts found by the parametric IP will always be enforced, so that the whole procedure is correct up to numerical tolerances: If the procedure terminates because no more violated cuts can be detected, then the optimal solution over the split closure has been found.
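A schematic, runnable skeleton of this loop (our reconstruction from the flowchart and the description above; the concrete components are injected as callables and are not specified here):

```python
def split_closure_loop(T, lp_solve, heuristic_cuts, ip_cut, add_cuts, time_left):
    """Skeleton of the Figure 5 loop. lp_solve() returns the current LP
    optimum, heuristic_cuts(x) the spanning-tree/Theorem 4.7 cuts, and
    ip_cut(x, alpha) a cut from the parametric IP (14), or None."""
    alpha, found_in_sweep = T // 2, False
    while time_left():
        x = lp_solve()
        cuts = heuristic_cuts(x)      # heuristic stage (blue in Figure 5)
        if cuts:
            add_cuts(cuts)
            continue
        cut = ip_cut(x, alpha)        # systematic stage: IP (14)
        if cut is not None:
            add_cuts([cut])
            found_in_sweep = True
        if alpha >= 2:
            alpha -= 1                # "decrease alpha"
        elif found_in_sweep:
            alpha, found_in_sweep = T // 2, False
        else:
            return "optimal over the split closure"
    return "time limit reached"
```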
### Methodology
To conduct our computational experiments, we use the benchmark library PESPlib (Goerigk, 2022), whose instances are derived from real-world scenarios. Although significant progress has been made in the past, no instance could be solved to proven optimality to date.
Figure 5: Flowchart of the split cut generation procedure. "Decrease \(\alpha\)" means to set \(\alpha:=\alpha-1\) if \(\alpha\geq 2\) and \(\alpha:=\lfloor T/2\rfloor\) otherwise. We start with \(\alpha=\lfloor T/2\rfloor\), as this is likely to produce cuts with large violation.

Since the PESPlib instances are computationally very hard, we consider not only the full instances, but also two subinstances per instance whose cyclomatic number \(\mu\) has been restricted to \(25\) and \(100\), respectively. Note that \(\mu\) is the number of integer variables in (3). The restriction procedure for an instance \((G,T,\ell,u,w)\) works by iteratively removing arcs, deleting in each step one arc \(a\) with highest span \(u_{a}-\ell_{a}\) and breaking ties by preferring lowest weight \(w_{a}\) (cf. Goerigk and Liebchen, 2017). In contrast to the full PESPlib instances, these restricted variants can be solved to optimality within a reasonable amount of time.
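A sketch of this restriction procedure (assuming, for simplicity, that the graph stays connected and every deleted arc lies on a cycle, so that \(\mu\) drops by one per deletion):

```python
def restrict_to_mu(arcs, ell, u, w, num_nodes, mu_target):
    """Iteratively delete the arc with the largest span u_a - ell_a,
    breaking ties by smallest weight w_a, until mu reaches mu_target."""
    arcs = list(arcs)
    mu = len(arcs) - num_nodes + 1            # mu of a connected graph
    while mu > mu_target:
        worst = max(arcs, key=lambda a: (u[a] - ell[a], -w[a]))
        arcs.remove(worst)
        mu -= 1
    return arcs
```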
We first preprocess each instance, so that in particular the assumptions as in Remark 2.3 hold. For each instance \(I=(G,T,\ell,u,w)\), we consider the cycle-based formulation (3) using an integral cycle basis \(B\) that minimizes \(\sum_{\gamma\in B}\sum_{a\in\gamma}(u_{a}-\ell_{a})\). This choice of cycle basis is motivated by its good performance for computing dual bounds (Borndörfer, Lindner, & Roth, 2020; Masing, Lindner, & Ebert, 2023). By Theorem 5.11, the cycle-based formulation is not weaker than the incidence-based formulation, and by Remark 2.6, it is more compact. We then invoke the branch-cut-and-price framework SCIP (Achterberg, 2009). The advantage of using SCIP is that it is highly customizable and we can disable everything that does not come from split cuts: We disable the built-in presolving, branching, heuristics, propagators, and separators, and merely call a custom separation callback during the cutting loop at the root node.
In all experiments, we use SCIP 8.0.3 (Bestuzheva et al., 2021) with Gurobi 9.5.2 (Gurobi Optimization, LLC, 2023) as LP solver. We also use Gurobi to solve the parametric IP from Theorem 4.5. Gurobi is allowed to use 6 threads on an Intel Xeon E3-1270 v6 CPU running at 3.8 GHz with 32 GB RAM. The time limit has been set to 4 hours wall time for each instance.
### Results
#### 6.3.1 Restriction to \(\mu=25\)
Table 1 shows the results for the restrictions of the PESPlib instances to the cyclomatic number \(\mu=25\). For all but one instance, the cut generation procedure of Section 6.1 terminates within 22 minutes; only R1L1v hits the time limit due to a hard parametric IP (14). Optimizing over the split closure is exact for R4L1 and R4L4v, but R4L4v is trivial in the sense that \(x=\ell\) is an optimal solution. The average relative optimality gap with respect to the optimal objective value in terms of weighted slack \(w^{\top}(x-\ell)\) and the best bound obtained by split cuts, taken over all 22 instances, is 6.61 %.
#### 6.3.2 Restriction to \(\mu=100\)
The results for the restriction to \(\mu=100\) are summarized in Table 2. Again, we can determine the optimal solution of (3) for all these restricted instances. The cut generation procedure of Section 6.1 terminates within the time limit for 20 out of 22 instances. R4L4v is again almost trivial to solve, because two cuts suffice to produce an integral solution. The second smallest gap is at R1L1v, although the time limit is hit. The average optimality gap is 13.37 %, which is about twice as much as in the case \(\mu=25\).
#### 6.3.3 Full instances
Finally, the results for the full PESPlib instances are given in Table 3 in comparison to the best known primal bounds and in Table 4 in comparison to the best known dual bounds. All instances hit the time limit. Compared to the restricted instances, relatively few cuts are generated by the IP (14), which is due both to the large supply of heuristically generated cuts and to the difficulty of the IP. The time limit is not sufficient to unfold the power of the IP; on the other hand, increasing the time limit to 8 or 24 hours empirically produced only marginal improvements. This effect is also illustrated in Figure 6: The plot shows an exemplary progression of the dual bound and the number of applied cuts for the instance R2L1 with a logarithmic time axis. The heuristic separation procedure finds no more cuts for the first time after roughly 45 minutes (about \(10^{3.43}\) seconds), and then the parametric IP takes over, causing a sudden and persisting drop in performance.
\begin{table}
\begin{tabular}{l r r r r r r r} \hline Instance & \(\mu\) & Opt. Val. (\(\mathcal{P}_{\mathrm{I}}\)) & Dual Bd. (\(\mathcal{P}_{\mathrm{split}}\)) & Gap [\%] & Cuts & IP Cuts & Time [s] \\ \hline BL1 & 25 & 479 501 & 455 492 & 5.01 & 114 & 33 & 98 \\ BL2 & 25 & 582 203 & 529 247 & 9.10 & 128 & 32 & 116 \\ BL3 & 25 & 614 544 & 513 344 & 16.47 & 122 & 28 & 197 \\ BL4 & 25 & 581 688 & 507 168 & 12.81 & 176 & 65 & 106 \\ \hline R1L1 & 25 & 1 469 763 & 1 314 105 & 10.59 & 284 & 123 & 747 \\ R1L2 & 25 & 1 271 066 & 1 235 774 & 2.78 & 226 & 96 & 857 \\ R1L3 & 25 & 1 704 349 & 1 693 441 & 0.64 & 238 & 114 & 1 281 \\ R1L4 & 25 & 1 543 182 & 1 429 795 & 7.35 & 294 & 118 & 936 \\ \hline R2L1 & 25 & 2 598 725 & 2 171 855 & 16.43 & 212 & 83 & 255 \\ R2L2 & 25 & 2 726 109 & 2 471 181 & 9.35 & 238 & 75 & 335 \\ R2L3 & 25 & 1 698 794 & 1 661 074 & 2.22 & 116 & 12 & 91 \\ R2L4 & 25 & 2 417 447 & 2 325 110 & 3.82 & 244 & 64 & 119 \\ \hline R3L1 & 25 & 1 110 721 & 1 055 499 & 4.97 & 170 & 83 & 513 \\ R3L2 & 25 & 1 283 884 & 1 148 551 & 10.54 & 152 & 67 & 201 \\ R3L3 & 25 & 1 617 501 & 1 478 034 & 8.62 & 196 & 58 & 389 \\ R3L4 & 25 & 1 063 438 & 987 067 & 7.18 & 143 & 56 & 399 \\ \hline R4L1 & 25 & 1 053 623 & 1 053 623 & 0.00 & 102 & 14 & 205 \\ R4L2 & 25 & 1 394 526 & 1 313 700 & 5.80 & 136 & 39 & 231 \\ R4L3 & 25 & 1 718 591 & 1 648 388 & 4.08 & 148 & 47 & 213 \\ R4L4 & 25 & 498 913 & 488 043 & 2.18 & 171 & 60 & 701 \\ \hline R1L1v & 25 & 1 741 592 & 1 645 779 & 5.50 & 128 & 58 & 14 400 \\ R4L4v & 25 & 3 660 000 & 3 660 000 & 0.00 & 0 & 0 & 0 \\ \end{tabular}
\end{table}
Table 1: Results for the PESPlib instances restricted to \(\mu=25\). The table lists the optimal objective value of the MIP (3) in terms of weighted slack \(w^{\top}(x-\ell)\), the best dual bound obtained by split cuts, the primal-dual gap, the total number of applied split cuts, the number of cuts provided by the parametric IP (14), and the running time in seconds.
\begin{table}
\begin{tabular}{l r r r r r r r} \hline Instance & \(\mu\) & Opt. Val. (\(\mathcal{P}_{\mathrm{I}}\)) & Dual Bd. (\(\mathcal{P}_{\mathrm{split}}\)) & Gap [\%] & Cuts & IP Cuts & Time [s] \\ \hline BL1 & 100 & 1 341 151 & 1 216 355 & 9.31 & 1 092 & 357 & 954 \\ BL2 & 100 & 1 733 429 & 1 451 049 & 16.29 & 910 & 307 & 1 287 \\ BL3 & 100 & 1 747 063 & 1 461 798 & 16.33 & 922 & 304 & 1 864 \\ BL4 & 100 & 1 605 968 & 1 427 228 & 11.13 & 975 & 349 & 1 399 \\ \hline R1L1 & 100 & 5 481 154 & 4 582 018 & 16.40 & 1 300 & 493 & 6 903 \\ R1L2 & 100 & 4 873 559 & 3 952 695 & 18.90 & 1 138 & 348 & 7 453 \\ R1L3 & 100 & 6 256 521 & 5 151 095 & 17.67 & 998 & 324 & 4 742 \\ R1L4 & 100 & 5 008 640 & 4 202 959 & 16.09 & 1 407 & 415 & 6 184 \\ \hline R2L1 & 100 & 8 284 107 & 6 881 776 & 16.93 & 1 021 & 294 & 2 453 \\ R2L2 & 100 & 7 099 578 & 6 244 993 & 12.04 & 1 366 & 406 & 4 648 \\ R2L3 & 100 & 6 722 776 & 5 982 798 & 11.01 & 1 102 & 342 & 6 038 \\ R2L4 & 100 & 5 516 243 & 4 996 368 & 9.42 & 1 217 & 317 & 3 242 \\ \hline R3L1 & 100 & 4 366 123 & 3 770 709 & 13.64 & 927 & 355 & 7 180 \\ R3L2 & 100 & 4 666 798 & 3 796 483 & 18.65 & 764 & 253 & 8 554 \\ R3L3 & 100 & 4 719 345 & 3 890 774 & 17.56 & 921 & 301 & 7 492 \\ R3L4 & 100 & 2 950 612 & 2 730 898 & 7.45 & 885 & 348 & 11 242 \\ \hline R4L1 & 100 & 4 428 800 & 3 715 032 & 16.12 & 717 & 179 & 2 947 \\ R4L2 & 100 & 4 101 438 & 3 492 759 & 14.84 & 789 & 236 & 14 400 \\ R4L3 & 100 & 4 302 565 & 3 740 673 & 13.06 & 875 & 226 & 8 785 \\ R4L4 & 100 & 1 994 572 & 1 676 547 & 15.94 & 607 & 184 & 7 448 \\ \hline R1L1v & 100 & 10 253 906 & 9 715 723 & 5.25 & 191 & 0 & 14 400 \\ R4L4v & 100 & 14 880 000 & 14 880 000 & 0.00 & 2 & 0 & 1 \\ \end{tabular}
\end{table}
Table 2: Results for the PESPlib instances restricted to \(\mu=100\). The table lists the optimal objective value of the MIP (3) in terms of weighted slack \(w^{\top}(x-\ell)\), the best dual bound obtained by split cuts, the primal-dual gap, the total number of applied split cuts, the number of cuts provided by the parametric IP (14), and the running time in seconds.
We can observe that once the parametric IP came into effect, the heuristic stage provides only a few further cuts. This could be due to the initially high-quality results provided by the heuristic, such that the improvement through a cut from the parametric IP results in only a marginal change in the new solution. The subsequent spanning tree in the following heuristic stage could then be similar to the previous one, so that from this point on, only little to no improvement is found in the heuristic stage, and the costly parametric IP is the main contributor.
With respect to all instances, the average optimality gap is 40.83 %. As expected, the quality of the results obtained by our method is dependent on the problem size. In particular for the 16 \(RiLj\) instances there is a strong correlation between the size of \(\mu\) and the optimality gap. This is also evidenced by the Pearson correlation coefficient, which is approximately 95 %.
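This value can be reproduced directly from the \(\mu\) and Gap columns of Table 3 for the 16 \(RiLj\) instances:

```python
import numpy as np

# mu and Gap [%] columns of Table 3 for the 16 RiLj instances
mu  = [2722, 2876, 2848, 3769, 3206, 3360, 3239, 5514,
       4630, 4800, 5446, 7478, 5331, 5688, 6871, 9371]
gap = [36.30, 37.52, 37.95, 38.00, 41.52, 39.81, 41.27, 41.19,
       41.43, 42.58, 44.98, 48.92, 43.47, 43.43, 47.58, 55.08]
print(np.corrcoef(mu, gap)[0, 1])   # approx. 0.95
```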
On the dual side, the split closure provides at least 91.10 % of the currently best known dual bound. This underlines the good performance of our method: most of the incumbent dual bounds have been obtained by longer computation times, and in contrast to our study, neither branching nor other types of cutting planes apart from split cuts have been forbidden. Despite being at a disadvantage in this regard, our method provides better dual bounds for five out of the 22 instances, with improvements up to 25 %. Other bounds have been obtained with the help of heuristically separated flip inequalities as well, e.g., by Borndörfer, Lindner, and Roth (2020); N. Lindner and Liebchen (2020); N. Lindner, Liebchen, and Masing (2021); Masing et al. (2023), so that our procedure can be seen as an advancement of previous methods in the sense that our heuristic unlocks more potential due to exploiting Theorem 4.7.
### Insights
From our experiments we have gained two main insights: On the one hand, we have seen that our procedure is indeed useful in computing high-quality dual bounds, as we were able to improve five instances of the benchmarking library PESPlib significantly. But also for the other instances, some of which have been treated extensively in the past, a high percentage of the bound could be reached in comparably little time by our procedure.
On the other hand, our tests help us to assess the quality of the split closure for computing lower bounds independently of the procedure chosen: the instances where the optimal solution could be obtained and our procedure terminated give an indication of how well suited the split closure is for dual bounds in the context of PESP. Here, while the split closure provided fairly low optimality gaps on average, and even certified optimality in three cases, a non-negligible gap remains: e.g., in the worst case, namely for R1L2 with \(\mu=100\), there is a gap of 18.9 % between the optimal dual bound of the split closure and the optimal solution. We reach the conclusion that the split closure is essential for raising the dual bound. However, in order to close the primal-dual gap entirely, further methods, e.g., higher-rank split cuts, will have to be applied.
Considering that usually \(\mathcal{P}_{\mathrm{I}}\subsetneq\mathcal{P}_{\text{split}}\), such that any bound obtained from the split closure will not be sufficient to prove optimality, one could ask whether it is worth exploring the split closure to its full extent, or whether the fast, heuristic part of our procedure would be sufficient. For an indication, we analyzed the instances where our procedure terminated before the time limit was reached: we found that the best bound before the parametric IP came into effect reached at least 89.1 %, and on average even 95.6 %, of the final dual bound. We conclude that the heuristic approach of separating flip inequalities is indeed quite effective, as it is able to quickly cover the majority of the dual bound that can be obtained from the split closure. In our case, the addition of the parametric IP to the procedure was essential for the assessment of the split closure and might be helpful to find new cuts, so that the heuristic can produce
\begin{table}
\begin{tabular}{l r r r r r r r} \hline Instance & \(\mu\) & Primal Bd. (\(\mathcal{P}_{\text{I}}\)) & Dual Bd. (\(\mathcal{P}_{\text{split}}\)) & Gap [\%] & Cuts & IP Cuts & Time [s] \\ \hline BL1 & 5 298 & 6 333 641 & 4 252 778 & 32.85 & 42 927 & 15 & 14 400 \\ BL2 & 4 880 & 6 799 331 & 4 299 517 & 36.77 & 37 498 & 84 & 14 400 \\ BL3 & 6 265 & 6 675 098 & 4 290 946 & 35.72 & 58 628 & 20 & 14 400 \\ BL4 & 9 684 & 6 562 147 & 3 923 974 & 40.20 & 88 640 & 265 & 14 400 \\ \hline R1L1 & 2 722 & 29 894 745 & 19 041 890 & 36.30 & 21 965 & 60 & 14 400 \\ R1L2 & 2 876 & 30 507 180 & 19 059 669 & 37.52 & 23 767 & 45 & 14 400 \\ R1L3 & 2 848 & 29 319 593 & 18 193 974 & 37.95 & 23 468 & 61 & 14 400 \\ R1L4 & 3 769 & 26 516 727 & 16 441 121 & 38.00 & 30 460 & 18 & 14 400 \\ \hline R2L1 & 3 206 & 42 422 038 & 24 806 675 & 41.52 & 27 739 & 163 & 14 400 \\ R2L2 & 3 360 & 40 642 186 & 24 464 467 & 39.81 & 28 842 & 159 & 14 400 \\ R2L3 & 3 239 & 38 558 371 & 22 645 939 & 41.27 & 28 816 & 95 & 14 400 \\ R2L4 & 5 514 & 32 483 894 & 19 102 410 & 41.19 & 47 958 & 0 & 14 400 \\ \hline R3L1 & 4 630 & 43 271 824 & 25 343 534 & 41.43 & 38 725 & 17 & 14 400 \\ R3L2 & 4 800 & 45 220 083 & 25 963 773 & 42.58 & 41 951 & 19 & 14 400 \\ R3L3 & 5 446 & 40 483 617 & 22 273 090 & 44.98 & 46 099 & 6 & 14 400 \\ R3L4 & 7 478 & 33 335 852 & 17 027 192 & 48.92 & 46 773 & 0 & 14 400 \\ \hline R4L1 & 5 331 & 49 426 919 & 27 938 824 & 43.47 & 42 505 & 6 & 14 400 \\ R4L2 & 5 688 & 48 764 793 & 27 585 028 & 43.43 & 45 946 & 7 & 14 400 \\ R4L3 & 6 871 & 45 493 081 & 23 849 465 & 47.58 & 46 277 & 0 & 14 400 \\ R4L4 & 9 371 & 36 703 391 & 16 488 684 & 55.08 & 42 579 & 0 & 14 400 \\ \hline R1L1v & 2 832 & 42 591 141 & 28 544 123 & 32.98 & 20 326 & 22 & 14 400 \\ R4L4v & 9 637 & 61 968 380 & 38 307 814 & 38.18 & 45 916 & 0 & 14 400 \\ \hline \end{tabular}
\end{table}
Table 3: Results for the full PESPlib instances. The table lists the best known primal bound for the MIP (3) in terms of weighted slack \(w^{\top}(x-\ell)\) according to (Goerigk, 2022), the best dual bound obtained by split cuts, the primal-dual gap, the total number of applied split cuts, the number of cuts provided by the parametric IP (14), and the running time in seconds.
Figure 6: Evolution of the dual bound in terms of weighted slack \(w^{\top}(x-\ell)\) (blue, left axis) and the number of applied split cuts (grey, right axis) for the instance R2L1. Green markers correspond to cuts obtained from the heuristic, orange to cuts from the parametric IP. The time axis is logarithmic.
\begin{table}
\begin{tabular}{l r r r r r} \hline Instance & \(\mu\) & Dual Bd. (\(\mathcal{P}_{\mathrm{I}}\)) & Dual Bd. (\(\mathcal{P}_{\rm split}\)) & Gap [\%] & Dual Bd. Source \\ \hline BL1 & \(5\,298\) & \(3\,668\,148\) & \(4\,252\,778\) & \(-15.94\) & Borndörfer, Lindner, and Roth (2020) \\ BL2 & \(4\,880\) & \(3\,943\,811\) & \(4\,299\,517\) & \(-9.02\) & Borndörfer, Lindner, and Roth (2020) \\ BL3 & \(6\,265\) & \(3\,571\,976\) & \(4\,290\,946\) & \(-20.13\) & Borndörfer, Lindner, and Roth (2020) \\ BL4 & \(9\,684\) & \(3\,131\,491\) & \(3\,923\,974\) & \(-25.31\) & Borndörfer, Lindner, and Roth (2020) \\ \hline R1L1 & \(2\,722\) & \(20\,901\,883\) & \(19\,041\,890\) & \(8.90\) & N. Lindner et al. (2021) \\ R1L2 & \(2\,876\) & \(19\,886\,799\) & \(19\,059\,669\) & \(4.16\) & Masing et al. (2023) \\ R1L3 & \(2\,848\) & \(19\,323\,821\) & \(18\,193\,974\) & \(5.85\) & Masing et al. (2023) \\ R1L4 & \(3\,769\) & \(17\,283\,850\) & \(16\,441\,121\) & \(4.88\) & Masing et al. (2023) \\ \hline R2L1 & \(3\,206\) & \(25\,929\,643\) & \(24\,806\,675\) & \(4.33\) & Masing et al. (2023) \\ R2L2 & \(3\,360\) & \(25\,642\,692\) & \(24\,464\,467\) & \(4.59\) & Masing et al. (2023) \\ R2L3 & \(3\,239\) & \(23\,941\,492\) & \(22\,645\,939\) & \(5.41\) & Masing et al. (2023) \\ R2L4 & \(5\,514\) & \(19\,793\,447\) & \(19\,102\,410\) & \(3.49\) & Masing et al. (2023) \\ \hline R3L1 & \(4\,630\) & \(26\,825\,864\) & \(25\,343\,534\) & \(5.53\) & Masing et al. (2023) \\ R3L2 & \(4\,800\) & \(27\,178\,406\) & \(25\,963\,773\) & \(4.47\) & Masing et al. (2023) \\ R3L3 & \(5\,446\) & \(23\,007\,043\) & \(22\,273\,090\) & \(3.19\) & Masing et al. (2023) \\ R3L4 & \(7\,478\) & \(17\,432\,725\) & \(17\,027\,192\) & \(2.33\) & Masing et al. (2023) \\ \hline R4L1 & \(5\,331\) & \(29\,174\,444\) & \(27\,938\,824\) & \(4.24\) & Masing et al. (2023) \\ R4L2 & \(5\,688\) & \(28\,664\,399\) & \(27\,585\,028\) & \(3.77\) & Masing et al. (2023) \\ R4L3 & \(6\,871\) & \(24\,293\,621\) & \(23\,849\,465\) & \(1.83\) & Masing et al. (2023) \\ R4L4 & \(9\,371\) & \(17\,961\,400\) & \(16\,488\,684\) & \(8.20\) & N. Lindner and Liebchen (2020) \\ \hline R1L1v & \(2\,832\) & \(29\,620\,775\) & \(28\,544\,123\) & \(3.63\) & Goerigk (2022) \\ R4L4v & \(9\,637\) & \(32\,296\,041\) & \(38\,307\,814\) & \(-18.61\) & Goerigk (2022) \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of dual bounds for the full PESPlib instances. The table lists the best known dual bound for the MIP (3) in terms of weighted slack \(w^{\top}(x-\ell)\) according to the source in the last column, the best dual bound obtained by split cuts, and the primal-dual gap.
effective cuts again. However, for practical purposes, particularly when other methods aimed at improving the dual bounds are used in parallel, the time-consuming parametric IP might be too costly. The heuristic part alone could then be sufficient, particularly in light of the realization that, also in practice, the split closure is not enough to close the dual gap entirely.
## 7 Conclusion
We have shown that in the context of periodic timetabling, the split closure can be expressed in combinatorial terms, namely via flip inequalities with respect to simple cycles. Consequently, a dual bound obtained from flip inequalities is as good as one from split cuts. However, flip inequalities are, in a way, easier to grasp: we show that for a fixed cycle, a separating flip inequality can be found in linear time. This yields a heuristic, which turned out to be powerful in practice. In combination with a systematic exploration of violated flip inequalities, we were able to improve the dual bounds of five instances of the benchmark library PESPlib, demonstrating both the effectiveness of our approach and the benefit of the split closure in the context of PESP. One of our main contributions is also the insight that the split closures of the various equivalent PESP formulations are all equivalent as well, meaning that neither the specific MIP formulation nor any amount of subdivision or augmentation will lead to a stronger split closure.
Our computational experiments also indicate that even with a full exploration of the flip polytope, a certain gap will remain. To close the primal-dual gap entirely, further research into stronger cuts is needed, which will have to be different from first-order split cuts.
|
2302.01960 | Dynamic Arctic weather variability and connectivity | The rapidly shrinking Arctic sea ice is changing weather patterns and
disrupting the balance of nature. Dynamics of Arctic weather variability (WV)
plays a crucial role in weather forecasting and is closely related to extreme
weather events. Yet, assessing and quantifying the WV for both local Arctic
regions and its planetary impacts under anthropogenic climate change is still
unknown. Here, we develop a complexity-based approach to systematically
evaluate and analyze the dynamic behaviour of WV. We reveal that the WV within
and around the Arctic is statistically correlated to the Arctic Oscillation at
the intraseasonal time scale. We further find that the variability of the daily
Arctic sea ice is increasing due to its dramatic decline under a warming
climate. Unstable Arctic weather conditions can disturb regional weather
patterns through atmospheric teleconnection pathways, resulting in higher risk
to human activities and greater weather forecast uncertainty. A multivariate
climate network analysis reveals the existence of such teleconnections and
implies a positive feedback loop between the Arctic and global weather
instabilities. This enhances the mechanistic understanding of the influence of
Arctic amplification on mid-latitude severe weather. Our framework provides a
fresh perspective on the linkage of complexity science, WV and the Arctic. | Jun Meng, Jingfang Fan, Uma S Bhatt, Jürgen Kurths | 2023-02-03T19:17:01Z | http://arxiv.org/abs/2302.01960v1 | # Dynamic Arctic weather variability and connectivity
###### Abstract
The rapidly shrinking Arctic sea ice is changing weather patterns and disrupting the balance of nature. Dynamics of Arctic weather variability (WV) plays a crucial role in weather forecasting and is closely related to extreme weather events. Yet, assessing and quantifying the WV for both local Arctic regions and its planetary impacts under anthropogenic climate change is still unknown. Here, we develop a complexity-based approach to systematically evaluate and analyze the dynamic behaviour of WV. We reveal that the WV within and around the Arctic is statistically correlated to the Arctic Oscillation at the intraseasonal time scale. We further find that the variability of the daily Arctic sea ice is increasing due to its dramatic decline under a warming climate. Unstable Arctic weather conditions can disturb regional weather patterns through atmospheric teleconnection pathways, resulting in higher risk to human activities and greater weather forecast uncertainty. A multivariate climate network analysis reveals the existence of such teleconnections and implies a positive feedback loop between the Arctic and global weather instabilities. This enhances the mechanistic understanding of the influence of Arctic amplification on mid-latitude severe weather. Our framework provides a fresh perspective on the linkage of complexity science, WV and the Arctic.
Arctic sea ice is declining and thinning at an accelerating rate due to anthropogenic climate change [1; 2]. The warming trend is more prominent in the Arctic and is double the global average or even greater regionally [3], a phenomenon known as Arctic amplification (AA) [4; 5; 6]. The Arctic sea ice conditions can affect the Arctic ecosystem, wildlife, hunting and shipping, exploration of natural resources and more [7; 8; 9]. As one crucial component of the complex Earth system [10; 11], changes in Arctic sea ice are found to have statistical and dynamical connections with regional as well as remote climatic impacts [12; 13; 14; 15] (as shown in Fig. 1) through both large-scale atmospheric and oceanic circulations [16; 17; 18; 19; 20]. The rapid shrinking of the ice cover has attracted much attention to Arctic sea ice teleconnections and predictions on seasonal-to-decadal time scales in recent years [21; 22; 23; 24]. However, the understanding of its variability on weather time scales is still in its infancy [25; 26], although it is crucial for weather forecasting, the safety of commercial and subsistence maritime activities, the survival of polar mammals and the benefit of polar economies. The impact of day-to-day Arctic sea ice variations has been underestimated in most climate models [27]. To fill this gap, here we adopt complexity-based approaches and the _climate network_ framework to investigate the daily WV of the Arctic sea ice and its connections to climate phenomena on different spatio-temporal scales, including the Arctic Oscillation (AO), climate change and local weather conditions even in faraway regions.
Complexity science employs the mathematical representation of network science and provides a powerful tool to study the structure, dynamics and function of complex systems [28]. The climate system is a typical complex adaptive system due to its nonlinear interactions and feedback loops between and within different layers and components. In recent years, network science has been applied to the climate system to construct the climate network (CN) [29]. The CN is a novel tool to unveil and predict various important climate mechanisms and phenomena [30], including the forecasting of the El Niño Southern Oscillation [31; 32] and Indian summer monsoon rainfall [33; 34], the global pattern of extreme rainfall [35], the changes of global-scale tropical atmospheric circulation under global warming [36], teleconnections among tipping elements in the Earth system [37], the Indian Ocean Dipole [38] and so on.
The AO is one of the major modes of atmospheric circulation over the mid-to-high latitudes of the Northern Hemisphere (NH) [39], which influences climate patterns in Eurasia, North America, Eastern Canada, North Africa, and the Middle East, especially during boreal winter [40; 41; 42].
The AO index is defined as the leading empirical orthogonal function of NH sea level pressure anomalies from latitudes \(20^{\circ}\) N to \(90^{\circ}\) N and is characterized by the back-and-forth shifting of atmospheric pressure between the Arctic and the mid-latitudes. During the positive AO phases, the surface pressure is lower than average in the Arctic region and the jet stream shifts northward, accompanied by a poleward shift of the storm track [43]. Correspondingly, we find that both the sea ice and the air temperature in mid-to-high latitudes of the NH change more rapidly (i.e., with a blueshifted frequency spectrum), paired with more stable weather conditions (i.e., redshifted) in regions further south, during the _AO positive phases_, in contrast to the _AO negative phases_ when pressure north of the Arctic Circle is higher than normal. To quantify the blue/red-shift effect and its geographic distribution indicating increased/reduced WV, here we introduce two novel mathematical techniques: the _advanced autocorrelation function method_, i.e., \(W_{ACF}\), and the _advanced power spectrum method_, i.e., \(W_{PS}\) (see Methods). This approach enables us to find that the day-to-day variability of ice cover for a large area of the Arctic is increasing due to the dramatic melting of the sea ice [44], which indicates enhanced risks for severe weather under climate change [45; 46; 47; 48]. This may also increase the probability of unstable weather conditions globally through atmospheric teleconnections between the Arctic and the global climate systems (see links shown in Fig. 1). Finally, we statistically verify the existence of such teleconnections between the Arctic sea ice and weather conditions in remote global regions via a multivariate climate network framework. Such teleconnections can result in a positive feedback loop of WV between the Arctic and the rest of the globe (see Fig. 1) and contribute to understanding the mechanisms of linkage between the AA and mid-latitude weather [49]. The presented results and methodology not only facilitate a quantitative risk assessment of extreme weather events (see Fig. S1), but also reveal the existence of interaction or synchronization paths among regional and global climate components.
## Results
### Linkage of the weather variability and the AO
The WV refers to the irregularity/predictability of the climate data at weather time scales (i.e., hours to days). There are various ways to evaluate the data variability/irregularity, such as the entropy [50; 51; 52], the detrended fluctuation analysis [53; 54], the correlation dimension [55], the Lyapunov exponent analysis [56], etc. However, most of them would be problematic, biased or invalid when dealing with short and noisy data, such as weather data. The standard deviation (SD) is an effective technique to quantify the dispersion of data, but not a good measure for irregularity; e.g., the SD of randomly shuffled data is the same as that of the original. Besides, the auto-correlation function describes how fast the self-similarity of a variable decays with time [57], and the power spectral analysis [58] allows us to discover periodicity in the data. Yet, a systematic evaluation of the auto-correlation and the power spectrum as well as their dynamic evolution for non-stationary climate data is still lacking.
Therefore, here we introduce two mathematical functions: \(W_{ACF}\) and \(W_{PS}\) (see Methods for details) to quantify the WV in and around the Arctic in a given month, as well as its dynamic behavior during the period from Jan. 1980 to Dec. 2019. For a given time series, the physical meanings of these metrics are: higher values of the \(W_{ACF}\) stand for weaker short-term memory, while higher values of the \(W_{PS}\) indicate faster changes. In particular, to better understand their physical meanings, we construct various nonlinear time series (as shown in Fig. 2a) via the following dynamical equations,
\[x_{t}=\cos{(2\pi t/20)}, \tag{1}\]
\[y_{t}=\cos{(2\pi t/10)}, \tag{2}\]
\[z_{t}^{x}=0.2x_{t}+0.8u_{t}, \tag{3}\]
\[z_{t}^{y}=0.2y_{t}+0.8u_{t}, \tag{4}\]
where \(t\in[0,1000]\), and \(u_{t}\) is the nonlinear logistic map: \(u_{t+1}=\mu u_{t}(1-u_{t})\). Here we set the parameter \(\mu=3.8\) and \(u_{0}=0.01\), i.e., it generates chaotic behavior [59]. Mathematically, Eqs. (1) and (2) are two periodic functions but with different periods \(20\) and \(10\), respectively; while Eqs. (3) and (4) consist of a periodic term and a chaotic term (Fig. 2a). Therefore, strictly speaking, the value of \(W_{ACF}\) for \(z_{t}^{x}\) (\(z_{t}^{y}\)) is higher than for \(x_{t}\) (\(y_{t}\)), i.e., weaker short-term memory, due to the chaotic term \(u_{t}\); the value of \(W_{PS}\) for \(y_{t}\) (\(z_{t}^{y}\)) is higher than for \(x_{t}\) (\(z_{t}^{x}\)), i.e., faster changes, due to the periodic term with different periods. One should note that a segment of unstable data usually changes faster, with both high \(W_{ACF}\) and \(W_{PS}\), e.g., Eqs. (3) and (4); while a segment of quickly changing data is not necessarily irregular, such as high-frequency periodic data with high \(W_{PS}\) but low \(W_{ACF}\), as in Eq. (2). We extract 31 (i.e., the maximal length of one month in the climate data) consecutive data points from each of the samples and perform the \(W_{ACF}\) and \(W_{PS}\) analysis on the extracted subsets. All results, presented in Fig. 2b and c, are consistent with our theory, which indicates that our two functions can be used as effective tools to describe the variability (both _disorder_ and _frequency_) of a given time series.
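For reproducibility, the sample series of Eqs. (1)-(4) can be generated in a few lines. The following Python sketch is a minimal illustration, not the authors' released code; variable names are our own.

```python
import numpy as np

t = np.arange(1001)                 # t in [0, 1000]
x = np.cos(2 * np.pi * t / 20)      # Eq. (1), period 20
y = np.cos(2 * np.pi * t / 10)      # Eq. (2), period 10

u = np.empty(len(t))                # logistic map, mu = 3.8, u_0 = 0.01
u[0] = 0.01
for i in range(len(t) - 1):
    u[i + 1] = 3.8 * u[i] * (1 - u[i])

zx = 0.2 * x + 0.8 * u              # Eq. (3): periodic plus chaotic term
zy = 0.2 * y + 0.8 * u              # Eq. (4)

# 31-point (one-month) subsets, as analyzed in the text
month = {name: s[:31] for name, s in
         zip(("x", "y", "zx", "zy"), (x, y, zx, zy))}
```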
Next, we apply \(W_{ACF}\) and \(W_{PS}\) to quantify the Arctic sea ice WV based on the sea ice cover dataset (daily, 1979-2019, see Data for details). Our results are shown in Figs. 2d-h. A positive value of \(r\), denoted by blue in Figs. 2d and e, indicates a positive correlation between the annual mean of \(W_{ACF}\) or \(W_{PS}\) and the AO index. We observe that both \(W_{ACF}\) and \(W_{PS}\) tend to be higher, i.e., indicating faster and more irregular day-to-day changes of ice cover, during the _AO positive phases_ than during the _AO negative phases_, in some parts of the Arctic region, such as the Canadian Archipelago, the Beaufort Sea, and the Central Arctic. To illustrate the effect of the AO on \(W_{PS}\), we show that the power spectrum of Arctic sea ice during an AO positive phase, e.g., Jan. 1989, is significantly blueshifted compared to that during an AO negative phase, e.g., Jan. 2010 (see Fig. 2f). To illustrate the effect of the AO on \(W_{ACF}\), we show that the time series of the AO index and \(W_{ACF}\) are significantly synchronized during the period 1980-2019 (as shown in Fig. 2g and h). Moreover, we uncover that the climatic effects of the AO are more prominent in winter-spring than in summer-autumn (see Figs. S2 and S3).
The underlying physical mechanism is related to the typical atmospheric character of the AO, as well as the close interactions between the Arctic sea ice and the surface atmosphere. During the positive phases of the AO, the jet stream shifts northward and the storm tracks are located farther north than during the negative phases [60], see Fig. S4. This results in more unstable regional weather in mid-to-high latitudes of the NH, and yields higher \(W_{ACF}\) and \(W_{PS}\) of the air temperature data, see Fig. 3 and Figs. S5-S8. In contrast, the \(W_{ACF}\) and \(W_{PS}\) of the air temperature in the mid-latitudes of the NH increase with more outbreaks of significant weather events (e.g., cold events, frozen precipitation and blocking days) [60] as the zonal wind weakens during the negative AO phases, see Fig. 3 and Figs. S5-S8. In particular, as shown in Figs. S5-S8, there are even significant connections between the AO and the WV in some regions of the Southern Hemisphere.
The \(W_{ACF}\) and \(W_{PS}\) analysis provides an additional way to describe the quantitative response of both the Arctic sea ice and the atmosphere to the AO, and thus could be used to assess the risk of extreme events in mid-to-high latitudes of the NH.
### Increased irregularity of Arctic sea ice cover
In the following, our results shown in Fig. 4 indicate that the sea ice cover in a large area of the Arctic, including the East Siberian Sea, the Beaufort Sea and the Central Arctic, where the decrease of ice thickness is dramatic (as shown in Fig. S9), has changed more rapidly and irregularly over the past 40 years (1980-2019); that is because both values of the \(W_{ACF}\) and \(W_{PS}\) are significantly increasing. The observed enhancing trend of WV may be attributed to the following two reasons: one is related to the development of remote sensing and data-analysis technology, resulting in better data resolution and accuracy over the data record; the other is the rapid decline of multi-year ice cover, due to the dramatic increase of air temperature [61]. The multi-year sea ice is defined as the ice that survives at least one summer melt and represents the thick sea ice cover, while the first-year ice refers to the ice that has no more than one year's growth. As more of the perennial ice cover is replaced by younger and thinner ice cover, the regional ice cover becomes more fragile and vulnerable to fluctuations of air temperature or other forcings [44]. Therefore, local interactions between the sea ice and atmosphere would be enhanced, and the weather in the Arctic and remote global regions may affect each other more easily through potential tele-connected pathways (e.g., Fig. 5), which may increase the WV associated with the short-term weather predictability.
In addition, we observe relatively more areas with a significant trend of enhanced instability in the melt season under global warming (see Fig. 4a). This is because during the melt season (Apr.-Aug.), the sea ice declines and fluctuates more dramatically than in other seasons when the monthly average ice cover extent (the area of ocean with at least \(15\%\) sea ice, marked by the blue curve in Fig. 4a) reaches its maximum/minimum. An intensification of the summer Arctic storm activity is also likely to happen as the land-sea thermal contrast increases under global warming [62; 63; 64], which can increase the WV both in the ocean and atmosphere.
### Arctic-global teleconnection patterns
Next, we propose the _multivariate climate network_ approach to statistically reveal the potential teleconnection patterns between the Arctic sea ice (Fig. S10a) and the global air temperature field (Fig. S10b); see more details in the Methods. Different from the classical climate network approach with only one climate variable (see Refs. [65; 30; 66] and references therein), we construct climate networks where each link connects one node located in the Arctic (Fig. S10a) and another in the globe (Fig. S10b). In particular, the link weight quantifies the similarity of temporal evolution between two different climate variables, i.e., the Arctic sea ice and the global air temperature. By comparing to a null-model (see Methods), we observe the dynamic behavior of the network connectivity (as shown in Fig. S11a), which is defined as the ratio of significant links for each month's network; the statistical significance of each link is determined by comparison to the null-model, see details in the Methods section. A value of above \(5\%\) connectivity indicates statistically significant synchronization of weather between the Arctic and areas outside, such as in Feb. 2010 (see Fig. S11b and c), when the AO was in a strong negative phase and the cold polar air plunged into lower latitudes of the NH, resulting in extreme weather conditions over a large area of the globe [67; 68]. We identify the significant Arctic-global teleconnection patterns by using climate network node degree fields, which are defined as the number of significant links that connect to the Arctic for each global node, for two specific periods, Feb. 2010 (AO negative phase) and Mar. 2019 (AO positive phase) in Fig. 5a and c, respectively.
Moreover, two typical links presented in Fig. 5 indicate strong synchronizations between the daily sea ice cover for one Arctic node and the air temperature for another remote global node (their time series are shown in Figs. S12 and S13). As shown in Fig. S12b, changes in the sea ice for node \(i\) (\(77.5^{\circ}\) N, \(160^{\circ}\) E) in the Arctic are two days ahead of the air temperature variations for node \(j\) (\(30^{\circ}\) N, \(105^{\circ}\) E) in the Sichuan Province of Southwest China, i.e., the evolution of the Arctic sea ice could affect the anomalies of air temperature in Southwest China. To better understand how sea ice affects air temperature variability faraway, we identify the most probable teleconnection propagation path through the _shortest path_ method (see Methods for more details). We show a potential propagation path for this teleconnection (marked by yellow in Fig. 5b) and find that it is roughly a straight line from the Arctic to Southwest China through Eastern Russia and Mongolia. The path length is close to \(6400\) km. From a meteorological perspective, this path can be well explained by the main large-scale atmospheric circulation. A negative phase of the AO leads to a stronger Siberian High that extends farther southeastward. This results in repeated cold air outbreaks into South China [42]. Our analysis is highly consistent with the wind climatology, see the background information of Fig. 5b.
In addition, the feedback is also considered; however, we observe a relatively weaker connection in the opposite direction, i.e., from Southwest China to the Arctic. We find that changes in the air temperature at the same location in Southwest China influence the sea ice for the same Arctic node \(11\) days later, as shown in Fig. S12c. Correspondingly, we identify its potential propagation path (marked by orange in Fig. 5b) and find that it corresponds to negative wind anomalies from Southwest China to the Arctic. These two tele-connected paths form an interaction loop that suggests a large-scale atmospheric feedback of WV between the Arctic and Southwest China.
In contrast, during a positive phase of the AO, we show another teleconnection and its path in Fig. 5c and d, which indicates that the fluctuations of air temperature in California can affect the Arctic sea ice through the upper atmospheric circulation. Meanwhile, changes in the Arctic sea ice can also influence the temperature fluctuations in California along upper wind routes in the opposite direction, though with a weaker strength (see more details in Fig. S13c). This is because during the positive phase of the AO, low pressure dominates the Arctic regions, leading to a northward-shifted and intensified jet stream that blocks the outbreaks of frigid polar air into lower latitudes and reduces storm activity in California [69]. The uncovered teleconnection loop between the Arctic and California suggests that Arctic sea ice decline may drive more California droughts and wildfires [70].
The synchronization of day-to-day weather between the Arctic and other regions can favor positive feedbacks of WV, where increasing WV/instability of the Arctic sea ice may cause a higher risk of extreme weather conditions in remote global regions. Meanwhile, impacts from global regions may also induce unstable weather conditions in the Arctic.
## Discussion
In summary, we have introduced the mathematical \(W_{ACF}\) and \(W_{PS}\) functions to quantify the short-term dynamic WV relating to the irregularity and frequency of the day-to-day changes of climate data. By applying \(W_{ACF}\) and \(W_{PS}\), we are able to identify significant effects of the AO on the day-to-day changes of the Arctic sea ice as well as on the WV in mid-to-high latitudes of the NH. We attribute the physical mechanism to the north-to-south shifts of the jet stream and storm-steering associated with different phases of the AO. Furthermore, we found that during the past 40 years, the Arctic sea ice variability on weather time scales has been substantially increasing due to the melting of the thick perennial sea ice. Finally, in order to analyze the dynamic Arctic weather connectivity, we have constructed multivariate climate networks, i.e., between the Arctic sea ice and the global air temperature field. By applying the shortest path method, we are able to identify teleconnection paths as well as positive feedback loops of WV. We also proposed a possible physical mechanism underlying these paths. The reduction of Arctic sea ice stability may increase the risk of unstable weather conditions and lead to reduced skill of weather forecasts [71] globally through the Arctic-global teleconnected feedback loops. Our new findings can help to understand the physical mechanisms linking the AA and the global climate, and imply prominent global impacts of the Arctic WV on human and natural systems under climate change [6; 49].
The Arctic is considered to be a barometer of global climatic change; in particular, Arctic sea ice loss is approaching a tipping point and is extremely crucial for the whole Earth's climate [72]. Besides the immediate utility of being able to quantitatively analyze the dynamics of WV for local Arctic regions and its global impacts, our framework could also be applied to study and reveal the short-term synchronization of connectivity among remote global regions, sea ice forecasting, as well as systemic risk induced by the interdependency among other complex subsystems and the cascading of adverse consequences, which is particularly important for systemic risk-informed global governance.
Figure 1: **The arctic system as a crucial component of the Earth climate system.****a**, Schematic view of a climate network. Links indicate interactions between different regional climate systems in the globe. Golden links represent teleconnections between the Arctic and regions outside. **b**, Illustration of the complex Arctic system. It contains the cryosphere, biosphere, hydrosphere, and atmosphere as well as the interactions among them. A change in one component often triggers changes and feedbacks in numerous interconnected processes (e.g., Arctic sea ice decline). The circular arrow suggests a positive feedback of the WV between the Arctic and the rest of the climate system.
Figure 2: **Blueshift effect of the Arctic Oscillation on the Arctic weather variability**. **a**, Sample nonlinear time series generated based on Eqs. (1-4). **b**, The auto-correlation functions and values \(W_{ACF}\) of each sample time series shown in **a**. **c**, The power spectrum density and values \(W_{PS}\) of each sample time series shown in **a**. **d**, The correlations between the annual mean of the AO index and the \(W_{ACF}\) for the Arctic sea ice. The “**x**” marks represent the nodes with correlations significant at the 95% confidence level (Student’s t test). **e**, The same as **d** for \(W_{PS}\). **f**, The power spectrum of the sea ice for all nodes marked by symbol “**x**” in **e** in Jan. 1989 with a positive AO phase comparing to that in Jan. 2010 with a negative AO phase. **g**, The AO index (pink solid line for monthly and pink dashed line for annual) versus the \(W_{ACF}\) index (dark blue solid line for monthly and dark blue dashed for annual) averaged over all nodes marked by symbol “**x**” in **d**. **h**, The scatter plots of annual indexes (dashed lines in **g**) of the AO versus \(W_{ACF}\), the \(r\) value between these two indexes is \(0.65\), with a \(p\) value of \(5.5\times 10^{-6}\).
Figure 3: **The relationships between the AO and weather variability.****a**,**b**, The correlation maps between the annual mean of the AO index and \(W_{ACF}\) of the air temperature at \(850hPa\) pressure level during the period of 1980–2019. **c**,**d**, The same as **a** and **b**, but for \(W_{PS}\). The symbol “**x**” in each panel represents the region with correlation significant at the 95% confidence level (Student’s t-test).
Figure 4: **The dynamic weather variability of the Arctic daily sea ice cover during June.****a**, The ratio of nodes that has statistically significant increasing trend for the \(W_{ACF}\) (gray) and \(W_{PS}\) (purple); the Sea Ice Index, i.e. the area with at least \(15\%\) ice cover (blue) for the same months during 1980–2019. **b**, Changes per decade as multiple of one standard deviation (\(\sigma\)), for each Arctic node’s \(W_{ACF}\) during June. **c**, the same as **b** for \(W_{PS}\). The symbol “**x**” in panels **b** and **c** represents the region with trend significant at the 95% confidence level (Student’s t-test).
Figure 5: **Diagram of climate network teleconnection paths.****a**, Heatmap of the node degree defined as the number of significant links for each node (see Methods) in the climate network of Feb. 2010. The blue line indicates the teleconnection between one Arctic node and one node located in Sichuan province of China. **b**, The propagation pathway of the teleconnection marked by blue in **a**. **c**, the same as **a** for Mar. 2019. The blue line indicates the teleconnection link between one Arctic node and one node in California of United States. **d**, The propagation pathway of the teleconnection marked by blue in **c**. The colors and white arrows depict the magnitudes and directions of the 850 (500)-hPa winds in **b** (**d**).
## Data and Methods
### Data
The data used in the current work are the \(0\) hr (UTC) daily sea ice cover and the air temperature at the \(850hPa\) pressure level from the ERA5 [73] ([https://apps.ecmwf.int/datasets/](https://apps.ecmwf.int/datasets/)) reanalysis, with a spatial (zonal and meridional) resolution of \(2.5^{\circ}\times 2.5^{\circ}\). The \(850hPa\) pressure level is chosen since it is just above the boundary layer, which avoids direct interactions between the sea ice and the surface atmosphere [24]. We select \(8040\) grid points from the dataset of air temperature which approximately equally cover the globe (see Fig. S10b). There are \(377\) grid points located in the ocean of the Arctic region with non-zero sea ice cover for at least one day (see Fig. S10a). Then, for each calendar year \(y\) and for each node, we calculate the anomalous value for each calendar day \(t\) by taking the original value minus the climatological average, divided by the climatological standard deviation. The calculations of the climatological average and standard deviation are based on data from the years 1979 to 2019. For simplicity, leap days are excluded.
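As an illustration of the standardization step described above, the following Python sketch computes daily anomalies; the array layout (years \(\times\) 365 calendar days, leap days removed) is our assumption for the example, not a prescription from the paper.

```python
import numpy as np

def daily_anomalies(series):
    """Standardize a daily series of shape (n_years, 365):
    anomaly = (value - climatological mean) / climatological std,
    with the climatology computed per calendar day over all years."""
    clim_mean = series.mean(axis=0)   # mean for each calendar day
    clim_std = series.std(axis=0)     # std for each calendar day
    return (series - clim_mean) / clim_std

# e.g., for one node: ice = np.random.rand(41, 365)  # 1979-2019
# anomalies = daily_anomalies(ice)
```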
The AO index was downloaded from: [https://www.cpc.ncep.noaa.gov/products/precip/CWlink/dailyaoindex/monthly.ao.index.b50.current.ascii](https://www.cpc.ncep.noaa.gov/products/precip/CWlink/dailyaoindex/monthly.ao.index.b50.current.ascii). [Accessed in Sep. 2021].
The Arctic Sea Ice Extent was downloaded from : [https://nsidc.org/data/g02135/versions/3](https://nsidc.org/data/g02135/versions/3). [Accessed in Jan. 2021].
### Assessing Weather Variability Functions
#### Advanced autocorrelation function method
The autocorrelation function (ACF) is widely used to measure the memory of a time series and reveals how the correlation between any two values of the signal changes with their time lag [57]. Generally, for a given time series \(x_{t}\), the ACF is defined as,
\[C(\tau)=\frac{\mathrm{Cov\left(x_{t},x_{t+\tau}\right)}}{\sqrt{\mathrm{Var \left(x_{t}\right)Var\left(x_{t+\tau}\right)}}}, \tag{5}\]
where \({\rm Cov}({\bf X},{\bf Y})={\rm E}[({\bf X}-{\rm E}[{\bf X}])({\bf Y}-{\rm E}[{ \bf Y}])]\) and \({\rm Var}({\bf X})={\rm E}[{\bf X^{2}}]-{\rm E}[{\bf X}]^{2}\). If the \(x_{t}\) are completely uncorrelated, for example, a white noise process, \(C(\tau)\) is zero at all lags except a value of unity at lag zero (\(\tau=0\)). A correlated process on the other hand, has non-zero values at lags other than zero to indicate a correlation between different lagged observations. In particular, short-range memory of the \(x_{t}\) are described by \(C(\tau)\) declining exponentially
\[C(\tau)\sim\exp\left(-\tau/\tau^{*}\right), \tag{6}\]
with a characteristic time scale, \(\tau^{*}\). For long-range memory, \(C(\tau)\) declines as a power-law
\[C(\tau)\propto\tau^{-\gamma}, \tag{7}\]
with an exponent \(0<\gamma<1\). However, a direct calculation of \(C(\tau)\), \(\tau^{*}\) and \(\gamma\) is usually not appropriate due to noise superimposed on the collected data \(x_{t}\) and due to underlying trends of unknown origin [74]. In order to overcome the problems described above, here, we develop an advanced autocorrelation function method to quantify the memory (both short and long range) strength \(W_{ACF}\) of a time series as,
\[W_{ACF}=\frac{{\rm max}\left(|C(\tau)|\right)-{\rm mean}\left(|C(\tau)|\right) }{\sqrt{{\rm Var}\left(|C(\tau)|\right)}}\equiv\frac{1-{\rm mean}\left(|C( \tau)|\right)}{\sqrt{{\rm Var}\left(|C(\tau)|\right)}}, \tag{8}\]
where 'max' and 'mean' are the maximum and mean values of the absolute ACF, i.e., \(|C(\tau)|\), respectively, and \(\tau\in[-\tau_{max},\tau_{max}]\) is the time lag. In the present work, we take \(\tau_{max}=10\) days, since we are considering the day-to-day changes of data at the time scale of weather forecasting, i.e., within two weeks. Equation (8) describes the fluctuations of the ACF and its values reveal the strength of memory, i.e., a higher (smaller) \(W_{ACF}\) indicates a weaker (stronger) correlation and results in weak (strong) memory. For example, white noise has a maximum value \(W_{ACF}=(2\tau_{max}+1)\sqrt{\frac{2\tau_{max}}{2\tau_{max}+1}}\). Other examples are described in Fig. 2. Another advantage of our method is that it eliminates problematic nonstationarities.
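A minimal Python implementation of Eqs. (5) and (8) could look as follows; the sample-ACF estimator used here is a standard choice and is our assumption, not a prescription from the paper.

```python
import numpy as np

def w_acf(x, tau_max=10):
    """Advanced autocorrelation measure, Eq. (8):
    W_ACF = (1 - mean|C(tau)|) / sqrt(Var|C(tau)|), tau in [-tau_max, tau_max],
    where max|C(tau)| = |C(0)| = 1 by construction."""
    x = np.asarray(x, dtype=float)
    def c(tau):                        # sample ACF at lag tau >= 0, Eq. (5)
        a, b = x[:len(x) - tau], x[tau:]
        a, b = a - a.mean(), b - b.mean()
        return np.dot(a, b) / (len(a) * a.std() * b.std())
    absc = np.abs([c(abs(tau)) for tau in range(-tau_max, tau_max + 1)])
    return (1.0 - absc.mean()) / np.sqrt(absc.var())
```

It can then be applied to the 31-point subsets generated in the earlier sketch (cf. Fig. 2b).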
#### Advanced power spectrum method
The advanced autocorrelation function \(W_{ACF}\) quantifies well the strength of memory for an arbitrary time series, but does not reveal any information about the frequency content. For example, Eqs. (1) and (2) are two functions with different periods, yet their \(W_{ACF}\) values are almost the same, as shown in Fig. 2. To fill this gap, we further develop an advanced power spectrum (PS) method. Based on Welch's method [75], we define the advanced power spectral density \(W_{PS}\) as,
\[W_{PS}=\int_{f}P(f)\times fdf, \tag{9}\]
where \(P(f)\) is the normalized spectral density and \(f\) stands for the corresponding frequency, which can be obtained by Fourier transform. \(W_{PS}\) is indeed the weighted mean of \(f\), and thus has the same unit as frequency. Notably, a relatively higher value of the \(W_{PS}\) indicates a larger ratio of the high-frequency components (i.e., blueshift); see examples shown in Fig. 2.
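A corresponding sketch of Eq. (9), using scipy's Welch estimator; the segment length (nperseg, capped at the 31-day window) is our choice for the example.

```python
import numpy as np
from scipy.signal import welch

def w_ps(x, fs=1.0):
    """Advanced power-spectrum measure, Eq. (9): the mean frequency
    weighted by the normalized Welch spectral density P(f)."""
    f, p = welch(x, fs=fs, nperseg=min(len(x), 31))
    p = p / p.sum()        # normalize P(f); the f grid is uniform
    return np.sum(p * f)   # weighted mean of f, in units of fs
```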
### Climate Networks
#### Nodes
Different from the classical climate network with only one node type (see Refs. [66; 30] and references therein), here we define two types of nodes: global nodes \(i\) with the air temperature variable \(T_{i}(t)\), and Arctic nodes \(j\) with the Arctic sea ice cover variable \(I_{j}(t)\). We thus have \(8040\) global nodes (as shown in Fig. S10b) and \(377\) Arctic nodes (as shown in Fig. S10a).
#### Links
We construct a sequence of multivariate climate networks. For obtaining the strength of the links between each pair of nodes \(i\) and \(j\), we compute, for each month \(m\), the time-delayed, cross-correlation function
\[C_{i,j}^{m}(\tau)=\frac{\left\langle T_{i}^{m}(t)I_{j}^{m}(t-\tau)\right\rangle -\left\langle T_{i}^{m}(t)\right\rangle\left\langle I_{j}^{m}(t-\tau)\right\rangle }{\sqrt{\operatorname{Var}(T_{i}^{m}(t))\operatorname{Var}(I_{j}^{m}(t-\tau))}}, \tag{10}\]
and
\[C_{i,j}^{m}(-\tau)=\frac{\left\langle T_{i}^{m}(t-\tau)I_{j}^{m}(t)\right\rangle -\left\langle T_{i}^{m}(t-\tau)\right\rangle\left\langle I_{j}^{m}(t)\right\rangle }{\sqrt{\operatorname{Var}(T_{i}^{m}(t-\tau))\operatorname{Var}(I_{j}^{m}(t))}}, \tag{11}\]
where the bracket \(\langle\rangle\) denotes an average over consecutive days during a given month \(m\), and \(\tau\in[0,\tau_{max}]\) is the time lag. Since we mainly focus on the dynamic Arctic WV, here we chose the maximal time lag \(\tau_{max}=20\) days for Eqs. (10) and (11).
We identify the time lag \(\theta\) at which the absolute value of the cross-correlation function \(|C_{i,j}^{m}(\tau)|\) reaches its maximum. The _weight_ of link \((i,j)^{m}\) is defined as the corresponding value of the cross-correlation function, i.e. \(C_{i,j}^{m}=C_{i,j}^{m}(\tau=\theta)\). Therefore, the weight of each link could be either positive or negative, but with the maximum absolute value. The sign of \(\theta\) indicates the direction of each link; that is, when the time lag is positive \((\theta>0)\), the direction of this link is from \(j\) to \(i\), and vice versa [76].
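A direct Python transcription of Eqs. (10)-(11) for a single candidate link could look as follows; this is a sketch, assuming both anomaly series are numpy arrays of equal length covering the month in question.

```python
import numpy as np

def link_weight(T_i, I_j, tau_max=20):
    """Cross-correlation link of Eqs. (10)-(11) for one month of anomalies.
    Returns (C_ij, theta): the cross-correlation value at the lag of maximal
    |C|, and that lag; theta > 0 means the link is directed from j to i."""
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return np.dot(a, b) / (len(a) * a.std() * b.std())
    n = len(T_i)
    best_c, best_theta = 0.0, 0
    for tau in range(tau_max + 1):
        # Eq. (10): C(tau) correlates T_i(t) with I_j(t - tau)
        c_pos = corr(T_i[tau:], I_j[:n - tau]) if tau else corr(T_i, I_j)
        # Eq. (11): C(-tau) correlates T_i(t - tau) with I_j(t)
        c_neg = corr(T_i[:n - tau], I_j[tau:]) if tau else c_pos
        for c, theta in ((c_pos, tau), (c_neg, -tau)):
            if abs(c) > abs(best_c):
                best_c, best_theta = c, theta
    return best_c, best_theta
```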
#### Null-model
Next, we investigate the statistical significance of the link weights in the real networks by comparing them to a shuffled surrogate network. In the surrogate network, to calculate the link weight for each pair of nodes, we use two segments of data, each corresponding to \(30\) consecutive days starting from the first day of a month randomly selected from the period Jan. 1980 to Dec. 2019, so as to destroy the real temporal correlations between the two nodes. Then we define the significance threshold \(q\) as the \(95\%\) highest value of the absolute weights of all links in the surrogate network. The link \((i,j)^{m}\) in the real network for a specific month \(m\) is defined as significant if its weight is higher than \(q\) or lower than \(-q\), i.e., \(|C_{i,j}^{m}|>q\). We find that the number of significant links for each month's network changes dynamically with time, as shown in Fig. S11.
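A sketch of the surrogate threshold, reusing link_weight from the previous sketch; for simplicity it draws arbitrary 30-day start positions rather than calendar-month starts, a simplification of the procedure described here. The monthly connectivity is then the fraction of real links with \(|C_{i,j}^{m}|>q\).

```python
import numpy as np

rng = np.random.default_rng(0)

def significance_threshold(T_field, I_field, n_surrogates=1000):
    """Null-model threshold q: the 95th percentile of |C| over surrogate
    links built from two independent, randomly placed 30-day segments.
    T_field, I_field: arrays of shape (n_nodes, n_days) of daily anomalies."""
    samples = []
    for _ in range(n_surrogates):
        i = rng.integers(T_field.shape[0])
        j = rng.integers(I_field.shape[0])
        s = rng.integers(T_field.shape[1] - 30)
        r = rng.integers(I_field.shape[1] - 30)
        c, _ = link_weight(T_field[i, s:s + 30], I_field[j, r:r + 30])
        samples.append(abs(c))
    return np.quantile(samples, 0.95)
```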
#### Node degrees
We define the degree of each global node as the number of significant links that connect it to the Arctic nodes. We show heatmaps of node degrees for two specific months, Feb. 2010 (Fig. 5a) and Mar. 2019 (Fig. 5c). We observe higher node degrees in many regions of the NH, even at low latitudes, for Feb. 2010 compared to Mar. 2019, which we attribute to the different phases of the AO.
### Teleconnection path mining
To identify the teleconnection path, we apply the _shortest path_ method of complex networks to find the optimal paths in our climate networks. A path is a sequence of nodes in which each node is adjacent to the next one; in a directed network, a path can follow only the direction of the arrows. Here, our climate network is based on only one climate variable, the air temperature at the \(850hPa\) pressure level, and we select 726 nodes from the \(10512\) nodes [34; 37]. For each climate network link \((i,j)^{m}\), we define its cost function value as
\[E_{i,j}^{m}=\frac{1}{|C_{i,j}^{m}|}. \tag{12}\]
The Dijkstra algorithm [77] was used to determine the directed optimal path between a source node \(i\) and a sink node \(j\) with the following constraints [78; 37]: (i) the distance of every step is shorter than 1000 km; (ii) the link time delay \(\theta\geq 0\); (iii) the sum of the cost function values over all links along the path \(i\longrightarrow j\) is minimal. In this way, we identify the optimal paths for information/energy/matter spreading in the two-dimensional space.
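The constrained search can be sketched with networkx (our choice of library; the paper only specifies the Dijkstra algorithm [77] and constraints (i)-(iii)); candidate links are assumed to be pre-assembled with their geographic step lengths.

```python
import networkx as nx

def optimal_path(links, source, sink):
    """Constrained shortest path with cost E = 1/|C| as in Eq. (12).
    links: iterable of (i, j, C_ij, theta, step_km) directed candidate links."""
    g = nx.DiGraph()
    for i, j, c, theta, km in links:
        if km < 1000 and theta >= 0 and c != 0:    # constraints (i), (ii)
            g.add_edge(i, j, weight=1.0 / abs(c))  # cost function, Eq. (12)
    # constraint (iii): minimal total cost, found by Dijkstra
    return nx.dijkstra_path(g, source, sink, weight="weight")
```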
## Data availability
The data represented in Figs. 2-5 are available as Source Data. All other data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
## Code availability
The C++ and Python codes used for the analysis are available on GitHub: ([https://github.com/fanjingfang/DAWV](https://github.com/fanjingfang/DAWV)).
###### Acknowledgements.
The authors wish to thank T. Liu for his helpful suggestions. We acknowledge the support by the National Natural Science Foundation of China (Grant No. 12205025, 12275020, 12135003).
## Author contributions
J.M. and J.F. designed the research. J.M. performed the analysis; J.M., J.F., U.S.B. and J.K. generated research ideas, discussed the results, and contributed to writing the manuscript.
## Additional information
Supplementary Information is available in the online version of the paper.
## Competing interests
The authors declare no competing interests.
|
2306.14124 | Dark Fermions in Fluctuating Valence Insulators | A fluctuating-valence impurity in a metal is quantum-critical unlike a Kondo
impurity which has the properties of a local Fermi-liquid. A systematic theory
for the fluctuating-valence lattice is constructed, based on the hybridization
and pairing of itinerant d-orbitals with localized f-orbitals both of which are
essential parts of the solution of the impurity problem. It also uses the fact
that the single-particle excitations at the Fermi-surface in any dimension can
be written as orthogonal Majoranas and those with linear departures from the
Fermi-surface as linear combination of bare particles and holes with the same
spin. The calculations on the lattice give four spin-degenerate one-particle
excitations of fractionalized fermions; two sets disperse across the chemical
potential and the other two have gaps. The former are shown to be dark to any
linear electro-magnetic probes of their charge and spin and observable only
through probes of their free-energy such as a Fermi-liquid specific heat and
magneto-oscillations characteristic of a Fermi-surface but without a Zeeman
splitting. The excitations with the gaps behave as in insulators but with
renormalized amplitudes. The superfluid density is zero. A magnetic field $H$
turns the insulator to a metal with a singularity in magnetization proportional
to $\sqrt{H - H_c}$, with $H_c$ related to the gap. Beyond $H_c$, the usual
Zeeman splitting appears in the magneto-oscillations. The properties and
predictions are compared to the momentous recent discoveries in
fluctuating-valence insulators. Similar excitations may be expected in
transition metal chalcogenide layers at fluctuating-valence, and quite likely
for Kagome lattices, and twisted multi-layer graphene near specific fillings. | C. M. Varma | 2023-06-25T04:38:33Z | http://arxiv.org/abs/2306.14124v3 | # Dark Fermions in Mixed Valence Insulators
###### Abstract
In a model of correlated insulators with for example \(f\) and \(d\) orbitals, conditions for a local pairing of the two orbitals exist without a global condensation. On minimizing the free-energy constrained by the condition for stability of mixed-valence, the magnitude of the local pairing is found to be identical to the magnitude of the renormalized hybridization of the \(f\) and \(d\) orbitals. If the system breaks inversion and charge conjugation with their product preserved, this leads to four bands of fractionalized fermions each with half the weight of the normal fermions; two sets disperse across the chemical potential and the other two have gaps. The fermions without a gap are dark to any linear electro-magnetic probes of their charge and spin and observable only through probes of their free-energy such as a Fermi-liquid specific heat and magneto-oscillations characteristic of a Fermi-surface but without a Zeeman splitting. The excitations in the bands with the gaps behave as in insulators. A magnetic field \(H\) turns the insulator to a metal with a singularity in magnetization proportional to \(\sqrt{H-H_{c}}\), with \(H_{c}\) related to the gap. Beyond \(H_{c}\), the usual Zeeman splitting appears in the magneto-oscillations. The properties and predictions are compared to the momentous recent discoveries in fluctuating mixed-valence insulators.
A truly remarkable discovery of the past decade is the magneto-oscillations periodic in \(1/H\) characteristic of a Fermi-surface in fluctuating mixed-valence insulators SmB\({}_{6}\) and YbB\({}_{12}\)[1; 2; 3; 4]. Recent measurements [5] show that the change of the amplitude of the oscillations with temperature follows the Lifshitz-Kosevich formula which relies only on having degenerate excitations with a Fermi-distribution. The fermion mass found in such measurements and the size of the Fermi-surface from the period of oscillations, using the Onsager flux quantization condition, gives the coefficient of the linear in temperature specific heat (and associated thermal conductivity) in quantitative agreement with measurements [6] as for metals. ARPES [7] and neutron scattering [8] measurements show single-particle and particle-hole excitations with gaps consistent with the insulating gap deduced from the resistivity. There are other rather unique features of the recent results [9; 10], which will be addressed below. Older experiments are reviewed in [11; 12].
An essentially exact result, obtained by Wilson's numerical renormalization group, is that unlike the Kondo impurity in a metal, the correct minimal model for a mixed-valence impurity has logarithmic low energy singularities in its local charge and spin-susceptibility as well as in its local pairing susceptibility [13; 14]. The excitations in the same model discovered by bosonization [14] of the fermions coupling to the impurity can be expressed as Majoranas [15] which do not couple linearly to electromagnetic fields. This was extended to construct a theory for the lattice [15]. But the nature of the non-local particles on which the periodic Bloch conditions are imposed in such a theory is physically obscure, which makes the procedure of extension to the lattice unclear. A direct approach to the problem in the lattice is used here, based on an ansatz motivated by the divergence of the pairing susceptibility in the single impurity problem. The consistency of the ansatz is proven and leads to a state with close correspondence to the experiments and several predictions.
_The Model_: The physical reasons for the difference of a mixed-valence impurity from a Kondo impurity lie in the requirement that both valences satisfy the Friedel screening condition and have been already described [16; 17; 18]. The juxtaposition of the correlated f-levels with respect to the chemical potential in the band are sketched in Fig. (1). A simple model extended to the lattice which is a band-insulator has a basis \(f_{i\sigma}\) and \(d_{i\sigma}\) at each site,
with a Hamiltonian,
\[H=\sum_{i\sigma}H_{i\sigma}+\sum_{k,\sigma}\epsilon(k)d_{k,\sigma}^{+}d_{k,\sigma}, \tag{1}\]
\[H_{i\sigma}=\epsilon_{f}^{0}n_{fi}-\mu(n_{fi}+n_{di})+t(f_{i\sigma}^{+}d_{i+n,\sigma}+H.C.)+Un_{fi\uparrow}n_{fi\downarrow}+V(n_{fi}-1/2)(n_{di}-1/2)-J\sum_{\sigma}n_{fi\sigma}n_{d,i\sigma}.\]
\(n\) sums over the neighbors of a site \(i\). The centroid \(\epsilon_{d0}\) of the d-band energies \(\epsilon(k)\) is at 0. The correlated f-orbital energy is \(\epsilon_{f}^{0}\) and \(U>>t\). On any given site, f and d have different symmetry, so that interactions between both relative spin-combinations in \(n_{fi}\) and \(n_{di}\) are allowed through the parameter \(V\), and there is a Hund's rule coupling \(J\) favoring same spin. In the single-impurity problem, \(V\) is fixed to satisfy the stability condition for mixed valence [13; 14]. I have subtracted the 1/2's in that term so that with chemical potential \(\mu=\epsilon_{f}^{0}=\epsilon_{d0}\) and \(<n_{di}>=0\), the state with (\(n_{fi}=0,n_{di}=1\)) has the same local energy as the state with (\(n_{fi}=1,n_{di}=0\)), thus maintaining local charge neutrality or the Friedel screening condition.
The simple model used here is sufficient to derive the properties for \(<n_{f}>=1/2\). In actual materials, the stable value of the fractional \(f-\)valence is determined by various other details [11; 16] such as the ionicity and the radii of the cations and the anions.
The equations of motion in the problem for \(U\rightarrow\infty\) are equivalent to dropping the term proportional to \(U\) and replacing the bare width of the f-level, \(\Gamma\equiv\pi t^{2}\nu\) (where \(\nu\) is the d-band density of states at the chemical potential), by the operator
\[\overline{\Gamma}_{\sigma}=\Gamma_{\sigma}(1-n_{fi-\sigma}) \tag{2}\]
This procedure was introduced in Refs. [11; 19] for the Anderson impurity model [20] as well as for a lattice; its mean-field value, \(\Gamma_{\sigma}(1-<n_{f-\sigma}>)\), was shown to be the expression commonly used for the Kondo temperature. Similar results have been obtained, with additional sophistication in the approximations, in a variety of different ways [21; 22; 23; 24; 25; 26; 27; 28]. We may define \(\overline{t}_{\sigma}\) by \(\overline{\Gamma}_{\sigma}=\pi\overline{t}_{\sigma}^{2}\nu\), which is marginal [29; 30] about the high temperature fixed point where it flows to 0 and the susceptibility is the Curie law. The crossover temperature is the Kondo temperature, which is of \(O(\Gamma)\) for mixed-valence [17; 19]. We need not dwell on the issue of the precise value of \(\overline{t}\) as long as the renormalization effects lead to an effective hybridization parameter between the \(f\) and the \(d\)-orbitals at low temperatures.
_The Ansatz_: After the mean-field approximation on the term proportional to \(V\) already mentioned, the possibility of equal spin pairing is investigated by introducing
\[P_{i}=J<f_{i\sigma}d_{i,\sigma}>. \tag{3}\]
\(P_{i}\) is in general complex. Its magnitude is assumed uniform and denoted by \(P\). The local Hamiltonian is
\[\overline{H}_{i\sigma}=\frac{P^{2}}{2J}+P\big{(}f_{i\sigma}^{+}d_{i,\sigma}^{+ }+H.C.\big{)}+\overline{t}\big{(}f_{i\sigma}^{+}d_{i,\sigma}+H.C.\big{)}+( \epsilon_{f}^{0}-\mu)n_{fi\sigma}. \tag{4}\]
(\(\overline{t}\) is actually odd in \(k\). But only its value in the vicinity of crossing of the bands appears linearly in the theory and since these are not points of any particular symmetry, \(\overline{t}\) may be taken to be a constant.) In terms of real fermion operators,
\[m_{f1i\sigma}=\frac{1}{\sqrt{2}}(f_{i\sigma}+f_{i\sigma}^{+});\ m_{f2i\sigma}=\frac{i}{\sqrt{2}}(f_{i\sigma}-f_{i\sigma}^{+}), \tag{5}\]
and similarly defined operators \(m_{d1i\sigma}\) and \(m_{d2i\sigma}\) for the \(d\)-fermions, the last three terms of \(\overline{H}_{i\sigma}\) are
\[i(P+\overline{t})\ m_{f2i\sigma}m_{d1i\sigma}+i(P-\overline{t})\ m_{f1i\sigma}m_{d2i\sigma}+i(\epsilon_{f}-\mu)m_{f1i\sigma}m_{f2i\sigma}. \tag{6}\]
\(\epsilon_{f}^{0}\) has been replaced by \(\epsilon_{f}\) because of renormalization of the \(f\) orbital energy derived later. Define operators in momentum by
\[\alpha_{f1k\sigma} = \frac{1}{\sqrt{N}}\sum_{i}\ e^{ik.R_{i}}m_{f1i\sigma}=\frac{1}{\sqrt{2}}(f_{k\sigma}+f_{-k\sigma}^{+}), \tag{7}\] \[\alpha_{f2k\sigma} = \frac{1}{\sqrt{N}}\sum_{i}\ e^{ik.R_{i}}m_{f2i\sigma}=\frac{i}{\sqrt{2}}(f_{k\sigma}-f_{-k\sigma}^{+}). \tag{8}\]
Similarly define \(\alpha_{d1k\sigma},\alpha_{d2k\sigma}\). Note that the coefficients in the linear combination of creation and annihilation operators (of the same spin) with opposite momenta \(k\) and \(-k\) in \(\alpha\)'s are \(\pm 1/\sqrt{2}\) for all \(k\) and that \(\alpha_{(f,d),(1,2),-k\sigma}=\alpha_{(f,d),(1,2),k\sigma}^{+}\) and that the \(\alpha\)'s are not Majoranas.
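These algebraic statements are easy to verify in a small matrix representation. The sketch below uses a Jordan-Wigner construction for the two on-site modes (an illustration, not part of the model itself) to check that the \(m\)'s of Eq. (5) are Hermitian and obey \(\{m_{a},m_{b}\}=\delta_{ab}\), i.e., that they are Majorana operators, in contrast to the \(\alpha\)'s:

```python
import numpy as np

# Jordan-Wigner matrices for the two on-site fermion modes (f and d) at fixed spin.
ann = np.array([[0, 1], [0, 0]], dtype=complex)   # single-mode annihilation operator
I2 = np.eye(2, dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

f = np.kron(ann, I2)   # f_{i sigma}
d = np.kron(sz, ann)   # d_{i sigma}, with the Jordan-Wigner string

def m_pair(c):
    """Real-fermion pair of Eq. (5): m1 = (c + c^+)/sqrt(2), m2 = i(c - c^+)/sqrt(2)."""
    return (c + c.conj().T) / np.sqrt(2), 1j * (c - c.conj().T) / np.sqrt(2)

ms = [*m_pair(f), *m_pair(d)]
for i, mi in enumerate(ms):
    assert np.allclose(mi, mi.conj().T)                              # Hermitian ("real")
    for j, mj in enumerate(ms):
        assert np.allclose(mi @ mj + mj @ mi, (i == j) * np.eye(4))  # {m_a, m_b} = delta_ab
print("all m operators are Hermitian and satisfy {m_a, m_b} = delta_ab")
```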
The materials under consideration are topological insulators [31; 32]. We find the properties to be consistent with experiments only if the materials belong to the class of such insulators [33] which break inversion \({\cal I}\) as well as charge conjugation \({\cal C}\) while preserving their product (as is required in a Lorentz-invariant system with time-reversal preserved). Then \((\epsilon(k)-\mu)=-(\epsilon(-k)-\mu)\). As shown below, stability of the solution requires \(\epsilon_{f}=\mu\). The
total effective Hamiltonian with these conditions is
\[H_{eff} = \sum_{k,\sigma}(\epsilon(k)-\mu)(\alpha^{+}_{dk1\sigma}\alpha_{dk1 \sigma}+\alpha^{+}_{dk2\sigma}\alpha_{dk2\sigma}) \tag{9}\] \[+ \frac{P^{2}}{2J}+(P+\overline{t})\alpha^{+}_{fk2\sigma}\alpha_{dk1 \sigma}+(P-\overline{t})\alpha^{+}_{fk1\sigma}\alpha_{dk2\sigma}.\]
Diagonalizing \(H_{eff}\) gives four bands with energies
\[{\cal E}_{(v,c)(1,2)}(k)=(\epsilon(k)-\mu)/2\mp\sqrt{(\epsilon(k)-\mu)^{2}/4+( \overline{t}\pm P)^{2}}. \tag{10}\]
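As a quick numerical cross-check of Eq. (10), the sketch below assumes that \(H_{eff}\) decouples, at each \(k\) and \(\sigma\), into the \((\alpha_{f2},\alpha_{d1})\) and \((\alpha_{f1},\alpha_{d2})\) \(2\times 2\) blocks read off from Eq. (9) with \(\epsilon_{f}=\mu\) (parameter values are illustrative):

```python
import numpy as np

def block_bands(eps_mu, tbar, P):
    """Eigenvalues of the two 2x2 blocks of H_eff (Eq. 9), with eps_f = mu."""
    out = []
    for hyb in (P + tbar, P - tbar):   # (alpha_f2, alpha_d1) and (alpha_f1, alpha_d2)
        h = np.array([[0.0, hyb], [hyb, eps_mu]])
        out.append(np.sort(np.linalg.eigvalsh(h)))
    return np.array(out)

def eq10(eps_mu, tbar, P):
    """Closed form of Eq. (10)."""
    x = eps_mu / 2.0
    branch = lambda delta: np.array([x - np.hypot(x, delta), x + np.hypot(x, delta)])
    return np.array([branch(tbar + P), branch(tbar - P)])

for e in np.linspace(-2.0, 2.0, 9):
    assert np.allclose(block_bands(e, tbar=0.3, P=0.3), eq10(e, 0.3, 0.3))
# At P = tbar the (P - tbar) block is already diagonal, with eigenvalues 0 and
# (eps - mu): the flat alpha_f band and the gapless alpha_d band of Fig. 1.
```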
_The principal results_: The ground state energy is calculated from \(<H_{eff}>\) and minimized with respect to \(P\) for a fixed \(\overline{t}\) and with a lower-cutoff \(-W\) for the d-band. This gives
\[P=J\int_{0}^{-W}d\epsilon\ \nu(\epsilon)\Big{(}\frac{(P+\overline{t})}{\sqrt{ \epsilon^{2}/4+(P+\overline{t})^{2}}}+\frac{(P-\overline{t})}{\sqrt{\epsilon ^{2}/4+(P-\overline{t})^{2}}}\Big{)}. \tag{11}\]
Figure 1: On the left: Juxtaposition of the correlated f-levels in mixed valence with respect to the conduction band reservoir shown in black and the chemical potential, in terms of parameters defined in Eq. (1). On the right, the schematic dispersion of the bands after the solution presented. In black are shown the valence and conduction band of the insulator made from hybridizing the \(f\) and \(d\) orbitals without introducing local pairing \(P\) of the \(f\) and \(d\). The others are the outcome of introducing \(P\). \(\alpha_{d}\) is the electromagnetically dark band derived from the d-band with half its degrees of freedom crossing the chemical potential and giving the magneto-oscillations without Zeeman splitting. \(\alpha_{f}\) is the non-dispersive electromagnetically dark flat band striding the chemical potential derived from the f-states. The bands in red labelled \(c\) and \(v\) are the hybridized \(f-d\) conduction and valence bands.
With a constant density of states \(\nu\) for the d-band for states with \(\epsilon\) near the chemical potential (the results do not change for a smooth variation of \(\nu(\epsilon)\)), and with \(\lambda\equiv\nu J\), this gives, in terms of \(r\equiv P/\overline{t}\),
\[\Big{(}\frac{1}{\lambda}-2\ln\frac{W}{|\overline{t}|}\Big{)}=\frac{1}{r}\Big{(} (r+1)\log|r+1|+(r-1)\log|r-1|\Big{)}. \tag{12}\]
\(r=0\) is one obvious solution. On examining the curvature, the solution \(r=0\) has a local minimum only for \(\lambda<\frac{1}{2(1+\ln(W/\overline{t}))}\). The other solutions are symmetric about \(r=0\). We can confine further discussion for \(r\neq 0\) to its positive values.
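The structure of these solutions can be seen numerically. The sketch below (illustrative values of \(r\) only) evaluates the right-hand side of Eq. (12), which is finite and equal to \(2\ln 2\) at \(r=1\) but has a divergent slope there, the non-analyticity invoked below:

```python
import numpy as np

def rhs(r):
    """Right-hand side of Eq. (12); continuous but non-analytic at |r| = 1."""
    r = np.asarray(r, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        val = ((r + 1) * np.log(np.abs(r + 1)) + (r - 1) * np.log(np.abs(r - 1))) / r
    return np.where(np.isclose(np.abs(r), 1.0), 2.0 * np.log(2.0), val)

print(np.round(rhs([0.5, 0.9, 0.99, 1.0, 1.01, 1.1, 1.5]), 4))
# The slope d(rhs)/dr ~ log|r - 1| diverges on both sides of r = 1, so a
# solution pinned at r = 1 sits exactly at the non-analyticity of Eq. (12).
```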
Eq. (12) must be supplemented by the stability condition for mixed-valence, that the average occupation of each of the \(f\) and the \(d\) states is stable at \(1/2\) per site. To that end the self-energy \(\Sigma_{f}(i,\omega)\) of the f-level is calculated. Noting that \(f_{i\sigma}^{+}f_{i\sigma}=im_{f1i\sigma}m_{f2i\sigma}\), and using Eq. (6) and Eq. (9), the real part of the self-energy is,
\[Re\ \Sigma_{fi}(\omega)=|P^{2}-\overline{t}^{2}|\ i<m_{d2i\sigma}m_{d1i\sigma}> =-\nu|P^{2}-\overline{t}^{2}|\log\big{|}\frac{W}{\omega}\big{|}. \tag{13}\]
With \(\epsilon_{f}^{0}=\mu\), the renormalized f-orbital energy satisfies \((\epsilon_{f}-\mu)=Re\ \Sigma_{f}(\epsilon_{f}-\mu)\). The solution is \((\epsilon_{f}-\mu)=-c\ \nu|P^{2}-\overline{t}^{2}|\), where \(c\) is a numerical factor of \(O(1)\). For the eigenvectors corresponding to Eq. (10), the stable insulating state requires the chemical potential to be at the mid-point of the minimal gap between \({\cal E}_{(v,1)}(k)\) and \({\cal E}_{(c,1)}(k)\) so that the particle and hole excitation energies are equal; only then is the compressibility well defined for a pure insulator. This happens only for \(\epsilon_{f}=\mu\). So we require \(P^{2}=\overline{t}^{2}\), or \(r=\pm 1\), i.e. at the non-analyticity of Eq. (12). This condition is achieved for
\[\frac{1}{\lambda}-2\log\frac{W}{2\overline{t}}=0=\frac{1}{r}((r-1)\log|r-1|). \tag{14}\]
\(r=1\) is a consistent solution of Eq. (14), at which we also achieve \(<n_{f}>=<n_{d}>=1/2\) and an incompressible insulating state. The first part of Eq. (14) gives the condition on \(\nu V\) for stable mixed-valence. We must also examine whether the system is stable about this solution. We find, by calculating the second derivative of the energy with respect to \(r\), that it is positive for \(r=1\) in the sizable range \(\pm e/2\).
The condition on \(P(T)\) at finite temperatures from minimizing the free-energy is
\[\frac{P}{J}=(P+\overline{t})\int_{E_{0+}}^{-W}dE_{+}\ \nu(E_{+})\frac{\tanh( \beta E_{+}/2)}{\sqrt{\epsilon^{2}+(P+\overline{t})^{2}}}+(\overline{t} \rightarrow-\overline{t},E_{+}\to E_{-}) \tag{15}\]
\(E_{+,-}\) refers to the eigenvalues in Eq. (10) with \((\overline{t}\pm P)\). \(E_{0+,-}\) are the lowest allowed values of \(E_{+,-}\), \(|\overline{t}\pm P|^{2}/W\), and \(\nu(E)\equiv\nu(\epsilon)\frac{d\epsilon}{dE}\). Noting that \((\epsilon^{2}+|\overline{t}\pm P|^{2})^{-1/2}\frac{d\epsilon}{dE_{\pm}}=E_{\pm}^{-1}\), together with \(\int_{a}^{W}dx\frac{\tanh(\beta x)}{x}=\ln(W/a)\) for \(\beta W\gg 1\), gives the condition \(r(T)=\pm 1\) or \(0\), independent of temperature, with the same conditions as at \(T=0\). Note that \(\overline{t}\) itself crosses over to \(0\) as the high-temperature fixed point is approached. There are also analytic finite-temperature corrections about \(T=0\), as for ordinary insulating and metallic states, for the two pairs of sets of states.
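The integral identity quoted above is easily checked numerically (illustrative numbers; the check is in the regime \(\beta W\gg 1\) with \(\beta a\gtrsim 1\)):

```python
import numpy as np
from scipy.integrate import quad

beta, a, W = 200.0, 0.05, 10.0                      # beta*a = 10, beta*W = 2000
val, _ = quad(lambda x: np.tanh(beta * x) / x, a, W)
print(val, np.log(W / a))                           # both ~ 5.298
```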
_Phase Fluctuations_: The amplitude \(|P_{i}|=\overline{t}\) at every site has been found in the mean-field calculation above. We must allow \(P_{i}=Pe^{i\phi_{i}}\) and calculate the static fluctuations \(<P_{i}^{*}P_{j}>\). The correlations can be derived from the fluctuation contribution of the term in \(H\) proportional to \(V\); the slowest decaying contribution is
\[<P_{i}^{*}P_{j}>=J^{2}P^{2}\sum_{\sigma,n}<d_{i\sigma}^{+}d_{j\sigma}>(-\omega_{n})<f_{i\sigma}^{+}f_{j\sigma}>(\omega_{n}). \tag{16}\]
The single-particle Green's function \(<d_{i}^{+}d_{j}>(\omega_{n})\) has a projection on the fermion band \(\alpha\), which is gapless at the chemical potential. However, \(<f_{i}^{+}f_{j}>(\omega_{n})\) has projections only on the bands \(v\) and \(c\), with a gap \(O(\overline{\Gamma})\). With the velocity \(\to 0\) in these bands over a large part of the Brillouin zone, \(<P_{i}^{*}P_{j}>\) decays exponentially, with a length of no more than a lattice constant. This result means that the phase of \(P_{i}\) is independent at every site.
There is neither long-range condensation nor a phase transition as a function of temperature in this model. \(|P|\) follows \(\overline{t}\) at any temperature and acquires a finite value through a cross-over in temperature. It is shown below that the superfluid density is \(0\), as in an insulator.
_Principal Physical Results_: (i) For \(P=\overline{t}\), Eqs. (10) give a fermion band with annihilation operators \(\alpha_{d1k\sigma}\) and non-dispersive fermions with annihilation operators \(\alpha_{f2k\sigma}\) without a gap at the chemical potential. The other two are gapped valence and conduction band states with eigenvectors which are linear combinations of \(\alpha_{d2k\sigma}\) and \(\alpha_{f1k\sigma}\). All are sketched in Fig. (1). Particle-hole excitations may be constructed from such single-particle states. Note that each of the single-particle excitations has half the normal weight of fermions, thus conserving the total degrees of freedom.
(ii) States at any \(k\) for \(\alpha_{d1k\sigma}\) and \(\alpha_{f2k\sigma}\) are equal combinations of creation and annihilation operators (with the same spin) of ordinary fermions at all \(k\), like the states in (equal-spin-pairing triplet) BCS superconductivity _at only_\(k=k_{F}\). They do not respond to any low energy perturbations coupling linearly to charge density, spin-density, current or spin-current
operators at any center-of-mass momenta. This excludes any low energy electromagnetic or spin-magnetic or neutron scattering response. The states with a gap are unequal combinations of creation and annihilation operators of ordinary fermions (except at the isolated points where the gap is \((\overline{t}+P)\)). They therefore give a response similar to that of an insulator with a band-structure with a direct gap given by \(2\overline{t}\) and a minimum indirect gap of about \(4\overline{t}^{2}/W\).
(iii) There is no superfluid density as a response to a vector potential. This may be seen from the fact that the zero-momentum optical transitions are only between the \(f\) and \(d\) projections of the bands. The spectral weight of these transitions, integrated over energy, may easily be seen to be the same as that between the insulating-state bands with \(P=0\). This means that the diamagnetic \(A^{2}\) term in the response is exactly cancelled by the paramagnetic response, just as happens in an ordinary insulator.
Optical experiments with threshold at \(2(2\overline{t})\) use states in \(c\) and \(v\) which are neutral so that the rise above threshold will be slow due to the "coherence factors". But indirect transitions at \(2\overline{\Gamma}\) by neutron scattering should show sharp features because of the nearly zero-velocity at the indirect gaps. These predictions should be tested.
(iv) _Magneto-oscillations_: Let us go back to the original Hamiltonian \(H\) of Eq. (1) and obtain Landau levels for the \(d\)-electrons on applying a vector potential \(A\). The quantum numbers in a field are \(k_{z}\) and the two-dimensional Landau level indices \(n\). The energy is now \(\frac{k_{z}^{2}}{2m}+(n+\frac{1}{2})\omega_{c}\), \(\omega_{c}=eH_{0}/mc\). As in Eq. (5), we can split the annihilation and creation operators labelled by these quantum-numbers into a pair of real operators. Then we have degenerate Landau levels made from bands which we may label \(\alpha_{dL1\sigma},\alpha_{dL2\sigma}\)_before_ we consider the hybridization with the \(f\)-orbitals.
Given their correlation energy, the \(f\) orbital wave-functions are unaffected by the magnetic field. Their local hybridization to the d-orbital Landau levels may now be considered, again constrained by the renormalization of \(t\) to \(\overline{t}\). Given the very high number of Landau levels for the d-orbitals, \(2t_{d}/g\mu_{B}H\) of \(O(10^{3})\) for 50 Tesla, the calculation of \(P\) in the new basis gives the same result as before because no assumption about the wave-functions of the states was made. Then with \(P=\pm\overline{t}\), one of the annihilation operators for Landau levels, \(\alpha_{dL1}\) or \(\alpha_{dL2}\), is unaffected by the hybridization, while the other develops gaps and Landau levels of the conduction band and of the valence band. Therefore in the presence of a field, we have Landau levels across the Fermi-surface whose occupation oscillates as the magnetic field is changed. These are sufficient conditions to have de Haas-van Alphen oscillations with amplitudes
which are given by the Lifshitz-Kosevich formula. The prediction here is that the amplitude of the oscillations will be \(\frac{1}{2}\) the normal value; this prediction can be tested by comparing with the amplitude on further increasing the field, where, as discussed below, the "insulator" turns into a metal. The situation must change when the cyclotron energy becomes larger than \(\overline{\Gamma}\), which is the minimal gap in the band-structure. I also show below that there should be no spin-splitting of the oscillations in the insulating state.
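The Landau-level count quoted above is simple arithmetic; a one-line check with an assumed d-band scale \(t_{d}\) of a few eV (an illustrative value, not one fixed by the model):

```python
mu_B = 5.7884e-5           # Bohr magneton in eV/T
g, H0 = 2.0, 50.0          # g-factor and applied field in Tesla
t_d = 3.0                  # assumed d-band hopping scale in eV (illustrative)
print(2 * t_d / (g * mu_B * H0))   # ~1e3 Landau levels, as quoted
```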
(v) _Zeeman effects_: In a magnetic field \(H_{0}\), the Zeeman Hamiltonian is
\[H_{Z}=-\mu_{B}\sigma\cdot H_{0}\big{(}\sum_{i}g_{f}f_{i\sigma}^{+}f_{i\sigma}+ g_{d}d_{i\sigma}^{+}d_{i\sigma}\big{)}. \tag{17}\]
Consider first the effects perturbatively on the bands \(\alpha_{dk2\sigma}\) and \(\alpha_{fk1\sigma}\) crossing the chemical potential. It is easy to check that the uniform magnetic susceptibility of these bands is 0 because, integrated over \(k\), they have equal linear combinations of hole and electron states with the same spin. The Fermi-wave-vector for the up and down spin remains the same. There can then be no spin-splitting of the magneto-oscillations.
The gapped bands \(v_{k\sigma}\) and \(c_{k\sigma}\) are also similar combinations of up spins and of down spins, so their linear susceptibility contribution is also zero. The finite magnetic susceptibility in the experiments can be traced to details of the spin-orbit splitting of the f-orbitals and the resultant van Vleck susceptibility. Such details are absent in the model above. The lack of spin in these states implies, of course, a lack of spin-orbit coupling and therefore no torque on applying magnetic fields at an arbitrary direction with respect to the crystalline axes.
The perturbative calculation breaks down as \(g\mu_{B}H_{0}\) is increased to the order of the gap \(\overline{\Gamma}\). Now we consider the relation Eq. (2) between \(\overline{\Gamma}_{\sigma}\) and \((1-n_{f,-\sigma})\), which marks the end of the Kondo or mixed-valence renormalizations. As argued earlier [18], the occupation of the minority \(f\)-spin must go to 0 and the occupation of the majority \(f-\)spin \(\to 1/2\), maintaining the occupation required at mixed-valence by the condition that \(V\gg t\). This marks a transition [18] from the insulator to a (polarized) metal where \(\overline{t}_{\downarrow}\to 0\) so that the hybridization of the wide band with this spin disappears. Near this point, the self-consistency condition continues to give \(P_{\sigma}=\pm\overline{t}_{\sigma}\) on minimizing the energy with respect to \(P_{\sigma}\) for a fixed \(\overline{t}_{\sigma}\). Then we can write \((\overline{t}_{\sigma}+P_{\sigma})=\zeta(H_{0}-H_{c})\), where \(H_{c}\) is the field for the transition to the metallic state, so that the dispersion of the band at the chemical potential for one of the spins has \((\zeta(H_{0}-H_{c}))^{2}\) replacing \((\overline{t}+P)^{2}\). The _increase_ of magnetization for
\(H_{0}>H_{c}\) then starts with
\[M\propto\sqrt{2H_{c}(H_{0}-H_{c})}. \tag{18}\]
A sharp change in the magnetization just above the insulator to metal transition in a field was reported [9; 10] and appears to be consistent with Eq. (18). Also, no Zeeman splitting of oscillations was observed in the insulating state while it appears in the metallic state [9; 10]. These results are a strong test of the theory given here. Interestingly, the torque in the experiments increases dramatically above \(H_{c}\). I attribute this to the re-appearance of spin and therefore spin-orbit coupling in the low energy states at the transition to the metal.
(vi) One necessary aspect of the theory is the set of \(\alpha_{fk2\sigma}\) states localized at the chemical potential. They are dark but have a free-energy independent of temperature, so that there is a ground-state entropy. The magnitude of this in the simple model is large, of \(O(R\ln 2)\). If residual interactions remove their entropy, it must be at very low temperatures.
Alternative theoretical efforts on this problem may be found in [34; 35; 36; 37; 38; 39].
_Summary_: The results in this paper come from the identity \(P=\pm\overline{t}\), discovered here from the solution of a model of strong correlations with more than one orbital at a site and with \(\mathcal{I}\) and \(\mathcal{C}\) breaking while the product is preserved [33]. A pair of sets of the resulting fermions are equal linear combinations of particles and holes with the same spin and opposite momentum at all momenta. The gap-less fermions, with a dispersion characterized by a high velocity, do not respond electro-magnetically but respond in probes of their free-energy. The other pair are excitations with gaps with normal characteristics, as in insulators. The results bear detailed comparison with experiments and have several predictions. Interesting questions remain on what other classes of models and their symmetries lead to the dark states and fractionalization of fermions discovered here quite so simply and systematically.
_Acknowledgements_: Thanks are due to Lu Li, John Singleton and Suchitra Sebastian for discussion of their experimental data, and the former two for suggestions on the manuscript. I also wish to thank Rahul Nandkishore for introducing me to Ref. [33], and James Analytis, Robert Birgeneau and Joel Moore for arranging my stay at Berkeley, where this work was done. |
2305.05579 | Radar Altimeter Redesign for Multi-Stage Interference Risk Mitigation in
5G and Beyond | The radar altimeter is installed on most 14 CFR Pt 25 category aircraft,
which are applicable to passenger travel and represent most airline traffic.
The radar altimeter system is highly accurate and reports the height above the
terrain. It plays a significant role in the take-off, approach, and landing
phases of the applicable aircraft. In critical conditions, including reduced
visibility, proximity to terrain, collision avoidance, and autoland procedures,
the accuracy of radar altimeters is crucial to the safety of aircraft. This
study aims to address the inappropriate behavior of the susceptible system that
may cause essential safety concerns with unknown interoperability and
operational impacts. We design and verify a strategic approach to mitigate the
risks of potential airborne interference to a radar altimeter due to the
coexistence of a 5G and future G signal, especially with the growing demand for
the Space Air Ground Integrated Network (SAGIN). This study details a design
change to a pre-existing radar altimeter system, and the process necessary to
gain certification approval following this change is analyzed. We address the
certification aspects from a TSO perspective resulting from changes made to a
system post-certification. Artifacts, as defined in the FAA Project Specific
Certification Plan template, including the Change Impact Analysis, Means of
Compliance, and Test Plans, which are mandated by the certification authorities
and requested by aircraft manufacturers and operators to ensure a level of
compliance during the engineering cycle, have been adhered to. | Jarret Rock, Ying Wang | 2023-05-09T16:14:57Z | http://arxiv.org/abs/2305.05579v1 | # Radar Altimeter Redesign for Multi-Stage Interference Risk Mitigation in 5G and Beyond
###### Abstract
The radar altimeter is installed on most 14 CFR Pt 25 category aircraft, which are applicable to passenger travel and represent most airline traffic. The radar altimeter system is highly accurate and reports the height above terrain. It plays a significant role in the take-off, approach, and landing phases of the applicable aircraft. In critical conditions, including reduced visibility, proximity to terrain, collision avoidance, and autoland procedures, the accuracy of radar altimeters is crucial to the safety of aircraft.
Effective January 2022, the permission for 5G deployment via the FCC, FAA, and other relevant parties generates safety concerns for the aviation industry, with multi-level impacts that include aircraft manufacturers, avionics equipment manufacturers, operators, air traffic controllers, certification authorities, and passengers. The 5G network operates in a frequency range that coincides with that previously dedicated to the radar altimeter system. The FAA has advocated the use of RF filters to purify the signals received by the radar altimeter. With shared frequency usage, considerations must be given to the interoperability of the systems. Furthermore, a hierarchy must be established regarding the criticality of the systems and which ones are susceptible versus offending. Inconsistent and inaccurate radar altimeter values communicated throughout the avionics architecture resulting from spurious 5G signals from cellular communication network use can be catastrophic.
This study aims to address the inappropriate behavior of the susceptible system that may cause essential safety concerns with unknown interoperability and operational impacts. We design and verify a strategic approach to mitigate the risks of potential airborne interference to a radar altimeter due to the coexistence of a 5G and future G signal, especially with the growing demand for the Space Air Ground Integrated Network (SAGIN). This study details a design change to a pre-existing radar altimeter system, and the process necessary to gain certification approval following this change is analyzed. We address the certification aspects from a TSO perspective resulting from changes made to a system post-certification. Artifacts, as defined in the FAA Project Specific Certification Plan template, including the Change Impact Analysis, Means of Compliance, and Test Plans, which are mandated by the certification authorities and requested by aircraft manufacturers and operators to ensure a level of compliance during the engineering cycle, have been adhered to.
Radar altimeter, 5G, Certification, TSO, Aircraft Safety
## I Introduction
The emergence of 5G network technology poses a safety concern for the aviation industry, particularly for the radar altimeter (also known as radio altimeter, rad alt, low-range altimeter, or RALT) system. This system is susceptible to RF interference due to 5G operating in close proximity to a dedicated aeronautical radio navigation frequency band. Of particular concern is the fact that 5G telecommunications operations occur in the range of 3.7 to 3.98 GHz [1]. Radar altimeters are precision systems that operate in the 4.2 to 4.4 GHz range and provide critical flight data to aircraft. An International Air Transport Association (IATA) and International Federation of Air Line Pilots' Associations (IFALPA) document states that 'any failures or interruptions of these sensors can therefore lead to incidents with catastrophic outcomes, potentially resulting in multiple fatalities' [2]. At the time of writing this study, only preliminary suggestions existed for how to protect radar altimeters, and several manufacturers had not incorporated any such protection into their products. A proactive approach is needed to mitigate possible erroneous radar altimeter data resulting from unwanted interference while still maintaining airworthiness, and that is the focus of this study.
Radar altimeter systems are tasked with providing vertical height above terrain. This is an absolute value and is not influenced by factors such as atmospheric pressure and temperature. This is in contrast to baro-corrected altitude, which allows for pressure setting adjustments via a Kollsman Window as the aircraft transitions between air masses of varying pressure characteristics. Similarly, GPS data being used for landing applications is further refined by Wide Area Augmentation System (WAAS) and Receiver Autonomous Integrity Monitoring (RAIM) to guarantee a given accuracy [3]. Radar altimeters, however, typically have no compensation during their active mode and are thus dependent on a sanitized RF environment to guarantee accuracy. Given that radar altimeters are focal in providing data to the flight deck during landing phases when terrain separation is critical, safety concerns surround the reliability of this data in the presence of 5G emissions. Radar altimeters may be as accurate as \(\pm 3ft\)[4][5].
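For scale, a minimal sketch assuming the generic two-way ranging relation \(h=c\tau/2\) (an idealization, not any particular vendor's implementation) shows that a \(\pm 3\) ft accuracy corresponds to resolving the echo delay to roughly 6 ns:

```python
C = 299_792_458.0   # speed of light, m/s
FT = 0.3048         # metres per foot

def round_trip_delay(h_m: float) -> float:
    """Round-trip echo delay for a height h above terrain: tau = 2h/c."""
    return 2.0 * h_m / C

print(f"delay at 100 ft AGL : {round_trip_delay(100 * FT) * 1e9:6.1f} ns")
print(f"delay per +/-3 ft   : {round_trip_delay(3 * FT) * 1e9:6.1f} ns")
```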
Historically, radar altimeters have been afforded frequency spacing from other systems, hence protecting the integrity of the system. According to DefenseNews, the C-Band is, "...relatively quiet... For decades, this made the neighboring 4.2-4.4 GHz frequency a perfect place for the operation of radar altimeters...' [6]. Due to the protected RF environment that radar altimeters have operated within, there is often no protection incorporated in the design of most legacy units against co-existent frequency usage. There has previously not been any regulatory requirement to protect against 5G. As stated in a Special Airworthiness Information Bulletin, "TSO-C87A does not provide criteria for compatibility with adjacent band operations, including potential impacts associated with
wireless communications system deployments" [7]. There is some collateral damage to consider. The potentially misleading data are not isolated to the radar altimeters in a monitor-only context. Architecturally, the radar altimeter provides altitude input for several other flight-critical systems. That is to say, an erroneous altimeter value is more than just a visual discrepancy on a pilot display; instead, the error can propagate into the Terrain Awareness Warning System (TAWS), the auto-land system (AFCS), and the Traffic Alert and Collision Avoidance System (TCAS), to name a few. Figure 2 shows an example of a systems architecture with the rad alt feeding consumers of its data.
In a civilian application, it should be noted that the radar altimeter is "...the only sensor that provides direct measurement of the clearance height of the aircraft over the terrain or other obstacles, and failures of these sensors can therefore lead to incidents with catastrophic results resulting in multiple fatalities" [1]. Radar altimeters are featured on both civil and military aircraft installations and are poised to be a key system in autonomous flight vehicles. An effective solution to protecting radar altimeter data can protect lives in aviation.
This study will present one strategy that may be used by a radar altimeter manufacturer to reduce the susceptibility to 5G RF interference of currently fielded radar altimeter products. Further, the study will highlight the engineering life cycle associated with making modifications to existing products that have been previously certified for aircraft use via a Technical Standard Order (TSO). The intellectual contribution from this study can be summarized as:
* Detailing the macro-process for revising an existing TSO'd product. Its benefits include a significant reduction in both Engineering effort and Certification approval time based on a delta certification strategy.
* Creating the PSCP. Its benefits include a summary of the elements that create the PSCP and a definitive process for determining the Major or Minor classification of modifications to a sample radar altimeter.
* Description of implementation of the solution. Its benefits include various means of demonstrating Compliance with a still evolving regulatory baseline.
* Description of the equipment verification process. Its benefits include exploring various verification means and their suitability to the regulations and standards that the equipment manufacturer is seeking compliance with.
The rest of this paper is organized as follows. Section II discusses the background and related work. Section III describes the overview and system design of the radar altimeter redesign and risk mitigation following the product cycle. Section IV provides details on the radar altimeter redesign, followed by Section V, which verifies the design and the certification program for potential deployment. We discuss the current limitations and the potential for further collaborative work in Section VI. Finally, the conclusion and future work are presented in Section VII.
## II Background and Related Work
The Radio Technical Commission for Aeronautics (RTCA), in conjunction with the Aerospace Vehicle Systems Institute, has produced a paper titled Assessment of C-Band Mobile Telecommunications Interference Impact on Low Range Radar Altimeter Operations. The RTCA Special Committee 239 formed a 5G Task Force in April 2020 to lead the research efforts that would produce the data featured in the paper. It was determined by the aviation industry that there was a need to characterize the performance of fielded radar altimeters operating in the presence of RF interference from 5G networks in the band of concern. Additionally, the aviation industry wanted to understand the risks of 5G and the potential impacts on continued safe aviation operations [1]. Technical information was sourced from both the mobile industry as well as radar altimeter manufacturers. One conclusion of the RTCA document is that the results presented within the "... report reveal a major risk that 5G telecommunications systems in the 3.7-3.98 GHz band will cause harmful interference to radar altimeters on all types of civil aircraft including commercial transport airplanes: business, regional and general aviation airplanes; and both transport and general aviation helicopters" [1]. The concern extends beyond national borders: IATA and IFALPA have jointly released a document to address their concerns related to 5G. Many of their findings are consistent with those of the RTCA. "Radar altimeters are deployed on tens of thousands of commercial and general aviation aircraft as well as helicopters worldwide. The radar altimeter is one of the most critical components to an aircraft's operations... Undetected failure of this sensor can therefore lead to catastrophic results..." [2].
Fig. 1: Radar Altimeter Transmit and Receive Signal Propagation
Fig. 2: System Block Diagram (Reproduced from: Honeywell, 2021)
In another study on the effects of EMI upon aircraft avionics, it is stated that, "The adverse impact of a 5G mobile handset cannot be understated. This underscores the need to conduct comprehensive testing on EMI attack scenarios..." [8]. There is mention of the fact that mobile phones are a threat to aircraft navigational systems due to the high frequency, and hence short wavelength, of their signals. Given that 5G phones operate at an even shorter wavelength, there is a concern that the signals are more intense, as they travel shorter distances and do not have the opportunity to attenuate. These short-travelling signals can conflict with aircraft navigation signals that are propagating in the immediate vicinity of the aircraft. The article contends that there are at least two types of radiative interference from a cellular phone to be considered: the fundamental emissions necessary to operate the phone in conjunction with its base station (send and receive signals) and, less predictably, the spurious emissions, which are unwanted but may also fall within the C-Band. The fundamental emissions occur outside the primary bandwidth of the radar altimeter, hence the danger is blocking interference. Spurious emissions are within the bandwidth of the radar altimeter and therefore either desensitize the receiver or lead to a false altitude determination. The image in Figure 3, reproduced from that study, shows the effect of both the fundamental and spurious emissions on the radar altimeter band.
## III Overview of Product Life Cycle
The proposed method is to present a case study of a strategy to address 5G concerns with a rad alt product. Focal to this is the addition of a bandpass filter, at the recommendation of the FAA, to mitigate interference [9]. The study addresses the certification process associated with revisions to an existing TSO and re-integration into the aircraft.
The various stages of the product life cycle have long been established and have roots tracing to the economist Raymond Vernon in 1966 [10]. This study assumes the four stages as shown in Figure 4 to support the radar altimeter product. After an assessment by the Product Management Team (PMT) that the radar altimeter product is in a profitable stage such as Maturity, it can justify the expenses associated with re-Engineering and re-Certification.
Communication during the product life cycle is key in ensuring that information is flowing appropriately between the various stakeholders of the TSO process. Some of the key communication paths are provided in Figure 5.
"Airworthiness Directives are legally enforceable regulations issued by the FAA in accordance with 14 CFR part 39 to correct an unsafe condition in a product" [11]. As such, The FAA issued AD 2021-23-12 and AD 2021-13 for airplanes and helicopters respectively to address, "... a determination that radio altimeters cannot be relied upon to perform their intended function if they experience interference from wireless broadband operations in the 3.7-3.98 GHz frequency band (5G C-Band)" [11]. The ADs required operators to revise their AFM to incorporate limitations prohibiting certain operations that are radar altimeter dependent when there is a known presence of 5G signals as communicated by Notices to Airmen (NOTAM) [12] Similarly (Safety Alert for Operators) SAFO 21007 was issued to operators to inform that certain Instrument Approach Procedures are restricted if by NOTAM they are "affected by 5G C-Band interference, and prohibited by the ADs unless the operator has an FAA-approved AMOC" [12].
With the various operational restrictions and the threat of invalidation of a TSO for a product that provides aircraft critical data, the longevity of the product is in question. In an attempt to preserve the product, the TSO must be restored. Following analyses by PMT, Engineering, Production, and Certification teams, a path forward is defined. In the case of
Fig. 4: Business Model and Product Life Cycle
Fig. 5: Typical Communication Flows
Fig. 3: Fundamental and Spurious 5G Emissions on Radar Altimeters
this particular product, it is assumed that its planned end-of-life is still distant and that there is a need to maintain the airworthiness of aircraft that utilize this radar altimeter. A compliance strategy is then defined by the Certification team, and Engineering is tasked to create, implement, and verify that the proposed solution meets certification intent. This plan to revise an existing product is communicated to the FAA by the TSO holder, the radar altimeter manufacturer, via a Project Specific Certification Plan (PSCP); refer to Figure 5 and Figure 6. With the oversight of the Certification Authority, the manufacturer establishes a Certification Package Flow. The Certification Authority confirms with the manufacturer that they can Find Compliance to the Affected Regulations via the artifacts generated from the Certification Package Flow. Figure 6 illustrates the Certification Package Flow relevant to this case study.
## IV Radar Altimeter Re-Design and Impacts Analysis
An output or deliverable from the PSCP is the Change Impact Analysis (CIA). The CIA is used to identify the proposed changes for the equipment and to determine the impact to certification. During the Engineering effort to create the CIA, Figure 7 shows some of the details that need to be considered.
**Declaration of the Change** In this case study, the Declaration of the change is a 'Minor Hardware Change'. The distinction must be made as to whether the change is Minor or Major, as there is a direct impact on the certification strategy or path; see Figure 6. Major changes are subject to more development, verification, and certification rigor. The intended modification does not affect the form, fit, or intended function of the radar altimeter. The hardware modification is merely incidental to the operation of the radar altimeter and requires no additional training for the operator. It is classified as Minor; refer to the applicable regulatory definitions that classify these changes [13]. This determination is guided by 14 CFR 21.619. An assumption is that this modification does not require the full complement of MOPS testing; thus, it can be rationalized that the addition of the filter is not substantial enough to warrant a "...complete investigation to determine compliance with a TSO..." and therefore cannot be a Major change [13].
**Project Schedule** The estimated project completion time is defined at the start of the project. Schedules, however, are subject to factors such as finances, available Engineering resources to implement the change, Production lead times on components such as the filter, Production assembly times, laboratory availability to simulate the 5G environment, reconfiguring the aircraft for testing, FCC approval to conduct flight tests (as the radar altimeter is emissive), and the FAA's processing time for TSO approvals.
**Affected Part Number** The Affected PNs are identified at the time that the life cycle evaluation is conducted. Given that there is a Hardware change associated with the modification, there is an evolution of the PN.
**Modification Description** The primary modification is to incorporate the changes as proposed by the FAA in their guidance on 5G [9]. This involves the installation of a bandpass filter on the receiver partition of the radar altimeter. The filter will be installed prior to the RF input side of the rad alt.
There are two Stopbands, one on either side of the desired Passband. The theory is that only the intended frequency range of 4.2-4.4 GHz will pass through the filter, with the undesirable 5G frequencies of 3.7-3.98 GHz being rejected by the Stopbands. The filter is also proposed to prevent spurious emissions from influencing the Passband.
It must be determined if there is an insertion loss penalty due to the filter installation. If so, compensation must be made for the unwanted signal attenuation, either in software or through a further hardware change.
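To make the trade-off concrete, the sketch below evaluates a hypothetical analog Butterworth band-pass prototype (an illustration only; the order, topology, and component values of a qualified avionics filter would differ). The narrow 220 MHz margin between the 3.98 GHz upper 5G edge and the RALT band is what forces a steep, high-order filter skirt in practice:

```python
import numpy as np
from scipy import signal

# Hypothetical 5th-order analog Butterworth band-pass: pass ~4.0-4.6 GHz.
wp = 2 * np.pi * np.array([4.0e9, 4.6e9])           # passband edges, rad/s
b, a = signal.butter(5, wp, btype="bandpass", analog=True)

probes = {"5G low": 3.70e9, "5G high": 3.98e9, "RALT low": 4.20e9, "RALT high": 4.40e9}
_, h = signal.freqs(b, a, worN=2 * np.pi * np.array(list(probes.values())))
for (name, freq), hi in zip(probes.items(), h):
    print(f"{name:9s} {freq / 1e9:.2f} GHz : {20 * np.log10(abs(hi)):7.1f} dB")
# Rejection at 3.98 GHz is modest for this low-order prototype; a production
# part needs much steeper skirts, at the cost of the insertion loss noted above.
```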
It is typical that several hardware and/or software changes will be incorporated in a single TSO update. This is due to the practice of reviewing Open Problem Reports (OPRs) and attempting to resolve them in bulk if there is another modification in process. Figure 7 shows that an OPR review is required as part of the CIA process. This case study does not include modifications beyond that required to incorporate the filter.
**Affected Regulations/Requirements/Standards** An analysis must be conducted of the applicable regulations. The
Fig. 8: Pre and Post Modification of Rad Alt
Fig. 6: Sample Certification Package Flows
Fig. 7: CIA Decision Inputs
hypothetical radar altimeter in this study was previously granted TSO-C87 approval, which is the FAA's standard for radar altimeter performance. The MOPS, DO-155, outline the minimum operational performance needed for a piece of equipment to meet TSO approval. The MOPS are therefore Affected Requirements and Standards and, since issued under 14 CFR Part 37, Affected Regulations [14]. Advisory Circular AC25-7D provides guidance for flight testing of transport category airplanes and thus is also an Affected Requirements document [15]. There are self-imposed requirements, often intended to provide a competitive edge against other products within the market. These self-imposed requirements may be more restrictive than industry or regulatory requirements. This particular product, being a fielded product, has a Product Specification; this document defines a level of accuracy of the radar altimeter that must be maintained post-modification. Table I shows the expected accuracy of the radar altimeter.
Consideration must be given to non-regulatory requirements that are sometimes imposed by a customer, typically the aircraft manufacturer or operator; the radar altimeter in this case study is not subject to such customer requirements, and they will not be discussed further.
**Compliance Strategy and MOC** The purpose of Compliance is to associate a given product with a performance standard. In this study the dominant standard is TSO-C87. Due to the harsh environment that avionics equipment must operate in, there is a need to prove satisfactory operation when subjected to elements such as vibration, shock, temperature gradients, humidity, fluids, sand, dust and EMI [16]. A sample of possible Affected Regulations/Requirements/Standards is presented in Table II along with the proposed MOCs. The MOC is the technique that the RALT manufacturer will use to Show Compliance to the standards. The table further shows that not only are different MOCs used but that they may be used in combination.
**Verification Methods** The two verification methods chosen for discussion are Analysis and Tests.
Analysis as a MOC refers to the use of design documentation such as engineering drawings, product specifications, and system descriptions to prove that a given product meets the intent of, or is Compliant with, a particular requirement. An engineering drawing showing the updated architecture of the RALT, including the bandpass filter, can be used as partial evidence to Show Compliance to the requirement that a filter must be installed to protect against interference. Similarly, a vendor-provided product specification for the filter itself can be used to show how its performance is adequate to protect against the specific interference frequencies by referencing the Stopbands. It is common practice to use product specification documentation to Show Compliance with environmental conditions, as the vendor will typically include details of the environment that their product is designed to work in.
Tests are an actual demonstration of the performance of the product being evaluated during scripted scenarios. There is a setup procedure for the test, there are the steps to be performed, and then there is the test outcome. Tests as a MOC are further reduced to Laboratory Tests and Flight Tests; aircraft ground testing is regarded as Flight Tests.
**Return to Service** A plan needs to be constructed as to how the modified RALTs will re-enter service. The typical means is a Service Letter (SL) for less critical items or a Service Bulletin (SB) to communicate more critical aircraft and equipment updates to the operator. Given that the presence of 5G interference has generated safety concerns, it is appropriate to issue a Service Bulletin. A date is defined by which the SB needs to be complied with, or else grounding of the aircraft may occur. The SB is transmitted to all owners and operators of the affected aircraft on a serialization basis. The SB contains a history of the problem and a set of instructions for remedying the concern. The SB for this issue will detail the removal of the legacy, non-compliant RALT and provide detailed instructions for installation and checkout of the modified RALT.
Though the modified RALT will have a TSO, and therefore FAA approval, there is a different certification path to install the RALT and integrate it with other avionics components for use in flight. Another obstacle to be overcome is that the revised RALT will have a part number different from that which it is replacing and therefore will void the aircraft's Type Certification (assuming no approved substitution via the Illustrated Parts Catalog (IPC)). A common solution to this is to apply for a Supplemental Type Certificate (STC), which, if approved, modifies the original aircraft Type Certificate.
## V Re-Design Assessment and Verification
**Analysis:** Analysis has been selected as one of the MOC for Showing Compliance; refer to Table II. It is expected that the RALT performs satisfactorily when exposed to intentional and spurious emissions from adjacent authorized spectrum users. The specification for a suitable filter will reveal that attenuation does not occur between 4000 MHz and 4600 MHz, creating a Passband consistent with the required RALT range of 4.2-4.4 GHz (4200-4400 MHz). The Stopbands are to be defined below and above the operational rad alt range.
**Tests:** Laboratory tests are planned as a MOC to Show Compliance. Indeed, laboratory tests provide a means to simulate direct exposure to a 5G environment, with the test engineer having the ability to vary the intensity of the interference in a controlled manner. The modified RALT is sent an input signal via the Rx RF port to simulate a known altitude. This altitude is confirmed via a recording of the outputs of the RALT. The test is then repeated with the input signal distorted by frequencies representative of 5G, and the RALT output of computed altitude is recorded.
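A minimal numerical sketch of such a bench scenario, assuming a generic FMCW de-chirp architecture with hypothetical sweep parameters (no vendor implementation is implied): the altitude is read off the beat-spectrum peak, a strong in-band spur stands in for interference that survives the front end, and attenuating the spur stands in for the added filter:

```python
import numpy as np

c, B, T, fs = 3.0e8, 150e6, 2e-3, 4e6          # sweep bandwidth/period, ADC rate (assumed)
k = B / T                                      # chirp slope, Hz/s
t = np.arange(0, T, 1 / fs)
h_true = 200.0                                 # simulated height, metres
beat = np.cos(2 * np.pi * k * (2 * h_true / c) * t)   # ideal de-chirped ground return

def measured_height(sig):
    """Altitude from the dominant beat-spectrum peak: h = f_beat * c / (2k)."""
    f = np.fft.rfftfreq(sig.size, 1 / fs)
    peak = f[np.argmax(np.abs(np.fft.rfft(sig * np.hanning(sig.size))))]
    return peak * c / (2 * k)

spur = 3.0 * np.cos(2 * np.pi * 2.5e5 * t)     # strong interference tone in the beat band
print(measured_height(beat))                   # ~200 m : clean case
print(measured_height(beat + spur))            # ~500 m : spur captures the peak tracker
print(measured_height(beat + 0.01 * spur))     # ~200 m : attenuated ("filtered") case
```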
Flight tests provide the opportunity to expose the units under test to a real-world environment and allow the equipment to be used in a manner consistent with the intended application.
Due to the potentially hazardous nature of flight test activities, a Safety of Flight Letter is required of the Engineering team responsible for the modification of the rad alt. The letter provides evidence that the modified unit has been subjected to laboratory-based testing and does not present a hazard of fire, explosion or EMI to the basic aircraft systems when powered on. The Safety of Flight Letter also lists the Serial Number of the RALT to be tested as this is validated at the time of installation in the test aircraft.
As the modified radar altimeter is not TSO'd at the time of test execution, an alternate means of approval needs to be used to install and fly the aircraft with unapproved equipment. The aircraft is placed in an Experimental Category for purposes of Research and Development and, at the time of Certification, to Show Compliance using a Conformity process [17]. This is a special aircraft Category issued by the FAA for the purpose of flight testing.
Per FAA guidance, the rad alt system with an added filter should perform its function of altitude measurement with accuracy similar to that of the unmodified rad alt. A dual radar altimeter installation is recommended to enable this comparison. Performance demonstrations of the two rad alts will ensure performance consistent with the legacy product. For this type of performance-based requirement, Flight Tests are an appropriate MOC.
The radar altimeter has both transmit and receive components. Given that the flight test will require this bi-directional signal propagation over a geographic expanse, prior approval from the FCC is required. An application must be filed in advance.
For new products, the FAA must approve the flight test campaign or may delegate that approval. The FAA also reserves the right to observe any Certification flight test, even if approval is delegated. The approval follows their review of the Flight Test Plan.
The Flight Test Plan is the formal means of communicating to the FAA the details of the flight test campaign.
**Supplemental Type Certificate:** The RALT manufacturer provides the aircraft manufacturer with technical and certification details to show that the replacement RALT meets or exceeds the performance of the previous RALT and therefore does not compromise the aircraft. Installation drawings for the RALT need to be updated; however, they are contained in a separate engineering package for the STC, which maintains a clear path of independence from the original Type Certificate. Compliance and Verification activities are also performed at the aircraft level to prove a successful integration of the modified RALT into the aircraft. This standalone STC package, once approved by the Regulatory Authority, then allows the use of the modified RALT in the aircraft. The STC process is critical in the return to service of the aircraft, as it serves as the final level of approval for flight with equipment that differs from that of the original aircraft Type Design.
At the time of authoring this document, the lab testing to support the modified radar altimeter had not been completed; only preliminary flight tests had been performed. However, the test results suggest that the modified radar altimeter performs in a manner consistent with that of an unmodified unit.
## VI Discussion
Beyond the current results in this paper, ongoing work includes an increased focus on a radar altimeter design that is less susceptible to EMI and on system testing that exceeds the minimum requirements for certification. Specifically, robustness testing should target exposing the system to corner cases that are atypical of usual operation, and the regulatory impact of updating the existing Minimum Operating Performance Standards (MOPS) DO-155 to specifically address the risk of 5G interference should be assessed.
There is much work to be accomplished in the near future to address radar altimeters and concerns about 5G. The regulations must be revised to reflect the new performance requirements and the minimum standards that the radar altimeters must be produced to. The RTCA states, "In all cases (TSO-C87, DO-155, and ED-30), there have been no specific requirements regarding interference susceptibility or receiver masks. The latest update to requirements was 1980 - what did the RF spectrum look like then?" [18]. Given that radar altimeters are inherently wideband systems, they are potentially more susceptible to signal blocking than other types of receivers [18]. As a result, the aviation industry should continue to expand its research on using passive devices, such as filters, that protect the radar altimeter transceiver. Other alternatives can be considered. In Japan, placement of 5G base stations is to be avoided within 200 m of the approach path of aircraft. According to Japanese scientists, this mitigation is effective in avoiding the blockage of radar altimeter signals [19]. Japan has also experimented with 5G antenna configurations and demonstrated how beam formation can impact interference on radar altimeters [20]. Yet another alternative is increased band separation between 5G and radar altimeters. This is the approach employed in Europe. European 5G utilizes 3.4 to 3.8 GHz, whereas US deployments of 5G use 3.7 to 3.98 GHz, thus reducing the margin from the 4.2 GHz lower limit of radar altimeters [21]. At present, most 5G efforts assume power levels typical for 5G systems in the US. Additional research, however, is needed to ensure global compatibility as 5G deployment in Europe, for example, is subject to higher power levels, up to 1.5 times higher. Indeed, it is expected that domestically, the power levels of 5G will also increase eventually. The radar altimeter and 5G conflict
highlights another area of concern, that of complacency. The radar altimeter technology has not advanced from a security or threat perspective, yet its RF environment is becoming increasingly threatened. A review should be conducted of other known critical avionics systems that have similarly benefited from quiet RF environments and therefore do not have protections designed into them.
The FAA's recommended solution of an RF filter is effective against 5G interference. All development and production product names and part numbers within this study have been altered to be disassociated from any manufacturers' product line. Any similarities are unintentional.
## VII Conclusions and Future Research
This study presents the methods that may be used by a radar altimeter manufacturer to increase the robustness of a hypothetical RALT against 5G interference. The incomplete test results of this study suggest that the addition of a bandpass filter, intended to reject frequencies outside the 4.2 to 4.4 GHz band, does not negatively impact the basic radar altimeter function. Though the laboratory testing is incomplete at this time, preliminary test results are commensurate with those observed in flight tests. As the unmodified radar altimeter is an approved unit with a TSO, it is proposed in this study as a control. The accuracy of the TSO'd unit meets the regulatory requirements in Table II.
There is a history of aircraft accidents resulting from radar altimeter anomalies. The crash of Turkish Airlines Flight 1951 was caused primarily by the aircraft's automated reaction triggered by a faulty radio altimeter. This accident resulted in fatalities. On December 25th, 2012, in Kazakhstan, an Antonov 72 crashed, killing all 20 onboard. Following an autopilot failure, the captain "decided to fly the plane manually. Two minutes and 40 seconds after takeoff, the radio altimeter also failed. The flight was continued using barometric altimeters... there was a momentary failure of these altimeters as well..." [22]. The airplane collided with terrain 21 km short of the runway and broke apart.
The FAA has also been clear that there is a need "to establish a timeline for retrofitting or replacing radar altimeters in US airliners that are affected by 5G C-band signals..." [23]. Potential 5G interference will not be ignored, and there are regulatory consequences. RALT manufacturers will need to improve their current product lines to make them less susceptible to 5G. This study provides insights into a strategic solution for radar altimeters in actual flight, for multi-stage airborne interference risk mitigation in 5G and future G networks.
|
2307.13562 | How massless are Weyl fermions in Weyl semimetals? | Circularly polarized light fails to generate currents in inversion-symmetric
Weyl semimetals with degenerate Weyl nodes. While each node generates current
with the direction depending on its chirality, the two currents in the two
degenerate nodes of opposite chirality cancel each other. By extension, it is
also generally expected that the currents generated at the same Weyl node by
the fields of opposite helicity should also observe mirror symmetry and cancel.
Surprisingly, here we find that this is not the case. The origin of this effect
lies in the nonlinear energy dispersion, which manifests strongly already very
close to the Weyl nodes, where linear dispersion is expected to hold and the
Weyl fermions are thus expected to be massless. A scheme based on using a
trefoil field composed of a counterrotating fundamental and its second harmonic
is proposed to control the induced asymmetry at a chiral node from positive to
negative, including zero. | Amar Bharti, Misha Ivanov, Gopal Dixit | 2023-07-25T15:14:31Z | http://arxiv.org/abs/2307.13562v1 | # How massless are Weyl fermions in Weyl semimetals?
###### Abstract
Circularly polarized light fails to generate currents in inversion-symmetric Weyl semimetals with degenerate Weyl nodes. While each node generates current with the direction depending on its chirality, the two currents in the two degenerate nodes of opposite chirality cancel each other. By extension, it is also generally expected that the currents generated at the same Weyl node by the fields of opposite helicity should also observe mirror symmetry and cancel. Surprisingly, here we find that this is not the case. The origin of this effect lies in the nonlinear energy dispersion, which manifests strongly already very close to the Weyl nodes, where linear dispersion is expected to hold and the Weyl fermions are thus expected to be massless. A scheme based on using a trefoil field composed of a counterrotating fundamental and its second harmonic is proposed to control the induced asymmetry at a chiral node from positive to negative, including zero.
Condensed matter systems provide attractive platforms to realize exotic particles, originally proposed in high-energy physics. Weyl semimetals are one such system in which low-energy collective excitations are governed by massless Weyl fermions which appear in pairs of opposite chirality [1]. These fermions exhibit novel phenomena, such as negative magnetoresistance [2; 3; 4], the chiral magnetic effect [5; 6; 7], the quantized circular photogalvanic effect [8; 9], and the Hall effect [10; 11; 12; 13], among others [14; 15; 16; 17; 18; 19; 20; 21; 22]. Moreover, Weyl fermions are promising for upcoming quantum technologies at room temperature [23; 24; 25].
Light-driven optical response has played a pivotal role in understanding and probing exotic properties of Weyl semimetals [26; 27; 28; 29; 30; 31; 32]. One such optical response is circularly polarized light-driven selective excitations in the vicinity of the Weyl nodes. The excitation process depends on the chirality of the Weyl fermions and the helicity of circularly polarized light [33]. Helicity-driven selective excitations in broken inversion-symmetric Weyl semimetals lead to population asymmetry around the Weyl nodes and the circular photogalvanic effect: the generation of current upon irradiation with circular light [27; 34; 35]. Broken inversion symmetry in Weyl semimetals is a prerequisite to ensure noncancellation of the contribution from a pair of chiral Weyl nodes. Thus, when a measurement of coupling between the massless fermions and circularly polarized light is integrated over both nodes, the nonzero result arises only in the inversion-broken Weyl semimetals [27].
Since this conclusion assumes perfectly massless Weyl fermions, i.e., a gapless system with a perfectly linear dispersion near the nodes, it invites a question: how quickly is this assumption violated as one moves away from the exact location of the node? Note that deviations from linear dispersion imply that even for gapless nodes, the mass becomes nonzero as soon as one moves away from the degenerate point. Can circularly polarized light with opposite helicity generate non-mirror-symmetric excitations in inversion-symmetric Weyl semimetals once the nonlinearity of the band structure is taken into account, even near the Weyl nodes? We show that the answer to the latter question is positive.
We begin with the perfectly massless Weyl fermions, where the population induced around the chiral Weyl node with \(\chi=1\) by right circularly polarized light is superimposable with that of \(\chi=-1\) induced by the left circularly polarized light and vice versa. These populations provide the reference for the more general case of nonlinear band dispersion. Once quadratic corrections to the Weyl equation are included, helicity-sensitive asymmetric excitations become nonzero and significant already at the Weyl nodes. That is, the excitation
generated with one helicity at the \(\chi=1\) node is no longer superimposable with that generated by the opposite helicity at the \(\chi=-1\) node, and the excitations at a given node for light with opposite helicities are not mirror symmetric. The same result holds for the more general inversion-symmetric Hamiltonian of a Weyl semimetal. While the induced asymmetry decreases with decreasing light frequency, so that the resonant excitations are located very close to the node, it still remains substantial. Last but not least, we devise a scheme based on two-color counterrotating circularly polarized light to control the helicity-sensitive asymmetric excitation. Our control scheme can tailor the asymmetry from positive to zero to negative.
A Hamiltonian for a type-I Weyl semimetal can be written as [36]
\[\mathcal{H}(\mathbf{k})=2t_{x}\cos(k_{x}a)\sigma_{x}+2t_{y}\cos(k_{y}a)\sigma _{y}+2t_{z}\left[\cos(k_{z}a)-\alpha-\beta\sin(k_{x}a)\sin(k_{y}a)\right]\sigma _{z}, \tag{1}\]
where \(t\)'s are hopping parameters, \(\sigma\)'s are Pauli matrices, and \(\alpha\) and \(\beta\) are dimensionless parameters. The Hamiltonian corresponds to an inversion-symmetric Weyl semimetal with broken time-reversal symmetry [36]. To make our discussion simple, we have considered \(t_{x,y,z}=t\) and \(\alpha=\beta\). Diagonalization of Eq. (1) yields the band structure shown in Fig. 1(a). The two Weyl nodes are positioned at \(W_{1,2}=(\frac{\pi}{2a},-\frac{\pi}{2a},\pm\frac{\pi}{2a})\), i.e., \((0.5,0,\pm 0.25)\) in reduced coordinates [37], and are at the Fermi level. The energy contours in their vicinity in the \(k_{x}-k_{y}\) plane are isotropic [see Fig. 1(b)], so that light-induced excitation should yield a symmetric population.
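As a quick numerical sanity check (our addition, not part of the original Letter), the following Python sketch diagonalizes the two-band Hamiltonian of Eq. (1) with the Fig. 1 parameters and confirms that the gap closes at \(W_{1,2}\) for \(\alpha=\beta\):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

t, a, alpha = 1.8, 6.28, 0.8  # eV, Angstrom, dimensionless; alpha = beta
beta = alpha

def hamiltonian(kx, ky, kz):
    """Two-band Bloch Hamiltonian of Eq. (1)."""
    dx = 2 * t * np.cos(kx * a)
    dy = 2 * t * np.cos(ky * a)
    dz = 2 * t * (np.cos(kz * a) - alpha - beta * np.sin(kx * a) * np.sin(ky * a))
    return dx * sx + dy * sy + dz * sz

def gap(kx, ky, kz):
    e = np.linalg.eigvalsh(hamiltonian(kx, ky, kz))
    return e[1] - e[0]

# Weyl nodes W_{1,2} = (pi/2a, -pi/2a, +/- pi/2a): the gap closes there
for sign in (+1, -1):
    print(gap(np.pi / (2 * a), -np.pi / (2 * a), sign * np.pi / (2 * a)))  # ~0
```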
Let us first focus on the linear part of the band dispersion. Expanding Eq. (1) up to linear terms near the Weyl nodes, we find
\[\mathcal{H}_{1}(\mathbf{k}) =d_{1,x}(\mathbf{k})\sigma_{x}+d_{1,y}(\mathbf{k})\sigma_{y}+d_{1,z}(\mathbf{k})\sigma_{z} \tag{2a}\] \[\mathcal{H}_{2}(\mathbf{k}) =d_{2,x}(\mathbf{k})\sigma_{x}+d_{2,y}(\mathbf{k})\sigma_{y}+d_{2,z}(\mathbf{k})\sigma_{z} \tag{2b}\]
Here, \(\mathbf{k}\) denotes the deviation from the Weyl node [for both nodes, Eqs. (2a) and (2b)], \(d_{1(2),x}(\mathbf{k})=v\left(-k_{x}a\right),d_{1(2),y}(\mathbf{k})=v\left(k _{y}a\right)\), and \(d_{1(2),z}(\mathbf{k})=v\left[-(+)\tilde{k}_{z}a\right]\), where \(-\tilde{k}_{z}(+\tilde{k}_{z})\) is measured relative to the Weyl node 1 (2), and \(v=2t\). The above Hamiltonian in Eq. (2) represents the Weyl equation and can be written as \(\mathcal{H}_{w}=v\ \mathbf{k}\cdot\mathbf{\sigma}\). As pointed out above, the two Weyl nodes described by \(\mathcal{H}_{1}(\mathbf{k})\) and \(\mathcal{H}_{2}(\mathbf{k})\) are degenerate and only differ by chirality, which is defined as \(\chi=\text{sgn}(d_{x}\cdot d_{y}\times d_{z})\). The Weyl nodes 1 and 2 have \(\chi=1\) and -1, respectively.
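The chirality assignment can be verified from the sign of the determinant of the Jacobian \(\partial d_{i}/\partial k_{j}\) of the linearized d-vectors in Eq. (2); a minimal sketch (our illustration, with the Jacobian constant for the linear model):

```python
import numpy as np

v, a = 2 * 1.8, 6.28  # v = 2t

# Jacobians d d_i / d k_j of the linearized d-vectors in Eq. (2)
J_node1 = np.diag([-v * a, v * a, -v * a])  # d = v(-kx a, ky a, -kz a)
J_node2 = np.diag([-v * a, v * a, +v * a])  # d = v(-kx a, ky a, +kz a)

for name, J in [("node 1", J_node1), ("node 2", J_node2)]:
    chi = int(np.sign(np.linalg.det(J)))
    print(name, "chirality:", chi)  # node 1 -> +1, node 2 -> -1
```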
Light-driven electronic excitation in a Weyl semimetal is simulated using the density matrix approach within the semiconductor Bloch equations framework as discussed in Refs. [32; 38; 39]. To account for the decoherence between electron and hole during the excitation process, a phenomenological dephasing time of 1.5 fs is introduced. Our findings are robust for dephasing times ranging from 1.5 to 10 fs.
The conduction band population is obtained by integrating the density matrix in the conduction band after the end of the laser pulse; the population is integrated over \(k_{x}\) and \(k_{y}\), and is shown along the \(\tilde{k}_{z}\) direction, where \(\tilde{k}_{z}=0\) is the Weyl plane which contains both chiral Weyl nodes, for Eq. (2). We used \(\sim\) 100 fs long circularly polarized pulses with intensity \(10^{11}\) W/cm\({}^{2}\) and wavelength 3.2 \(\mu\)m (i.e., \(\omega\) = 0.39 eV); different wavelengths up to 10.6 \(\mu\)m (i.e., \(\omega\) = 0.12 eV) were also studied, with the results described below.
Figure 2 shows the final population around the two Weyl nodes in the conduction band after the end of the pulse, for \(\chi=-1\) (a) and \(\chi=1\) (b) calculated for the Hamiltonians in Eqs. (2a) and (2b), respectively. As expected, the population asymmetry is zero at \(\tilde{k}_{z}=0\) and is mirror symmetric with respect to changing either the light helicity or the chirality of the node. In particular, the population at \(\chi=-1\) induced by the left circularly polarized (LCP) pulse is the same as that induced at \(\chi=+1\) by the right circularly polarized (RCP)
Figure 1: (a) Energy dispersion along high-symmetry points of an inversion-symmetric Weyl semimetal as given in Eq. (1). (b) Energy contour around one of the Weyl nodes in \(k_{x}-k_{y}\) plane (Weyl planes). The hopping parameter is \(t=1.8\) eV and the lattice parameters are \(a=6.28\) Å and \(\beta=0.8\). The lattice vectors are \(a_{1}=(a,-a,0),a_{2}=(a,a,0)\), \(a_{3}=(0,0,a)\), and the reciprocal vectors are \(b_{1}=(\pi/a,-\pi/a,0),b_{2}=(\pi/a,\pi/a,0),b_{3}=(0,0,2\pi/a)\), leading to reduced coordinates for the high-symmetry points as follows: \(R_{1}(\pi/2a,-\pi/2a,-\pi/a)\), \(X(\pi/2a,-\pi/2a,0)\), \(R_{2}(0,\pi/a,-\pi/a)\), \(G(0,0,0)\), and \(R_{3}(\pi/2a,-\pi/2a,\pi/a)\).
pulse; see Fig. 2(c). The same is true for the population induced by the RCP at \(\chi=-1\) compared to the population induced by LCP near \(\chi=1\); see Fig. 2(d).
Having established this reference, we now go beyond the linear approximation and expand Eq. (1) to the second order, resulting in the following expression,
\[\tilde{\mathcal{H}}_{1}(\mathbf{k}) =d_{1,x}\sigma_{x}+d_{1,y}\sigma_{y}+\tilde{d}_{1,z}\sigma_{z}, \tag{3a}\] \[\tilde{\mathcal{H}}_{2}(\mathbf{k}) =d_{2,x}\sigma_{x}+d_{2,y}\sigma_{y}+\tilde{d}_{2,z}\sigma_{z}, \tag{3b}\]
where \(\tilde{d}_{1(2),z}(\mathbf{k})=v\left[-(+)\tilde{k}_{z}a\right]-\tilde{v} \left[\frac{(k_{x}a)^{2}+(k_{y}a)^{2}}{2}\right]\) with \(\tilde{v}=2t\alpha\). The \(\tilde{d}_{z}\) component now contains additional terms quadratic in \(k_{x}\) and \(k_{y}\), whereas \(d_{x}\) and \(d_{y}\) remain identical in
Figure 2: Residual population in the conduction band (\(n_{c}\)) after the end of the left-handed circularly polarized (LCP) and right-handed circularly polarized (RCP) light around a Weyl node with (a) \(\chi=-1\) and (b) \(\chi=1\). (c) Comparison of the residual populations from a Weyl node with \(\chi=-1\) due to LCP, and from a Weyl node with \(\chi=1\) due to RCP. (d) Same as (c) for a Weyl node with \(\chi=-1\) due to RCP, and from a Weyl node with \(\chi=1\) due to LCP. The Weyl nodes with \(\chi=-1\) and \(\chi=1\) are described by Eq. (2).
both the equations.
The quadratic terms affect the final population already in the immediate vicinity of the Weyl nodes as visible from Figs. 3(a) and 3(b). The mirror symmetry upon changing the handedness of the Weyl node is, of course, preserved: the population near \(\chi=-1\) is mirror symmetric with that near \(\chi=+1\) with respect to changing \(\tilde{k}_{z}\rightarrow-\tilde{k}_{z}\). However, for a given Weyl node, the peaks of the populations induced by RCP and LCP light do not coincide. Similarly, the excitation induced near the \(\chi=-1\) node by LCP pulse does not overlap with the excitation induced near the \(\chi=+1\) node by RCP pulse; see Fig. 3(c). Likewise, the excitation induced near the \(\chi=+1\) node by the LCP pulse does not overlap with the excitation induced near the \(\chi=-1\) node by RCP pulse; see Fig. 3(d). This stands in stark contrast with Fig. 2. The fact that this asymmetry, associated with the deviations from the linear dispersion, arises in the immediate vicinity of the nodes, i.e., in what is supposed to be
Figure 3: Same as in Fig. 2 for the Weyl nodes with \(\chi=-1\) and \(\chi=1\), but now using the Hamiltonian Eq. (3) which includes the quadratic terms.
the zero-mass region, raises the question posed in the title of this Letter: How massless are the Weyl fermions under practical conditions of typical laser wavelengths and intensities?
Since the deviations from the massless behavior could have come from our specific choice of the laser wavelength and intensity, which could have forced the electrons to explore the nonlinear parts of the dispersion, we will scan the laser intensity and wavelength while using the full Hamiltonian given in Eq. (1). Below we shall use normalized population asymmetry defined as
\[\eta=\frac{n_{c}^{\circlearrowleft}-n_{c}^{\circlearrowright}}{(n_{c}^{\circlearrowleft}+n_{c}^{\circlearrowright})/2}, \tag{4}\]
where \(n_{c}^{\circlearrowleft}\) (\(n_{c}^{\circlearrowright}\)) is the final population due to LCP (RCP) light along \(k_{z}\), integrated in the \(k_{x}-k_{y}\) plane.
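As a small illustration of Eq. (4) (our addition; the populations below are placeholders rather than simulation output):

```python
import numpy as np

def asymmetry(n_lcp, n_rcp, eps=1e-30):
    """Normalized population asymmetry eta of Eq. (4), per k_z point."""
    return (n_lcp - n_rcp) / ((n_lcp + n_rcp) / 2 + eps)

# Hypothetical k_z-resolved conduction-band populations (placeholders)
n_lcp = np.array([0.010, 0.021, 0.018])
n_rcp = np.array([0.012, 0.019, 0.018])
print(asymmetry(n_lcp, n_rcp))
```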
Figure 4(a) shows \(\eta\) for driving wavelengths \(\lambda\) from 1.6 \(\mu\)m to 10.6 \(\mu\)m, which allows one to access different parts of the energy dispersion during the excitation. We see that for all \(\lambda\) the asymmetry \(\eta\) is nonzero around the Weyl nodes at \(k_{z}=\pm 0.25\). While the asymmetry reduces with \(\lambda\), even for the longest wavelength substantial values of \(\eta\) at the level of \(\sim 10\%\) arise in the immediate vicinity of the Weyl nodes. We note that the deviation from linear dispersion for the wavelengths studied is below 0.001, as shown in Fig. 4(b), while the induced circular dichroism asymmetry is several orders of magnitude higher, as seen in Fig. 4(a).
Figure 5(a) shows the dependence of \(\eta\) on laser intensity, for \(\lambda=3.2\)\(\mu\)m. Notably, we find
Figure 4: (a) Normalized population asymmetry (\(\eta\)) as a function of the wavelength of the circularly polarized driving pulse with intensity \(5\times 10^{9}\) W/cm\({}^{2}\). (b) Nonlinear correction (\(\Delta E\)) to the energy (\(E\)) obtained from the linear dispersion. The simulations use the full Hamiltonian given in Eq. (1).
that the asymmetry is nonzero exactly at the Weyl node, where the dispersion is linear. This is true for all laser intensities, with the position of the zero asymmetry moving away from the node with increasing intensity. Another surprise is that the asymmetry decreases with increasing intensity, i.e., when the electron is driven to explore a wider range of the Brillouin zone, where the dispersion nonlinearity is stronger. This observation is supported by Fig. 5(b), which shows that the intensity dependence of the non-normalized asymmetry is sublinear, with a slope of 0.8.
At this point, it is natural to explore the possibilities to control the ratio of the asymmetry induced by LCP and RCP light at a given node. To this end, we apply \(\omega-2\omega\) counterrotating circularly polarized laser pulses with the total vector potential given by
\[\mathbf{A}(t)=\frac{A_{0}f(t)}{\sqrt{2}}\left(\left[\cos(\omega t+\phi)+ \mathcal{R}\cos(2\omega t)\right]\hat{\mathbf{e}}_{x}\right.+\left.\left[\sin( \omega t+\phi)-\mathcal{R}\sin(2\omega t)\right]\hat{\mathbf{e}}_{y}\right). \tag{5}\]
The ratio between the two electric fields is controlled by \(\mathcal{R}\), and \(\phi\) describes the subcycle relative phase between the \(\omega\) and \(2\omega\) pulses. In recent years, \(\omega-2\omega\) circularly polarized pulses have been employed to control the valley asymmetry in pristine graphene [40; 41].
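For reference, Eq. (5) can be evaluated directly; the sketch below (our addition) assumes a \(\sin^{2}\) envelope \(f(t)\), which the text does not specify:

```python
import numpy as np

def vector_potential(t, A0, omega, R, phi, T):
    """Two-color counterrotating (omega - 2*omega) field of Eq. (5).

    A sin^2 envelope of total duration T is assumed here; the paper
    does not specify f(t) in the text.
    """
    f = np.sin(np.pi * t / T) ** 2 * ((t >= 0) & (t <= T))
    Ax = np.cos(omega * t + phi) + R * np.cos(2 * omega * t)
    Ay = np.sin(omega * t + phi) - R * np.sin(2 * omega * t)
    return (A0 * f / np.sqrt(2)) * np.array([Ax, Ay])

# Example: trace one optical cycle of the trefoil field (arbitrary units)
omega = 2 * np.pi                  # one cycle per unit time
t = np.linspace(0, 1, 200)
Ax, Ay = vector_potential(t, A0=1.0, omega=omega, R=0.5, phi=0.0, T=10.0)
```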
The population excited by \(\omega-2\omega\) counterrotating pulses is shown in Fig. 6, with the fundamental wavelength \(\lambda=3.2\)\(\mu\)m. For \(\mathcal{R}=0.2\), the RCP-LCP combination generates
Figure 5: (a) Variations of the normalized population asymmetry (\(\eta\)) for different intensities of the circularly polarized light. (b) Logarithm of the difference in the population around a given Weyl node excited by the left- or right-handed circularly polarized light, as a function of the laser’s intensity. The slope of the fitted line is 0.8, i.e., is below unity expected for linear processes. The driving light wavelength is 3.2 \(\mu\)m. The simulations use the full Hamiltonian given in Eq. (1).
more excitation than the LCP-RCP combination; see Fig. 6(a). Moreover, the peak induced by the RCP-LCP combination leans toward the center of the Brillouin zone. As \(\mathcal{R}\) changes from 0.2 to 0.5, both combinations yield almost the same population. However, the peak due to the LCP-RCP combination changes direction and leans toward the center. The situation reverses for \(\mathcal{R}=1\), where the LCP-RCP combination generates higher excitation than the RCP-LCP combination; see Fig. 6(c). The reason behind such behavior is the interplay of
Figure 6: Residual conduction-band population for different ratio (\(\mathcal{R}\)) of the two-color \(\omega-2\omega\) laser pulses: (a) \(\mathcal{R}=0.2\), (b) \(\mathcal{R}=0.5\), and (c) \(\mathcal{R}=1.0\). Intensity and wavelength of the \(\omega\) pulse are \(5\times 10^{10}\) W/cm\({}^{2}\) and 3.2 \(\mu\)m, respectively.
the two competing resonant processes driven by LCP and RCP light, which is controlled by changing \(\mathcal{R}\). Thus, the ratio and the behavior of the residual population can be controlled by tailoring the value of \(\mathcal{R}\) in \(\omega-2\omega\) counterrotating pulses.
In conclusion, we have demonstrated the generation of a helicity-sensitive population in an inversion-symmetric Weyl semimetal, which is not symmetric with respect to the helicity of the driving circular light. The effect is general and persists for different wavelengths and intensities. Even for the longest wavelengths and weakest intensities studied, it is triggered by the deviations of the Weyl fermion mass from zero, even in the immediate vicinity of the Weyl node. The origin of this phenomenon is embedded in the Berry connection, which remains unaffected by any modifications in the Hamiltonian of the Weyl semimetal [42]. We have proposed a way to control and manipulate the asymmetric population using counterrotating bicircular light, which allows tailoring the asymmetry from positive to negative via nearly zero. The asymmetric residual population can be probed via time- and angle-resolved photoemission spectroscopy in a pump-probe setup [43].
G. D. acknowledges financial support from SERB India (Project No. MTR/2021/000138).
|
2306.04337 | A study on the impact of Self-Supervised Learning on automatic
dysarthric speech assessment | Automating dysarthria assessments offers the opportunity to develop
practical, low-cost tools that address the current limitations of manual and
subjective assessments. Nonetheless, the small size of most dysarthria datasets
makes it challenging to develop automated assessment. Recent research showed
that speech representations from models pre-trained on large unlabelled data
can enhance Automatic Speech Recognition (ASR) performance for dysarthric
speech. We are the first to evaluate the representations from pre-trained
state-of-the-art Self-Supervised models across three downstream tasks on
dysarthric speech: disease classification, word recognition and intelligibility
classification, and under three noise scenarios on the UA-Speech dataset. We
show that HuBERT is the most versatile feature extractor across dysarthria
classification, word recognition, and intelligibility classification, achieving
respectively $+24.7\%$, $+61\%$, and $+7.2\%$ accuracy compared to classical
acoustic features. | Xavier F. Cadet, Ranya Aloufi, Sara Ahmadi-Abhari, Hamed Haddadi | 2023-06-07T11:04:02Z | http://arxiv.org/abs/2306.04337v2 | # A Study on the Reliability of Automatic Dysarthric Speech Assessments
###### Abstract
Automating dysarthria assessments offers the opportunity to develop effective, low-cost tools that address the current limitations of manual and subjective assessments. Nonetheless, it is unclear whether current approaches rely on dysarthria-related speech patterns or external factors. We aim toward obtaining a clearer understanding of dysarthria patterns. To this end, we study the effects of noise in recordings, both through addition and reduction. We design and implement a new method for visualizing and comparing feature extractors and models, at a patient level, in a more interpretable way. We use the UA-Speech dataset with a speaker-based split of the dataset. Results reported in the literature appear to have been obtained irrespective of such a split, leading to models that may be overconfident due to data leakage. We hope that these results raise awareness in the research community regarding the requirements for establishing reliable automatic dysarthria assessment systems.
**Index Terms**: dysarthric speech, speech recognition
## I Introduction
Dysarthria is caused by a lack of articulatory control and muscle weakness, which affect the rate of speech, the dynamic amplitudes and pitches, and the manner in which the spoken word is pronounced, all of which contribute to unintelligible speech and difficulty in understanding, due to the inaccurate articulation of phonemes and abnormal speech patterns [1].
Dysarthria classification has become increasingly important in diagnosing the disorder, determining the best treatment options, and conducting speech therapy sessions as needed [2, 3, 4, 5, 6, 7]. While the literature indicates that existing automated assessment attempts achieve high accuracy, the methodology by which the assessment models were evaluated may not support generalization, since the models have been trained and evaluated on the same speakers or a single unseen speaker per intelligibility class [8]. There is limited research examining the effectiveness of such assessments under various scenarios or providing fine-grained information about their performance for individual patients.
Our **main contribution** in this paper is the development of an interpretable tool that enables the aggregation and visualization of assessment results. The tool has been used to empirically evaluate a number of representations (e.g., acoustics and self-supervised methods) and classifiers (e.g., logistics regression and multilayer perceptron), as well as to carry out binary and multiclass tasks (e.g., disease, word, and severity classification). In order to simulate the real-world recordings collection scenarios, we experimented with three settings: default, noise addition, and noise reduction. Our tool can provide a better understanding of types of impairments that can lead to empirical features, usable to develop programs that facilitate identification of disorders and their characteristics. As an example, the severity classification task performance under these three scenarios, and using acoustics and HuBERT [9] features, shows that HuBERT outperforms the acoustics algorithm.
## II Dysarthria Automatic Assessments
Dysarthria intelligibility assessment is typically performed in two stages [10]. The training stage involves building a computational model based on the patients' speech samples and their respective speech intelligibility classes. Once the model is trained, it can be used to identify classes of speakers with unknown intelligibility levels, by comparing their acoustic features with those which were used during training. Intelligibility assessment approaches that are reference-free focus on developing classification models without any prior understanding of healthy speech, instead focusing on the extraction of acoustic features believed to be highly correlated with intelligibility [11]. A reference-based approach uses healthy speech signals as a standard for measuring intelligibility [12]. Healthy speech data is utilized in these approaches (e.g., ASR-based approaches) in order to determine the characteristics of intelligible speech, and then, they are used as a basis for estimating the level of intelligibility [13, 14]. It exploits the fact that ASR systems trained only on healthy speech perform poorly on dysarthric speech, and that the performance of ASR systems deteriorates with the severity of dysarthric speech.
## III Towards an Effective Dysarthria Assessments
### _Overview_
Due to the fact that the effectiveness of dysarthria assessments is influenced by the quality of recordings, the representations used, and the classification algorithm used, we
Fig. 1: The proposed tool overview.
intend to develop an interpretable tool that will facilitate the understanding of the outputs of such assessments. An overview of the proposed tool is provided in Figure 1, which can be easily adapted to extract a variety of features, followed by multiple classification algorithms. Then, our tool aggregates the results per patient in order to verify the reliability of the assessment results. Aggregation outputs could be interpreted as intelligibility classes, such as low, mid, and high levels, and could provide clinicians with an interpretable classification of the speaker's intelligibility.
### _Experimental Settings_
**Dataset.** The UA-Speech [15] dataset contains recordings of control and dysarthric speech. Speech signals are sampled at \(16\,\mathrm{kHz}\). It comprises 13 healthy control speakers and 15 dysarthric speakers. The vocabulary includes 455 distinct words with 10 digits, 26 radio alphabets, 19 computer commands, 100 common words and 300 uncommon words. Speakers were divided into four different categories based on the severity of the condition, namely high (H), mid (M), low (L) and very low (VL). In our experiments, we do not consider the uncommon words.
**Feature Extraction.** We use a variety of handcrafted and learned features reported in the literature.
* _Hand-crafted Features._ Acoustic measures of articulation, voice, and prosody were extracted using PRAAT [16, 17]. Examples include mean harmonic-to-noise ratio (HNR), fraction of locally unvoiced frames, number of voice breaks, degree of voice breaks, mean and standard deviation of pitch, jitter and shimmer, and cepstral peak prominence (CPP) [18].
* _Self-supervised Features._ Self-supervised learning (SSL) methods such as wav2vec2 [19], CPC, and HuBERT [9] have been successfully exploited for a variety of speech classification tasks [19, 20, 21]. Self-supervised feature extraction is done using pre-trained models implemented by the SUPERB [22] benchmark1, using an NVIDIA Quadro GV100 GPU. Footnote 1: [https://github.com/s3prl/s3prl](https://github.com/s3prl/s3prl)
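As an illustration of the feature-extraction step, here is a minimal sketch using torchaudio's pre-trained HuBERT Base bundle; the paper uses the s3prl/SUPERB checkpoints, so the exact representations may differ:

```python
import torch
import torchaudio

bundle = torchaudio.pipelines.HUBERT_BASE  # pre-trained HuBERT Base
model = bundle.get_model().eval()

waveform, sr = torchaudio.load("recording.wav")  # hypothetical UA-Speech file
if sr != bundle.sample_rate:                     # HuBERT expects 16 kHz input
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    features, _ = model.extract_features(waveform)  # list: one tensor per layer

# Mean-pool the last layer over time -> one fixed-size vector per recording
embedding = features[-1].mean(dim=1).squeeze(0)     # shape: (768,)
print(embedding.shape)
```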
**Classification Tasks.**
* _Dysarthria Classification._ Dysarthria may be classified as flaccid, spastic, ataxic, hypokinetic, choreatic, dystonic, or mixed. We treat this task as a binary classification task, since there is no information regarding the type of disease in the used dataset. The 13 control speakers belong to one class, while the 15 dysarthric speakers belong to another class. We test both binary and multiclass classification tasks as described below. We compare the performance of Logistic Regression (LR) and Multi-Layer Perceptron (MLP) classifiers using Scikit-learn version 1.2.1 [23]. We consider the default parameters for each model, and operate patient-based splits such that 19 patients are used for training and 9 for testing. We consider the following tasks:
* _Word Classification._ Dysarthria patients are more likely to be able to utter isolated words rather than continuous sentences [12]. Isolated word recognition is the process of converting the input speech command into the corresponding text format [24]. Keyword spotting involves detecting specific words or phrases within larger spoken sentences or utterances [25]. We consider this task as a multiclass classification task with \(155\) individual words. We considered only words that were not identified as uncommon in the original dataset.
* _Severity Classification._ Dysarthria can vary in severity, leading to speech of different degrees of intelligibility [4]. We consider \(5\) classes: the UA-Speech dataset provides \(4\) severity levels, and we add one for control speakers. We perform stratified splitting over the classes.
**Metrics.** We report the balanced accuracy to account for the class imbalance present in the dataset across the different tasks [26]. For the Dysarthria and word classification tasks, performance is evaluated at the recording level. For instance, each audio sample is classified as dysarthric or non-dysarthric speech. For the severity classification, we report the performance at the speaker level. We first train a model to predict the severity of the different samples, then group the predictions per patient. A patient's severity level is then based on the most frequently predicted class.
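A minimal sketch of the speaker-based evaluation protocol described above, using standard scikit-learn components (the features, labels, and speaker IDs are random placeholders):

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

# Placeholders: one row per recording, with the speaker ID as the group label
X = np.random.randn(200, 768)                  # e.g. pooled HuBERT features
y = np.random.randint(0, 2, size=200)          # dysarthric vs. control
speakers = np.random.randint(0, 28, size=200)  # 28 speakers in UA-Speech

# Patient-based split: every recording of a speaker falls in exactly one set
splitter = GroupShuffleSplit(n_splits=1, test_size=9 / 28, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=speakers))

clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
pred = clf.predict(X[test_idx])
print("balanced accuracy:", balanced_accuracy_score(y[test_idx], pred))

# Speaker-level aggregation (severity task): majority vote over recordings
for spk in np.unique(speakers[test_idx]):
    votes = pred[speakers[test_idx] == spk]
    print(spk, np.bincount(votes).argmax())
```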
### _Analysis Configurations_
#### Iii-C1 Default Setting
**Q1.**_How reliable are classifiers when confronted to unknown speakers?_
**Preliminaries.** Classification tasks can help healthcare professionals better understand and manage the speech difficulties associated with dysarthria, and develop appropriate treatment plans to improve communication. Although the literature provides references to existing automated assessment attempts that have achieved high levels of accuracy, these results are not necessarily indicative of generalizability due to their evaluation methodology [8, 10]. The training and evaluation of the models were conducted on the same speakers or on only one unseen speaker for each target class [27]. Consequently, such results might be biased by the model's ability to identify the correct speaker rather than identifying dysarthria-related information.
**Setup.** The raw recordings from the UA-Speech dataset are divided by patient ID to ensure that the training and testing sets do not overlap. Thus, all the recordings from a speaker are either in the training or the test set. We evaluate the performance of the classifiers when confronted with speakers that were not used during training. We test the performance based on combinations of feature extractors and classification models. For feature extraction, we consider hand-crafted acoustic features and representations extracted by state-of-the-art self-supervised models such as HuBERT, Wav2vec2, and CPC.
**Results.** Huang et al. [8] show that most existing work has achieved very high accuracy (\(75.1\%\) to \(93.97\%\)); however, most of these studies validated their models using the same speech samples used in training, making their results less generalizable. Furthermore, depending on the task at hand, the target can be heavily imbalanced. Therefore, we opt for the Balanced Accuracy metric [26]. From Table I, we observe that models trained using self-supervised representations outperform models trained on acoustic features across all tasks. Since the results are obtained without fine-tuning the self-supervised feature extractors, they appear as a promising direction for automated assessment of dysarthria. To get a better understanding of the reliability of the assessment at the patient level, we use the tool proposed in Section III-A, as shown in Figure 2, which allows for a fine-grained analysis of the predictions of the models and feature extractors. The tool can be adapted to other feature extractors and classification models. Therefore, such a tool could provide clinicians with a more interpretable assessment for a given patient, and be used toward more personalized treatments.
#### Iii-C2 Noise Addition
**Q2.**_Are models trained on the Default dataset biased by patterns related to the noise in recordings?_
**Preliminaries.** Some recordings in the default setting of the dataset have different noise levels and clicking sounds. Thus, one could argue that the good performance observed on the disease classification task is due to the presence of patterns associated with external factors in the recording rather than speech-related information. To test whether the feature extractors and models are able to leverage information specific to classes, we mix audio samples with noise to exacerbate such an effect.
**Setup.** To assess whether the observed performance is due to models learning to distinguish classes based on a specific noise pattern, we generate a new version of the dataset. First, we obtained a single background noise sample from the WHAM dataset [28]. Then, we mixed every audio recording from control patients with that noise pattern. Under such a scenario, feature extractors and models able to pick up that singular noise pattern would achieve higher performance.
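The mixing step itself is straightforward; a sketch follows (our illustration; file names and the target SNR are hypothetical, and mono signals at a common sample rate are assumed):

```python
import numpy as np
import soundfile as sf

def mix_at_snr(speech, noise, snr_db):
    """Add a fixed noise pattern to speech at a target SNR (in dB)."""
    noise = np.resize(noise, speech.shape)          # loop/trim noise to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

speech, sr = sf.read("control_recording.wav")       # hypothetical file names
noise, _ = sf.read("wham_background.wav")
sf.write("control_recording_noisy.wav", mix_at_snr(speech, noise, snr_db=10), sr)
```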
**Results.** From Table I, we observe that all combinations of feature extractor and models have achieved higher balanced accuracy on the disease classification tasks. Therefore, when enforcing a singular pattern across control patients, all feature extractors and models were able to leverage such pattern. Therefore, we conduct further analysis after reducing the noise levels in the recordings.
#### Iii-C3 Noise Reduction
**Q3.**_What impact does enhancing the recordings have over the different tasks?_
**Preliminaries.** Given the observation from the Noise Addition setting, we considered a scenario under which we enhance the Default dataset. One such way is to use speech restoration methods. The speech restoration process involves restoring degraded speech signals to their original quality [29]. For instance, speech is typically surrounded by background noise, blurred by reverberation in the room, or recorded with low-quality equipment. Ambient noise, clinical clicks, or other artifacts are potentially present in dysarthric recordings. In [30], the effectiveness of automatic denoising tools was found to be quite limited, particularly for dysarthric speakers with severe grades of disorder.
**Setup.** Our objective was to enhance the recordings by applying one of the speech enhancement approaches, and to evaluate the performance of the models in such scenarios. We generate a new version of the dataset after applying 'VoiceFixer' [31], a method that attempts to remove multiple distortions simultaneously. The pre-trained 'VoiceFixer' models produce outputs with a \(44.1\,\mathrm{kHz}\) sample rate; therefore, we further apply resampling to \(16\,\mathrm{kHz}\) before extracting the representations. This ensures that the sampling rates match between the input signal and the feature extractor models pre-trained on data with a \(16\,\mathrm{kHz}\) sample rate.
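A sketch of the enhancement-plus-resampling step (our illustration; the VoiceFixer call follows the pip package's documented interface, which should be treated as an assumption):

```python
import torchaudio
from voicefixer import VoiceFixer  # pip package accompanying [31]

# Restore a degraded recording; mode=0 is the all-distortion model per the
# package docs (treat exact arguments as assumptions)
vf = VoiceFixer()
vf.restore(input="recording.wav", output="recording_fixed.wav", cuda=False, mode=0)

# VoiceFixer outputs 44.1 kHz audio; resample to 16 kHz for the SSL extractors
waveform, sr = torchaudio.load("recording_fixed.wav")  # sr == 44100
waveform16 = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=16000)
torchaudio.save("recording_fixed_16k.wav", waveform16, 16000)
```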
**Results.** From Table I, we observe that across all the tasks, feature extractors, and classifiers tested, the performance decreased compared to the default setting. This indicates that the performance observed in the default setting might be partly due to patterns that are corrected post-enhancement. We further investigate the variation at the patient level through our proposed visualization of the models' predictions for the speaker intelligibility task. Figure 2 indicates major differences when using acoustic features. Indeed, across all patients, the logistic regression models consistently classified recordings at
the level of control recordings. Nonetheless, the application of the enhancement tool led to a loss of information in some recordings, such as the partial removal of speech segments. This indicates that systematic use of such enhancement tools requires additional care if they are used as a preprocessing step in automated assessment pipelines.
## IV Conclusions and Future Work
**Data Leakage on Medical Recordings.** Depending on the dataset split operated, results can be biased by data leakage. Models that are trained and evaluated on patients whose recordings are in both the training and evaluation set might achieve high performance, but be unreliable, as they have been partly trained on speaker-specific properties. Ensuring dataset splits such that patients are exclusively in one set or the other allows for investigating performance on future, and therefore unknown at train time, patients.
**Classes Imbalance.** For the intelligibility severity assessment, there is a major class imbalance with respect to the number of recordings, _i.e.,_\(8.3\%\) (VL), \(14\%\) (L), \(13.9\%\) (M), \(63.8\%\) (H + C) or \(19.6\%\) (H) and \(44.2\%\) (C). This leads to classification models that are likely to struggle to distinguish between the VL, L and M classes. Furthermore, the coarse intelligibility labels, namely patients with a \(26\%\) intelligibility score being grouped with patients with intelligibility scores of \(49\%\), make it difficult to provide tools that obtain a detailed understanding of these patterns.
**Limitations and Future Work.** Future work includes the extension to other datasets, and data augmentation methods to tackle class imbalance. The Nemours database used in [27] is very small compared to other databases, and it focuses only on spastic dysarthria. Depending on the severity level, the classification accuracy of their experiments ranges from \(40.41\%\) to \(95.80\%\) using only acoustic features. We considered the balanced accuracy to summarize the performance of the models; nonetheless, class-specific performance analysis could be beneficial. We will open-source the evaluation toolkit. We welcome the community to participate and drive the research frontier.
## V Acknowledgements
The authors would like to thank Sandra Siby for numerous comments, technical questions, references, and invaluable suggestions for the presentation that led to an improved text. Xavier F. Cadet is supported by UK Research and Innovation (UKRI Centre for Doctoral Training in AI for Healthcare grant number EP/S023283/1). Hamed Haddadi is supported by the EPSRC Open Plus Fellowship (EP/W005271/1: Securing the Next Billion Consumer Devices on the Edge). For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.
|
2307.07517 | Causing is Achieving -- A solution to the problem of causation | From the standpoint of applied ontology, the problem of understanding and
modeling causation has been recently challenged on the premise that causation
is real. As a consequence, the following three results were obtained: (1)
causation can be understood via the notion of systemic function; (2) any cause
can be decomposed using only four subfunctions, namely Achieves, Prevents,
Allows, and Disallows; and (3) the last three subfunctions can be defined in
terms of Achieves alone. It follows that the essence of causation lies in a
single function, namely Achieves. It remains to elucidate the nature of the
Achieves function, which has been elaborated only partially in the previous
work. In this paper, we first discuss a couple of underlying policies in the
above-mentioned causal theory since these are useful in the discussion, then
summarize the results obtained in the former paper, and finally reveal the
nature of Achieves giving a complete solution to the problem of what causation
is. | Riichiro Mizoguchi | 2023-07-01T09:01:49Z | http://arxiv.org/abs/2307.07517v1 | # Causing is Achieving1
###### Abstract
From the standpoint of applied ontology, the problem of understanding and modeling causation has been recently challenged on the premise that causation is real. As a consequence, the following three results were obtained: (1) causation can be understood via the notion of systemic function; (2) any cause can be decomposed using only four subfunctions, namely _Achieves_, _Prevents_, _Allows_ and _Disallows_; and (3) the last three subfunctions can be defined in terms of _Achieves_ alone. It follows that the essence of causation lies in a single function, namely _Achieves_.
It remains to elucidate the nature of the _Achieves_ function, which has been elaborated only partially in the previous work. In this paper, we first discuss a couple of underlying policies in the above-mentioned causal theory since these are useful in the discussion, then summarize the results obtained in the former paper, and finally reveal the nature of _Achieves_ giving a complete solution to the problem of what causation is.
Causation, systemic function, device ontology, direct causation, state-mediation, negative causation
## 1 Introduction
Although what causation is has been discussed for many years, it has been considered an unsolved problem to date. Assuming causation is real, this paper discusses causation as an occurrent (causing) and aims to demonstrate that it is essentially achieving. This view has been presented in [1] where the authors obtained the following results: (1) causation can be understood as a case of systemic function, and hence it is possible to talk about causation in terms of function, (2) Causing can be decomposed along two dimensions, direct/indirect and positive/negative, into the four subfunctions _Achieves_, _Prevents_, _Allows_ and _Disallows_, and (3) _Prevents_, _Allows_ and _Disallows_ can be defined in terms of the _Achieves_ function. It follows that causation is essentially understood in terms of _Achieves_.
The main goal of this paper is to provide a solution to what causation is, rather than a discussion of existing theories of causation, by uncovering what _Achieves_ is. This issue has been only partially tackled in [1]. Before addressing the main topic, the paper reviews how an occurrent causes another. This review leads to fruitful discussions of causation that rely on three policies adopted in the theory: (1) the state-centric approach, (2) the distinction between direct and indirect causations, and (3) the impossibility of direct causation between events. On the basis of these observations and policies, we see why the conventional attempts to explain causation have not been successful. Then, after summarizing the results obtained in [1], we will turn to our main goal, the elucidation of the _Achieves_ function. More precisely, we will reveal the nature of _Achieves_ for all possible relata, giving in this way a solution to the question of what causation is. Finally, some typical philosophical issues such as simultaneity, necessity and causal efficacy are discussed, followed by conclusions.
## 2 Negative causation and Fundamental claims
### Typical existing theories
The main questions driving this research can be articulated as follows:
1. What is causation?
2. Does a theory of causation explain (exclude) the known positive (negative) examples? |
2307.08624 | National Origin Discrimination in Deep-learning-powered Automated Resume
Screening | Many companies and organizations have started to use some form of AI-enabled
automated tools to assist in their hiring process, e.g. screening resumes,
interviewing candidates, performance evaluation. While those AI tools have
greatly improved human resource operations efficiency and provided
conveniences to job seekers as well, there are increasing concerns on unfair
treatment to candidates, caused by underlying bias in AI systems. Laws around
equal opportunity and fairness, like GDPR, CCPA, are introduced or under
development, in attempt to regulate AI. However, it is difficult to implement
AI regulations in practice, as technologies are constantly advancing and the
risk pertinent to their applications can fail to be recognized. This study
examined deep learning methods, a recent technology breakthrough, with focus on
their application to automated resume screening. One impressive performance of
deep learning methods is the representation of individual words as
low-dimensional numerical vectors, called word embedding, which are learned from
aggregated global word-word co-occurrence statistics from a corpus, like
Wikipedia or Google news. The resulting word representations possess
interesting linear substructures of the word vector space and have been widely
used in downstream tasks, like resume screening. However, word embedding
inherits and reinforces the stereotyping from the training corpus, as deep
learning models essentially learn a probability distribution of words and their
relations from history data. Our study finds out that if we rely on such
deep-learning-powered automated resume screening tools, it may lead to
decisions favoring or disfavoring certain demographic groups and raise
ethical, even legal, concerns. To address the issue, we developed bias
mitigation method. Extensive experiments on real candidate resumes are
conducted to validate our study | Sihang Li, Kuangzheng Li, Haibing Lu | 2023-07-13T01:35:29Z | http://arxiv.org/abs/2307.08624v1 | # National Origin Discrimination in
###### Abstract
Many companies and organizations have started to use some form of AI-enabled automated tools to assist in their hiring process, e.g. screening resumes, interviewing candidates, and performance evaluation. While those AI tools have greatly improved human resource operations efficiency and provided conveniences to job seekers as well, there are increasing concerns about unfair treatment of candidates, caused by underlying bias in AI systems. Laws around equal opportunity and fairness, like GDPR and CCPA, are introduced or under development in an attempt to regulate AI. However, it is difficult to implement AI regulations in practice, as technologies are constantly advancing and the risks pertinent to their applications can fail to be recognized. This study examines deep learning methods, a recent technology breakthrough, with a focus on their application to automated resume screening. One impressive capability of deep learning methods is the representation of individual words as low-dimensional numerical vectors, called word embeddings, which are learned from aggregated global word-word co-occurrence statistics from a corpus, like Wikipedia or Google News. The resulting word representations possess interesting linear substructures of the word vector space and have been widely used in downstream tasks, like resume screening. However, word embeddings inherit and reinforce the stereotyping present in the training corpus, as deep learning models essentially learn a probability distribution of words and their relations from historical data. Our study finds that relying on such deep-learning-powered automated resume screening tools may lead to decisions favoring or disfavoring certain demographic groups and raise ethical, even legal, concerns. To address the issue, we developed a bias mitigation method. Extensive experiments on real candidate resumes are conducted to validate our study.
## 1 Introduction
Benefiting from rapid advancements in computer technologies and the availability of big data, AI has reshaped many industries. AI has also been leveraged in talent acquisition and used in recruitment and selection [1]. One of the biggest benefits of using AI in recruitment is that it can screen thousands of resumes and shortlist candidates within minutes. This advantage allows recruiters to reduce their manual workload and focus on more pertinent matters. AI technologies have been used in different stages of hiring. For example, AI-powered chatbots are used to automate and optimize recruiters' repetitive interactions in screening, scheduling, reference checking, etc. Automatic facial expression analysis helps recruiters conduct the initial screening of candidates. AI technologies also allow recruiters to use games and quizzes to filter candidates. HR technologies indeed provide an effective solution for time-consuming tasks, like resume screening and initial candidate interviews, which have been recognized as a bottleneck for organizations seeking to scale and expand their businesses.
However, there are growing concerns about the ethics and lawfulness of the use of AI in hiring [2, 3, 4, 5]. In 2018, it was reported that Amazon developed a machine learning-based recruitment program, which was later found to be
biased against women. The machine learning algorithm was fed with the company's historical employment data, in which the majority of employees had been men. The imbalanced training data led the system to prefer male candidates over female ones 1. In 2019, EPIC filed a complaint with the Federal Trade Commission against a Human Resource (HR) technology company for its unfair and deceptive practices in violation of the Federal Trade Commission (FTC) Act, e.g. falsely denying the use of facial recognition2.
Footnote 1: [https://www.forbes.com/sites/forbeshumanresourcescouncil/2021/10/14/understanding-bias-in-ai-enabled-hiring](https://www.forbes.com/sites/forbeshumanresourcescouncil/2021/10/14/understanding-bias-in-ai-enabled-hiring)
Footnote 2: [https://epic.org/documents/in-re-hirevue/](https://epic.org/documents/in-re-hirevue/)
To address concerns about potential misuse or unintended consequences of AI, we have seen many efforts and initiatives on developing technical standards for reliable, robust, and trustworthy AI systems. In addition, lawmakers are developing regulations to use police power to ensure good use of AI. At least 17 states in the U.S. introduced general artificial intelligence bills or resolutions in 2022. The General Data Protection Regulation (GDPR) is a regulation in European Union law on data protection and privacy, which has policies regarding AI technologies. There are laws around Equal Opportunity and Fairness. New York City approved a bill, to be effective on January 1, 2023, i.e. "...a bias audit be conducted on an automated employment decision tool prior to the use of said tool. The bill would also require that candidates or employees that reside in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion, as well as, be notified about the job qualifications and characteristics that will be used by the automated employment decision tool."
However, it is challenging to implement these legal regulations, like the requirement on auditing HR technology, due to many technical reasons. For example, it might be difficult to detect discrimination for certain protected attributes like sexual orientation or disability status, if the data does not contain this information. To better guard against the misuse of AI and its unintended consequence, it is critical to understand how AI systems work and detect/mitigate bias in the algorithms, which requires us to explore how predictive technologies work at each step of the process.
There is a large body of research on machine learning fairness, responsible AI, etc. This study focuses on the use of AI in HR technology, particularly the use of natural language processing for resume screening. Resume screening is in large demand in today's job market, especially when applications can be made as simple as one click on job posting sites, e.g. LinkedIn, Indeed. A resume system typically is responsible for filtering out the resumes that are deemed unqualified or less qualified than the resumes that are selected by the system. Because a hiring process carries ethical responsibility, a resume screening system, which is meant to automate and facilitate the hiring process, should not be exempted from its ethical responsibilities. A resume screening system is expected to be able to select resumes accurately (i.e. the ones selected should match the job requirement) and fairly (i.e. no other factors like gender, nationality, or race should affect the decision made). However, AI algorithms are trained on existing datasets, which may not be representative and may contain biases. Such biases will be inherited by the learned AI model and cause disparate impact on the resultant automated decisions [6]. Natural Language Processing (NLP) is the technology used to scan textual data in a resume and make candidate recommendations based on the matching between a candidate resume and a job description. Traditional NLP methods represent a document as a bag of words, which can be mathematically represented as a numerical vector with each component indicating the weight of a corresponding word. A popular method to compute the weight is called Term Frequency-Inverse Document Frequency (TF-IDF) [7]. As such, the matching between a candidate resume and a job description can be measured as the cosine similarity of their vector representations. The TF-IDF based information retrieval method can be utilized to quickly screen candidate resumes. However, it was reported in [2] that the method may carry socio-linguistic bias and cause disparate impact based on the origin country of the applicants. Building upon existing research, this study examines the disparate impact of deep learning (particularly word embedding) based resume screening methods. The disadvantage of TF-IDF is that it is based on the bag-of-words model (the length of a TF-IDF vector is the same size as the vocabulary) and cannot capture information on position, semantics, or co-occurrences of words. Word embedding is a method that represents a word as a short real-valued vector, where each component encodes some characteristic information. Similar words have their representations closer in the vector space. Word embeddings are learned by formulating an unsupervised learning problem and training on a large corpus. Popular word embeddings include Word2Vec (trained on Google News) [8] and GloVe (trained on Wikipedia) [9]. The advantages of word embedding include the small size of the embedding vector and the retention of the semantic meaning of words and their context information. Due to those advantages, word embedding can be used to support many downstream tasks, including resume screening. However, it was reported in [10] that "word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent". The underlying reason is that word embedding essentially captures the association/co-occurrence of words from the training corpus.
If the corpus contains stereotype information, the pattern will be encoded in the resultant word embedding. This study tries to evaluate the risk of word embedding-based resume screening methods with respect to national origin bias, and proposes some mitigation methods.
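For reference, the TF-IDF matching baseline discussed above takes only a few lines with scikit-learn (the resume and job-post strings are toy placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resumes = [
    "certified public accountant with five years of audit experience",
    "software engineer experienced in python and distributed systems",
]
job_post = ["hiring an accountant for financial reporting and audit"]

vectorizer = TfidfVectorizer(stop_words="english")
R = vectorizer.fit_transform(resumes)            # one TF-IDF vector per resume
p = vectorizer.transform(job_post)               # same vocabulary for the post

# Rank candidates by cosine similarity to the job description
print(cosine_similarity(R, p).ravel())
```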
Literature Review
This study is related to multiple research fields, including AI ethics, machine learning fairness, DEI (Diversity, Equality, and Inclusion), NLP, etc. We will review some representative works.
AI ethics concerns principles and guidelines that govern the development and implementation of AI techniques, with respect to human rights and human dignity. Due to the importance of responsible AI, many institutions and organizations have issued guidance for responsible AI. In [11], they provided a global landscape of AI ethics guidelines. Although those guidelines disagree on some points, a set of AI principles has received wide consensus:
* Accountability: This principle ensures responsibility for complying with data protection and for demonstrating that compliance for any AI system. It is required to assess and mitigate the system's risks, and documentation should demonstrate how the system is compliant and justify the choices made in the process.
* AI lawfulness states that the development, deployment, and use of AI systems should have a legal basis and be compliant with any applicable legal regulations. AI fairness requires the fair treatment of sub-populations of users of products involving automation, and ensuring that users are fully aware of the processing to make an informed decision. Transparency means open access to the details of the functionality of an AI product.
* Appropriate security measures should be adopted to decrease the potential for software vulnerabilities and minimize the use of data to reduce the risk of data loss and misuse. Personal data will only be collected and processed to accomplish specific, explicit, and legitimate purposes.
* Developing and deploying AI should comply with the individual rights of information, access, rectification, erasure, restriction of processing, data portability, and the right to be informed.
Improperly designed AI systems invite risks of mistreatment of people with certain demographic characteristics. Machine learning bias and fairness has become an active research field. A good survey of recent research results can be found in [13]. When used in the HR space, such mistreatment would violate equal employment opportunity compliance, which requires treating all people equally when it comes to hiring, promotions, compensation, layoffs, benefits, disciplinary actions, and other employment practices. A sample case is Amazon scrapping its secret AI recruiting tool after it showed bias against women [14]. An algorithm-driven job advertisement, promoting job opportunities in the science, technology, engineering, and math fields, was supposed to be gender neutral, but was displayed less to female audiences, as the algorithm determined younger women are a prized demographic and are more expensive to show ads to [15]. The software Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which measures the risk that a person will commit another crime, was found to be biased against African-Americans [16]. In healthcare, a machine learning-powered scheduling algorithm was found to be biased against economically disadvantaged groups by assigning them to disfavored time slots, because the algorithm determined those patients have a high probability of no-show, given their demographics [17]. Machine bias has been studied in many other applications like housing [18] and credit markets [19].
Mitigating bias in hiring algorithms requires us to explore how predictive technologies work at each step of the hiring process. Tools used in an early step might be fundamentally different from those used later on. Even the same tools may exhibit disparate behaviors because they were fed with different data sets. Bias can come from different sources [5]. Historical bias arises when data contains historical stereotyping, which leads to a model that produces harmful outcomes. Representation bias occurs when the development sample underrepresents some part of the population, and subsequently fails to generalize well for a subset of the use population. Measurement bias occurs when choosing, collecting, or computing features and labels to use in a prediction problem. Aggregation bias arises when a one-size-fits-all model is used for data in which there are underlying groups or types of examples that should be considered differently. Learning bias appears when modeling choices amplify performance disparities across different examples in the data. Evaluation bias occurs when the benchmark data used for a particular task does not represent the use population. Deployment bias arises when there is a mismatch between the problem a model is intended to solve and the way in which it is actually used.
Bias also appears in NLP models. In [10], it is shown that state-of-the-art word embeddings would map "man" to "computer programmer" and "woman" to "homemaker". In [20], an empirical study shows gender bias in coreference resolution systems. Reference [21] identifies bias in text generated by a language model based on recurrent neural networks. Reference [22] studies bias in sentence embeddings. Reference [23] notices gender bias in machine translation. In [24], bias is also confirmed in Google Translate. Reference [25] investigates bias in named entity recognition (NER) systems and observes that more female names as opposed to male names are being tagged as non-person entities or not being tagged at all.
Automated Resume Screening System
A resume screening system's goal is to reduce a large corpus of possible candidates to a manageable number of resumes. Automated resume screening is used by recruiters and others involved in the hiring process. Larger companies with a greater workforce are more likely to use such systems. Automated screening software helps hiring managers quickly review hundreds or even thousands of applications and expedite the hiring process. Existing automated screening software mostly uses keyword-based methods and categorizes resumes based on words. Interestingly, we can find many online tips teaching how to craft resumes to bypass such keyword-based screening software. Keyword-based information retrieval has several disadvantages, like low matching accuracy, high computational overhead, etc. Deep learning-based information retrieval has attracted a lot of interest from both academia and industry. HR technology companies have now studied the use of deep learning, specifically word embedding, to represent resumes as low-dimensional vectors, which are used for downstream tasks. We term this technology resume embedding, which is appealing as it transforms a resume into a structured numerical format that is easy to process. However, the AI risk it involves has not been investigated. The goal of this study is to examine the technique and address potential concerns if there are any.
### Resume Embedding
Resume embedding intends to map a candidate profile to a vector in a relatively low-dimensional space and capture the semantics of candidate profiles, such that similar resumes are placed close together in the embedding space. Embedding can create a denser representation of categorical values and maintain some of the implicit relationship information between those values. Embeddings can be trained via neural networks from a data corpus and have been used in many applications like recommendation systems, computer vision, and semantic search.
A naive resume embedding can be built on existing word embedding. Word embedding is a popular representation of document vocabulary. As opposed to one-hot encoding, another common method for representing categorical variables, which maps a single category to a vector of binary values, word embedding represents words in a low dimensional space and captures context of a word in a document. Popular word embeddings include Word2Vec and GloVe. The Word2Vec algorithm uses a two-layer neural network model to learn word associations from a large corpus of text. It takes as its input a large corpus of text and produces a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space. GloVe is another popular word embedding. Its training model is slightly different from Word2Vec, as it stresses the frequency of co-occurrences and can be interpreted as a summary of the training corpus with low dimensionality that reflects co-occurrences. For example, Word2Vec maps "accountant" to a vector of size 300 as [0.318, -0.234, -0.205,...] and GloVe converts "accountant" to a vector of size 100 as [0.20509, -0.45237, 0.26541,...].
To perform resume embedding, we first clean candidate profiles through a set of preprocessing steps, i.e. normalizing text, removing noisy elements such as URLs and hashtags, deleting stop-words, and performing word stemming and lemmatization. Suppose the cleaned resume \(d\) consists of a set of unique terms \(\{t_{1},...,t_{n}\}\) with corresponding term frequencies \(\{f_{1},..,f_{n}\}\). Let \(W(t_{i})\) be the word embedding of term \(t_{i}\). Then, we can derive the resume embedding \(R(d)\) as:
\[R(d)=\sum_{i}f_{i}W(t_{i}). \tag{1}\]
Resume embeddings can then be used for resume matching. Given a job post \(p\), we apply the same preprocessing steps and the same word embedding conversion. Denote the resultant embedding as \(R(p)\). The matching between resume \(d\) and job post \(p\) can be measured by their cosine similarity:

\[sim(d,p)=\frac{R(d)\cdot R(p)}{|R(d)||R(p)|}. \tag{2}\]
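To make Eqs. (1) and (2) concrete, here is a minimal sketch in Python, assuming `word_vec` is a hypothetical term-to-vector lookup (e.g. a dictionary of NumPy arrays or a gensim `KeyedVectors` object) and `terms` is the list of cleaned tokens of a resume or posting:

```python
from collections import Counter
import numpy as np

def resume_embedding(terms, word_vec, dim=300):
    """Eq. (1): R(d) = sum_i f_i * W(t_i) over the unique terms of a document."""
    emb = np.zeros(dim)
    for term, freq in Counter(terms).items():
        if term in word_vec:          # skip out-of-vocabulary terms
            emb += freq * np.asarray(word_vec[term])
    return emb

def similarity(resume_terms, posting_terms, word_vec):
    """Eq. (2): cosine similarity between resume and job-posting embeddings."""
    r = resume_embedding(resume_terms, word_vec)
    p = resume_embedding(posting_terms, word_vec)
    return float(r @ p / (np.linalg.norm(r) * np.linalg.norm(p)))
```

A screening system would then rank all resumes by `similarity` against a posting and keep the top N.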
### Resume Embedding Bias and Measure
Resume embedding indeed significantly improves HR productivity and makes candidate screening efficient. However, it can be vulnerable to stereotypes that are inherited from the training data. Reference [10] reports that word embeddings, even when trained on Google News, show gender stereotypes to a disturbing level: for example, "man" is closer to "computer programmer", while "woman" is closer to "homemaker". Gender bias is exhibited in the word embeddings of many occupation terms. Besides gender bias, many other biases, such as national origin, age and sexual orientation, exist in word embeddings as well. Those biases reflect skewed perceptions in our society. If we apply such word embeddings to resume representation, it is expected that the bias will be carried over to automated resume screening.
This study focuses on national origin bias in word embedding-based automated resume screening. According to the Equal Employment Opportunity Commission (EEOC), national origin discrimination involves treating people (applicants or employees) unfavorably because they are from a particular country or part of the world, because of their ethnicity or accent, or because they appear to be of a certain ethnic background (even if they are not). The law forbids discrimination when it comes to any aspect of employment, including hiring, firing, pay, job assignments, promotions, layoff, training, fringe benefits, and any other term or condition of employment. Unfortunately, in practice we have seen many reported cases of national origin discrimination. Recruiters may even unconsciously screen out candidates based on certain information on their resume that is associated with some demographic groups.
National origin discrimination can occur in word embedding-based automated resume screening. People from the same national origin tend to use similar vocabularies or have overlapping interests, due to their cultural background. These overlaps make their resumes exhibit certain patterns, which can be learned and exploited. Our experiments found that it is possible to recognize a person's national origin to a certain extent, simply based on that person's writing. These days, many intelligent HR tools are able to match job postings with candidate profiles using AI. We found that job postings and profiles that belong to the same demographic group can have a higher matching rate. As a result, companies run by a certain ethnicity would only hire/attract people of the same ethnic background. This issue has in fact become an obstacle to today's talent acquisition.
To evaluate and address the issue, we first need a fairness measure to determine bias severity. In practice, we can sense that demographic bias happens if candidates from one demographic group are greatly favored over candidates from another demographic group. We expect that in a "fair" resume screening process, the demographic decomposition of the selected candidates should roughly match that of the entire resume data set on which the search is conducted. For example, if Chinese candidates make up 1/3 of the resumes collected, the selected Chinese candidates should make up 1/3 of the selected candidates. To this end, we define a fairness measure as below:
\[Fairness=\frac{P(selected|d_{1})}{P(selected|d_{2})} \tag{3}\]
where \(P(selected|d_{1})\) is the chance of a candidate from demographic group \(d_{1}\) being selected by the system, \(d_{1}\) denotes the demographic group in which a candidate's resume has the lowest chance of being selected, and \(d_{2}\) is the demographic group in which a candidate's resume has the highest chance of being selected. A high fairness measure indicates that each demographic group has about an equal chance of being selected by the system, which suggests less or no bias; a low fairness measure indicates that one demographic group is much more favored by the system than another, suggesting more bias.
Another important measure of a resume screening system is its accuracy. An accurate resume screening system means that we can expect the candidates selected by a job posting to be matches to that job posting. For an inaccurate resume screening system, though, a resume being selected by a job posting carries no information about whether it is a match to that job posting. An accuracy measure is proposed as below:
\[Accuracy=P(match|selected). \tag{4}\]
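The two measures can be computed directly from selection decisions. Below is a minimal sketch, assuming `group`, `selected` and `match` are parallel lists describing, for each resume, its demographic group, whether the system selected it, and whether it truly matches the posting (these names are ours, not from the original system):

```python
def fairness(group, selected):
    """Eq. (3): lowest per-group selection rate divided by the highest."""
    rates = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = sum(selected[i] for i in idx) / len(idx)
    return min(rates.values()) / max(rates.values())

def accuracy(match, selected):
    """Eq. (4): P(match | selected), the fraction of selected resumes that match."""
    sel = [i for i, s in enumerate(selected) if s]
    return sum(match[i] for i in sel) / len(sel)
```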
### Bias Mitigation
Although word embedding converts terms to numerical vectors, those numerical values encode information on national origin. For example, certain foods or activities are highly associated with some national origins. Some educational backgrounds also imply national origin: if the word "Shanghai" is identified in a resume or a job posting, it can be reasonably inferred that the resume/job posting is from China. It is known that the appearance of the same term in both a resume and a job posting will result in increased cosine similarity, and thus render the resume more likely to be selected by the system. To resolve this issue, we should remove or decrease the weight of the biased terms to minimize their effect. Terms like "Shanghai" or "India" are obviously biased demographically, but other less obvious terms can easily be biased without being recognized as biased terms.
To mitigate bias, we propose to identify these potentially biased terms and then remove/downgrade their impact on the resume matching algorithm. We use the p-value notion [26] to capture the severity of inherent national origin bias for a term \(t\). It is defined as:
\[p(t)=\frac{P(t|d_{min})}{P(t|d_{max})}, \tag{5}\]
where \(P(t|d)\) denotes the average frequency of a word appearing in one demographic group, and \(d_{min}\) is the demographic group where term \(t\) is the least likely to be found (lowest \(P(t|d)\)). \(d_{max}\), on the contrary, is the demographic group \(d\) that maximizes \(P(t|d)\) for term \(t\).
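As a minimal sketch, the p-ratio can be computed from per-group term statistics; here `freq_by_group` is an assumed mapping from each demographic group to a dictionary of average term frequencies:

```python
def p_ratio(term, freq_by_group, eps=1e-9):
    """Eq. (5): min over groups of P(t|d) divided by max over groups of P(t|d)."""
    freqs = [f.get(term, 0.0) for f in freq_by_group.values()]
    return min(freqs) / (max(freqs) + eps)   # close to 0 => likely biased term
```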
A low p-ratio can be an indication of a potentially biased term, because the term is much less likely to be found in one demographic group than in another. Table 3 shows the corresponding p-ratio values for terms seen in the word clouds; terms like _management_ and _business_ have much higher p-ratios than words like _shanghai_ and _india_. To undermine the effects of these terms, it is intuitive to reduce the weight of the biased terms and promote the weight of those that are considered fair. To this end, a fair resume embedding transformation is introduced to adjust for the term-wise biases:
\[\tilde{R}(d)=\sum_{i}f_{i}W(t_{i})p(t_{i}). \tag{6}\]
By introducing the p-ratio value, the above adjusted resume embedding has less impact from biased words and reduces national origin discrimination. However, it applies weights to terms proportional to their p-ratio, which might not be accurate and might not yield the optimal result. To address that, a sigmoid function is applied to the p-ratio to generate a weight for each term, controlled by two parameters: \(\lambda\) and \(\tau\). The new weighting for each term \(t\) is defined as such:
\[\bar{R}(d)=\sum_{i}f_{i}W(t_{i})\sigma(\lambda(p(t_{i})-\tau)). \tag{7}\]
The sigmoid function converts a linear input into an output that is steep near a boundary value, making the output close to 0 for all input values below the boundary value and close to 1 for all input values above it. The two parameters \(\lambda\) and \(\tau\) control the steepness of the slope and the boundary value, respectively. The intuition is that if we know a certain term is strongly biased, we want to remove it completely, and for words that we know to be unbiased, we do not reduce their weight even if they show up a little more often in one demographic group than another. The best combination of \(\lambda\) and \(\tau\) needs to be tuned to fit each scenario. To illustrate, Figure 1 shows the sigmoid function with \(\lambda=50\) and \(\tau=0.3\).
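A minimal sketch of the sigmoid-adjusted fair embedding of Eq. (7) follows; `term_freqs`, `word_vec` and `p_values` are assumed inputs (term frequencies of a cleaned resume, a word-vector lookup, and precomputed p-ratios):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_embedding(term_freqs, word_vec, p_values, lam=50.0, tau=0.3, dim=300):
    """Eq. (7): R(d) = sum_i f_i * W(t_i) * sigmoid(lam * (p(t_i) - tau))."""
    emb = np.zeros(dim)
    for term, freq in term_freqs.items():
        if term in word_vec:
            weight = sigmoid(lam * (p_values.get(term, 1.0) - tau))
            emb += freq * np.asarray(word_vec[term]) * weight
    return emb
```

Large `lam` effectively removes every term whose p-ratio falls below `tau`, while small `lam` weights terms more gently.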
## 4 Experimental Study
In this section, we will conduct experiments to examine word embedding-based automated resume screening and its bias towards national origin. In particular, we want to investigate whether a job posting from a certain ethnicity group tends to attract/match resumes with the same ethnicity background, deemed by automated resume screening tools. We will also evaluate our bias mitigation methods.
### Dataset and Pre-processing
We collect two types of datasets, i.e. resumes and job postings. We use a public resume dataset, which consists of over 1,000 profiles in PDF or DOCX format. We used libraries including docx2txt and pdfplumber to extract text from
Figure 1: Sigmoid Function with \(\lambda=50\) and \(\tau=0.3\)
the original resume files. The resumes were collected in Singapore, so the demographic/position distribution may match that of Singapore, with finance, management, and accounting positions dividing the dataset roughly evenly. After extracting texts and relevant information from the resumes, 129 resumes from three demographic groups - 64 from China, 31 from India, 34 from Malaysia based on automatic country origin labeling - are selected for further analysis. A manual check is then performed on the labels to ensure that all labels are accurate. As a result, 28 resumes from India, 48 resumes from China, and 29 resumes from Malaysia (105 resumes in total) are kept. For the extracted resumes, we perform standard text cleaning, e.g. removing stop words and word stemming. Figure 2 shows a resume example.
To get a peek at the candidate resumes from different demographic groups, we generate a word cloud for each group. Figure 3 provides a visualization of word frequency for each demographic group and clearly shows differences in word use, although those candidates might have similar experiences and look for the same type of positions. If we purely rely on algorithms to do job matching based on texts, it will inevitably lead to mistakes and bias.
For job postings, we collected postings from a popular job site (Indeed.com) to test the resume screening system. A job posting usually comprises parts that state the qualifications of the position as well as parts that advertise the company (the "What to Expect" part). When screening resumes based on a job posting, the "What to Expect" part contributes little to the search criteria and is trimmed before conducting resume screening. For each demographic group targeted in this study, three job postings are collected, each resembling one of the common types of role in the dataset: management, analytic, and accounting roles.
To assess whether the proposed system is accurate, a binary relationship indicating whether a resume matches a job posting is provided for every resume-job posting pair through manual review. This process is done before any further processing takes place to ensure the integrity of the experiments. In the ground truth labels, 372 out of 945 resume-job posting pairs are labeled as match, so if resumes were randomly selected for any job posting, one should expect about 39% of them to match the job posting.
Figure 3: Word Cloud
Figure 2: Example of Resume File
### Results
The first experiment is to evaluate the performance of word embedding based automated resume screening with respect to the fairness and accuracy measures. For each ethnic group (i.e. India, China, and Malaysia), we pick 3 job postings. For each job posting, we apply word embedding to generate a vector representation. For all candidate profiles, we use the same word embedding to generate resume embeddings. For each job posting we retrieve the top 10 profiles in terms of cosine similarity. For each national origin, we report the results averaged over the three postings in Table 1. The overall accuracy is 0.622, which is much better than random guessing. As the first step of a hiring process, 62% accuracy of automated screening is satisfactory and justifies its use. The fairness measure, however, is alarming, with an overall value of 0.309 and a fairness value for India of only 0.165. It suggests that job postings do possess a strong signal of national origin and word embedding based resume filtering exhibits evident bias.
We further narrow down candidate profiles and examine accuracy between job posting origin and candidate origin. The results are reported in Table 2. For example, for job postings originating from India, the overall accuracy is 0.567. Among retrieved resumes, the accuracy for candidates from India, China, and Malaysia is 0.4, 0.8, and 0.467, respectively. At first glance, there seems to be no bias issue, as the accuracy for India resumes is lower than for China and Malaysia. However, this in fact suggests bias. The accuracy for China resumes is high because fewer resumes are being retrieved. On the contrary, more resumes from India being retrieved causes low accuracy for India profiles.
The second experiment is to evaluate our bias mitigation methods, i.e. p-ratio and sigmoid-adjusted p-ratio. The p-ratio method reduces bias by decreasing the weights of biased terms. For example, "Shanghai" carries a lot of information about national origin. To downgrade its effect, we can multiply its word embedding vector by a value between 0 and 1. The corresponding value is called the p-ratio; its definition is given in the previous section. The p-ratio is low (close to 0) for biased terms like "india" and "shanghai" and is high (close to 1) for unbiased terms like "finance" and "management". Table 3 reports the p-ratio values for some words.
After computing p-ratio values for all terms, we can adjust resume embeddings by reducing the weights of biased terms and then perform job-resume matching. We carry out the previous experiment with the adjusted fair embedding this time. The results are reported in Tables 4 and 5. The comparison of fairness measures in Table 4 shows that the fair embedding, adjusted by p-ratio values, indeed improves fairness. The overall fairness measure has increased from 0.309 to 0.782, which is a significant jump. The fairness measure for each individual national origin also improves greatly. Particularly, for the Malaysia national origin, it increases from 0.330 to 0.875. Table 5 reports the accuracy measure for each pair of job posting origin and resume origin. It shows that accuracy values are better than with regular word embedding in Table 2
\begin{table}
\begin{tabular}{c|c|c}
**Country Origin of Job Posting** & **Fairness Measure** & **Accuracy** \\ \hline India & 0.165 & 0.567 \\ China & 0.292 & 0.633 \\ Malaysia & 0.330 & 0.667 \\ Overall & 0.309 & 0.622 \\ \end{tabular}
\end{table}
Table 1: Word2Vec Fairness Measure for Job Postings from Different Countries
\begin{table}
\begin{tabular}{c|c c c c}
**job posting origin \textbackslash{}candidate origin** & **India** & **China** & **Malaysia** & **Overall** \\ \hline
**India** & 0.400 & 0.800 & 0.467 & 0.567 \\
**China** & 0.714 & 0.667 & 0.545 & 0.633 \\
**Malaysia** & 0.833 & 0.600 & 0.643 & 0.667 \\
**Overall** & 0.667 & 0.688 & 0.550 & 0.622 \\ \end{tabular}
\end{table}
Table 2: Accuracy Measure by Job Posting Origin/Candidate Origin Using Word2Vec
\begin{table}
\begin{tabular}{c|c}
**term** & **p-ratio** \\ \hline management & 0.966 \\ finance & 0.900 \\ audit & 0.574 \\ business & 0.863 \\ india & 0.065 \\ shanghai & 0.144 \\ malaysia & 0.021 \\ \end{tabular}
\end{table}
Table 3: Sample Terms and Their p-ratio Values
in general. The results demonstrate that the fair embedding with adjusted p-ratio values performs better than regular word embedding, with respect to both fairness and accuracy.
The third experiment is to evaluate the adjusted word embedding with the sigmoid of the p-ratio. As pointed out in the prior section, the p-ratio adjusts term weights proportionally. As such, some terms which occur in most resumes but do not carry information would get inappropriately large weights. For example, "degree" may appear in many resumes. The word alone does not carry much information, yet it would be assigned a large weight according to the p-ratio formula. To alleviate the issue, the sigmoid function curves the p-ratio, pushing values up when they are above a threshold, and suppressing values down when they are below the threshold. The intent is to improve the word embedding performance. Recall that the sigmoid function has two parameters, \(\lambda\) and \(\tau\). They control the steepness of the slope near the boundary value and the boundary value itself, respectively, as shown in Figure 1. The intuition behind this is that if we know a certain term is strongly biased, we want to remove it completely, and for words that we know to be unbiased, we do not reduce their weight even if they show up a little more often in one demographic group than another. The best combination of \(\lambda\) and \(\tau\) needs to be tuned to fit each scenario. A plot for tuning the parameters is shown in Figure 4.
From the plot, it can be observed that as \(\tau\) increases, the accuracy measure drops. This is intuitive because when we increase the boundary value, terms that are inherently not biased and are useful to the screening process are also omitted, causing a decrease in accuracy. There is also a range where accuracy increases with \(\tau\), possibly caused by the removal of distraction from biased terms that do not contribute to the resume-job posting analysis. For this specific case, \(\lambda=50\) and \(\tau=0.4\) are used as they yield the best performance measures overall.
We apply the fair embedding with the sigmoid of the p-ratio to the same dataset. The results are reported in Tables 6 and 7. We see that while the fairness measure does not change much at these \(\lambda\) and \(\tau\) values, the accuracy measure increases as a result of our tuning strategy. The sigmoid function provides a way to find a sweet spot in the fairness-accuracy trade-off.
To visualize the effects of different resume embedding methods, we utilize T-Distributed Stochastic Neighbor Embedding (T-SNE) plots [27], a popular statistical method for visualizing high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. The visualizations in Figures 5, 6 and 7 provide a visual representation of how fair embedding with p-ratio and fair embedding with the sigmoid of the p-ratio mitigate the demographically separated clusters in the embedded resume space. The clustering boundary is clear and well defined in Figure 5, and the clusters start to mix together in Figures 6 and 7, as a result of removing or lessening the weight of the intra-cluster similarities.
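A sketch of the visualization step, assuming `embeddings` is an (n_resumes × dim) array of resume embeddings and `labels` lists each resume's national origin:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(embeddings, labels):
    """2-D T-SNE map of resume embeddings, colored by national origin."""
    coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)
    labels = np.asarray(labels)
    for origin in np.unique(labels):
        pts = coords[labels == origin]
        plt.scatter(pts[:, 0], pts[:, 1], label=origin)
    plt.legend()
    plt.show()
```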
Then, we conduct another experiment to see how the number of top matched resumes affects the result. In the exemplary system, 10 resumes at a time are selected from the resume pool for each query job description (N=10). In reality, however, the user should be able to customize how many resumes should be screened. The choice of N can affect the
\begin{table}
\begin{tabular}{c|c c}
**Origin of Job Posting** & **Word2Vec With p-Value** & **Word2Vec with p-Value and Sigmoid** \\ \hline
**India** & 0.216 & 0.189 \\
**China** & 0.429 & 0.637 \\
**Malaysia** & 0.875 & 0.802 \\
**Overall** & 0.782 & 0.782 \\ \end{tabular}
\end{table}
Table 6: Fairness Measure After Bias Mitigating Techniques
\begin{table}
\begin{tabular}{c|c c c}
**Origin of Job Posting** & **Word-Embedding** & **Fair-Embedding** \\ \hline China & 0.292 & 0.429 \\ India & 0.165 & 0.216 \\ Malaysia & 0.330 & 0.875 \\ Overall & 0.309 & 0.782 \\ \end{tabular}
\end{table}
Table 4: Comparison of Fairness Measure using Word2Vec
\begin{table}
\begin{tabular}{c|c c c c}
**Job Posting Origin**\(\backslash\)**Resume Origin** & **China** & **India** & **Malaysia** & **Overall** \\ \hline China & 0.643 & 0.500 & 0.667 & 0.600 \\ India & 0.375 & 1.00 & 0.500 & 0.533 \\ Malaysia & 0.692 & 0.667 & 0.875 & 0.733 \\ Overall & 0.600 & 0.652 & 0.625 & 0.622 \\ \end{tabular}
\end{table}
Table 5: Accuracy Measure After Using Fair-Embedding with Word2Vec
Figure 4: Fairness and Accuracy of different \(\lambda\) and \(\tau\)s
Figure 5: T-SNE Before Applying p-Value fairness
\begin{table}
\begin{tabular}{c|c c c c} **Job Posting Origin**\(\backslash\)**Resume Origin** & **China** & **India** & **Malaysia** & **Overall** \\ \hline China & 0.667 & 0.625 & 0.571 & 0.633 \\ India & 0.333 & 1.00 & 0.526 & 0.567 \\ Malaysia & 0.846 & 0.778 & 1.00 & 0.867 \\ Overall & 0.676 & 0.772 & 0.647 & 0.689 \\ \end{tabular}
\end{table}
Table 7: Accuracy Measure After Using Fair-Embedding-Sigmoid Word2Vec
initial accuracy/fairness measure dramatically if not chosen carefully: for example, if N is too small, there is a higher chance of getting a fairness measure of 0 if none of the resumes from a demographic group is selected. If N is so big that it covers almost all of the resumes, then the fairness measure will be higher, which can underestimate the bias in the system. The bias mitigating techniques, however, work regardless of the choice of N (Tables 8, 9 and 10; Figures 8 and 9). It turns out that, regardless of the N value chosen, the bias mitigating techniques are always able to increase (or maintain) both performance measures.
\begin{table}
\begin{tabular}{c|c c c c}
**Job Posting Origin**\(\backslash\)**Resume Origin** & **China** & **India** & **Malaysia** & **Overall** \\ \hline China & 0.571 & 0.588 & 0.571 & 0.578 \\ India & 0.333 & 0.769 & 0.478 & 0.533 \\ Malaysia & 0.600 & 0.714 & 0.571 & 0.622 \\ Overall & 0.515 & 0.682 & 0.534 & 0.578 \\ \end{tabular}
\end{table}
Table 8: Accuracy Measure Before Fairness-Embedding Word2Vec n=15
Figure 6: T-SNE After Applying p-Value fairness
Figure 7: T-SNE After Applying p-Value fairness with sigmoid
\begin{table}
\begin{tabular}{c|c c c c}
**Job Posting Origin**\(\backslash\)**Resume Origin** & **China** & **India** & **Malaysia** & **Overall** \\ \hline China & 0.667 & 0.400 & 0.625 & 0.600 \\ India & 0.462 & 0.875 & 0.500 & 0.556 \\ Malaysia & 0.565 & 0.700 & 1.00 & 0.711 \\ Overall & 0.587 & 0.643 & 0.660 & 0.622 \\ \end{tabular}
\end{table}
Table 10: Accuracy Measure After Fairness-Embedding-Sigmoid W2V n=15
Figure 8: Accuracy Measure with Different N Values
\begin{table}
\begin{tabular}{c|c c c c}
**Job Posting Origin**\(\backslash\)**Resume Origin** & **China** & **India** & **Malaysia** & **Overall** \\ \hline China & 0.640 & 0.500 & 0.625 & 0.600 \\ India & 0.500 & 0.857 & 0.462 & 0.533 \\ Malaysia & 0.632 & 0.750 & 0.857 & 0.733 \\ Overall & 0.607 & 0.677 & 0.604 & 0.622 \\ \end{tabular}
\end{table}
Table 9: Accuracy Measure After Fairness-Embedding Word2Vec n=15
Figure 9: Fairness Measure with Different N Values
The Google News Word2Vec model is used in this experiment. Because this model is general-purpose rather than specialized, it may not represent the best performance that can be achieved by a word-embedding based resume screening system. The bias mitigating techniques presented, however, work on other word embedding models as well as the Google News Word2Vec. As shown in Tables 11 and 12, the GloVe word embedding initially performs worse than the Google News Word2Vec model on both measures. The mitigating techniques, including p-ratio and p-ratio with sigmoid, nevertheless still manage to increase its fairness and accuracy measures.
## 5 Conclusion and Discussion
AI has been viewed as a technological revolution and has provided unprecedented opportunities to our society. However, opportunities also come with challenges and even harm. If we cannot understand AI technologies and their inherent fairness/bias issues well, AI can cause great damage and create ethical and societal issues on a large scale. This study follows the active research stream on AI bias in automated resume filtering, focusing on the application of word embedding in candidate profile matching. Our study finds that word embedding (learned from a large corpus with neural networks) based job-resume matching algorithms can carry national origin bias, which reflects the biases of individual terms. Using such systems, national origin bias would be amplified in workforces and can invite lawsuits. To address the issue from the algorithmic perspective, we introduce several improved algorithms, inspired by existing research results. The key idea is to adjust the weights of individual terms when generating resume vectors, reducing the impact of terms with strong national origin bias. Extensive experiments are conducted to evaluate the proposed algorithms. The results suggest that our algorithms perform well with respect to both the fairness and accuracy measures.
Since the weights of the terms in the proposed algorithms are learned from the training data, the algorithm may require a medium-to-large-scale labeled dataset when applied in the real world. When it comes to small-scale tasks like reviewing a single resume, other bias-mitigating techniques might be preferable. While the algorithm is intended to reduce biases for all demographic groups divided along protected attributes (gender, race, religion, etc.), extra work might be needed to obtain labeled data sets for some of these attributes.
Algorithmic fairness in HR technologies is a broad subject. There are a lot of studies to be done. For example, it is worth investigating how to adjust word vectors directly along some subspace, such as national origin, to remove inherent bias, as opposed to adjusting the weights of terms. Such a solution would be more elegant and useful. Some protected attributes, like sexual orientation and disability status, might be difficult to infer from resume text. However, their bias does exist in the AI world. When there is no training data to reflect those attributes' information, how to devise algorithms to remove such bias? In addition to resume filtering, bias appears in other phases of AI-assisted hiring as well. Exploring those issues will be left as our future work.
| 2304.11625 | Meaningful Causal Aggregation and Paradoxical Confounding | In aggregated variables the impact of interventions is typically ill-defined because different micro-realizations of the same macro-intervention can result in different changes of downstream macro-variables. We show that this ill-definedness of causality on aggregated variables can turn unconfounded causal relations into confounded ones and vice versa, depending on the respective micro-realization. We argue that it is practically infeasible to only use aggregated causal systems when we are free from this ill-definedness. Instead, we need to accept that macro causal relations are typically defined only with reference to the micro states. On the positive side, we show that cause-effect relations can be aggregated when the macro interventions are such that the distribution of micro states is the same as in the observational distribution; we term this natural macro interventions. We also discuss generalizations of this observation. | Yuchen Zhu, Kailash Budhathoki, Jonas Kuebler, Dominik Janzing | 2023-04-23T11:51:12Z | http://arxiv.org/abs/2304.11625v3 |

# Meaningful Causal Aggregation and Paradoxical Confounding
###### Abstract
In aggregated variables the impact of interventions is typically ill-defined because different micro-realizations of the same macro-intervention can result in different changes of downstream macro-variables. We show that this ill-definedness of causality on aggregated variables can turn unconfounded causal relations into confounded ones and vice versa, depending on the respective micro-realization. We argue that it is practically infeasible to only use aggregated causal systems when we are free from this ill-definedness. Instead, we need to accept that macro causal relations are typically defined only with reference to the micro states. On the positive side, we show that cause-effect relations can be aggregated when the macro interventions are such that the distribution of micro states is the same as in the observational distribution and also discuss generalizations of this observation.
## 1 Introduction
Discussions on publicly relevant questions in economy, politics, and health typically refer to variables that are actually aggregations over a large number of components. For instance, quantities like _employment rate_, _voter participation_ or _vaccination rate_ are usually aggregations over business sectors or/and over the whole population of a country or region. Following the literature (Chalupka et al., 2016; Rubenstein et al., 2017), we will refer to those aggregated variables (e.g. 'total sales' and 'total revenue' of a company) as _macro-variables_, and the detailed individual level quantities as _micro-variables_.
Previous works such as Rubenstein et al. (2017), Spirtes and Scheines (2004) and Beckers and Halpern (2019) show that macro variables do not necessarily admit unambiguous causal conclusions, but Rubenstein et al. (2017) provide a notion of consistency between macro- and micro-models. Their notion is strong since it requires that all microscopic realizations of an intervention on an aggregated variable result in the same effect on aggregated downstream variables. As follow-up work, Beckers and Halpern (2019) propose a sequence of definitions of causal consistency, starting from that of Rubenstein et al. (2017); each new definition they provide is stronger than the previous one.
Despite the difficulty of consistent aggregation in the real world, the concept of aggregation is ubiquitous. For example, economists study the impact of Gross Domestic Product on civil conflict (Miguel et al., 2004) as it would be overwhelming to model each person's socioeconomic and political activities; physicists study the impact of pressure and temperature of gas inside a container because it is practically impossible to model the motions of individual particles. Thus, instead of asking whether there is a causal model over macro-variables that is consistent in the sense of Rubenstein et al. (2017), Beckers and Halpern (2019), Spirtes and Scheines (2004), or is approximately consistent (Rischel and Weichwald, 2021; Beckers et al., 2019), we discuss how to make causal claims of interventions on macro-variables _relative to their realizations on micro-variables_. We find that, as a result, _confoundedness_ is ambiguous in the macro-variables.
Section 2 introduces the framework of micro- and macro-variable systems. Section 3 introduces 'natural micro-realizations' of macro-interventions as those that respect the distributions of micro states for given macro states and shows that they enable an unconfounded macro causal model when the micro model is itself unconfounded, while shifts of micro distributions can induce confounding. We derive this phenomenon as a result of a change of coordinate systems, where one can identify a macro-intervention with choices of particular coordinate systems for the micro-variables. Section 4 shows cases where the confounding induced by the shift of micro distribution cancels out with confounding from an additional variable, which results in
an unconfounded macro system. Section 5 discusses generalizations of 'natural micro-realizations' to general DAGs and their challenges. 1
Footnote 1: In Appendix F we give multiple real-world examples of aggregation to illustrate its ubiquity.
The goal of this paper is to initiate a discussion on how meaningful causal aggregation can be made possible, with realistic assumptions, while also being aware of the paradoxes described here.
## 2 Macro- and Micro-Intervention
In this work, micro-level causal models are standard acyclic structural equation models, defined following the standard frameworks introduced by Pearl (2009) and using notation from Peters et al. (2017):
**Definition 1** (micro causal model).: A micro causal model is an acyclic structural causal model (SCM).
**Definition 2** (structural causal model).: A structural causal model (SCM) \(M\mathrel{\mathop{:}}=(\mathbf{S},P_{\mathbf{U}})\) consists of a collection \(\mathbf{S}\) of \(d\) (structural) assignments
\[X_{j}\mathrel{\mathop{:}}=f_{j}(\textbf{PA}_{j},U_{j}),\ \ j=1,...,d, \tag{1}\]
where \(\textbf{PA}_{j}\subseteq\{X_{1},\cdots,X_{d}\}\setminus\{X_{j}\}\) are called **parents of \(X_{j}\)**; and a joint distribution \(P_{\mathbf{U}}=P_{U_{1},\cdots,U_{d}}\) over the noise variables, which we require to be jointly independent. The corresponding causal DAG \(G\) has nodes \(X_{j}\) and arrows into \(X_{j}\) from each of its parents. Note, that \(X_{j}\) does _not_ have to be one-dimensional.
Given a micro causal model, macro-variables can be thought of as arising from applying an aggregation map to the micro-variables.
**Definition 3** (aggregation map, macro-variables).: Given a random variable \(X\) with range \(\mathcal{X}\), an _aggregation map_ is a surjective but non-injective function \(\pi:\mathcal{X}\rightarrow\bar{\mathcal{X}}\). Define \(\bar{X}=\pi(X)\) to be the _macro-variable_ of \(X\) under \(\pi\).
Graphically, we represent aggregation by the graph \(X\leftrightarrow\bar{X}\), where the bi-directed arrow visualizes the fact that \(\bar{X}\) can be seen as effect of \(X\) in the observational distribution, but the direction is reversed once we talk about interventions on \(\bar{X}\). Intuitively, readers can think about total sales as a consequence of individual sales before intervention; during an intervention on total sales, its change is reflected in individual-level sales (so the arrow is reversed).
We will later consider separate aggregations at each node. Then each \(X_{j}\) will often be a vector in \(\mathbb{R}^{n}\) and \(\bar{X}_{j}\) the sum or average of all its components. Given a distribution \(P(X_{1},\ldots,X_{n})\) over the micro-variables, defining \(\pi_{j}\) for each node induces a joint distribution \(P(X_{1},\ldots,X_{n},\pi(X_{1}),\ldots,\pi(X_{n}))\). To avoid further indices, we use the same symbol \(\pi\) instead of \(\pi_{j}\) for each node. In our examples we will anyway mostly work with the sum as aggregation map.
**Definition 4** (amalgamated graph).: Given a micro causal model and a set of aggregation maps, we define the amalgamated graph, \(G^{a}\mathrel{\mathop{:}}=(\mathcal{V},\bar{\mathcal{V}},\mathcal{E},\mathcal{ C})\), where \(V\in\mathcal{V}\) if \(V\) is a micro-variable, \(\bar{V}\in\bar{\mathcal{V}}\) if \(\bar{V}\) is a macro-variable. Further, \(U\to V\in\mathcal{E}\) if \(U\) and \(V\) are micro-variables and \(U\) is a parent of \(V\), and \(S\leftrightarrow\bar{S}\in\mathcal{C}\) if \(\bar{S}\) is the aggregation of \(S\).
The first consequence of aggregation is that the aggregated variables may no longer show a well-defined causal relation:
**Example 1**.: Consider the cause-effect relation \(X\to Y\), with \(X=(X_{1},X_{2})\) and \(Y=(Y_{1},Y_{2})\). For \(j=1,2\), set \(Y_{j}\mathrel{\mathop{:}}=\alpha_{j}X_{j}\). Define \(\bar{X}\mathrel{\mathop{:}}=\pi_{X}(X_{1},X_{2})=X_{1}+X_{2}\), and likewise \(\bar{Y}\mathrel{\mathop{:}}=\pi_{Y}(Y_{1},Y_{2})=Y_{1}+Y_{2}\). Then, for \(\alpha_{1}\neq\alpha_{2}\), the effect of setting \(\bar{X}\) to \(\bar{x}\) on \(\bar{Y}\) is ill-defined without any further specification. This example violates the consistency condition set out in Rubenstein et al. (2017). We elaborate in Appendix A.
In reality, the case where \(\alpha_{1}=\alpha_{2}\) is rare, and the generic case is \(\alpha_{1}\neq\alpha_{2}\). Since the operation 'setting \(\bar{X}\) to \(\bar{x}\)' is not well-defined _a priori_, its micro-realization needs to be specified whenever talking about it. More precisely, we need to specify a distribution according to which we randomize the micro state (the 'joint manipulation' in Spirtes and Scheines (2004)):
**Definition 5** (macro intervention).: Given an aggregation map, \(\pi:\mathcal{X}\rightarrow\bar{\mathcal{X}}\), a (stochastic) _macro-intervention_ at \(\bar{x}\), denoted \(do(\bar{x})\), is a probability measure over \(\mathcal{X}\), denoted \(P^{do}_{\pi,\bar{x}}(X)\), such that \(\pi(x)=\bar{x}\quad\forall x\in\text{supp}(P^{do}_{\pi,\bar{x}}(X))\). When it is clear in context which aggregation map we refer to, we simply write \(P^{do}_{\pi}(X)\) for light notation. We sometimes refer to \(P^{do}_{\pi,\bar{x}}(X)\) as the _micro realization_ of the intervention at \(\bar{x}\).
Note that also Chalupka et al. (2016) define a macro-level intervention via actual interventions on micro-variables, but consider scenarios where the result is insensitive to the microscopic realization. As a further difference to their setting, we consider stochastic interventions (Correa and Bareinboim, 2020) as well as deterministic ones, while they only consider the latter.
The following definition formalizes the fact that interventions on aggregated variables reverse the causal relation between \(X\) and \(\bar{X}\), see also Figure 1:
**Definition 6** (macro-intervention graph).: Let \(G^{a}=(\mathcal{V},\bar{\mathcal{V}},\mathcal{E},\mathcal{C})\) be an amalgamated graph. A macro intervention \(P^{do}_{\pi,\bar{x}}\) maps \(G^{a}\) to the _macro-intervention graph_, \(G^{a}\mathord{\prime}\mathrel{\mathop{:}}=(\mathcal{V},\bar{\mathcal{V}}, \mathcal{E}^{\prime},\mathcal{C}^{\prime})\), where \(\mathcal{E}^{\prime}=\mathcal{E}\bigcup\{\bar{X}\to X\}\setminus\{V \to X:V\in\mathcal{V}\setminus\{X\}\}\) and \(\mathcal{C}^{\prime}=\mathcal{C}\setminus\{X\leftrightarrow\bar{X}\}\).
## 3 Macro-confounding in Unconfounded Micro-models
We now show that both the effect of a macro-intervention at \(\bar{x}\) and the qualitative property of being confounded or not depend on the micro-realization of the macro-intervention.
### Confounding on the Macro Level
Let \(X\) and \(Y\) be two (multi-variate) micro-variables with associated macro variables \(\bar{X}\) and \(\bar{Y}\). Suppose that \(X\) precedes \(Y\) in causal order. Denote the observational distribution by \(P\). Given a macro intervention at \(\bar{x}\), \(P^{do}_{\bar{x}}(X)\), the post-intervention quantity \(P(\bar{y}|do(\bar{X}:=\bar{x}))\) is well-defined: \(P(\bar{y}|do(\bar{X}:=\bar{x}))=\int_{\mathcal{X},\mathcal{Y}}P(\bar{y}|Y=y)P( y|do(X:=x))P^{do}_{\pi,\bar{x}}(x)dxdy\). We can then define the notion of macro-confounding by comparing the observational and interventional distributions of macro variables.
**Definition 7** (macro confounding).: Let \(X\) and \(Y\) be two micro-variables with associated macro variables \(\bar{X}\) and \(\bar{Y}\) and let \(X\) precede \(Y\) in causal order. Let \(\mathcal{I}\) be a set containing macro-interventions, one for each \(\bar{x}\in\mathsf{Supp}[P(\bar{X})]\). We say there is _macro-confounding_ between macro variables \(\bar{Y}\) and \(\bar{X}\) if \(P(\bar{Y}|\bar{X}=\bar{x})\neq P(\bar{Y}|do(\bar{X}:=\bar{x}))\) for some \(P^{do}_{\bar{x}}(X)\in\mathcal{I}\).
**Definition 8** (confounding-inducing and confounding-inhibiting macro-interventions).: A _confounding-inhibiting macro intervention_ is one such that the post-interventional distribution \(P(\bar{Y}|do(\bar{X}:=\bar{x}))\) is equal to the observational distribution \(P(\bar{Y}|\bar{X}=\bar{x})\). On the other hand, a _confounding-inducing macro intervention_ is one such that \(P(\bar{Y}|do(\bar{X}:=\bar{x}))\neq P(\bar{Y}|\bar{X}=\bar{x})\).
### A Simple Positive Result
The following result provides a simple sufficient condition for which the unconfounded cause-effect relation in Figure 2, left, turns into the unconfounded cause-effect relation in Figure 2, right:
**Theorem 1** (Natural macro-intervention inhibits confounding).: Let \(X\to Y\) and \(\bar{X}\) and \(\bar{Y}\) be macro-variables of \(X\) and \(Y\), respectively. Then the macro-intervention \(P^{do}_{\bar{x}}(X):=P(X|\bar{X}:=\bar{x})\) is confounding-inhibiting, that is, the unconfounded cause-effect relation \(\bar{X}\to\bar{Y}\) correctly predicts the effect of interventions on \(\bar{X}\).
To keep notation simple, our proofs will henceforth consider discrete variables but the generalization to continuous variables is obvious.
Proof.: \(P(\bar{Y}|do(\bar{x}))\ =\ \sum_{x}P(\bar{Y}|x)P(x|\bar{x})\ =\ P(\bar{Y}|\bar{x})\).
Despite its mathematical simplicity, one should appreciate that this result shows an option for consistently aggregating cause-effect relations provided one is willing to accept its _context dependence_: the relation \(\bar{X}\to\bar{Y}\) holds whenever one assumes that the micro-realization \(P^{do}_{\bar{x}}(X)\) is just the
Figure 1: **Left:** The micro-variable causal model \(M\) is described by an SCM, which is shown within the blue frame. The aggregation map \(\pi\) is applied to the micro-variables \(X_{1},\cdots,X_{N}\) and \(Y_{1},\cdots,Y_{N}\), both contained in the dashed frame. \(\bar{X}\) and \(\bar{Y}\) are the macro variables which arise from the aggregation map. **Right:** A macro-intervention \(P^{do}_{\pi,\bar{x}}(X)\) is shown. The intervention applied to the macro variable \(\bar{X}:=\bar{x}\) is realised as a (perfect) stochastic intervention on \(X\), note that all values of \(\mathbf{x}\) supported by \(P_{\mathbf{X}|\bar{X}:=\bar{x}}\) give rise to \(\bar{x}\).
Figure 2: After choosing appropriate microscopic micro-realizations of interventions on \(\bar{X}\), the cause-effect relation (left) remains unconfounded on the macro level (right).
conditional distribution of micro states \(X\) as is usual 'in the wild'. This view, however, entails that we refer to different micro-realizations in different marginal distributions of \(X\), because conditioning macro states would then in general give rise to different distributions of micro states.
Nevertheless, adopting the distribution of micro states from the observational distribution seems like a natural definition for \(P_{\bar{x}}^{do}(X)\) whenever there is no additional knowledge about the system telling us how a 'more natural' distribution would look.2 We will henceforth call \(P(X|\bar{X})\) the _natural micro-realization_.
Footnote 2: Motivated by thermodynamics Balian (2007), one may alternatively want to consider the distribution with maximal entropy subject to the constraint \(\bar{X}=\bar{x}\) as ‘most natural’. However, entropy of continuous variables implicitly refers to a reference measure (like the volume in phase space in physics), which again introduces an ambiguity.
Nevertheless, _equilibrium_ thermodynamics (Adkins, 1983) provides an example where 'natural micro-realizations' exist: regardless of whether macroscopic variables like volume or pressure have been _observed_ or _set_ by an intervention, in both cases the distribution of micro states is the uniform distribution (maximum entropy) on the sub-manifold of the phase space satisfying the respective macro constraint.
**Definition 9** (natural macro-intervention): A _natural macro-intervention_ of an aggregated variable is the distribution of the micro-variables conditioned on the value of the macro-variable.
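The following toy Monte Carlo check illustrates Theorem 1 (a minimal sketch with an assumed discrete micro model, not taken from the paper): redrawing micro states from \(P(X|\bar{X}=\bar{x})\) and pushing them through the unconfounded micro mechanism reproduces the observational conditional of \(\bar{Y}\).

```python
import numpy as np
rng = np.random.default_rng(0)

alpha = np.array([1.0, 3.0])                 # distinct per-component effects
X = rng.integers(0, 3, size=(200_000, 2))    # micro cause
Ybar = X @ alpha + rng.normal(size=len(X))   # aggregated effect of micro mechanism
Xbar = X.sum(axis=1)

xbar = 2
cond = Xbar == xbar
obs_est = Ybar[cond].mean()                  # observational E[Ybar | Xbar = xbar]

# natural intervention: resample micro states from P(X | Xbar = xbar),
# then apply the micro mechanism with fresh noise
X_do = X[cond][rng.integers(0, cond.sum(), size=cond.sum())]
do_est = (X_do @ alpha + rng.normal(size=len(X_do))).mean()
print(obs_est, do_est)                       # agree up to Monte Carlo error
```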
### Motivating Example
To consider a toy scenario in real life which matches Example 1, suppose there are two identical shops, \(A\) and \(B\), each with two products, indexed \(1\) and \(2\), with prices \(\alpha_{1}\) and \(\alpha_{2}\), respectively. The sales \(X_{1},X_{2}\) of the two products are assumed to be independent Gaussians with means \(\mu_{1},\mu_{2}\) and variances \(\sigma_{1}^{2},\sigma_{2}^{2}\). The total revenue, \(\bar{Y}\) is given by \(\bar{Y}=\alpha_{1}X_{1}+\alpha_{2}X_{2}\). Now suppose the shop owners want to understand how total revenue changes _wrt_ total sales, \(\bar{X}=X_{1}+X_{2}\).
The shop \(A\) owner performs the natural micro-realization, i.e. \(P^{do}_{\bar{x}}(X_{1},X_{2}):=P(X_{1},X_{2}|\bar{x})\).

Clearly this leads them to conclude that \(\bar{X}\) is unconfounded, due to Theorem 1. Consequently, the shop \(A\) owner arrives at the following structural equation, which is what would be obtained by regressing \(\bar{Y}\) on \(\bar{X}\):
\[\bar{Y}=\frac{\operatorname{Cov}(\bar{Y},\bar{X})}{\operatorname{Cov}(\bar{X},\bar{X})}\bar{X}+N=\frac{\alpha_{1}\sigma_{1}^{2}+\alpha_{2}\sigma_{2}^{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}\bar{X}+N, \tag{2}\]
with \(N\perp\!\!\!\perp\bar{X}\). Meanwhile, shop \(B\) owner performs an experiment also setting the total sales to \(\bar{x}\), but they do this by observing how many items were sold for product \(1\), and then turn up (or down) the advertising to make sure they sell \(\bar{x}-x_{1}\) items for product \(2\). This amounts to the macro intervention:
\[P_{\bar{x}}^{do}(X_{1},X_{2})=\mathcal{N}\left(\begin{pmatrix}\mu_{1}\\ \bar{x}-\mu_{1}\end{pmatrix},\begin{pmatrix}\sigma_{1}^{2}&-\sigma_{1}^{2}\\ -\sigma_{1}^{2}&\sigma_{1}^{2}\end{pmatrix}\right) \tag{3}\]
Shop \(B\) owner, making this intervention, would instead conclude that the structural equation generating \(\bar{Y}\) is
\[\bar{Y}=\alpha_{2}\bar{X}+\tilde{N}, \tag{4}\]
where \(\tilde{N}=(\alpha_{1}-\alpha_{2})X_{1}\).
Thus, whenever \(\alpha_{1}\neq\alpha_{2}\), we have \(\tilde{N}\not\perp\!\!\!\perp\bar{X}\) under \(P\).
Note that equations (2) and (4) generate the same observational distribution \(P(\bar{Y},\bar{X})\), but they correspond to different macro-interventions on \(\bar{X}\), the former being _confounding-inhibiting_ while the latter being _confounding-inducing_.
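The following numerical sketch of the shop example (parameter values are ours) shows that the same macro-intervention \(do(\bar{X}=\bar{x})\) yields different expected revenues under the two micro-realizations whenever \(\alpha_{1}\neq\alpha_{2}\):

```python
import numpy as np
rng = np.random.default_rng(0)

a1, a2 = 1.0, 2.0                       # prices alpha_1, alpha_2
mu1, mu2, s1, s2 = 10.0, 5.0, 2.0, 1.0  # means and std deviations of X1, X2
xbar, n = 18.0, 100_000

# Shop A: draw X1 ~ P(X1 | X1 + X2 = xbar); for independent Gaussians this
# conditional is Gaussian with the mean and variance below
m1 = mu1 + s1**2 / (s1**2 + s2**2) * (xbar - mu1 - mu2)
v1 = s1**2 * s2**2 / (s1**2 + s2**2)
X1_A = rng.normal(m1, np.sqrt(v1), n)
rev_A = a1 * X1_A + a2 * (xbar - X1_A)

# Shop B: leave X1 at its natural value and force X2 = xbar - X1  (Eq. 3)
X1_B = rng.normal(mu1, s1, n)
rev_B = a1 * X1_B + a2 * (xbar - X1_B)

print(rev_A.mean(), rev_B.mean())       # differ whenever a1 != a2
```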
### A Perspective from Linear Change of Coordinates
The above example can also be formalized by thinking of the vector of micro variables \(X=(X_{1},\cdots,X_{N})\) as a coordinate system and the aggregation as a result of a linear change of coordinates. Let \(H:\mathbb{R}^{N}\to\mathbb{R}^{N}\) be a bijective linear map (i.e. the change-of-coordinate map). Further, requiring the first row of \(H\) to be ones would give us the sum aggregation:
\[H_{1,:}=(1,1,\ldots,1) \tag{5}\]
Then, trivially it follows that
\[\bar{Y}:=\mathbf{\alpha}^{\top}X=\mathbf{\alpha}^{\top}H^{-1}HX=\underbrace{\sum_{i=1}^{N}\alpha_{i}H_{i,1}^{-1}}_{\beta}\bar{X}+\underbrace{\sum_{i,k=1}^{N}\sum_{j=2}^{N}\alpha_{i}H_{i,j}^{-1}H_{j,k}X_{k}}_{\bar{U}_{\beta}}. \tag{6}\]
Therefore, after a linear change of coordinates with \(\bar{X}\) being part of the new coordinate system, \(\bar{Y}\) still admits a linear structural equation under the new coordinates. We can view this as a usual structural equation of \(\bar{Y}\), since all we did was change the coordinates. Now, an intervention on \(\bar{X}\) amounts to changing \(\bar{X}\) while not affecting the remaining coordinates \(\sum_{k=1}^{N}H_{j,k}X_{k}\), \(j=2,\ldots,N\), and hence the noise term, \(\bar{U}_{\beta}\), which is a linear combination thereof. This translates the ambiguity of \(do(\bar{x})\) into specifying the basis vectors corresponding to the _remaining_ coordinates.
One may ask: since \(H\) is constrained (it must satisfy (5)), does there exist an \(H\) such that a given \(\beta\) can be obtained? With a little more effort, we can also show that for any \(\mathbf{\Delta}\) such that \(\sum_{i}\Delta_{i}=1\), there exists an \(H\) such that \(\mathbf{\Delta}=H_{\cdot,1}^{-1}\). Indeed, we check this in the following lemma.

**Lemma 1**.: For a given \(\mathbf{\alpha}\in\mathbb{R}^{N}\), \(\mathbf{\alpha}\not\propto(1,\cdots,1)\), and \(c\in\mathbb{R}\), there exists an invertible \(N\times N\) matrix \(H\in\mathbb{R}^{N\times N}\) such that \(H_{1,i}=1\ \forall i=1,\cdots,N\) and \(\sum_{i}\alpha_{i}H_{i,1}^{-1}=c\). Moreover, for any \(\mathbf{\Delta}\) such that \(\sum_{i}\Delta_{i}=1\), there also exists an \(H\) such that \(\mathbf{\Delta}=H_{\cdot,1}^{-1}\).
The proof is in Appendix E. Lemma 1 shows that for any \(\beta\), there exists at least one corresponding change-of-coordinate matrix \(H\). This is remarkable: when \(\beta=\frac{\text{Cov}(\bar{Y},\bar{X})}{\text{Cov}(\bar{X},\bar{X})}\), we know from regression that \(\bar{X}\) and \(\bar{Y}\) look unconfounded. On the contrary, note that \(\bar{U}_{\beta}=\mathbf{\alpha}^{\top}X-\beta\bar{X}\). Therefore, as \(\beta\to\infty\), \(|\text{Cov}(\bar{X},\bar{U}_{\beta})|\to\infty\). This means that depending on the change-of-coordinate matrix chosen, \(\bar{X}\) can look either unconfounded or _arbitrarily strongly confounded with \(\bar{Y}\)_.
Relating the ambiguity of macro interventions to the ambiguity of the coordinate systems links this work also to causal representation learning, which deals with causal modeling in scenarios where the variables are not given a priori (Bengio et al., 2013; Scholkopf et al., 2021).
### More Practical: Shift Interventions
In large practical scenarios, interventions that set micro-variables to distribute in a particular way are difficult. More realistically, interventions change the value _relative_ to its current state.
We therefore consider the simple shift-intervention \(X\mapsto X+\delta\) with constant \(\delta\), which is a special case of the general shift-interventions \(X\mapsto f(X)\) (Rothenhausler et al., 2015; Sani et al., 2020). In the standard setting (i.e. without aggregation), a shift intervention is implemented by the practitioner simply observing whichever treatment the subject is about to receive, and then adding a constant on top of it. In Appendix B we elaborate on its relation to atomic interventions. Alternatively, one can think of the shift as resulting from the linear influence of a cause of \(X\).
**Macro models under shift interventions.** Clearly, the change-of-coordinate perspective also implies, as a consequence, how to distribute the shift in \(\bar{X}\) to shifts in the micro-variables \(X_{1},\cdots,X_{N}\). From (6), doing \(\bar{X}\mapsto\bar{X}+1\) amounts to doing \(X\mapsto X+H_{\cdot,1}^{-1}\):
\[\bar{Y}(\bar{X}+1)=\underbrace{\mathbf{\alpha}^{\top}H_{\cdot,1}^{-1}\bar{X}+\bar{U}_{\beta}}_{\mathbf{\alpha}^{\top}X}+\mathbf{\alpha}^{\top}H_{\cdot,1}^{-1}=\mathbf{\alpha}^{\top}(X+H_{\cdot,1}^{-1}), \tag{7}\]
i.e. shifting by \(1\) on \(\bar{X}\) and shifting \(X\) by \(H_{\cdot,1}^{-1}\) on the micro-variables are the same thing.
This makes calculating \(\beta\) easy if you have already decided how you want to shift \(X\): by Lemma 1, \(H_{\cdot,1}^{-1}\) can be any vector \(\mathbf{\Delta}\) whose elements sum to \(1\). Therefore, just make sure that the coefficients of \(\Delta\) sum up to \(1\), and the corresponding structural coefficient reads \(\beta=\mathbf{\alpha}^{\top}\Delta\).
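A minimal numerical check of this recipe (vector values are ours):

```python
import numpy as np
rng = np.random.default_rng(0)

alpha = np.array([1.0, 2.0, 4.0])
Delta = np.array([0.5, 0.3, 0.2])    # any shift vector with sum(Delta) == 1
X = rng.normal(size=(5, 3))          # a few micro states

Xbar_shift = (X + Delta).sum(axis=1) - X.sum(axis=1)  # == 1 for every row
Ybar_shift = (X + Delta) @ alpha - X @ alpha          # == alpha @ Delta = beta
print(Xbar_shift, Ybar_shift, alpha @ Delta)
```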
### Non-Gaussian and Non-Linear Generalization
Certainly, \(\bar{X}\) can also be the first coordinate after a _nonlinear_ coordinate transformation. We will see that even in _linear_ models, noise terms that have a non-linear effect on the target may be more interpretable. Further, the most interpretable interventions may not necessarily come from _linear_ coordinate changes.
To this end, assume that \(Y_{j}\) is given by
\[Y_{j}=f_{j}(X_{j})+V_{j},\quad j=1,\ldots,N,\]
where \(f_{j}\) may be non-linear functions and \(V_{j}\) are noise terms, independent of \(X_{j}\) and jointly independent.
We will first show that there is an infinite continuum of options for a structural equation that writes \(\bar{Y}\) in terms of \(\bar{X}\) together with an appropriately constructed (formal) noise term. From these options, we will later choose one that renders the causal relation between \(\bar{X}\) and \(\bar{Y}\) unconfounded.
To simplify notation, we introduce the auxiliary variable \(W:=\sum_{j=1}^{N}f_{j}(X_{j})\) and obtain
\[\bar{Y}=W+\bar{V}, \tag{8}\]
with \(\bar{V}:=\sum_{j=1}^{N}V_{j}\).
For our coordinate change we can restrict our attention to the two dimensional subspace spanned by \(\bar{X},W\): let \(\psi:\mathbb{R}^{2}\to\mathbb{R}^{2}\) be a bijection that leaves the first component invariant, that is \(\psi(a,b)=(a,\psi_{2}(a,b))\). We then have \(\psi^{-1}(a,b)=(a,\phi(a,b))\), where \(\phi\) denotes the second component of \(\psi^{-1}\).
We define an additional noise variable \(M:=\psi_{2}(\bar{X},W)\), from which we can reconstruct \(W\) via \(W=\phi(\bar{X},M)\), and rewrite (8) as
\[\bar{Y}=\phi(\bar{X},M)+\bar{V}, \tag{9}\]
with \(\bar{V}\) being independent of \(\bar{X}\) and \(M\), but \(M\) possibly dependent of \(\bar{X}\). Then, (9) can be interpreted as the SCM of a (possibly confounded) causal relation between \(\bar{X}\) and \(\bar{Y}\) with vector valued noise variable \((M,\bar{V})\).
To see that \(M\) can be chosen in a way that renders this relation unconfounded, define \(M\) via the conditional cumulative distribution function
\[M(\bar{x},w):=P(W\leq w|\bar{X}=\bar{x}).\]
Whenever the conditional distribution of \(W\) given \(\bar{X}\) is continuous, \(M\) given \(\bar{X}\) is uniformly distributed for all \(\bar{x}\) and thus independent of \(\bar{X}\). The function \(\phi\) exists if \(M(\bar{x},\cdot)\) is invertible for all \(\bar{x}\). With such a choice of \(M\), (9) is the SCM of an unconfounded causal relation between \(\bar{X}\) and \(\bar{Y}\), and an intervention that keeps \(M\) constant is then confounding-inhibiting.
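A minimal sketch of this construction, approximating \(M(\bar{x},w)=P(W\leq w|\bar{X}=\bar{x})\) by empirical conditional ranks for a discretized \(\bar{X}\) (the toy data-generating process is ours):

```python
import numpy as np
rng = np.random.default_rng(0)

Xbar = rng.integers(0, 3, size=100_000)   # discretized macro cause
W = Xbar + rng.normal(size=Xbar.size)     # W depends on Xbar

M = np.empty_like(W)
for xb in np.unique(Xbar):
    idx = Xbar == xb
    # empirical conditional CDF: normalized rank of W among samples with the same Xbar
    M[idx] = (np.argsort(np.argsort(W[idx])) + 1) / idx.sum()

# M is approximately Uniform(0,1) within every level of Xbar,
# hence (nearly) independent of Xbar
for xb in np.unique(Xbar):
    print(xb, M[Xbar == xb].mean())       # all means close to 0.5
```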
The different choices of \(M\) may differ with respect to interpretability and the one above (which renders the relation unconfounded) may not be the most interpretable one. Let us revisit the example \(f_{j}(X_{j})=\alpha_{j}X_{j}\), where \(\alpha_{j}\) is the price per unit and \(V_{j}:=0\). We can then define the noise \(M\) by the average price \(M:=(\sum_{j=1}^{N}\alpha_{j}X_{j})/(\sum_{j=1}^{N}X_{j})\), for which we obtain the SCM
\[\bar{Y}=\bar{X}\cdot M. \tag{10}\]
We can then think of an intervention in which the company starts selling products to an additional country with buying patterns comparable to their existing customers. This increases the sales of all price segments by the same percentage, and thus defines an intervention that keeps the average price \(M\) (the noise in (10)) constant.
### Macro-Interventions for which we cannot specify a Macro-Confounder
So far we have shown that the macro variables appear confounded or not, depending on the specified macro interventions. It turns out there exist macro interventions which do not make the macro variables look unconfounded, but also do not allow for an explicit construction of a macro-confounder. Due to space limitations, we elaborate on this in Appendix C.
## 4 Macro-Confounding in Confounded Micro-Models
What if the micro systems themselves are confounded? In Section 4.1 we first present a technical result in the context of categorical variable models. In Section 4.2, we provide a discussion in the continuous variable setting, focusing on linear Gaussian models.
### Discrete Confounded Micro Models
Even if the underlying micro system is confounded, there can still be, at times, macro interventions which make the system look unconfounded. Moreover, even if the system remains confounded after aggregation in the effect variable (i.e. the micro-variable causes are confounded with the aggregated effect variable \(\bar{Y}\)), the system may still regain unconfoundedness after aggregation of the micro-variable causes.
**Theorem 2** Let \(X,Y,Z\) be categorical variables with \(|\mathcal{X}|,|\mathcal{Y}|,|\mathcal{Z}|<\infty\). Let \(G\) be the causal graph with \(X\to Y\), \(X\gets Z\to Y\), and let \(\pi_{X}:\mathcal{X}\longrightarrow\bar{\mathcal{X}}\) and \(\pi_{Y}:\mathcal{Y}\longrightarrow\bar{\mathcal{Y}}\) denote aggregation maps. Let \(G^{a}\) be the amalgamated graph of \(G\), \(\pi_{X}\) and \(\pi_{Y}\). Further, let \({G^{a}}^{\prime}\) be the macro-intervention graph for an intervention on \(\bar{X}\); see Figure 3. Let \(P\) denote probability distributions satisfying the conditional independence structure given by \({G^{a}}\), and \(P^{do}\) denote probability distributions satisfying the independence structure given by \({G^{a}}^{\prime}\). Suppose \(|\bar{\mathcal{Y}}|<\min\left(|\mathcal{X}|,|\mathcal{Z}|\right)\) and \(|\bar{\mathcal{X}}|<|\mathcal{X}|\). Then for any coarsening maps \(\pi_{X}\) and \(\pi_{Y}\), and any family of macro interventions on \(\bar{X}\): \(\{P^{do}_{\pi_{X},\bar{x}}\mid\bar{x}\in\bar{\mathcal{X}},\;P^{do}_{\pi_{X},\bar{x}}\neq P_{X|\bar{X}=\bar{x}}\}\), there exists at least one \(P\) such that
1. \(P(Y|X)\not\equiv P^{do}(Y|X)\). i.e. \(X\) and \(Y\) are confounded.
2. \(P(\bar{Y}|X)\not\equiv P^{do}(\bar{Y}|X)\). i.e. \(X\) and \(\bar{Y}\) are confounded.
3. \(P(\bar{Y}|\bar{X}=\bar{x})=P^{do}(\bar{Y}|\bar{X}=\bar{x})\) for all \(\bar{x}\in\bar{\mathcal{X}}\). i.e. \(\{(\bar{x},P^{do}_{X|\bar{X}})\;|\bar{x}\in\bar{\mathcal{X}}\}\) are confounding-inhibiting macro interventions.
The conditions \(|\bar{\mathcal{Y}}|<\text{min}(|\mathcal{X}|,|\mathcal{Z}|)\) and \(|\bar{\mathcal{X}}|<|\mathcal{X}|\) are sufficient conditions which ensure that there is strong enough aggregation. The theorem thus states that we can find joint distributions over \(X,Y,\bar{X},\bar{Y}\) such that even if both \((X,Y)\) and \((X,\bar{Y})\) are confounded, the confounding can be 'cancelled out' for \((\bar{X},\bar{Y})\) provided there is enough aggregation.
### Linear Gaussian Confounded Models
Assume we are given the confounded linear Gaussian model
\[\bar{Y}=\alpha^{T}X+N, \tag{11}\]
where \(N\) is a noise variable that correlates with \(X\) but is not a descendant of \(X\).
Let \(\pi\) be the aggregation that sums all elements of \(X\). Is it possible to define \(P^{do}_{\pi,\bar{x}}(X)\) in a way that is confounding inhibiting, that is, that \(P(\bar{Y}|do(\bar{x}))=P(\bar{Y}|\bar{x})\)? The following necessary condition shows that it is not always possible. Let \(\Sigma^{do(\bar{x})}_{X}\) denote the covariance matrix of \(P^{do}_{\pi,\bar{x}}(X)\). Then \(P(\bar{Y}|do(\bar{x}))\) has variance
\[\mathrm{Var}(\bar{Y}|do(\bar{x}))=\mathrm{Var}(N)+\alpha^{T}\Sigma^{do(\bar{ x})}_{X}\alpha. \tag{12}\]
Since the second term is non-negative, we can only suppress confounding if
\[\mathrm{Var}(\bar{Y}|\bar{x})\geq\mathrm{Var}(N). \tag{13}\]
Figure 3: Confounded cause-effect relation between \(X\) and \(Y\), which can turn into an unconfounded cause-effect relation on the aggregated level, depending on the micro-realization of \(do(\bar{x})\).
We can give (13) a geometric interpretation if we focus on random variables with zero mean without loss of generality and think of covariance as inner product in the Hilbert space of random variables with finite variance. We can even restrict the attention to the 3-dimensional space spanned by \(\bar{X},\alpha^{T}X,N\).
Accordingly, we have \(\operatorname{Var}(N)=\|N\|^{2}\). Further, let \(Q^{\perp}\) denote the projection onto the orthogonal complement of \(\bar{X}\). Then \(\operatorname{Var}(\bar{Y}|\bar{x})=\|Q^{\perp}(N+\alpha^{T}X)\|^{2}\) for all \(\bar{x}\), because the conditional variance is given after regressing \(\bar{X}\) out and is homoscedastic since we are considering jointly Gaussian variables. We can thus rewrite (13) as
\[\|Q^{\perp}(N+\alpha^{T}X)\|\geq\|N\|. \tag{14}\]
To see that (13) is also sufficient to enable confounding-inhibiting interventions, we briefly check that \(\alpha^{T}\Sigma_{X}^{do(\bar{x})}\alpha\) can be made to take any non-negative value, from which it follows that (13) guarantees the existence of a Gaussian macro-intervention that is confounding-inhibiting. Whenever \(\alpha\) and \(\mathbf{1}=(1,\dots,1)^{T}\) are linearly independent (otherwise the problem is anyway trivial), this is achieved by choosing \(\Sigma_{X}^{do(\bar{x})}\) with \(\mathbf{1}\) being an eigenvector with eigenvalue \(0\) and the other eigenvalues chosen so that \(\alpha^{T}\Sigma_{X}^{do(\bar{x})}\alpha=\operatorname{Var}(\bar{Y}|\bar{x})-\operatorname{Var}(N)\).
We now need to show that we can define \(do(\bar{x})\) in a way that ensures that conditional expectations also match:
\[E[\bar{Y}|do(\bar{x})]=E[\bar{Y}|\bar{x}]. \tag{15}\]
Since the intervention cannot affect \(N\) by assumption, (11) implies
\[E[\bar{Y}|do(\bar{x})]=\sum_{j}\alpha_{j}E[X_{j}|do(\bar{x})]+E[N].\]
In defining our intervention, we can freely choose each \(E[X_{j}|do(\bar{x})]\) with the only constraint \(\sum_{j}E[X_{j}|do(\bar{x})]=\bar{x}\). Whenever \(\alpha\) and \((1,\dots,1)\) are linearly independent, the term \(\sum_{j}\alpha_{j}E[X_{j}|do(\bar{x})]\) can be made to achieve _any_ real value, hence we can certainly ensure (15).
Finally, for which values of \(\alpha\) is (14) (or equivalently, (13)) satisfied? When \(\alpha\propto\mathbf{1}\), it is clearly not satisfied: then \(\alpha^{T}X\propto\bar{X}\), so \(Q^{\perp}(N+\alpha^{T}X)=Q^{\perp}N\), and \(\|Q^{\perp}N\|<\|N\|\) because \(\bar{X}\) and \(N\) are not perpendicular under the covariance inner product. But as soon as \(\alpha\) and \(\mathbf{1}\) are linearly independent, the RHS of (12) can be made arbitrarily large by scaling up \(\alpha\) while having it always point in the same direction. Thus, it is possible to tune \(\alpha\) as a hyperparameter of the system to ensure that confounding-inhibiting interventions exist.
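The construction behind the sufficiency argument can be sketched numerically. The snippet below (our illustration; the dimension, the values of \(\alpha\), and the target variance are arbitrary assumptions) builds a rank-one \(\Sigma_{X}^{do(\bar{x})}\) with \(\mathbf{1}\) in its kernel, so that \(\bar{X}\) is deterministic under the intervention while \(\alpha^{T}\Sigma_{X}^{do(\bar{x})}\alpha\) matches a prescribed value of \(\operatorname{Var}(\bar{Y}|\bar{x})-\operatorname{Var}(N)\).

```python
import numpy as np

d = 4
alpha = np.array([1.0, 2.0, 0.5, 3.0])      # assumed, not proportional to 1
target = 2.5                                # assumed Var(Y|x) - Var(N) >= 0

one = np.ones(d) / np.sqrt(d)
P = np.eye(d) - np.outer(one, one)          # projector onto 1^perp
u = P @ alpha                               # part of alpha orthogonal to 1
u /= np.linalg.norm(u)
Sigma_do = target / (alpha @ u) ** 2 * np.outer(u, u)

assert np.isclose(one @ Sigma_do @ one, 0.0)          # X_bar is deterministic
assert np.isclose(alpha @ Sigma_do @ alpha, target)   # variance as required
```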
Interpretation. For the relation between sold items \(X_{i}\) of each product \(i\) with price \(\alpha_{i}\) and \(Y_{i}=\alpha_{i}X_{i}\) its revenue, let \(Z_{i}\) be the number of extra products sold on promotion with reduced price \(\gamma_{i}\). Assume that if the customer buys one item on reduced price, then they will also buy one on full price, giving rise to the micro-level SCM \(X_{i}:=Z_{i}\) and \(Y_{i}:=\alpha_{i}X_{i}+\gamma_{i}Z_{i}\). Here, \(Z_{i}\) are micro confounders of the cause-effect pair \(X_{i}\to Y_{i}\) and we obtain an aggregated confounder \(N:=\sum_{i}\gamma_{i}Z_{i}\), which may disappear for appropriate interventions on \(\bar{X}\), provided the prices \(\alpha_{i}\) are 'different enough' from being all the same.
The above formulation asserts that the shop owner selling the items can make a confounding-inhibiting macro intervention on the total number of items sold at full price (\(\bar{X}\)) when the full prices \(\boldsymbol{\alpha}\) contain enough heterogeneity with respect to the reduced prices \(\boldsymbol{\gamma}\). When all prices are the same, there is not enough heterogeneity in the micro-model to blur out the confounding. On the other hand, if there is enough heterogeneity in the prices, then the shop owner may find a macro-intervention on the items sold at full price whose causal effect they can infer by directly regressing total revenue on total items sold at full price.
## 5 Multi-variate models
In this section we want to follow up on the positive result of Theorem 1 and discuss generalizations and their challenges.
### Linear chain
Given the micro model in Figure 4, left, we are now facing two challenges if we wish to aggregate it to the structure on the right. First, the macro variable \(\bar{Y}\) will not necessarily block the influence of \(\bar{X}\) on \(\bar{Z}\) due to information transmission through the micro state \(Y\). Second, the generalization of _natural micro-realization_ raises the following ambiguity for \(do(\bar{y})\): to someone who focuses on the cause-effect relation \(Y\to Z\) only, it should be \(P(Y|\bar{y})\), in alignment with Section 3.2. However, after seeing \(\bar{X}\) one may want to consider \(P(Y|\bar{y},\bar{x})\) more natural. This appears to be the right choice at least if one restricts the analysis to the subpopulation with fixed \(\bar{x}\). Remarkably, these two questions are related:
**Lemma 2** (irrelevance of cause of causes): Given the causal chain in Figure 4, setting \(do(\bar{y}):=P(Y|\bar{y})\) results in the same downstream effect on \(\bar{Z}\) as implementing it according to \(P(Y|\bar{y},\bar{x})\) if and only if
\[\bar{X}\perp\!\!\!\!\perp\bar{Z}\,|\bar{Y}. \tag{16}\]
Figure 4: The _natural micro-realization_ of macro interventions from Section 3.2 enables coarse graining the chain on the left to the chain on the right whenever \(\bar{X}\perp\!\!\!\perp\bar{Z}\,|\bar{Y}\).
Proof.: Implementing \(do(\bar{y})\) according to \(P(Y|\bar{y})\) results in the downstream effect \(\sum_{y}P(\bar{Z}|y)P(y|\bar{y})=\sum_{y}P(\bar{Z}|y,\bar{y})P(y|\bar{y})=P(\bar{Z}|\bar{y})\). Micro-realization via \(P(Y|\bar{y},\bar{x})\) results in \(\sum_{y}P(\bar{Z}|y)P(y|\bar{y},\bar{x})=\sum_{y}P(\bar{Z}|y,\bar{y},\bar{x})P(y|\bar{y},\bar{x})=P(\bar{Z}|\bar{y},\bar{x})\). The statement \(P(\bar{Z}|\bar{y})=P(\bar{Z}|\bar{y},\bar{x})\) holds for all \(\bar{x},\bar{y}\) if and only if (16) holds.
If (16) does not hold, the chain in Figure 4, right, is anyway not a valid aggregation and requires an additional arrow \(\bar{X}\to\bar{Z}\) instead.3 Whenever (16) holds, the cause-effect relation \(\bar{Y}\to\bar{Z}\) is valid with respect to both micro-realizations \(P(Y|\bar{y},\bar{x})\) and \(P(Y|\bar{y})\), and they result in same effect on \(\bar{Z}\). Since the natural micro-realization of \(do(\bar{x})\) has no ambiguity and renders the relation to \(\bar{Y}\) and \(\bar{Z}\) unconfounded, the chain in Figure 4, right, captures all causal relations correctly with respect to natural micro-realizations. It is convenient that (16) is a purely statistical criterion and refers to macro variables only.
Footnote 3: In the context of dynamical processes, this question translates to the difficult question of which macro variables are required to describe the relevant part of the history, see e.g. Crutchfield and Shalizi (1999).
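As a quick numerical check of Lemma 2 (a small sketch with made-up categorical parameters; for generic parameters (16) fails, and the two micro-realizations of \(do(\bar{y})\) then disagree in their downstream effect on \(\bar{Z}\)):

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny, nz = 2, 4, 2
p_x = rng.dirichlet(np.ones(nx))             # P(x_bar)
p_y_x = rng.dirichlet(np.ones(ny), size=nx)  # P(y | x_bar)
p_z_y = rng.dirichlet(np.ones(nz), size=ny)  # P(z_bar | y)
pi_y = np.array([0, 0, 1, 1])                # aggregation y -> y_bar

for yb in (0, 1):
    mask = pi_y == yb
    p_y = p_x @ p_y_x                        # marginal P(y)
    p_y_yb = p_y * mask / (p_y * mask).sum() # P(y | y_bar)
    eff1 = p_y_yb @ p_z_y                    # effect via P(Y | y_bar)
    for xb in (0, 1):
        p_y_yb_xb = p_y_x[xb] * mask
        p_y_yb_xb /= p_y_yb_xb.sum()         # P(y | y_bar, x_bar)
        eff2 = p_y_yb_xb @ p_z_y             # effect via P(Y | y_bar, x_bar)
        print(yb, xb, np.round(eff1 - eff2, 4))  # nonzero unless (16) holds
```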
### Backdoor adjustments
Assume now we have the DAG in Figure 5, left. Under which conditions can we define micro-realizations of macro interventions with respect to which the system behaves like the DAG on the right of Figure 5? We already know that \(P(X|\bar{x})\) yields effects on \(\bar{Y},\bar{Z}\) that align with the observational conditionals \(P(\bar{Y}|\bar{X})\) and \(P(\bar{Z}|\bar{X})\), respectively. The questionable part is the effect of interventions on \(\bar{Y}\). We need to define them in a way that ensures
\[P(\bar{Z}|do(\bar{y}),\bar{x})=P(\bar{Z}|\bar{y},\bar{x}), \tag{17}\]
that is, the backdoor adjustment formula with \(\bar{X}\) as adjustment variable (Rule 2, Theorem 3.4.1 in Pearl (2009)). We find the following sufficient condition:
**Lemma 3** (macro backdoor adjustment).: Given the DAG Figure 5, left, let \(\bar{Z}_{y}\) denote the random variable \(\bar{Z}\) after adjusting \(Y\) to some fixed \(y\) in the SCM for \(\bar{Z}\). If
\[\bar{Z}_{y}\perp\!\!\!\perp Y\,|\bar{X}, \tag{18}\]
then (17) holds if \(do(\bar{y})\) is defined by randomizing \(Y\) according to \(P(Y|\bar{y},\bar{x})\).
Proof.: For fixed \(\bar{x}\), (18) states that \(\bar{Z}\) is a function of noise that is independent of \(Y\), thus we have equality of the interventional and observational probabilities \(P(\bar{Z}|do(y),\bar{x})=P(\bar{Z}|y,\bar{x})\). We conclude \(P(\bar{Z}|do(\bar{y}),\bar{x})=\sum_{y}P(\bar{Z}|do(y),\bar{x})p(y|\bar{y}, \bar{x})=\sum_{y}P(\bar{Z}|y,\bar{x})p(y|\bar{y},\bar{x})=P(\bar{Z}|\bar{y}, \bar{x})\).
Lemma 3 states that we can consider the DAG in Figure 5, right, as a valid aggregation of the one on the left when we implement macro-interventions on \(\bar{Y}\) via \(P(Y|\bar{y},\bar{x})\), provided that \(\bar{X}\) blocks the backdoor path between \(Y\) and \(\bar{Z}\).
Since condition (18) refers to the micro state of \(Y\), it seems quite strong, but finding conditions that are easier to handle has to be left to the future. Nevertheless, it is worth mentioning that the proof of Lemma 3 straightforwardly generalizes to back-door adjustments in arbitrary DAGs. We first state Definition 3.3.1 in Pearl (2009):
**Definition 10** (back-door criterion).: A set \(B\) of variables satisfies the back-door criterion relative to the ordered pair \((X_{i},X_{j})\) if (i) no node in \(B\) is a descendant of \(X_{i}\) and (ii) \(B\) blocks every path between \(X_{i}\) and \(X_{j}\) that contains an arrow into \(X_{i}\).
Then we have:
**Theorem 3** (general back-door adjustment).: Let \(G\) be the causal DAG for \(X_{1},\ldots,X_{n}\) and \(P(\bar{X}_{1},\ldots,\bar{X}_{n})\) be Markovian relative to some DAG \(\bar{G}\) that contains \(\bar{X}_{i}\to\bar{X}_{j}\) if \(X_{i}\to X_{j}\) is an arrow in \(G\), but possibly also additional arrows. Let the set \(B\) of macro variables satisfy the back-door criterion with respect to the pair \((\bar{X}_{i},\bar{X}_{j})\) in \(\bar{G}\) and
\[(\bar{X}_{j})_{x_{i}}\perp\!\!\!\perp X_{i}\,|B, \tag{19}\]
where \((\bar{X}_{j})_{x_{i}}\) denotes \(\bar{X}_{j}\) after adjusting \(X_{i}\) to \(x_{i}\). Then we have
\[P(\bar{X}_{j}|do(\bar{x}_{i}),b)=P(\bar{X}_{j}|\bar{x}_{i},b),\]
with respect to the micro-realization \(P(X_{i}|\bar{x}_{i},b)\).
The proof is immediate by replacing \(Y\) with \(X_{i}\), \(Z\) with \(X_{j}\), and \(X\) with \(B\) in the proof of Lemma 3 (note that the back-door condition for \(\bar{G}\) implies the one for \(G\), because \(\bar{G}\) can only differ from \(G\) by additional arrows).
Whenever there are different adjustment sets \(B,B^{\prime}\) with respect to which condition (19) holds, the micro-realizations \(P(X_{i}|\bar{x}_{i},b)\) and \(P(X_{i}|\bar{x}_{i},b^{\prime})\) result in the same effect on \(\bar{X}_{j}\). This is because in any DAG, interventional distributions computed from different back-door adjustments coincide.
Figure 5: Left: complete DAG with micro variables \(X,Y,Z\), together with its aggregations \(\bar{X},\bar{Y},\bar{Z}\). Right: The aggregated DAG which we would like to give a causal semantics by introducing appropriate interventions, if possible.
Conclusion
Aggregated variables are the rule rather than the exception in causal models of processes in everyday life. Nevertheless, there is today no satisfying way to deal with the ill-definedness of interventions on aggregated variables. On the one hand, this paper shows that this ill-definedness entails ambiguity not only with respect to the quantitative effect, but even with respect to the causal structure, namely whether a cause-effect relation is confounded or not. On the positive side, this paper introduces 'natural micro-realizations' of macro interventions which respect the observed distribution of micro states. We have discussed conditions under which this admits aggregation in simple causal structures. At the same time, the fact that 'natural micro-realizations' depend on the respective observational distribution at hand reveals that causal statements on aggregated variables are context dependent. This needs to be kept in mind in particular for out-of-distribution generalization of causal models (Wenzel et al., 2022), e.g., covariate shift (Sugiyama and Kawanabe, 2012; Scholkopf et al., 2012), in case the shift also affects the distribution of micro states.
## Acknowledgements
We gratefully acknowledge Atalanti Mastakouri and Leena Chennuru Vankadara for helpful discussions and for proofreading the paper.
## Appendix A Section 2
Example 1. Let \(M=(\mathbf{S},P_{\mathbf{N}})\) be a micro causal model where
\[\mathbf{S}=\{X_{1}:=U_{1},\;X_{2}:=U_{2},\;Y_{1}:=\alpha_{1}X_{1},\;Y_{2}:= \alpha_{2}X_{2}\}\]
where \(\alpha_{1}\neq\alpha_{2}\), \(\mathcal{U}_{1}=\{0,1\}\), \(\mathcal{U}_{2}=\{0,1\}\), \(\text{supp}[P_{U_{1}}]=\text{supp}[P_{U_{2}}]=\{0,1\}\). Now, consider a set of interventions that we are interested in on the micro causal model.
\[\mathcal{I}=\{i_{1}=do(X_{1}:=1,X_{2}:=0),\] \[i_{2}=do(X_{1}:=0,X_{2}:=1)\}.\]
as well as aggregation maps \(\bar{X}=\pi_{X}((X_{1},X_{2}))=X_{1}+X_{2}\) and \(\bar{Y}=\pi_{Y}((Y_{1},Y_{2}))=Y_{1}+Y_{2}\).
Now applying \(i_{1}\) to the system results in \(X_{1}=1\), \(X_{2}=0\), \(Y_{1}=\alpha_{1}\), \(Y_{2}=0\), and thus \(\bar{X}=1\) and \(\bar{Y}=\alpha_{1}\). Meanwhile, applying \(i_{2}\) to the system results in \(X_{1}=0\), \(X_{2}=1\), \(Y_{1}=0\), \(Y_{2}=\alpha_{2}\), and thus \(\bar{X}=1\) and \(\bar{Y}=\alpha_{2}\). Thus, there is no structural assignment \(f:\bar{\mathcal{X}}\rightarrow\bar{\mathcal{Y}}\) s.t.
\[\pi_{\#}P^{do(i_{k})}(X_{1},X_{2},Y_{1},Y_{2})=P^{do(\bar{X}:=1)}(\bar{X},\bar {Y})\]
where \(k=1,2\).
Therefore, it is not possible to define a sensible causal model on the macro variables: since \(X_{1},X_{2}\) precede \(Y_{1},Y_{2}\) in causal order, \(\bar{X}\) should also precede \(\bar{Y}\) if there is a 'macro causal model', yet there is no way to define a structural equation from \(\bar{X}\) to \(\bar{Y}\) that makes the 'macro causal model' consistent with the micro model in the sense that the pushforward measure of interventions on the micro model is always equal to the interventional distribution on the macro model given by the corresponding intervention. Here, the corresponding intervention is given by the aggregation map. The above intuition can be formalized - [Rubenstein et al., 2017] provides a formal definition of consistency, which is violated by our example exactly for the reason we laid out. In fact, only when \(\alpha_{1}=\alpha_{2}\) are we able to obtain a consistent macro causal model under their definition.
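For concreteness, the pushforwards of \(i_{1}\) and \(i_{2}\) can be computed in a few lines (a sketch with assumed values \(\alpha_{1}=2\), \(\alpha_{2}=3\)):

```python
a1, a2 = 2.0, 3.0                       # alpha_1 != alpha_2 (assumed values)

def push_forward(x1, x2):
    y1, y2 = a1 * x1, a2 * x2           # micro SCM
    return x1 + x2, y1 + y2             # aggregated (X_bar, Y_bar)

print(push_forward(1, 0))               # i1: X_bar = 1, Y_bar = alpha_1
print(push_forward(0, 1))               # i2: X_bar = 1, Y_bar = alpha_2
# The same macro intervention do(X_bar := 1) yields two different values of
# Y_bar, so no structural assignment f(X_bar) can reproduce both.
```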
## Appendix B Digression on Shift Interventions
When treatments take real values, atomic interventions can be generated from shift-interventions:
\[\mathbb{P}(Y|do(X:=x)) =\int_{x^{\prime}\in\mathbb{R}}\mathbb{P}(Y|X=x^{\prime},do\left(X:=X+(x-x^{\prime})\right))\cdot\mathbb{P}(X=x^{\prime})dx^{\prime} \tag{20}\] \[=\int_{x^{\prime}\in\mathbb{R}}\mathbb{P}(Y|X=x^{\prime},do(X:=x))\cdot\mathbb{P}(X=x^{\prime})dx^{\prime} \tag{21}\]
and vice versa:
\[\mathbb{P}(Y|do(X:=X+\delta)) =\int_{x^{\prime}\in\mathbb{R}}\mathbb{P}(Y|X=x^{\prime},do\left(X:=x^{\prime}+\delta\right))\cdot\mathbb{P}(X=x^{\prime})dx^{\prime} \tag{22}\]
In other words, we can always generate the effect of an atomic intervention from shift interventions, and vice versa: imagine a practitioner observing a particular treatment-assignment group \((x^{\prime})\), they then make the appropriate shift-intervention \((x-x^{\prime})\) to change the treatment to \(x\); they do this for every observed \(x^{\prime}\)-treatment-assignment group. Then on average, this is the same as doing an atomic intervention to \(x\) for the whole group.
We can come up with an equivalent notion of no confounding using shift-interventions. Therefore, we may reason safely with shift-interventions and assured that whenever we conclude unconfoundedness under shift-interventions, we will also conclude unconfoundedness under atomic interventions.
Definition 11 (Unconfoundedness under \(\delta\)-shift-interventions for 1-d Linear Gaussian models). Suppose the causal order is \(X\to Y\). Define the counterfactual variable after a \(\delta\)-shift intervention as \(X^{\delta}:=X+\delta\). \(X\) is unconfounded with \(Y\) if
\[\forall\delta\in\mathbb{R},\forall x\in\mathbb{R}\] \[\mathbb{P}(Y|X^{\delta}=x)=\mathbb{P}(Y|X=x) \tag{23}\]
The intuition of this is the following: imagine patient A and patient B would have received treatment \(1.0\) and treatment \(1.1\) without intervention. The shift-intervention would add \(+0.1\) to all treatments which the patients would have received. So, after the intervention, patient A and patient B would receive treatment \(1.1\) and treatment \(1.2\). Note that patient \(A\)_after_ the intervention and patient B _before_ the intervention receive the same treatment (\(1.1\)). If unconfoundedness holds, then patient B before intervention and patient A after intervention would react the same way.
We now prove the equivalence of this definition to the one using atomic interventions, in the case of linear structural equations.
Lemma 4. Define the post-shift-intervention variable as \(X^{\delta}:=X+\delta\). For a linear structural equation model, \(Y:=aX+N\) where \(a\) is a constant coefficient, the following two statements are equivalent:
1. \(\forall x\in\mathcal{X}=\mathbb{R},\ P(Y|do(X:=x))=P(Y|X=x)\).
2. \(\forall\delta\in\mathbb{R},\ \forall x\in\mathbb{R},\ P(Y|X^{\delta}=x)=P(Y|X=x)\),
Proof.: (\(2\implies 1\).) By definition,
\[P(Y|X^{\delta}=x)=P(Y|X=x-\delta,do(X:=x))\ \ \forall\delta \tag{24}\]
By 2,
\[P(Y|X=x^{\prime},do(X:=x))=P(Y|X=x)\forall x^{\prime} \tag{25}\]
Therefore,
\[P(Y|do(X:=x)) =\int_{\mathcal{X}}P(Y|X=x^{\prime},do(X:=x))\underbrace{P(X=x^{ \prime}|do(X=x))}_{=P(X=x^{\prime})}dx^{\prime} \tag{26}\] \[=\int_{\mathcal{X}}P(Y|X=x)P(X=x^{\prime})dx^{\prime}\] by (25) (27) \[=P(Y|X=x) \tag{28}\]
(\(1\implies 2\).) By 1,
\[P(Y|do(X:=x)) =\int_{\mathcal{X}}P(Y|X=x^{\prime},do(X:=x))P(X=x^{\prime}|do(X:= x))dx^{\prime} \tag{29}\] \[=\int_{\mathcal{X}}P(Y|X=x^{\prime},do(X:=x))P(X=x^{\prime})dx^{\prime}\] (30) \[=P(Y|X=x)\ \ \forall x \tag{31}\]
Since \(Y=aX+N\), we have
\[(30) =\int_{\mathcal{X}}P(ax+N|X=x^{\prime})P(X=x^{\prime})dx^{\prime} \tag{32}\] \[=P(ax+N)\ \ (\text{by basic rules of probability}) \tag{33}\] \[=P(ax+N|X=x)=(31)\ \ \forall x \tag{34}\]
This means that \(N\perp\!\!\!\perp X\), and therefore 2 holds.
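A quick simulation of the unconfounded case of Lemma 4 (a sketch; \(a\), \(\delta\), and the noise scale are arbitrary assumptions) confirms that conditioning the post-shift outcome on \(X^{\delta}=x\) matches conditioning the observational outcome on \(X=x\):

```python
import numpy as np

rng = np.random.default_rng(4)
n, a, delta, x0, w = 500_000, 1.5, 0.7, 1.0, 0.05
X = rng.normal(0, 1, n)
N = rng.normal(0, 0.5, n)               # N independent of X: unconfounded
Y_obs = a * X + N                       # observational Y
X_d = X + delta                         # X^delta after the shift
Y_d = a * X_d + N                       # post-intervention Y

m_do = Y_d[np.abs(X_d - x0) < w].mean()   # E[Y | X^delta = x0]
m_obs = Y_obs[np.abs(X - x0) < w].mean()  # E[Y | X = x0]
print(m_do, m_obs)                        # both ~ a * x0 = 1.5
```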
## Appendix C There are macro-interventions for which we cannot specify a macro confounder
We have shown that depending on the specified macro interventions, the macro variables appear confounded or not. As we will show next, there also exist macro interventions which do not allow for any model on the macro variables that explains both the observational and interventional distribution. Suppose in the scenario of Section 3.3 we have a third shop owner who made the deterministic intervention
\[\begin{pmatrix}X_{1}\\ X_{2}\end{pmatrix}\left|\bar{X}\sim\mathcal{N}\left(\begin{pmatrix}\bar{X}\\ 0\end{pmatrix},\mathbf{0}\right)\right. \tag{35}\]
This would induce a deterministic post-intervention relationship between \(\bar{\mathcal{X}}\) and \(\bar{\mathcal{Y}}\):
\[\bar{Y}=\alpha_{1}\bar{X} \tag{36}\]
Had there been a structural causal model from \(\bar{X}\) to \(\bar{Y}\) under this intervention, it would have the form:
\[\bar{Y}=f(\bar{X},N) \tag{37}\]
but since intervening on \(\bar{X}\) sets \(\bar{Y}\) to a deterministic value, \(f\) must be constant in \(N\) for every \(\bar{X}=\bar{x}\), so w.l.o.g. we can rewrite the structural equation as
\[\bar{Y}=g(\bar{X}) \tag{38}\]
But this implies that the observational random variable, \(\bar{Y}|\bar{X}\), is also deterministic, which contradicts our hypothesis whenever \(\alpha_{1}\neq\alpha_{2}\). Thus, there may be macro-interventions for which we cannot write down any macro causal model, not even a confounded one. We leave it for future work to further investigate the range of macro-interventions which admit macro-confounding variables.
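The contradiction can also be seen numerically (a sketch with assumed parameters): observationally, \(\bar{Y}|\bar{X}\) has non-zero conditional variance, while the intervention (35) makes \(\bar{Y}\) a deterministic function of \(\bar{X}\).

```python
import numpy as np

rng = np.random.default_rng(6)
a1, a2, n = 2.0, 3.0, 100_000           # alpha_1 != alpha_2 (assumed)
X1 = rng.normal(1, 0.5, n)
X2 = rng.normal(1, 0.5, n)
Xb, Yb = X1 + X2, a1 * X1 + a2 * X2

sel = np.abs(Xb - 2.0) < 0.02           # condition on X_bar ~ 2
print(Yb[sel].std())                    # > 0: Y_bar | X_bar is random

# Under intervention (35): X1 := x_bar, X2 := 0, so Y_bar = a1 * x_bar exactly.
print(a1 * 2.0)
```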
## Appendix D Confounding-inhibiting macro interventions in confounding micro-variable models
### Discrete case
The following proposition shows that, in the discrete case, even when there is confounding at the micro level, there sometimes exist a coarsening \(\pi\) and a \(\pi\)-consistent micro-macro intervention such that there is no confounding at the macro level.
In the discrete setting, we always construct probability spaces in the usual way, i.e., we take the entire set as sample space and its power set as the \(\sigma\)-algebra.
We outline here the moral reason that the statement should be true.
For the observational and interventional distributions of \(\bar{Y}|\bar{X}\) to be the same, i.e. unconfoundedness in the macro variables, we can consider the decomposition of \(P_{\bar{Y}|\bar{X}}\) and \(P_{\bar{Y}|\bar{X}}^{do}\). Since the macro interventions, \((\bar{x},P_{X|\bar{x}})\) for various values of \(\bar{x}\), only impact the generative process for \(X\) and \(\bar{X}\), we can observe that all other generative mechanisms are invariant before and after the intervention. Writing down the decomposition, we get
\[P_{\bar{Y}|\bar{X}}^{do}(y) =\sum_{i,j}P_{\bar{Y}|x_{i},z_{j}}(y)P_{X|\bar{X}}^{do}(x_{i})P_ {Z}(z_{j}) \tag{39}\] \[P_{\bar{Y}|\bar{X}}(y) =\sum_{i,j}P_{\bar{Y}|x_{i},z_{j}}(y)P_{X,Z|\bar{X}}(x_{i},z_{j}) \tag{40}\]
Unconfoundedness between \(\bar{X}\) and \(\bar{Y}\) thus amounts to saying that the right-hand-sides of (39) and (40) are the same for all values of \(y\). (39) and (40) can be written as vectorised equations, and equality of the right-hand-side amounts to saying that, roughly speaking, the difference between \(P_{X|\bar{X}}^{do}\otimes P_{Z}\) and \(P_{X,Z|\bar{X}}\) lie in the null space of \(P_{\bar{Y}|X,Z}\). Why should this be true? Morally, this is because the null space of \(P_{\bar{Y}|X,Z}\) is large due to the coarsening, thus possible to contain the said difference sometimes. Additional technical conditions need to be satisfied in the theorem, but these can likewise be satisfied by constructing \(P_{\bar{Y}|X,Z}\) with the correct null space.
**Theorem 2** Let \(X,Y,Z\) be categorical variables with \(|\mathcal{X}|,|\mathcal{Y}|,|\mathcal{Z}|<\infty\). Let \(G\) be the causal graph with \(X\to Y\), \(X\gets Z\to Y\), and let \(\pi_{X}:\mathcal{X}\longrightarrow\bar{\mathcal{X}}\) and \(\pi_{Y}:\mathcal{Y}\longrightarrow\bar{\mathcal{Y}}\) denote coarsening maps. Let \(G^{a}\) be the amalgamated graph of \(G\), \(\pi_{X}\) and \(\pi_{Y}\), and \(G^{a\prime}\) be the macro-intervention graph for an intervention on \(\bar{X}\). Let \(P\) denote probability distributions satisfying the conditional independence structure given by \(G^{a}\), and \(P^{do}\) denote probability distributions satisfying the independence structure given by \(G^{a\prime}\). Suppose \(|\bar{\mathcal{Y}}|<\text{min}\left(|\mathcal{X}|,|\mathcal{Z}|\right)\) and \(|\bar{\mathcal{X}}|<|\mathcal{X}|\). Then for any coarsening maps \(\pi_{X}\) and \(\pi_{Y}\), and any family of macro interventions on \(\bar{X}\): \(\left\{(\bar{x},P^{do}_{\pi_{X},X|\bar{x}})\mid\bar{x}\in\bar{\mathcal{X}}\right\}\) there exists at least one \(P\) such that
1. \(P(Y|X)\not\equiv P^{do}(Y|X)\). i.e. \(X\) and \(Y\) are confounded.
2. \(P(\bar{Y}|X)\not\equiv P^{do}(\bar{Y}|X)\). i.e. \(X\) and \(\bar{Y}\) are confounded.
3. \(P(\bar{Y}|\bar{X}=\bar{x})=P^{do}(\bar{Y}|\bar{X}=\bar{x})\) for all \(\bar{x}\in\bar{\mathcal{X}}\). i.e. \(\left\{(\bar{x},P^{do}_{X|\bar{X}})\mid\bar{x}\in\bar{\mathcal{X}}\right\}\) are confounding-inhibiting macro interventions.
Proof.: The set of probability measures with conditional dependence structure encoded by \(G^{a}\) is
\[\mathcal{P}^{obs}:=\left\{P\ s.t.\ P(X,Z,\bar{X},\bar{Y})\equiv P(Z)P(X|Z)P( \bar{X}|X)P(\bar{Y}|X,Z)\right\} \tag{41}\]
Meanwhile, \(\mathcal{P}^{do}\) denotes the set of probability measures \(P^{do}\) on the same sample space and \(\sigma\)-algebra, but compatible with performing a macro-intervention on \(\bar{X}\):
\[\mathcal{P}^{do}:=\left\{P^{do}\ s.t.\ P^{do}(X,Z,\bar{X},\bar{Y})\equiv P^{do }(Z)P^{do}(\bar{X})P^{do}(X|\bar{X})P^{do}(\bar{Y}|X,Z)\right\} \tag{42}\]
Since the generative processes of all variables except for \(\bar{X}\) and \(X\) are invariant before and after macro-intervention on \(\bar{X}\), write
\[P(Z) =P^{do}(Z) \tag{43}\] \[P(Y|X,Z) =P^{do}(Y|X,Z) \tag{44}\]
Therefore, \(P(\bar{Y}|X,Z)\) is also invariant, and we write:
\[P(\bar{Y}|X,Z)=P^{do}(\bar{Y}|X,Z) \tag{45}\]
Note the decompositions:
\[P(Y|X) =\sum_{j}^{|\mathcal{Z}|}P(Y|X,Z=z_{j})P(Z=z_{j}|X) \tag{46}\] \[P^{do}(Y|X) =\sum_{j}^{|\mathcal{Z}|}P(Y|X,Z=z_{j})P^{do}(Z=z_{j})\] (47) \[P(\bar{Y}|X) =\sum_{j}^{|\mathcal{Z}|}P(\bar{Y}|X,Z=z_{j})P(Z=z_{j}|X)\] (48) \[P^{do}(\bar{Y}|X) =\sum_{j}^{|\mathcal{Z}|}P(\bar{Y}|X,Z=z_{j})P^{do}(Z=z_{j})\] (49) \[P(\bar{Y}|\bar{X}) =\sum_{i,j}^{|\mathcal{X}|,|\mathcal{Z}|}P(\bar{Y}|X=x_{i},Z=z_{ j})P(X=x_{i},Z=z_{j}|\bar{X})\] (50) \[P^{do}(\bar{Y}|\bar{X}) =\sum_{i,j}^{|\mathcal{X}|,|\mathcal{Z}|}P(\bar{Y}|X=x_{i},Z=z_{ j})P^{do}(X=x_{i},Z=z_{j}|\bar{X}) \tag{51}\]
Define the following linear maps:
\[f_{1,x}:\ \mathbb{R}^{|\mathcal{Z}|} \longrightarrow\mathbb{R}^{|\mathcal{Y}|} \tag{53}\] \[\mathbf{v} \longmapsto\sum_{j}^{|\mathcal{Z}|}P(Y|X=x,Z=z_{j})v_{j}\] (54) \[f_{2,x}:\ \mathbb{R}^{|\mathcal{Z}|} \longrightarrow\mathbb{R}^{|\mathcal{\bar{Y}}|}\] (55) \[\mathbf{v} \longmapsto\sum_{j}^{|\mathcal{Z}|}P(\bar{Y}|X=x,Z=z_{j})v_{j}\] (56) \[f_{3}:\ \mathbb{R}^{|\mathcal{X}|\times|\mathcal{Z}|} \longrightarrow\mathbb{R}^{|\mathcal{\bar{Y}}|}\] (57) \[M \longmapsto\sum_{i,j}^{|\mathcal{X}|,|\mathcal{Z}|}P(\bar{Y}|X=x_{ i},Z=z_{j})M_{ij} \tag{58}\]
Then conditions 1, 2, 3 are equivalent to
\[\exists x\in\mathcal{X},\forall\bar{x}\in\bar{\mathcal{X}}: \tag{59}\] \[P(Z|X=x)-P^{do}(Z)\not\in Ker(f_{1,x})\] (60) \[P(Z|X=x)-P^{do}(Z)\not\in Ker(f_{2,x})\] (61) \[P(X,Z|\bar{X}=\bar{x})-P^{do}(X,Z|\bar{X}=\bar{x})\in Ker(f_{3}) \tag{62}\]
Now analyse \(P(X,Z|\bar{X})\) and \(P^{do}(X,Z|\bar{X})\).
\[P(X,Z|\bar{X}) =\frac{P(\bar{X}|X)P(X|Z)P(Z)}{\sum_{i,j}^{|\mathcal{X}|,|\mathcal{ Z}|}P(Z=z_{j})P(X=x_{i}|Z=z_{j})P(\bar{X}|X=x_{i})} \tag{63}\] \[P^{do}(X,Z|\bar{X}) =P(Z)P^{do}(X|\bar{X}) \tag{64}\]
_An aside: A probability distribution of a discrete random variable, say \(P(A),A\in\mathcal{A},|\mathcal{A}|<\infty\) can be viewed as a finite-dimensional vector \(\mathbf{v}\) such that \(v_{i}=P(A=a_{i})\). From now on we refer to this as the vector of \(P(A)\)._
Note that for a given \(\bar{x}\), \(P(\bar{X}=\bar{x}|X=x)\) and \(P^{do}(X=x|\bar{X}=\bar{x})\) must be zero when \(x\not\in\pi_{X}^{-1}(\bar{x})\). Since \(P(\bar{X}|X)\) is a factor of \(P(X,Z|\bar{X})\), and \(P^{do}(X|\bar{X})\) is a factor of \(P^{do}(X,Z|\bar{X})\), we can work out the elements of the vectors \(P(X,Z|\bar{X}=\bar{x})\) and \(P^{do}(X,Z|\bar{X}=\bar{x})\) which are allowed to be non-zero i.e. precisely the elements corresponding to \(x\) with \(x\in\pi_{X}^{-1}(\bar{x})\). Note that \(\pi^{-1}(\bar{x})\cap\pi^{-1}(\bar{x}^{\prime})=\emptyset\) when \(\bar{x}\neq\bar{x}^{\prime}\).
Order the elements of \(\bar{\mathcal{X}}\) as \(\{\bar{x}_{1},\cdots,\bar{x}_{K},\ K=|\bar{\mathcal{X}}|\}\). Since \(|\bar{\mathcal{X}}|<|\mathcal{X}|\), we can choose \(\bar{x}\) such that \(|\pi^{-1}(\bar{x})|>1\). Wlog, let this be \(\bar{x}_{1}\). Let \(P(X^{+,1},Z|\bar{X}=\bar{x}_{1})\) and \(P^{do}(X^{+,1},Z|\bar{X}=\bar{x}_{1})\) be matrices which contain the elements of \(P(X,Z|\bar{X}=\bar{x}_{1})\) and \(P^{do}(X,Z|\bar{X}=\bar{x}_{1})\) such that \(X\in\pi^{-1}(\bar{x}_{1})\). Precisely, \(X^{+,1}\) is a vector of length \(|\pi^{-1}(\bar{x}_{1})|\) where \(x_{i}^{+,1}\in\pi^{-1}(\bar{x}_{1})\)\(\forall i=1,\cdots,|\pi^{-1}(\bar{x}_{1})|\). The \(ij^{th}\) element of \(P(X^{+,1},Z|\bar{X}=\bar{x}_{1})\) is given by \(P(X=x_{i}^{+,1},Z=z_{j}|\bar{X}=\bar{x}_{1})\). Choose \(P(Z)\) and \(P(X|Z)\) such that there exists some value \(x^{+,1}\) with \(P(X=x^{+,1}|\bar{X}=\bar{x})\neq P^{do}(X=x^{+,1}|\bar{X}=\bar{x})\) and \(P(Z|X=x^{+,1})\neq P(Z)\). Wlog, let this be \(x_{1}^{+,1}\). Then \(P(Z|X=x_{1}^{+,1})\) and \(P(Z)\) are linearly independent since they are both normalised. It follows that \(\mathbf{u}_{x_{1}^{+,1}}:=P(Z|X=x_{1}^{+,1})-P(Z)\) is linearly independent of \(\mathbf{v}_{x_{1}^{+,1}}:=P(Z|X=x_{1}^{+,1})P(X=x_{1}^{+,1}|\bar{X}=\bar{x}_{1})-P(Z)P^{do}(X=x_{1}^{+,1}|\bar{x}_{1})\). By Lemma 5, there exists (a distribution which we call) \(P(\bar{Y}|X=x_{1}^{+,1},Z)\) which maps \(\mathbf{v}_{x_{1}^{+,1}}\) to \(0\) and maps \(\mathbf{u}_{x_{1}^{+,1}}\) to a non-zero vector. For the other values of \(x_{i}^{+,k}\in\pi^{-1}(\bar{x}_{k}),\ \ i,k\neq 1\), we only need to satisfy (62). Lemma 5 also immediately implies that there is \(P(\bar{Y}|X=x_{i}^{+,k},Z)\) such that \(\mathbf{v}_{x_{i}^{+,k}}:=P(Z|X=x_{i}^{+,k})P(X=x_{i}^{+,k}|\bar{X}=\bar{x}_{k})-P(Z)P^{do}(X=x_{i}^{+,k}|\bar{X}=\bar{x}_{k})\) is mapped to \(0\). Therefore, conditions 2 and 3 are satisfied.
It remains to satisfy condition 1. This is easy, because the linear map \(f_{2,x}\) is given by composing \(f_{1,x}\) and \(\mathbf{v}\in\mathbb{R}^{|\mathcal{Y}|}\mapsto\sum_{k}^{|\mathcal{Y}|}P(\bar{Y}|Y=y_{k})v_{k}\). For a fixed \(f_{2,x}\) and \(P(\bar{Y}|Y)\), \(\exists f_{1,x}\) s.t. \(f_{2,x}=P(\bar{Y}|Y)\circ f_{1,x}\): for example, take \(P(Y=y_{i}|X,Z)=\frac{1}{|\pi^{-1}(\bar{y})|}P(\bar{Y}=\bar{y}|X,Z)\) for any \(y_{i}\in\pi^{-1}(\bar{y})\). Then since \(f_{2,x^{+}}\) maps \(\mathbf{u}_{x^{+}}\) to a non-zero vector, so must \(f_{1,x^{+}}\): if \(f_{1,x^{+}}\) mapped \(\mathbf{u}_{x^{+}}\) to \(\mathbf{0}\), then so would the composition \(f_{2,x^{+}}\). Hence condition 1 is satisfied as well, which completes the proof.
## Appendix E Auxiliary Lemmas
**Lemma 1** For a given \(\mathbf{\alpha}\in\mathbb{R}^{N},~{}\mathbf{\alpha}\not\propto(1,\cdots,1)\) and \(c\in\mathbb{R}\), there exists an invertible \(N\times N\) matrix \(H\in\mathbb{R}^{N\times N}\) such that \(H_{1,i}=1~{}\forall i=1,\cdots,N\) and \(\sum_{i}\alpha_{i}(H^{-1})_{i,1}=c\). Moreover, for any \(\mathbf{\Delta}\) such that \(\sum_{i}\Delta_{i}=1\), there also exists \(H\) such that \(\mathbf{\Delta}=(H^{-1})_{:,1}\).
Proof.: We aim to choose (column vectors) \(\mathbf{u}_{1},\cdots,\mathbf{u}_{N}\) and \(\mathbf{v}_{1},\cdots,\mathbf{v}_{N}\) such that \(H=\begin{pmatrix}\mathbf{u}_{1}^{\top}\\ \vdots\\ \mathbf{u}_{N}^{\top}\end{pmatrix}\) and \(H^{-1}=\begin{pmatrix}\mathbf{v}_{1},\cdots,\mathbf{v}_{N}\end{pmatrix}\) satisfy the conditions in the claim. Let \(\langle\cdot,\cdot\rangle\) denote the standard dot product in \(\mathbb{R}^{N}\). First choose \(\mathbf{u}_{1}=(1,\cdots,1)\).
Choose \(\mathbf{v}_{1}^{0}\in\mathbb{R}^{N}\) such that \(\langle\mathbf{v}_{1}^{0},\mathbf{u}_{1}\rangle=1\). Let \(d=\langle\mathbf{v}_{1}^{0},\mathbf{\alpha}\rangle\). If \(d=c\), then set \(\mathbf{v}_{1}=\mathbf{v}_{1}^{0}\). Else, take \(\mathbf{v}\in\mathbf{u}_{1}^{\perp}\) and \(\mathbf{v}\not\in\mathbf{\alpha}^{\perp}\). \(\mathbf{v}\) exists since \(\mathbf{\alpha}\not\propto\mathbf{u}_{1}\). Choose \(\mathbf{v}_{1}=\mathbf{v}_{1}^{0}+\frac{c-d}{\langle\mathbf{v},\mathbf{\alpha} \rangle}\cdot\mathbf{v}\), then \(\langle\mathbf{v}_{1},\mathbf{\alpha}\rangle=\langle\mathbf{v}_{1}^{0},\mathbf{ \alpha}\rangle+\frac{c-d}{\langle\mathbf{v},\mathbf{\alpha}\rangle}\cdot\langle \mathbf{v},\mathbf{\alpha}\rangle=d+c-d=c\).
Now choose \(\mathbf{u}_{2},\cdots,\mathbf{u}_{N}\) s.t. i) \(\langle\mathbf{u}_{i},\mathbf{v}_{1}\rangle=0,~{}\forall i=2,\cdots,N\), and ii) \(\mathbf{u}_{2}\cdots,\mathbf{u}_{N}\) are linearly independent. This can be done since \(\dim(\mathbf{v}_{1}^{\perp})=N-1\).
Note also that \(\mathbf{u}_{1},\cdots,\mathbf{u}_{N}\) are linearly independent: suppose \(\sum_{i}m_{i}\mathbf{u}_{i}=\mathbf{0}\), then \(\sum m_{i}\langle\mathbf{u}_{i},\mathbf{v}_{1}\rangle=0\). But \(\langle\mathbf{u}_{i},\mathbf{v}_{1}\rangle=0\) for \(i=2,\cdots,N\); therefore, \(m_{1}\langle\mathbf{u}_{1},\mathbf{v}_{1}\rangle=0\). Since \(\langle\mathbf{u}_{1},\mathbf{v}_{1}\rangle=1\), we deduce \(m_{1}=0\). But this means \(\sum_{i=2}^{N}m_{i}\mathbf{u}_{i}=\mathbf{0}\). Since we know \(\mathbf{u}_{2},\cdots,\mathbf{u}_{N}\) are linearly independent, \(m_{i}=0\) for \(i=2,\cdots,N\).
For \(j=2,\cdots,N\), choose \(0\neq\tilde{\mathbf{v}}_{j}\in\text{Span}\left(\left\{\mathbf{u}_{i},i\neq j \right\}\right)^{\perp}\). Since \(\left\{\mathbf{u}_{i},i\neq j\right\}\) are linearly independent, \(\dim(\text{Span}\left(\left\{\mathbf{u}_{i},i\neq j\right\}\right)^{\perp})=1\), and thus \(\text{Span}(\tilde{\mathbf{v}}_{j})=\text{Span}\left(\left\{\mathbf{u}_{i},i \neq j\right\}\right)^{\perp}\). Then \(\langle\tilde{\mathbf{v}}_{j},\mathbf{u}_{i}\rangle=0\) when \(i\neq j\). When \(i=j\), \(\langle\tilde{\mathbf{v}}_{j},\mathbf{u}_{j}\rangle\neq 0\) since otherwise \(\mathbf{u}_{j}\in\tilde{\mathbf{v}}_{j}^{\perp}=\text{Span}\left(\left\{ \mathbf{u}_{i},i\neq j\right\}\right)\). Therefore, choose \(\mathbf{v}_{j}=\frac{\tilde{\mathbf{v}}_{j}}{\langle\tilde{\mathbf{v}}_{j}, \mathbf{u}_{j}\rangle}\).
The above shows the first statement.
The second statement is obvious by observing that \(\mathbf{v}_{1}\) can be chosen as \(\mathbf{\Delta}\) by construction.
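The proof is constructive; the following sketch (our illustration with assumed \(\boldsymbol{\alpha}\) and \(c\)) builds such an \(H\) and verifies both defining properties numerically:

```python
import numpy as np

def build_H(alpha, c):
    """Construct H with an all-ones first row such that
    sum_i alpha_i * (H^{-1})_{i,1} = c (assumes alpha not proportional to 1)."""
    N = len(alpha)
    u1 = np.ones(N)
    v1 = u1 / N                                # <v1, u1> = 1
    d = v1 @ alpha
    if not np.isclose(d, c):
        v = alpha - alpha.mean() * u1          # v in u1^perp, <v, alpha> != 0
        v1 = v1 + (c - d) / (v @ alpha) * v    # now <v1, alpha> = c
    basis = np.linalg.svd(v1[None, :])[2][1:]  # orthonormal basis of v1^perp
    return np.vstack([u1, basis])              # rows u1, u2, ..., uN

alpha, c = np.array([1.0, 2.0, 3.0]), 5.0
H = build_H(alpha, c)
Hinv = np.linalg.inv(H)
print(H[0])                                    # all ones
print(alpha @ Hinv[:, 0])                      # ~ c
```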
**Lemma 5** Let \(\mathbf{v}\in\mathbb{R}^{M\times N}\) with \(\sum_{i,j}v_{ij}=0\), and let \(\mathbf{u}_{i}\in\mathbb{R}^{N}\) be a vector such that \(\sum_{j}u_{i,j}=0\), where \(u_{i,j}\) is the \(j\)th element of \(\mathbf{u}_{i}\). Suppose \(\mathbf{u}_{i}\) and \(\mathbf{v}_{i}\) (the \(i\)th row of \(\mathbf{v}\)) are linearly independent, \(M\geq 3\), \(N\geq 2\), \(K\geq 1\). Then there exists a distribution of discrete variables \(P(\bar{Y}|X,Z)\) where \(|\mathcal{X}|=M,~{}|\mathcal{Z}|=N,~{}|\bar{\mathcal{Y}}|=K\) such that
1. \(\sum_{i,j=1}^{M,N}P(\bar{Y}|X=x_{i},Z=z_{j})v_{ij}=\mathbf{0}\).
2. \(\sum_{j}^{N}P(\bar{Y}|X=x_{i},Z=z_{j})u_{i,j}\neq\mathbf{0}\).
Proof.: Let \(P_{ij}\), \(i=1,\cdots,M,~{}j=1,\cdots,N\) denote the vector of \(P(\bar{Y}|X=x_{i},Z=z_{j})\).
The case with \(v_{ij}=0\) for all \(i,j\) is trivial, so assume that there exist some \((i,j)\) such that \(v_{ij}\neq 0\).
If there is some \(i,j\) such that \(v_{ij}=0\), then we are done by choosing \(P_{i_{1},j_{1}}=P_{i_{2},j_{2}}\) for any \((i_{1},j_{1}),(i_{2},j_{2})\neq(i,j)\) as any valid probability vector, and choosing \(P_{ij}\) to be a different valid probability vector with all elements strictly between \(0\) and \(1\). For example, we can choose \(P_{i_{1},j_{1}}=(1/K,\cdots,1/K)\), and \(P_{ij}=(1/K+\epsilon,1/K-\epsilon,1/K\cdots,1/K)\) with a small enough \(\epsilon\).
If \(v_{ij}\neq 0\) for all \((i,j)\), then clearly \(\sum_{(i,j)\in\mathcal{I}_{+}}v_{ij}=-\sum_{(i,j)\in\mathcal{I}_{-}}v_{ij}\), where \(\mathcal{I}_{+}\) is the index set for the positive \(v_{ij}\)'s and \(\mathcal{I}_{-}\) are the negatives. For ease of notation let \(S:=\sum_{(i,j)\in\mathcal{I}_{+}}v_{ij}=-\sum_{(i,j)\in\mathcal{I}_{-}}v_{ij}\).
Now first assign arbitrary probability vectors for each \(P_{ij},~{}(i,j)\in\mathcal{I}_{+}\). Wlog, we can assign different probability vectors to each, ensuring that each element of the vectors is strictly between \(0\) and \(1\). Then the linear combination \(\sum_{(i,j)\in\mathcal{I}_{+},k}v_{ij}P_{k,ij}\) is equal to \(S\); here, \(P_{k,ij}\) is the \(k\)th element of the vector \(P_{ij}\). Now assign for each \((i,j)\in\mathcal{I}_{-}\), \(P_{ij}:=\frac{1}{S}\sum_{(i^{\prime},j^{\prime})\in\mathcal{I}_{+}}v_{i^{\prime}j^{\prime}}P_{i^{\prime}j^{\prime}}\). Clearly, every element of \(P_{ij}\) is non-negative and all elements add up to 1, so \(P_{ij}\) is a probability vector. Moreover, \(P(\bar{Y}|X,Z)\) constructed like this will satisfy the requirement that \(\sum_{i,j=1}^{M,N}P(\bar{Y}|X=x_{i},Z=z_{j})v_{ij}=0\).
Now it remains to check condition 2. Take a fixed \(i\in\{1,\cdots,M\}\); if \(\sum_{j}u_{ij}P_{ij}=\mathbf{0}\), then consider \(\mathbf{v}_{i}\) and \(\mathbf{u}_{i}\). Since they are linearly independent vectors, the linear map induced by the usual inner product with \(\mathbf{v}_{i}\) has a kernel with a subspace of at least dimension one that is not contained in the kernel of the linear map induced by \(\mathbf{u}_{i}\). Let \(\mathbf{d}\) be a unit vector in this
subspace. Define \(P_{i}=[P_{i1},\cdots,P_{iN}]\). \(P_{i}\) is a \(K\times N\) matrix. Construct \(P_{i}^{\prime}\) by subtracting a small enough multiple of \(\mathbf{d}\) from the first row of \(P_{i}\) and adding the same multiple to the second row of \(P_{i}\). Redefine \(P_{i}:=P_{i}^{\prime}\), and we will have \(\sum_{j}u_{ij}P_{ij}\neq\mathbf{0}\) and \(\sum_{i,j}v_{ij}P_{ij}=\mathbf{0}\). Repeat this for every \(i=1,\cdots,M\) and conditions 1 and 2 will both be satisfied.
## Appendix F Examples
### Hours of work and pay
For manual labourers in a manufacturing plant, the number of hours they work (\(X_{i}\)) determines their monthly wage (\(Y_{i}\)). In particular, the monthly wage (\(Y_{i}\)) of a manual labourer is a product of the number of hours they work (\(X_{i}\)) and their hourly wage (\(\alpha_{i}\)):
\[Y_{i}=\alpha_{i}X_{i} \tag{65}\]
The hourly wage may differ between labourers due to various factors like work experience. To get a high-level overview of the human resources in the manufacturing plant, its business owner keeps track of total work hours (\(\bar{X}\)) and total cost of wages (\(\bar{Y}\)) on a monthly basis. That is, we have a summation coarsening map:
\[\bar{X}=\sum X_{i},\qquad\bar{Y}=\sum Y_{i}\]
In this scenario, Gaussian distribution reasonably approximates the distribution of the number of hours of work of a manual labourer (\(X_{i}\)) as most labourers work around an average number of hours, with few working extra hours more or less. Moreover, it is reasonable to assume that the number of hours of work of each labourer is independent of other labourers. Altogether, we have the linear unconfounded case with independent Gaussian micro-level causes.
### Genomic micro-array data for prediction of diseases
Genomic micro-array data contains personal genome information. A micro-array is a rectangular grid where every column contains the genomic expressions of one person (or subject), and every row contains the expressions of one coding gene for every person. Personal genomic expressions as collected in the micro-array are used to predict certain disease types (\(Y\)). The number of genes (\(X_{i}\)) present on a micro-array is vast; moreover, a population of genes could be jointly causing a disease. This may necessitate coarsening in drug design.
While two drugs may both claim to intervene on the causes of disease \(Y\) on a population level, \(\pi((X_{i})_{i})\), it is possible that they impact the individual amounts of each gene expression differently. Moreover, due to the complexity of gene regulatory networks, there could be confounding present at the micro level. Suppose the practitioners are willing to assume a linear Gaussian model; then our example in Section 4.2 suggests that there is a way for the drug to intervene on the macro variable such that the causal influence of the drug can be directly read off from observational data.
### Political Campaigning for votes
During the running up to the general election, politicians go around the country to campaign for votes. The amount of time they spend campaigning at county \(i\) can be written \(X_{i}\). Some (simplistic) political model may assume a linear relationship between the amount of time campaigning in county \(i\) and the votes harvested at that county \(Y_{i}\):
\[Y_{i}=\alpha_{i}X_{i} \tag{66}\]
At the end of the campaign, the team may calculate how much effort they put in, in terms of time, and how many votes they won in total.
\[\bar{X}=\sum X_{i},\quad\bar{Y}=\sum Y_{i} \tag{67}\]
It will be noticed that different counties respond differently to the campaign, so \(\alpha_{i}\) is not constant. Now, different campaign teams may all decide to increase their campaign time in order to get more votes, but the resultant vote changes may still differ due to the different allocations to each county.
## Appendix G More detailed discussion of related work
[Rubenstein et al., 2017] propose exact transformations, following which [Beckers and Halpern, 2019] propose a series of stronger notions of causal abstraction. As illustrated in Example 1, the condition required to call a macro system an exact transformation of the fine-grained one is not satisfied even in extremely simple cases. The moral reason is that when micro causal variables are aggregated, the causal relationships between the micro causal variables do not in general align with the aggregation, i.e., it can fail that whenever multiple micro-cause-states get coarsened to the same macro-state, the corresponding micro-effect-states also get coarsened to the same macro-state. By contrast, this work explores what happens when the macro system cannot be called an exact transformation of the micro system.
[Beckers et al., 2019] explore how to quantify the approximation error when the abstraction is not exact. We take a different angle in this work and do not deal with how good our macro-model is at approximating the micro-model. Rather, our work observes that the aggregated system loses resolution on the original micro-system, and analyses the properties that arise when different pairings between the macro- and micro-systems are made.
A different but closely-related line of work is undertaken by [Chalupka et al., 2016, Rischel and Weichwald, 2021], where the coarsest possible consistent coarsening is sought. The macro-variable states are constructed as equivalence classes (of micro-variable states) in the paper, and every micro-variable state in the same equivalence class leads to the same micro-variable effect - this ensures that interventions on the equivalence classes are well-defined. Our work lies in the realm beyond the coarsest consistent partition, and in these cases, the macro interventions become ill-defined.
Finally, our work invites association with [Blom et al., 2019], which describes a framework for modelling cyclic causal models, where solutions to the model are the equilibrium states for some initial conditions, and interventions amount to changing these initial conditions. In particular, the causal dynamics of the model are fixed by differential equations. Although some might be tempted to view the relationship between micro and macro variables in our case as cyclic, it is, however, fundamentally different. In our case, by specifying the \(P_{X|\bar{x}}^{do}\) in the macro-intervention, the dynamics between the macro and micro variables are freely customisable by the practitioner, whereas in their case, the dynamics between the macro and micro variables are fixed a priori by some constraints, for example, by the laws of physics. That being said, there could be situations where the dynamics fixed by the a priori constraints already give rise, in some canonical way, to the confounding-inhibiting interventions that we described.
2301.02939 | Study of the long-range transverse field Ising model with fermionic
Gaussian states | We numerically study the one-dimensional long-range Transverse Field Ising
Model (TFIM) in the antiferromagnetic (AFM) regime at zero temperature using
Generalized Hartree-Fock (GHF) theory. The spin-spin interaction extends to all
spins in the lattice and decays as $1/r^\alpha$, where $r$ denotes the distance
between two spins and $\alpha$ is a tunable exponent. We map the spin operators
to Majorana operators and approximate the ground state of the Hamiltonian with
a Fermionic Gaussian State (FGS). Using this approximation, we calculate the
ground state energy and the entanglement entropy which allows us to map the
phase diagram for different values of $\alpha$. In addition, we compute the
scaling behavior of the entanglement entropy with the system size to determine
the central charge at criticality for the case of $\alpha>1$. For $\alpha<1$ we
find a logarithmic divergence of the entanglement entropy even far away from
the critical point, a feature of systems with long-range interactions. We
provide a detailed comparison of our results to outcomes of Density Matrix
Renormalization Group (DMRG) and the Linked Cluster Expansion (LCE) methods. In
particular, we find excellent agreement of GHF with DMRG and LCE in the weak
long-range regime $\alpha\geq 1$, and qualitative agreement with DMRG in the
strong-long range regime $\alpha \leq 1$. Our results highlight the power of
the computationally efficient GHF method in simulating interacting quantum
systems. | Michael P. Kaicher, Davide Vodola, Simon B. Jäger | 2023-01-07T21:23:53Z | http://arxiv.org/abs/2301.02939v1 | # Study of the long-range transverse field Ising model with fermionic Gaussian states
###### Abstract
We numerically study the one-dimensional long-range Transverse Field Ising Model (TFIM) in the antiferromagnetic (AFM) regime at zero temperature using Generalized Hartree-Fock (GHF) theory. The spin-spin interaction extends to all spins in the lattice and decays as \(1/r^{\alpha}\), where \(r\) denotes the distance between two spins and \(\alpha\) is a tunable exponent. We map the spin operators to Majorana operators and approximate the ground state of the Hamiltonian with a Fermionic Gaussian State (FGS). Using this approximation, we calculate the ground state energy and the entanglement entropy which allows us to map the phase diagram for different values of \(\alpha\). In addition, we compute the scaling behavior of the entanglement entropy with the system size to determine the central charge at criticality for the case of \(\alpha>1\). For \(\alpha<1\) we find a logarithmic divergence of the entanglement entropy even far away from the critical point, a feature of systems with long-range interactions. We provide a detailed comparison of our results to outcomes of Density Matrix Renormalization Group (DMRG) and the Linked Cluster Expansion (LCE) methods. In particular, we find excellent agreement of GHF with DMRG and LCE in the weak long-range regime \(\alpha\geq 1\), and qualitative agreement with DMRG in the strong-long range regime \(\alpha\leq 1\). Our results highlight the power of the computationally efficient GHF method in simulating interacting quantum systems.
## I Introduction
Quantum phase transitions describe the behavior of quantum many-body systems at zero temperature when tuning a non-thermal control parameter, such as an applied magnetic field. The phase transition appears as a result of competing phases that describe the ground state at the corresponding parameter and typically lead to a fundamental change in the nature of the correlations present in the ground state. Quantum many-body systems can undergo a quantum phase transition and their study has led to the discovery of many exotic collective phenomena such as superconducting ground states [1], long-range topological order [2], and anyonic statistics [3]. Close to the critical point, the properties of many different physical systems can be classified by a universality class which is independent of the system size and only depends on the underlying dimensions and symmetries of the problem. In this situation, one can in many instances describe the many-body problem by an interacting spin system [4].
One of the paradigmatic microscopic models displaying a quantum phase transition is the Transverse Field Ising Model (TFIM) at zero temperature [5]. This model is exactly solvable in the limit of short-range, nearest-neighbour interactions. However, the solution of this problem is much harder if one considers beyond nearest-neighbour or even long-range interactions [6; 7; 8; 9; 10]. Long-range interacting systems can host exotic states of quantum matter and are therefore of large scientific interest. Recent advances have made effective long-range spin-interactions experimentally accessible [11; 12; 13; 14; 15]. In such systems, the effective interaction extends to all spins in the lattice and decays as a power law \(1/r^{\alpha}\), where \(r\) is the distance of the spins in the lattice and \(\alpha\) is a tunable algebraic exponent. In the experiments one can realize \(0\leq\alpha\leq 3\) which allows one to experimentally probe the regime of long-range interactions in spin systems [11].
In order to analyze the properties of a quantum many-body system, it is important to study large system sizes, which in our case means a large number of spins \(N\). The exponential scaling of the Hilbert space dimension with \(N\) makes the brute-force diagonalization of such many-body problems infeasible. Consequently, one needs numerical methods which are able to capture the qualitative behavior of the many-body system at a computational cost that scales mildly with \(N\). A range of many-body methods of varying computational complexity have been applied to study finite-size long-range quantum many-body systems, including Quantum Monte Carlo (QMC) [16], stochastic series expansion QMC [17], a combination of QMC and renormalization group methods [18], Lanczos exact diagonalization [19], and Density Matrix Renormalization Group (DMRG) [6; 8]. Recently, a method to study short-range quantum-lattice models in the thermodynamic limit, the Linked-Cluster Expansion (LCE), has been extended to allow for the study of long-range systems for \(\alpha>1\) [9; 10].
In this work, we add Generalized Hartree-Fock (GHF) theory to this mix of methods. GHF is a _mean-field method_ which aims to approximate the ground state of
an interacting quantum system as a free electron gas [20], where the latter describes a class of variational functions known as Fermionic Gaussian States (FGS). Due to its mean-field nature, GHF is a method with very low computational cost, where the most demanding compute operation--the evaluation of the Pfaffian \(\text{Pf}(\mathbf{A})\) of an \(M\times M\) matrix \(\mathbf{A}\)--scales at most as \(\mathcal{O}(M^{3})\) [21]. Even though FGS describe ground or thermal states of quadratic fermionic Hamiltonians [22], they have been applied to various areas of quantum many-body physics with great success, most notably as ab-initio methods to obtain approximate ground states in electronic structure problems and in condensed matter systems [20; 23; 24]. In this paper, in order to find the FGS which best approximates the ground state of the long-range TFIM, we employ two physically motivated methods which have been described in Ref. [23]. The first one (ITE) derives the ground state using Imaginary Time Evolution. The second one (ZT) uses a self-consistent equation for the FGS ground state covariance matrix. Using these two methods we calculate the ground state energy and the entanglement entropy. By comparing these results with the ones obtained from DMRG and LCE we will show that GHF is able to capture the qualitative and quantitative behavior of the long-range TFIM. This highlights the ability of GHF in predicting physically relevant material properties at computationally low cost.
This work is structured as follows. In Section II we discuss the GHF theory which we then apply to the TFIM model described in II.1. The introduced methods are used in Section III where we numerically study the ground state energy and the entanglement entropy. We conclude by summarizing our findings in Section IV and providing an outlook for future work.
## II Theory
### Long-range transverse field Ising model
In this work we consider the TFIM Hamiltonian describing a system of \(N\) spins with open boundary conditions
\[\hat{H}=\sum_{p=1}^{N}h_{p}\hat{\sigma}_{p}^{z}+\sum_{p<q}^{N}J_{pq}\hat{ \sigma}_{p}^{x}\hat{\sigma}_{q}^{x}, \tag{1}\]
where we introduced the transversal magnetic field strength \(h_{p}=\cos(\theta)\), the interaction strength \(J_{pq}=\sin(\theta)/|p-q|^{\alpha}\) and the Pauli matrices \(\hat{\sigma}_{p}^{a}\) (\(a\in\{x,y,z\}\)) for each spin indexed by \(p,q\). The magnetic field and interactions strengths are parameterized by the angle \(\theta\) and the algebraic scaling of the interaction range is given by \(\alpha\). In this work we furthermore focus on _antiferromagnetic_ (AFM) couplings which implies \(J_{pq}>0\) or \(\theta\in(0,\pi)\). Because the Hamiltonian is symmetric under the simultaneous transformations \(\hat{\sigma}_{p}^{z}\mapsto-\hat{\sigma}_{p}^{z}\) and \(\theta\rightarrow\pi-\theta\) we can restrict our study to \(\theta\in(0,\pi/2]\).
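For orientation, Eq. (1) can be diagonalized by brute force for small \(N\) (a sketch with arbitrary parameters; the dense construction is only meant to make the model concrete and is limited to roughly \(N\lesssim 14\)):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

def op_at(op, p, N):
    """Embed a single-site operator at site p (0-indexed) into N spins."""
    out = np.eye(1)
    for site in range(N):
        out = np.kron(out, op if site == p else np.eye(2))
    return out

def tfim_hamiltonian(N, theta, alpha):
    H = sum(np.cos(theta) * op_at(sz, p, N) for p in range(N))
    for p in range(N):
        for q in range(p + 1, N):
            J = np.sin(theta) / (q - p) ** alpha   # J_pq = sin(theta)/|p-q|^alpha
            H = H + J * op_at(sx, p, N) @ op_at(sx, q, N)
    return H

H = tfim_hamiltonian(N=8, theta=np.pi / 4, alpha=1.5)
print(np.linalg.eigvalsh(H)[0] / 8)     # ground-state energy per spin
```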
In a next step, we map the TFIM Hamiltonian onto a fermionic Hamiltonian. To this end, we use the Jordan-Wigner transformation \(\hat{\sigma}_{p}^{+}=\hat{c}_{p}^{\dagger}e^{i\pi\sum_{q=1}^{p-1}\hat{c}_{q}^{\dagger}\hat{c}_{q}}\) and \(\hat{\sigma}_{p}^{-}=\hat{c}_{p}e^{-i\pi\sum_{q=1}^{p-1}\hat{c}_{q}^{\dagger}\hat{c}_{q}}\) [25]. Here, we used \(\hat{\sigma}_{p}^{\pm}=[\hat{\sigma}_{p}^{x}\pm i\hat{\sigma}_{p}^{y}]/2\) and introduced the fermionic raising and lowering operators \(\hat{c}_{p}^{\dagger}\), \(\hat{c}_{p}\), respectively. The latter obey the canonical anticommutation relations \(\{\hat{c}_{p},\hat{c}_{q}\}=0\) and \(\{\hat{c}_{p},\hat{c}_{q}^{\dagger}\}=\delta_{p,q}\), where \(\delta_{p,q}\) is the Kronecker delta and \(\{\hat{A},\hat{B}\}=\hat{A}\hat{B}+\hat{B}\hat{A}\) denotes the anticommutator of two operators \(\hat{A},\hat{B}\). Instead of analyzing the problem in the basis of the \(2\times N\) fermionic operators \(\hat{c}_{p},\hat{c}_{p}^{\dagger}\) we represent the Hamiltonian in the \(2N\) Majorana operators \(\hat{a}_{2p-1}=\hat{c}_{p}^{\dagger}+\hat{c}_{p}\) and \(\hat{a}_{2p}=i(\hat{c}_{p}^{\dagger}-\hat{c}_{p})\). The latter satisfy the anticommutation relation \(\{\hat{a}_{l},\hat{a}_{m}\}=2\delta_{l,m}\) (\(l,m=1,2,\ldots,2N\)) and the Hamiltonian (1) in the Majorana representation is given by
\[\hat{H}= -i\sum_{p=1}^{N}h_{p}\hat{a}_{2p-1}\hat{a}_{2p}+\sum_{p<q}^{N}(-i )^{q-p}J_{pq}\hat{a}_{2p}\hat{S}_{pq}\hat{a}_{2q-1}. \tag{2}\]
Here, we introduced the string operator \(\hat{S}_{pq}=\prod_{k=p+1}^{q-1}(\hat{a}_{2k-1}\hat{a}_{2k})\), which is a product of \(2(q-p-1)\) Majorana operators. For nearest-neighbour interactions, \(\alpha=\infty\) and \(J_{pq}\propto\delta_{p,q\pm 1}\), this string operator becomes the identity, \(\hat{S}_{pq}=1\), and \(\hat{H}\) becomes quadratic in the Majorana operators. Consequently, the model can be described by free fermions and is therefore exactly solved by a FGS [22]. In general, however, for the long-range TFIM we will need to include the contribution of the operator \(\hat{S}_{pq}\). To avoid ambiguity, we use the term _long-range_ in this work for all systems with \(\alpha<\infty\), since the spin interaction breaks up into a sum of terms proportional to \(1/|p-q|^{\alpha}\), where all lattice sites \(p,q\) give non-zero contributions, and not just nearest-neighbour sites \(p,p+1\) (as in the special case \(\alpha\rightarrow\infty\)). Often, the term _long-range_ is reserved in the literature for an algebraic exponent \(\alpha=\sigma+d\) in a \(d\)-dimensional system with \(\sigma<0\) (which in a one-dimensional system refers to the regime \(\alpha<1\)) [26; 27]. Thus, to avoid confusion, in our work we will refer to \(\alpha<1\) as the _strong_ long-range and to \(\alpha>1\) as the _weak_ long-range regime, while the special case \(\alpha=1\) is marginal.
### Fermionic Gaussian States
The formal definition of a FGS is given by [22],
\[\hat{\rho}_{\text{GS}}= \text{tr}\left(e^{-\beta\hat{H}_{\text{GS}}}\right)^{-1}e^{-\beta \hat{H}_{\text{GS}}}, \tag{3}\]
where \(\hat{H}_{\text{GS}}=\frac{i}{4}\hat{\mathbf{a}}^{T}\mathbf{G}\hat{\mathbf{a}}\) is a Hermitian operator, \(\beta\in\mathds{R}\), \(\hat{\mathbf{a}}=(\hat{a}_{1},\hat{a}_{2},\ldots,\hat{a}_{2N})^{T}\) is a column vector of Majorana operators, and \(\mathbf{G}\) is a \((2N\times 2N)\) real-valued and antisymmetric matrix. FGS are fully described by the real
and anti-symmetric covariance matrix \(\mathbf{\Gamma}\) with entries
\[\Gamma_{lm}=\frac{i}{2}\text{tr}\left(\hat{\rho}_{\text{GS}}[\hat{a}_{l},\hat{a}_ {m}]\right), \tag{4}\]
\(l,m\in\{1,2,\ldots,2N\}\), and where \([\hat{A},\hat{B}]=\hat{A}\hat{B}-\hat{B}\hat{A}\) denotes the commutator of two operators \(\hat{A},\hat{B}\). While Eqs. (3)-(4) describe both pure and mixed FGS, we only focus on pure FGS in this work, since we are interested in the ground state. Pure FGS are characterized by \(\mathbf{\Gamma}^{2}=-\mathbf{1}_{2N}\) (\(\mathbf{1}_{k}\) denotes the \((k\times k)\)-identity matrix), so that the eigenvalues of \(i\mathbf{\Gamma}\) are given by \(\lambda\in\{-1,1\}\). All information contained in the density matrix (3) of a FGS is also contained in its covariance matrix (4). The expectation value of a single tensor product of Majorana or fermionic operators can be computed efficiently through Wick's theorem [22; 28],
\[\text{tr}\left(\hat{\rho}_{\text{GS}}\hat{a}_{i_{1}}\hat{a}_{i_{2}}\cdots\hat{ a}_{i_{2m}}\right)= (-i)^{m}\text{Pf}\left(\mathbf{\Gamma}\big{|}_{i_{1}i_{2}\ldots i_{2m}}\right), \tag{5}\]
where \(i_{1}\neq i_{2}\neq\ldots\neq i_{2m}\) for \(i_{k}\in\{1,\ldots,2N\}\) and \(k=1,\ldots,2m\). The matrix \(\left.\mathbf{\Gamma}\right|_{i_{1}i_{2}\ldots i_{2m}}\) denotes the \((2m\times 2m)\)-submatrix of \(\mathbf{\Gamma}\) with the corresponding rows and columns \(i_{1},i_{2},\ldots,i_{2m}\), and \(\text{Pf}(\mathbf{A})\) denotes the Pfaffian of a skew-symmetric matrix \(\mathbf{A}\).
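Since every expectation value in this framework reduces to a Pfaffian, a numerically stable \(\mathcal{O}(M^{3})\) Pfaffian routine is the computational workhorse. The paper does not specify an implementation; the following is a minimal Python sketch (NumPy/SciPy assumed) based on the real Schur decomposition of a skew-symmetric matrix.

```python
import numpy as np
from scipy.linalg import schur

def pfaffian(A):
    """Pfaffian of a real skew-symmetric matrix of even dimension.

    A skew-symmetric matrix is normal, so its real Schur form T is
    block diagonal with 2x2 antisymmetric blocks; the Pfaffian is
    det(Z) times the product of the superdiagonal block entries.
    The cost is dominated by the O(M^3) Schur decomposition.
    """
    A = np.asarray(A, dtype=float)
    T, Z = schur(A, output='real')  # A = Z T Z^T, Z orthogonal
    pf = np.linalg.det(Z)
    for k in range(0, A.shape[0] - 1, 2):
        pf *= T[k, k + 1]
    return pf

# Sanity check: Pf(A)^2 = det(A) for a random skew-symmetric matrix.
rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = B - B.T
assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))
```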
### Approximating the ground state with a fermionic Gaussian State
Using Wick's theorem (5), we are able to compute the energy expectation value
\[E(\mathbf{\Gamma})= \text{tr}\left(\hat{\rho}_{\text{GS}}\hat{H}\right), \tag{6}\]
which results in
\[E(\mathbf{\Gamma})= -\sum_{p=1}^{N}\frac{h_{p}}{2}\left(\Gamma_{2p-1,2p}-\Gamma_{2p,2 p-1}\right)\] \[+\sum_{p<q}^{N}J_{pq}(-1)^{q-p}\text{Pf}\Big{\{}\,\Gamma|_{2p,2p+1,\ldots,2q-1}\Big{\}}\,, \tag{7}\]
for the Hamiltonian given by Eq. (1). In order to approximate the ground state of the Hamiltonian within the family of FGS, one has to find a covariance matrix \(\mathbf{\Gamma}\) which minimizes \(E(\mathbf{\Gamma})\). While one can apply any constrained optimization method, in the following, we will discuss two particular algorithms for finding the optimal \(\mathbf{\Gamma}\), which we will use in Section III.
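For concreteness, Eq. (7) can be transcribed directly; the sketch below (0-based indices, unoptimized double loop) reuses the `pfaffian` helper from above and takes the field vector `h` and coupling matrix `J` of Eq. (1) as inputs.

```python
import numpy as np

def energy(Gamma, h, J):
    """GHF energy of Eq. (7) from the covariance matrix Gamma.

    h is the length-N field vector, J the (N x N) coupling matrix
    (only the upper triangle p < q is used).  The Majorana index set
    {2p, ..., 2q-1} of the text (1-based) becomes range(2p+1, 2q+1)
    for 0-based p, q.
    """
    N = len(h)
    E = 0.0
    for p in range(N):
        E -= 0.5 * h[p] * (Gamma[2 * p, 2 * p + 1] - Gamma[2 * p + 1, 2 * p])
    for p in range(N):
        for q in range(p + 1, N):
            idx = np.arange(2 * p + 1, 2 * q + 1)
            E += J[p, q] * (-1) ** (q - p) * pfaffian(Gamma[np.ix_(idx, idx)])
    return E
```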
_Imaginary Time Evolution (ITE)._ The first algorithm performs an Imaginary Time Evolution (ITE) under the constraint that Wick's theorem holds throughout the evolution. This constraint guarantees that the evolved state remains a FGS, and leads to an equation of motion for the corresponding covariance matrix,
\[\frac{d\mathbf{\Gamma}}{d\tau}=\frac{1}{2}[\mathbf{\Gamma},[\mathbf{\Gamma},\mathbf{\mathbf{H} }^{(\text{mf})}]], \tag{8}\]
where \(\tau\in\mathds{R}\) denotes the imaginary time. We derive Eq. (8) in Appendix A. The central quantity here is the mean-field Hamiltonian \(\mathbf{H}^{(\text{mf})}(\mathbf{\Gamma})\), which is the gradient of the energy with respect to the covariance matrix,
\[H_{lm}^{(\text{mf})}=4\frac{dE(\mathbf{\Gamma})}{d\Gamma_{lm}}. \tag{9}\]
This term can be computed explicitly using identities for the matrix derivative of a Pfaffian, which we employ in Appendix B.
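As an illustration, Eq. (9) can also be evaluated by finite differences, which is useful as a check on the analytic Pfaffian-derivative expressions. The sketch below is not the authors' implementation; `energy_fn` is a callable such as the one above, and the factor of 2 reflects our assumption that the antisymmetry constraint \(\Gamma_{ml}=-\Gamma_{lm}\) is kept explicit in the perturbation.

```python
import numpy as np

def mean_field_hamiltonian(energy_fn, Gamma, eps=1e-6):
    """Numerical H^(mf) = 4 dE/dGamma (Eq. (9)) by central differences.

    The perturbation is kept antisymmetric, so the symmetric finite
    difference returns dE/dGamma_lm - dE/dGamma_ml = 2 dE/dGamma_lm,
    and hence H_lm = 4 dE/dGamma_lm = 2 * deriv.
    """
    n = Gamma.shape[0]
    H = np.zeros_like(Gamma)
    for l in range(n):
        for m in range(l + 1, n):
            dG = np.zeros_like(Gamma)
            dG[l, m], dG[m, l] = eps, -eps
            deriv = (energy_fn(Gamma + dG) - energy_fn(Gamma - dG)) / (2 * eps)
            H[l, m], H[m, l] = 2 * deriv, -2 * deriv
    return H
```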
We solve Eq. (8) iteratively, by discretizing the ITE into small time steps \(\Delta\tau\). Starting from a random initial covariance matrix, we evolve the covariance matrix through \(\mathbf{\Gamma}(\tau+\Delta\tau)\approx\mathbf{O}(\Delta\tau)\mathbf{\Gamma}(\tau)\mathbf{O}(\Delta\tau)^{T}\), where \(\mathbf{O}(\Delta\tau)=e^{\frac{1}{2}[\mathbf{H}^{(\text{mf})},\mathbf{\Gamma}]\Delta\tau}\) is an orthogonal matrix. As explicitly shown in Ref. [23], this approach preserves the purity of the FGS, while ensuring a monotonic decrease of the energy in each iteration.
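A single discretized step then takes only a few lines; the sketch below assumes \(\mathbf{H}^{(\text{mf})}\) has already been computed for the current covariance matrix and uses `scipy.linalg.expm` for the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def ite_step(Gamma, H_mf, dtau):
    """One ITE step: Gamma -> O Gamma O^T, O = exp([H^(mf), Gamma] dtau / 2).

    The commutator of two antisymmetric matrices is antisymmetric,
    so O is orthogonal and the update preserves both the realness and
    the purity (Gamma^2 = -1) of the state.
    """
    K = H_mf @ Gamma - Gamma @ H_mf
    O = expm(0.5 * dtau * K)
    return O @ Gamma @ O.T
```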
_Zero Temperature (ZT)._ The second algorithm uses a self-consistent equation for the steady-state solution of Eq. (8). In this algorithm, for a given \(\mathbf{\Gamma}\), we diagonalize the mean-field matrix \(i\mathbf{H}^{(\text{mf})}=\mathbf{U}\mathbf{D}\mathbf{U}^{\dagger}\) and recalculate
\[\mathbf{\Gamma}=i\mathbf{\mathbf{U}}\text{sgn}(\mathbf{\mathbf{D}})\mathbf{\mathbf{U}}^{ \dagger}. \tag{10}\]
Here, \(\mathbf{\mathbf{U}}\) is a unitary matrix, \(\mathbf{\mathbf{D}}\) is a diagonal matrix containing the real eigenvalues of \(i\mathbf{\mathbf{H}}^{(\text{mf})}\), and \(\text{sgn}(\mathbf{\mathbf{D}})\) is the sign function applied to the diagonal entries of \(\mathbf{\mathbf{D}}\). From this \(\mathbf{\Gamma}\) we recalculate \(i\mathbf{\mathbf{H}}^{(\text{mf})}\) and repeat the procedure until the covariance matrix is converged. One can check that the solution of this algorithm is also a stationary state of Eq. (8) with \(\mathbf{\Gamma}^{2}=-\mathbf{1}\).
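A minimal sketch of this fixed-point loop, assuming a callable `mean_field` returning \(\mathbf{H}^{(\text{mf})}(\mathbf{\Gamma})\) (the tolerance and iteration cap are arbitrary choices):

```python
import numpy as np

def zt_iteration(Gamma, mean_field, tol=1e-10, max_iter=500):
    """Self-consistent ZT loop of Eq. (10).

    Since H^(mf) is real antisymmetric, i H^(mf) is Hermitian and
    eigh() applies; the recomputed Gamma is real, antisymmetric,
    and satisfies Gamma^2 = -1 up to numerical precision.
    """
    for _ in range(max_iter):
        D, U = np.linalg.eigh(1j * mean_field(Gamma))
        Gamma_new = (1j * U @ np.diag(np.sign(D)) @ U.conj().T).real
        if np.linalg.norm(Gamma_new - Gamma) < tol:
            return Gamma_new
        Gamma = Gamma_new
    return Gamma
```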
In both algorithms we choose several random initial covariance matrices \(\mathbf{\Gamma}_{\text{init}}\) to ensure unbiased results. A random \(\mathbf{\Gamma}_{\text{init}}\) is generated through \(\mathbf{\Gamma}_{\text{init}}=\mathbf{O}^{T}\mathbf{\Omega}\mathbf{O}\), where \(\mathbf{O}\) is a random orthogonal matrix and we defined the block-diagonal matrix \(\mathbf{\Omega}=\bigoplus_{k=1}^{N}(-1)^{r_{k}}\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\), where \(r_{k}\in\{0,1\}\) is chosen randomly and \(\bigoplus\) denotes the direct sum. After convergence of the corresponding algorithm we obtain a stationary solution \(\mathbf{\Gamma}_{\text{st}}\). With the help of this solution we can then find the GHF approximation to the ground state energy, given by \(E(\mathbf{\Gamma}_{\text{st}})\). Besides the energy and the entanglement entropy introduced in the following section, the covariance matrix also allows us direct access to quantum correlations.
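The random initialization itself is straightforward; a sketch, assuming `scipy.stats.ortho_group` for Haar-random orthogonal matrices:

```python
import numpy as np
from scipy.stats import ortho_group

def random_pure_covariance(N, rng=None):
    """Random pure-state covariance matrix Gamma_init = O^T Omega O."""
    rng = np.random.default_rng() if rng is None else rng
    O = ortho_group.rvs(2 * N, random_state=rng)
    block = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Omega = np.zeros((2 * N, 2 * N))
    for k in range(N):
        sign = rng.choice([-1.0, 1.0])  # the random factor (-1)^{r_k}
        Omega[2 * k:2 * k + 2, 2 * k:2 * k + 2] = sign * block
    return O.T @ Omega @ O

Gamma0 = random_pure_covariance(4)
assert np.allclose(Gamma0 @ Gamma0, -np.eye(8))  # purity: Gamma^2 = -1
```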
### Entanglement entropy and central charge
Entanglement entropy is a well-studied measure for the amount of quantum correlations in a pure quantum state [29; 30]. It is defined as \(S_{N_{\mathcal{A}}}=-\text{tr}(\hat{\rho}_{\mathcal{A}}\log(\hat{\rho}_{\mathcal{ A}}))\), where \(\mathcal{A}\) describes a subsystem containing \(N_{\mathcal{A}}\) spins. The reduced density matrix \(\hat{\rho}_{\mathcal{A}}=\text{tr}_{\mathcal{B}}(\hat{\rho})\) is obtained by performing a partial trace over the disjoint subsystem \(\mathcal{B}\), with \(N_{\mathcal{B}}=N-N_{\mathcal{A}}\) spins. For the spins numbered as
\(\mathcal{A}=\{1,2,\ldots,N/2\}\) we define the corresponding Majorana operators by \(\mathfrak{M}_{\mathcal{A}}=\{1,2,\ldots,N-1,N\}\). The entanglement entropy is then fully determined by the matrix \(\mathbf{\Gamma}_{\mathcal{A}}=\left.\mathbf{\Gamma}\right|_{\mathfrak{M}_{ \mathcal{A}}}\) and can be calculated with [31, 32, 23]
\[S_{N/2}= \frac{N}{2}\log(2)-\frac{1}{2}\mathrm{tr}\left[(\mathbf{1}_{N}+i \mathbf{\Gamma}_{\mathcal{A}})\log\left(\mathbf{1}_{N}+i\mathbf{\Gamma}_{ \mathcal{A}}\right)\right]. \tag{11}\]
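In practice Eq. (11) is evaluated through the eigenvalues \(\nu_{k}\) of \(i\mathbf{\Gamma}_{\mathcal{A}}\), which lie in \([-1,1]\) for a valid state; a sketch (with the standard \(0\log 0:=0\) regularization, cutoff our choice):

```python
import numpy as np

def half_chain_entropy(Gamma, N):
    """Entanglement entropy of the left half chain, Eq. (11).

    Gamma is the full (2N x 2N) covariance matrix; Gamma_A keeps the
    first N Majorana modes (spins 1, ..., N/2).
    """
    Gamma_A = Gamma[:N, :N]
    nu = np.linalg.eigvalsh(1j * Gamma_A)  # real eigenvalues in [-1, 1]
    x = 1.0 + nu
    terms = np.where(x > 1e-12, x * np.log(np.clip(x, 1e-300, None)), 0.0)
    return 0.5 * N * np.log(2.0) - 0.5 * np.sum(terms)
```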
For short-range 1D systems, the entanglement entropy typically follows two different scalings: for gapped phases, \(S_{N/2}\) saturates to a constant value independent of \(N\) and thus obeys the so-called area law [33]. For gapless phases, the entanglement entropy exhibits the following behavior [34]
\[S_{N/2}=\frac{c}{6}\log(N)+B, \tag{12}\]
where \(c\) is the central charge characterizing the universality class of the system and \(B\) is a non-universal constant. For the nearest-neighbour TFIM at \(\alpha=\infty\) the value of \(c=1/2\) can be found exactly.
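Extracting \(c\) from Eq. (12) is a linear fit in \(\log N\); a sketch with purely hypothetical placeholder entropies (not data from this work):

```python
import numpy as np

# Fit S_{N/2} = (c/6) log(N) + B; the entropies below are placeholders.
Ns = np.array([50, 60, 70, 80, 90, 100])
entropies = np.array([0.91, 0.93, 0.94, 0.95, 0.96, 0.97])  # hypothetical
slope, B = np.polyfit(np.log(Ns), entropies, 1)
c = 6.0 * slope
print(f"central charge c = {c:.3f}, offset B = {B:.3f}")
```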
For long-range systems we need to differentiate between _weak_ long-range interactions, \(\alpha>d=1\), and _strong_ long-range interactions, \(\alpha<d=1\).
For _weak_ long-range interactions and a non-vanishing energy gap we also expect area-law scaling, implying that \(S_{N/2}\) is independent of \(N\). For the case of a vanishing gap one again finds a logarithmic divergence [35, 36, 37, 33] following Eq. (12).
For _strong_ long-range interactions in the AFM-TFIM we instead expect a logarithmic divergence of the entanglement entropy, where \(S_{N/2}\) obeys Eq. (12) and one finds \(c\neq 0\) even in the presence of a non-vanishing gap [38, 39, 40, 41]. In this regime \(c\) is, strictly speaking, not a central charge, but because \(S_{N/2}\) has the same functional dependence as in Eq. (12), we also denote \(c\) as the effective central charge.
## III Results
### Phase diagram
In this section, we show that a computationally inexpensive GHF mean-field approach can reproduce the phase diagram of the AFM-TFIM for a wide range of values \(\alpha\), both in the weak and strong long-range regime, and is able to locate the point of the phase transition for \(\alpha\geq 1\) in excellent agreement with state-of-the-art numerical methods.
As a first benchmark, and in the same spirit as Ref. [6], we map the phase diagram by calculating the entanglement entropy for a wide range of values of \(\alpha\), from _weak_ to _strong_ long-range interactions, and for \(\theta\in(0,\pi/2)\). The values of \(S_{N/2}\) [Eq. (11)] computed with the ZT GHF method are shown in Fig. 1 for \(N=100\). For \(\theta=0\) the interactions vanish and \(S_{N/2}=0\) for all values of \(\alpha\). This represents the phase where all spins are uncorrelated and align with the external magnetic field. However, when \(\theta\) and therefore the AFM interactions are increased, the minimization of the interaction energy competes with the external magnetic field. This is accompanied by an increase of \(S_{N/2}\). Depending on \(\alpha\), there is a critical value \(\theta_{c}(\alpha)\) beyond which the spins favor an AFM order. This transition is highlighted in Fig. 1 by a sharp rise of \(S_{N/2}\). Our findings are in qualitative agreement with the ones obtained in Ref. [6] from DMRG calculations. To compare our results also quantitatively, we will now focus on the _weak_ and _strong_ long-range interaction cases separately.
### Weak long-range interactions
#### iii.2.1 Comparison of GHF and DMRG
For _weak_ long-range interactions, \(\alpha\geq 1\), we show the ground state energy and the entanglement entropy in Fig. 2(a) and Fig. 2(b), respectively.
The solid lines represent the results obtained from the GHF theory while hollow markers represent the results obtained from DMRG simulations. Both simulation methods predict a rather smooth behavior of the energy in Fig. 2(a). For larger values of \(\alpha\geq 1.5\) we find a maximum and a decrease beyond the maximum point. The GHF and DMRG simulations agree perfectly.
The entanglement entropy, visible in Fig. 2(b), shows for all values of \(\alpha\) and both simulation methods a very quick increase and a pronounced singularity. The latter is an indicator for the phase transition point. Beyond this
Figure 1: We plot the entanglement entropy \(S_{N/2}\) from the covariance matrix obtained through the ZT algorithm for a system size \(N=100\), \(\alpha\in\{0.30,0.50,0.75,1.00,\ldots,3.00\}\), and \(\theta\in(0,\pi/2)\). Black squares represent the quantum critical points \(\theta_{c}^{\infty}/\pi\) in the thermodynamic limit, which are listed in Tab. 1, while the dashed line serves as a guide to the eye.
point we find again a decrease of the entanglement entropy. Both methods, GHF and DMRG, are in very good agreement.
#### iii.2.2 Threshold and central charge
In order to find a value for the threshold at \(N\to\infty\), we perform a finite-size scaling analysis. For this we carry out analogous simulations for a range of smaller system sizes \(N\in\{20,30,\ldots,100\}\). Then we numerically find the maximum of the entanglement entropy of the half chain, \(S_{\rm max}=S_{N/2}(\theta_{\rm max})\), and the corresponding value \(\theta_{\rm max}\). The latter is found using the optimizer scipy.optimize.fminbound() which is pre-implemented in Python. For every value of \(\theta\) examined by the optimizer we find the optimal FGS for the corresponding Hamiltonian. Optimizing \(S_{N/2}\) over \(\theta\) is feasible because FGS provide a way of calculating \(S_{N/2}\) in time polynomial in \(N\), see Eq. (11). We then use the following finite-size scaling law [42]
\[\theta_{\rm max}(N)=\theta_{c}^{\infty}+\frac{a}{N}, \tag{13}\]
where \(\theta_{c}^{\infty}\) is the threshold at \(N\to\infty\) and \(a\) is a fitting parameter which determines the finite-size scaling. Fitting Eq. (13) to the numerically obtained data for \(\theta_{\rm max}\) yields \(\theta_{c}^{\infty}\) in the thermodynamic limit. In Fig. 3 we provide examples for the fits that are used to calculate \(\theta_{c}^{\infty}\). We perform these fits for various values of \(\alpha\), and the results for the threshold are collected in Tab. 1. In addition, we have plotted the results for \(\theta_{c}^{\infty}\) in Fig. 1 as black squares, which mark the sudden spike of the entanglement entropy. In Tab. 1 we compare the results obtained from the GHF theory with the ones obtained from LCE calculations [10], DMRG data of Ref. [6] (labeled DMRG) and Ref. [8] (labeled DMRG*). We find in general very good agreement of the thresholds obtained from the different methods.
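The fit of Eq. (13) is likewise linear, here in \(1/N\); the \(\theta_{\rm max}\) values below are hypothetical placeholders standing in for the maximizers found with scipy.optimize.fminbound().

```python
import numpy as np

# Fit theta_max(N) = theta_c^infty + a / N.
Ns = np.array([20, 30, 40, 50, 60, 70, 80, 90, 100])
theta_max = np.pi * np.array([0.370, 0.365, 0.362, 0.360, 0.359,
                              0.358, 0.357, 0.356, 0.356])  # hypothetical
a, theta_c = np.polyfit(1.0 / Ns, theta_max, 1)  # slope a, intercept theta_c
print(f"theta_c^infty / pi = {theta_c / np.pi:.4f}")
```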
Besides the threshold \(\theta_{c}^{\infty}\) we can also extract the scaling of the maximum entropy \(S_{\rm max}=S(\theta_{\rm max})\). At the critical point we use the scaling law [34] given by Eq. (12). We fit Eq. (12) to the maximum values \(S_{\rm max}\) as displayed in Fig. 4(a). From these fits we extract the central charge \(c\), which is shown in Fig. 4(b) as function of \(\alpha\). The central charge is always above the result \(c=1/2\) expected from the short-range TFIM. We also compare our results to different DMRG results of Ref. [6; 8]. We find that the central charges obtained from FGS are systematically smaller than the values provided by Ref. [6] and larger
Figure 3: Example for the fit of Eq. (13) to the value of \(\theta_{\rm max}\) obtained by maximization of the entanglement entropy \(S\) with FGS. The thresholds \(\theta_{c}^{\infty}/\pi\) are shown for the respective cases \(\alpha\in\{1,2,3\}\), see Tab. 1 for more details.
Figure 2: For a system of size \(N=100\) and exponents \(\alpha\in[1,3]\), we plot (a) the energy \(E\) and (b) the entanglement entropy \(S_{N/2}\) (bottom), as defined in Eqs. (6) and (11), obtained from the covariance matrix of the ZT algorithm (solid lines) and compare it to DMRG (hollow markers).
than the DMRG results of Ref. [8]. The central charge \(c\) is monotonically decreasing in the weak long-range regime, but drops at the onset of the strong long-range regime at \(\alpha=1\). In conclusion, we found that the results of the GHF method are in good qualitative and quantitative agreement with state-of-the-art numerical methods for _weak_ long-range interactions.
### Strong long-range interactions
#### iii.3.1 Comparison of GHF and DMRG
We will now shift our focus to the regime of _strong_ long-range interactions, \(\alpha<1\). We first plot the ground state energy and the entanglement entropy in Fig. 5(a) and Fig. 5(b) for three different values of \(\alpha<1\) and a system of size \(N=100\). In Fig. 5(a) we obtain for all three values of \(\alpha\) a monotonically increasing energy with \(\theta\). This is different from the case of _weak_ long-range interactions (see Fig. 2(a)), where we observed a maximum close to the threshold, at least for sufficiently large \(\alpha\geq 1.5\). We also compare our results obtained from FGS with the ones obtained from DMRG. Here, we find that DMRG always predicts a lower ground state energy. The discrepancy of the two methods is even more striking in the entanglement entropy visible in Fig. 5(b). Here, while we still observe very good agreement for \(\alpha=0.75\), we find clear deviations for \(\alpha=0.3\). The DMRG results tend to predict a larger entanglement entropy than the FGS. This is an indicator that FGS are less well-suited for the description of the TFIM for very small \(\alpha\), i.e. very strong long-range interactions.
#### iii.3.2 Violations to the area law
We will now analyze the scaling of the entanglement entropy with the system size. For this we calculate the entanglement entropy for various parameters \(\theta\) and \(\alpha\) and for different numbers of spins \(N\in\{40,50,\ldots,100\}\). We
Figure 4: (a) Extracting the central charge. Using the ZT algorithm for various \(\alpha\), here exemplified by \(\alpha\in\{1,2,3\}\), we plot the entanglement entropy \(S_{N/2}\) against \(\log(N)\). For each \(\alpha\) we perform a linear regression fit, neglecting the system sizes \(N\in\{20,30,40\}\) to mitigate finite size effects. (b) Central charge \(c\) obtained from finite-size scaling up to system size \(N=100\) of FGS evolutions through the ZT algorithm (blue squares) for the AFM long-range TFIM. For comparison, DMRG results from finite-size scaling of system sizes of up to \(N=100\) from Ref. [6] (‘DMRG’, orange square) and [8] (‘DMRG*’, green triangles) are included. The red horizontal line represents the value \(c=1/2\) which describes the Ising universality class. Error bars represent the standard deviation from the linear regression fit.
\begin{table}
\begin{tabular}{|c|c c c c|} \hline \(\alpha\) & \multicolumn{4}{c|}{\(\theta_{c}^{\infty}/\pi\)} \\ & FGS & LCE & DMRG & DMRG* \\ \hline
1.00 & 0.3534(4) & - & 0.3509 & - \\
1.25 & 0.3357(1) & 0.35(5) & - & - \\
1.50 & 0.3218(1) & 0.3213(5) & 0.3226 & - \\
1.75 & 0.3106(1) & - & - & - \\
2.00 & 0.3013(2) & 0.3026(8) & 0.3027 & 0.3021 \\
2.25 & 0.2932(2) & 0.294(4) & - & - \\
2.50 & 0.2865(1) & 0.2871(11) & - & - \\
2.75 & 0.2807(2) & - & - & - \\
3.00 & 0.2760(2) & 0.27722(25) & 0.2782 & - \\ \hline \end{tabular}
\end{table}
Table 1: The critical points \(\theta_{c}^{\infty}/\pi\) obtained from Eq. (13) with FGS and ZT, in comparison to LCE [10], and DMRG [6], DMRG*[8] results. The values are obtained for various exponents \(\alpha\) and for simulations up to \(N=100\) spins. The error indicated in the FGS column in round brackets is the standard deviation for the intersect of a linear regression fit of \(\theta_{\text{max}}/\pi\) as a function of \(1/N\).
then fit the coefficients \(c\) and \(B\) using Eq. (12) to the obtained values of the entanglement entropy. The obtained values of \(c\) are shown in Fig. 6. At this point we remark that the effective central charge \(c\) is calculated far away from the threshold in a phase with a non-vanishing energy gap [6].
For the values \(\alpha<1\), we find \(c=0\) only at \(\theta=0\). For increasing \(\theta\) we find a sharp increase of \(c\). For \(\alpha=0.3\) and \(\alpha=0.5\) we find a maximum and then a decrease again for larger values of \(\theta\). A qualitatively similar behavior has also been observed in Ref. [6]. This has been seen as a violation of the area law, since this logarithmic divergence does not originate from a closing gap in the spectrum of the system [6]. We therefore conclude that the FGS are able to predict this feature, although the quantitative values deviate from those obtained with DMRG.
## IV Summary and Outlook
This work presents an extensive study of the AFM long-range TFIM in both the weak and strong long-range regime using generalized Hartree Fock theory, a mean-field method with low computational cost. We validate our results by comparing the computed energy and entanglement entropy to DMRG. We plot the phase diagram and provide estimates for the location of the critical point of the second-order phase transition through finite-size scaling for \(\alpha\in[1,3]\), and find that they are in excellent agreement with both LCE calculations of Ref. [10] and DMRG simulations of Refs. [6; 8]. At the critical point, we compute the central charge \(c\) of the underlying conformal field theory for \(\alpha\in\{0.3,0.5,1\}\), and find \(c>1/2\) for all values of \(\alpha\). In the strong long-range regime we still find qualitative agreement between FGS and DMRG calculations, with larger quantitative deviations for smaller values of \(\alpha\). Remarkably, GHF can predict the logarithmic violations to the area law in the AFM-TFIM which have previously been studied with DMRG. Based on these findings, we conclude that FGS provide a numerically inexpensive alternative to study the AFM long-range TFIM and that our results are in good agreement with DMRG, the current state-of-the-art numerical method for one-dimensional lattice systems.
All simulations were carried out using a standard laptop computer. Since the dimensionality of the system only appears in the Hamiltonian elements \(h_{p}\) and \(J_{pq}\)
Figure 5: (a) Energy and (b) entanglement entropy obtained from the covariance matrix of the ITE algorithm (solid lines) and DMRG (empty markers) simulations for \(N=100\) and \(\alpha\in\{0.3,0.5,0.75\}\).
Figure 6: Violations to the area law: The effective central charge \(c\) [Eq. (12)] calculated from finite scaling of system sizes \(N\in\{40,50,\ldots,100\}\) for 50 different values deep in the gapped region \(\theta\in(0,\pi/4)\) for the GHF ITE algorithm. Error bars for the standard deviation are also included, but too small to be visible.
it is straightforward to apply FGS to the two- and three-dimensional TFIM. Therefore, it would be interesting to compare FGS simulations with methods that can be applied to the two-dimensional AFM-TFIM [10]. Moreover, while we have focused on the AFM regime, FGS can readily be applied to the ferromagnetic regime \(\theta\in(-\pi,0)\). In this work we have focused on the entanglement entropy; however, pair correlation functions and the entanglement spectrum can be extracted from the covariance matrix as well. FGS can also be used to study dynamics under the evolution of the TFIM, with equations of motion similar to Eq. (8) [23; 24]. In particular, studying the dynamics of the entropy after a quench would offer the possibility to verify the breaking of conformal symmetry in the regime \(\alpha<1\)[43]. From a numerical standpoint, more efficient calculations of the central quantities such as Eq. (7) could lead to dramatic computational speedups. As a possible pathway, it would be interesting to see if sum-identities for Pfaffians such as provided in Refs. [44; 45; 46] could be applied to the TFIM Hamiltonian. Finally, one could study if different spin-to-fermion mappings [47; 48; 49], each resulting in a different form of \(H\) when expressed in fermionic operators, have an effect on the FGS simulations.
###### Acknowledgements.
The authors thank Kai Phillip Schmidt for providing the data for the LCE calculations from Ref.[10] and Luca Tagliacozzo for providing the DMRG data from Ref. [6]. The authors also thank Giovanna Morigi for insightful discussions. M.K. thanks Miguel Angel Martin-Delgado and Frank Wilhelm-Mauch for helpful discussions and support. S.B.J. acknowledges support from the Research Centers of the Deutsche Forschungsgemeinschaft (DFG): Projects A4 and A5 in SFB/Transregio 185: "OSCAR."
|
2305.19150 | The Centralizing Effects of Private Order Flow on Proposer-Builder
Separation | The current Proposer-Builder Separation (PBS) equilibrium has several
builders with different backgrounds winning blocks consistently. This paper
considers how that equilibrium will shift when transactions are sold privately
via order flow auctions (OFAs) rather than forwarded directly to the public
mempool. We discuss a novel model that highlights the augmented value of
private order flow for integrated builder searchers. We show that private order
flow is complementary to top-of-block opportunities, and therefore integrated
builder-searchers are more likely to participate in OFAs and outbid non
integrated builders. They will then parlay access to these private transactions
into an advantage in the PBS auction, winning blocks more often and extracting
higher profits than non-integrated builders. To validate our main assumptions,
we construct a novel dataset pairing post-merge PBS outcomes with realized
12-second volatility on a leading CEX (Binance). Our results show that
integrated builder-searchers are more likely to win in the PBS auction when
realized volatility is high, suggesting that indeed such builders have an
advantage in extracting top-of-block opportunities. Our findings suggest that
modifying PBS to disentangle the intertwined dynamics between top-of-block
extraction and private order flow would pave the way for a fairer and more
decentralized Ethereum. | Tivas Gupta, Mallesh M Pai, Max Resnick | 2023-05-30T15:54:07Z | http://arxiv.org/abs/2305.19150v2 | # The centralizing effects of private order flow on proposer-builder separation
###### Abstract.
The current Proposer Builder Separation (PBS) equilibrium has several builders with different backgrounds winning blocks consistently. This paper considers how this equilibrium will shift when transactions are sold privately via order flow auctions (OFAs) rather than forwarded directly to the public mempool. We discuss a novel model that highlights the augmented value of private order flow for integrated builder searchers. We show that private order flow is complementary to top-of-block opportunities, and therefore integrated builder-searchers are more likely to participate in OFAs and outbid non integrated builders. They will then parlay access to these private transactions into an advantage in the PBS auction, winning blocks more often and extracting higher profits than non-integrated builders. To validate our main assumptions, we construct a novel dataset pairing post-merge PBS outcomes with realized 12-second volatility on a leading CEX (Bi-nance). Our results show that integrated builder-searchers are more likely to win in the PBS auction when realized volatility is high, suggesting that indeed such builders have an advantage in extracting top-of-block opportunities. Our findings suggest that modifying PBS to disentangle the intertwined dynamics between top-of-block extraction and private order flow would pave the way for a fairer and more decentralized Ethereum.
Keywords:Private Order Flow, PBS, OFAs, decentralization.
## 1. Introduction
Most Ethereum blocks today are built by specialized _builders_ rather than validators. In every slot, builders gather transactions and assemble them into blocks. They then compete against each other in an ascending price (English) auction for the right to have the block they assembled proposed by the proposer. Whichever builder bids the highest wins the Proposer-Builder Separation (PBS) auction, and pays their bid to the proposer.
The right to build a block is valuable for several reasons most obviously because users pay _tips_ for inclusion. Presently these tips make up only a small portion of the total value from building a block. A majority of the value from building a block comes from the builder exploiting _MEV opportunities_. MEV (Maximal Extractable Value) refers to additional value that can be exploited from strategically reordering or including specific transactions.
Current MEV opportunities on Ethereum can be broadly segmented into two categories: _top-of-block_ and _block body_. Let us describe each in turn. Top-of-block opportunities are primarily CEX/DEX arbitrage: exploiting price divergences of a token between a centralized exchange (CEX) and some on-chain Decentralized Exchange operated by a smart contract (DEX, e.g. Uniswap). Intuitively, successfully exploiting such a price divergence requires both priority access to the first few transactions in the block on-chain, and also high quality execution on the centralized exchange. The latter requires high-frequency trading (HFT) strategies and low CEX transaction fees.
Block-body opportunities are typically frontrunning attacks that involve sandwiching user transactions or executing user orders against each other to cut out the liquidity providers. The value of the Block-body is primarily dictated by access to transactions. Historically, most transactions have been forwarded to the public mempool, meaning all block builders have access to the same transactions; however, some builders have access to private order flow which is not available in the public mempool. The availability of private order flow is likely to be further supplemented in the near future by the advent of order flow auctions (OFAs), venues where order flow providers (wallets) sell the exclusive right to execute their users' transactions.
This paper focuses on the complementarity between top-of-block and block body opportunities. In particular, the PBS auction makes no distinction between top-of-block and block body, instead, the right to build the entire block is sold wholesale. This means that an advantage in top-of-block extraction capability can help secure value from the body of the block and vice versa.
This paper makes two contributions. First, we demonstrate empirically that builders operated by high-frequency trading firms are superior at capturing the top of block opportunities. Second, we construct a simple model of proposer-builder separation and
demonstrate that, in this model, private order flow is more valuable to vertically integrated builder-searchers than to non-integrated builders. Our theoretical results therefore imply that private order flow markets are likely to be dominated by these firms.
Let us now describe our analysis and results in a little more detail. The main assumption in our subsequent theoretical analysis is that some bidders are stochastically advantaged at extracting top-of-block opportunities. We validate this assumption empirically. In particular, we construct a unique dataset that combines roughly a month of PBS auction outcome data, i.e., which builder won which blocks over the course of a month, paired with detailed price data on a major CEX, namely Binance. Our empirical strategy posits that the realized 12-second volatility of ETH on Binance is plausibly exogenous. Therefore the realized volatility will generate blocks that have varying top-of-block value. A block in which the price on Binance is flat over the previous 12 seconds will have almost no top-of-block value, meaning any advantage that builder-searchers have at extracting from the top of the block will be irrelevant. In contrast, if the price shift is large in that period, winning the block should be far more valuable to builders who excel at top-of-block extraction. Our results show that when the absolute log price change on Binance in a 12-second period is large, builder-searchers operated by HFTs are far more likely to win. These results are statistically significant, rejecting the null hypothesis that all builders are roughly equivalent in their top-of-block extraction capability.
Having demonstrated this, we turn to a theoretical model which explores the centralizing effects of this top-of-block advantage on the equilibrium of the PBS auction. Our model considers a simple abstraction where a block can contain at most two transactions. In the first stage, the builders gather block body opportunities: this can be either from the public mempool (which models the current state of affairs) or by purchasing them in an OFA (the plausible scenario we are moving towards). In the second stage the builders combine their block body transactions with their top-of-block transactions to form a block and then compete with each other in a first price auction for the right to append their block onto the chain. Our results show that advantages in top of block extraction capabilities are magnified when private order flow is available, in comparison with the current scenario where block body opportunities are available in the public mempool or otherwise shared with all builders. In particular, a builder with an advantage, be it deterministic or stochastic, at extracting top-of-block opportunities, will win the OFA. With access to the private transactions, it will then win the PBS auction more often, and have higher profits, than it would have in the counterfactual world without OFAs/ private transactions.
Our results suggest a troubling centralizing tendency of PBS when private order flow is available via an Order Flow Auction, a setting we are moving towards. In particular,
a small number of integrated builder-searchers who have top-of-block extraction capabilities will dominate both the OFAs and the downstream PBS auction. This contrasts with the popular idea that the OFAs and the PBS auction will squeeze proposer profits between the validators (who earn the PBS auction revenue) and the order flow providers (who earn the OFA revenue). This also contrasts with the original goal of PBS which was to keep block building decentralized.
Our results therefore provide a further impetus for various initiatives to "unbundle" PBS: unbundling PBS in some form is necessary to prevent concentration into a few integrated builder-searchers. Previous work has focused on limiting the power of builders to build blocks by imposing certain constraints on them: see e.g. the recent works of Buterin (2022), Monnot (2022). There have also been studies on the possibility of implementing blockspace futures (see, e.g., Ma (2022)), which would effectively partially disintermediate the builder by guaranteeing inclusion for some transactions.
## 2. Background
The easiest way to understand the top-of-block, block-body distinction is to look at the blocks themselves. CEX/DEX arb transactions are easily identifiable since they are large directional trades, typically in the first few slots of the block.
These CEX/DEX arb transactions are usually executed by an MEV bot contract that disproportionately lands transactions in blocks associated with the corresponding builder. For example, block 17195495,1 contains 182 transactions. The first 37 appear to be CEX/DEX arb transactions from an MEV bot with the address 0xA69b...e78C.2 These are large swaps on major pools (Uniswap, Sushiswap, etc.). For example, the first transaction swaps 4.265 million USDC for 2168 wETH 3 on the Uniswap v3 0.05% fee pool 4. The subsequent 36 are also similarly large swaps, each of the order of several hundred wETH.
Footnote 1: See, e.g., [https://etherscan.io/block/17195495](https://etherscan.io/block/17195495).
Footnote 2: 0xA69babEF1cA67A37Fdaf7a485DIFF3382056e78C
Footnote 3: [https://eigenphi.io/mev/eigentx/0xca8ec486cb4d066b464104c1b91b3e253218dac6e9570408b66962](https://eigenphi.io/mev/eigentx/0xca8ec486cb4d066b464104c1b91b3e253218dac6e9570408b66962)
Note that these CEX/DEX arbitrage transactions are not found on all blocks--for example, the preceding block, 17195494, does not contain such transactions. They typically only appear when there is high volatility in the preceding 12 seconds, and even then, the sizes tend to be much smaller than this selected block in most cases. For example, in the next block 17195496, there is only 1 CEX/DEX arb transaction from the same bot 5 and the volume traded is only 1.2 Million USDC for 600 WETH.
In the block after that, block 17195497, the same bot has a single CEX/DEX arb transaction, swapping 272k USDT for 138 ETH6. After this transaction, the rest of the block is filled with block-body opportunities. transactions at indexed 1-4 and 11-14 are sandwich attacks7.
Footnote 6: [https://eigenphi.io/mev/eigentx/0x95b1e7dc5f54a5f6ca02be2e17e26e2c73ececac374f88e7451691e88dfcd8fec](https://eigenphi.io/mev/eigentx/0x95b1e7dc5f54a5f6ca02be2e17e26e2c73ececac374f88e7451691e88dfcd8fec)
Block 17195497 in particular shows that builders can exploit both top-of-block and block-body opportunities in the same block. This is an important aspect of our model, and drives our results.
## 3. Data and Empirical Analysis
The driving assumption in the theoretical analysis in Section 4 is that some builders are superior at extracting value from the top-of-block opportunities. In this section, we provide empirical evidence for this assumption. We use realized price-volatility on the CEX as a plausibly exogenous shifter that affects top-of-block but not block body opportunities. In particular, a period with high price movement is one with large CEX-DEX arbitrage opportunity, while a period where prices are relatively flat is one without such opportunities. The null hypothesis (of builders having homogeneous top-of-block extraction capability) is that the identity of PBS auction winners should not be affected by exogenous realized volatility in the 12 seconds before the block is built.
To test our hypothesis/ reject the null, we obtained block-level data from Etherscan for a period corresponding to roughly a month from April 1st, 2023 to May 1st, 2023 (ETH blocks 16950609 to 17150609). We combined this data with detailed price data of ETHUSD from a leading centralized exchange (Binance) in the 12 seconds before each block was built. Price movement in this window allows us to plausibly estimate the amount that builders are able to earn through arbitrage with central exchanges for that block.
Once we merged the block-level and volatility data, clear patterns in the builder-volatility relationship emerged. Three builders--Manta, Rsync Builder, and Beaver Build--were identified before the analysis as likely to be better at extracting top-of-block MEV due to rumored connections with high-frequency trading firms. We show how realized volatility is related to whether or not one of these three builders constructed the block in Figure 1.
We assigned each block to a quintile of the log price change in the 12-second period before the block was built, our measure of volatility. Manta had 44.5% of its blocks in the most volatile quintile, and only 22% in the two least volatile quintiles. Manta
is a special case among the HFT builders since it does not accept external bundles; therefore its only advantage in winning blocks is top-of-block extraction capability. Rsync and Beaver Build both accept external bundles, so the relationship is less stark for them; however, as we will demonstrate, they still rely heavily on their top-of-block extraction capabilities to secure blocks. When analyzing the most volatile blocks from each of the HFT builders, there are many large trades with Uniswap v3 pools at the top of each block. In general, the larger the realized volatility in the preceding 12 seconds, the larger these trades were. In some cases, when the realized volatility was most extreme, blocks included more than 30 apparent CEX/DEX arb trades, many with notional sizes in the millions of USD. We show the blocks of several notable builders and their respective volatilities in Figure 2.
To formalize these findings, we model the relationship between volatility and HFT builders winning the PBS auction. First, we created a binary variable denoting whether a builder was one of the three HFT builders (Beaver, Manta, and Rsync). We then regressed this HFT builder dummy on realized volatility using a logistic regression:
\[P(\text{HFT Builder}=1|\text{Log Price Change})=\frac{1}{1+e^{-(\beta_{0}+\beta_{1 }\cdot\text{Log Price Change})}}\]
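A regression of this form can be reproduced with standard tooling; the sketch below uses `statsmodels` on a synthetic dataframe whose column names (`log_price_change`, `hft_builder`) are our own placeholders, not the paper's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data: one row per block.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "log_price_change": np.abs(rng.normal(0.0, 2e-3, 1000)),
    "hft_builder": rng.integers(0, 2, 1000),  # 1 if Beaver/Manta/Rsync won
})
X = sm.add_constant(df["log_price_change"])
model = sm.Logit(df["hft_builder"], X).fit(disp=0)
print(model.summary())
```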
We find that the coefficient for the log price change predictor variable is 2055.151, with a standard error of 47.584. The significant positive relationship indicates that as the log price change increases, the odds of HFT builders winning the block also increases. We also see a constant value of -0.821. When the Log10 Price Change is equal to 0 (i.e., no
Figure 1. HFT versus Other Builders’ Block Volatility
change) in the period before the block, the log odds of an HFT builder winning the block are -0.821. This corresponds to a probability of 0.306. If the realized volatility was 1%, the probability that an HFT builder won the block was 0.775. When the realized volatility was 2%, the probability that an HFT builder won the block was 0.964.
Through this model we find that the likelihood of HFT builders winning the block grows as realized volatility increases. This suggests that these builders are much better than the rest of the field at extracting top-of-block value.
To identify the specific builders that are better at extracting top-of-block MEV, we construct a multi-class logistic regression:
\begin{table}
\begin{tabular}{l c} \hline \hline & \multicolumn{1}{c}{_HFT builder:_} \\ \hline & Model \\ & (1) \\ \hline Log10 Price Change & 2055.151*** \\ & (47.584) \\ const & -0.821*** \\ & (0.006) \\ \hline Observations & 199,770 \\ \hline _Note:_ & *p\(<\)0.1; **p\(<\)0.05; ***p\(<\)0.01 \\ \end{tabular}
\end{table}
Table 1. Logistic Regression Results
Figure 2. Top Builders’ Block Volatility
\[\log\left(\frac{P(\text{Builder}_{i})}{P(\text{Builder}_{\text{ref}})}\right)= \beta_{0i}+\beta_{1i}(\text{Log Price Change})\]
We identified seven HFT or high-volume builders: BeaverBuild, Blocknative, Builder 69, Flashbots, Eden Network, Manta, and Rsync Builder. This model analyzes how the volatility between blocks impacts the probability of one of these entities becoming the block winner. This change in probability is measured relative to the likelihood of any other builder not in our set winning the block.
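A multinomial fit of this kind can be set up as follows; the builder labels and data are again synthetic placeholders, with "other" (any builder outside the named set) as the reference class.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
builders = ["other", "beaverbuild", "blocknative", "builder69",
            "flashbots", "eden", "manta", "rsync"]
df = pd.DataFrame({
    "log_price_change": np.abs(rng.normal(0.0, 2e-3, 5000)),
    "builder": rng.choice(builders, 5000),
})
# Category code 0 ("other") serves as the MNLogit baseline class.
y = pd.Categorical(df["builder"], categories=builders)
X = sm.add_constant(df["log_price_change"])
mnl = sm.MNLogit(y.codes, X).fit(disp=0)
print(mnl.summary())
```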
The resulting model coefficients for each builder in Table 2 estimate how a one-unit increase in the Log Price Change before a block impacts the log of the ratio between the probability of that block being won by that particular builder and the probability of it being won by a builder in the reference class. While these coefficients are more difficult to interpret than those of the simple logistic model with HFT builders, our findings show several significant relationships between increased volatility before a block and that block being won by a particular builder. Seeing the scale of the coefficients for particular builders relative to each other highlights which builders can better extract top-of-block value.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **coeff** & **std err** & \(z\) & **P \(>|z|\)** & **[0.025** & **0.975**] \\ \hline _Beaver Build_ & & & & & \\ const & -0.4173 & 0.009 & -46.231 & 0.000 & -0.435 & -0.400 \\ Log Price Change & 618.9638 & 31.098 & 19.903 & 0.000 & 558.012 & 679.916 \\ _Blocknative_ & & & & & \\ const & -2.4854 & 0.020 & -127.105 & 0.000 & -2.524 & -2.447 \\ Log Price Change & 749.1785 & 59.527 & 12.586 & 0.000 & 632.508 & 865.849 \\ _Builder 69_ & & & & & \\ const & 0.0124 & 0.008 & 1.471 & 0.141 & -0.004 & 0.029 \\ Log Price Change & -212.6819 & 32.966 & -6.452 & 0.000 & -277.294 & -148.070 \\ _Flashbots_ & & & & & \\ const & -0.4552 & 0.010 & -47.287 & 0.000 & -0.474 & -0.436 \\ Log Price Change & -181.4047 & 37.588 & -4.826 & 0.000 & -255.076 & -107.733 \\ _Manta_ & & & & & \\ const & -3.2367 & 0.024 & -137.687 & 0.000 & -3.283 & -3.191 \\ Log Price Change & 1685.3602 & 45.448 & 37.083 & 0.000 & 1596.284 & 1774.436 \\ _Rsync Builder_ & & & & & \\ const & -0.6843 & 0.010 & -71.681 & 0.000 & -0.703 & -0.666 \\ Log Price Change & 926.9315 & 30.966 & 29.934 & 0.000 & 866.239 & 987.624 \\ \hline \hline \end{tabular}
\end{table}
Table 2. MNLogit Regression Results
## 4. Model and Theoretical Analysis
We will study a simple static model for a single slot: A block consists of at most 2 transactions. There is a single available block body transaction which can generate MEV (for example a swap transaction that can be sandwiched). Further, there is a single top-of-block CEX/DEX arbitrage opportunity. There are two builders, \(A\) and \(B\). Each of these builders competes in a first-price PBS auction to have their block included.
We consider two scenarios. Scenario 1 models the current situation with little/ no private order flow, while scenario 2 models a setting with private order flow.
**Scenario 1**: In this setting the block body transaction is available to both builders, for example as a bundle from a third (unmodeled) searcher. Both builders therefore have the same value for this transaction, equaling the searcher's tip which is paid to the including builder--we will denote this value as \(v_{T}\). At the time of the PBS auction, each builder \(x\in\{A,B\}\) also sees their value \(v_{x}\) for the CEX/DEX arb. They then bid in the PBS auction, with the winning bidder's block being included.
**Scenario 2**: In this setting the block body transaction is available for sale at an OFA that runs prior to the PBS auction. The value of the transaction for sale is \(v_{T}\), commonly known among the two bidders. For simplicity we will first assume that this auction runs as a second-price auction, i.e. builders submit bids and the winner (highest bid) pays the second-highest bid. In this setting, the loser of the auction does not have access to the block body transaction. At the time of the PBS auction, each builder \(x\in\{A,B\}\) also sees their value \(v_{x}\) for the CEX/DEX arb. They then bid in the PBS auction, with the winning bidder having their block included.
**Assumption 1**.: _We will assume that for each \(x\in\{A,B\}\), \(v_{x}\sim F_{x}\) where \(F_{x}\) is a CDF on \([0,1]\), and that \(v_{A}\perp v_{B}\), i.e. \(A\) and \(B\) are independently drawn._
_Further we assume that \(F_{A}\succ_{FOSD}F_{B}\), i.e., builder \(A\) is stochastically better at \(CEX/DEX\) arb than builder \(B\)._
Our results in this section show that the outcomes in Scenario 2, i.e., the scenario with OFAs and private order flow, overly advantage builder \(A\) over builder \(B\) relative to scenario 1.
### Baseline Results
The basic idea is straightforward and can be easily described in a setting where \(v_{A}\) and \(v_{B}\) are deterministic (or equivalently, \(F_{A}\) and \(F_{B}\) are degenerate distributions). Without loss of generality, assume that \(v_{A}>v_{B}\).
**Theorem 1**.: _In Scenario 1, suppose that \(v_{x}\) for each of \(x\in A,B\) is common knowledge among the builders before bidding in the PBS auction. Then the equilibrium of the PBS auction is that \(A\) wins the PBS auction at price \(v_{T}+v_{B}\). Their total profit is therefore \(v_{A}-v_{B}\)._
In short, the Theorem asserts that the outcome in Scenario 1 allocates blockspace efficiently. To see why, note that the block body transaction is available to both builders and has the same value for each, so the sole differentiation is in terms of their value for the top-of-block (CEX/DEX arb) opportunity. The value of each bidder \(x\) for winning the auction is therefore \(v_{T}+v_{x}\). In the standard equilibrium of an English auction with complete information, the outcome is efficient, with the high-value bidder winning at the second-highest price.
As a first benchmark to compare this against, suppose in scenario 2 the builders know their value for the CEX/DEX opportunity before the OFA begins.
**Theorem 2**.: _In Scenario 2, suppose that the value \(v_{x}\) for each builder \(x\in A,B\) for top of block is common knowledge among them before bidding in the OFA. Then the overall outcome of the OFA followed by the PBS auction is that \(A\) wins both auctions at total price \(\max(v_{T}+2v_{B}-v_{A},v_{B})\). Their total surplus is therefore \(\min(2(v_{A}-v_{B}),v_{A}+v_{T}-v_{B})\)._
Proof.: The proof follows straightforwardly from backward induction. We can work out the willingness to pay of each party for the transaction in the OFA based on the difference in profit in the PBS auction conditional on who wins the OFA. There are two mutually exclusive and exhaustive cases:
**Case 1:**\(v_{A}>v_{B}+v_{T}\). In this case, note that the winner of the PBS auction is \(A\) regardless of who wins the OFA (since we already have that \(v_{A}>v_{B}\)). Therefore \(B\) gets a 0 surplus regardless. As a result, we have that \(B\) bids 0 in the OFA and therefore \(A\) wins the transaction. Then, the PBS auction clears at a price of \(v_{B}\) with \(A\) winning the block, and the total surplus of \(A\) is \(v_{A}+v_{T}-v_{B}\).
**Case 2:**\(v_{A}\leq v_{B}+v_{T}\). In this case, the winner of the OFA will go on to win the PBS auction (since the value of the transaction \(v_{T}\) combined with their own value for the top-of-block opportunity will be larger than the competitor's value for the top-of-block opportunity). Note that if \(A\) wins the OFA, then they will therefore win the PBS auction at a price of \(v_{B}\) for a net surplus of \(v_{A}+v_{T}-v_{B}\) (and \(B\) will make a total surplus of 0). Conversely, if \(B\) wins the OFA, they will win the PBS auction for a price of \(v_{A}\), with a net surplus of \(v_{B}+v_{T}-v_{A}\) (and \(A\) will make a total surplus of 0).
Therefore, \(A\)'s willingness to pay for the transaction in the OFA is \(v_{A}+v_{T}-v_{B}\), whereas \(B\) is willing to pay \(v_{B}+v_{T}-v_{A}<v_{A}+v_{T}-v_{B}\) (since \(v_{A}>v_{B}\) by assumption). As a result, the OFA will see \(A\) winning at a price of \(v_{B}+v_{T}-v_{A}\). Combining these (the outcomes of the PBS auction above and the OFA here) we have the desired result.
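The case analysis can be checked mechanically; the following sketch (all names hypothetical) encodes both cases and verifies the price and surplus expressions of Theorem 2 for sample values.

```python
def scenario2_outcome(v_A, v_B, v_T):
    """Backward-induction outcome of the OFA + PBS game (assumes v_A > v_B)."""
    assert v_A > v_B
    if v_A > v_B + v_T:
        ofa_price = 0.0              # Case 1: B bids 0 in the OFA
    else:
        ofa_price = v_B + v_T - v_A  # Case 2: B's willingness to pay
    pbs_price = v_B                  # A wins the PBS auction in both cases
    surplus_A = v_A + v_T - ofa_price - pbs_price
    return ofa_price + pbs_price, surplus_A

# One example per case of the proof.
for v_A, v_B, v_T in [(0.6, 0.5, 0.3), (0.9, 0.5, 0.3)]:
    total, surplus = scenario2_outcome(v_A, v_B, v_T)
    assert abs(total - max(v_T + 2 * v_B - v_A, v_B)) < 1e-12
    assert abs(surplus - min(2 * (v_A - v_B), v_A + v_T - v_B)) < 1e-12
```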
These results already exhibit the 'centralization effects' of private order flow on proposer-builder separation: every additional dollar of advantage a builder has in top-of-block extraction translates into more than a dollar of surplus (for a small advantage, up to two dollars). In short, a builder who is already advantaged has a steeper incentive to invest in improving their advantage.
### Stochastic Top-of-Block Opportunities
Our results carry through, _mutatis mutandis_, for a more realistic model where at the time of bidding in the OFA, builders do not know the value of the top of block opportunity. Of course this applies solely to Scenario 2. In this case, builder \(x\) at the stage of the OFA bids on the understanding that their top-of-block opportunity will be revealed to them later, and is distributed as \(v_{x}\sim F_{x}\). At the conclusion of the OFA, the realized top-of-block opportunity for each builder is revealed to them, and is modeled as a private value.8
Footnote 8: It may be interesting to consider the case where this value is a signal of expected top-of-block value. In this case, we may be in a setting of interdependent values as in Milgrom and Weber (1982).
Suppose builder A wins the OFA. In this case, their value for the block is \(v_{T}+v_{A}\), while builder B's value for the block is \(v_{B}\). Conversely, if builder B wins the OFA, builder A's value for the block is \(v_{A}\) while builder B's value for the block is \(v_{B}+v_{T}\).
**Theorem 3**.: _Builder A's value for the transaction in the OFA, \(v_{T,A}\), can be written as:_
\[v_{T,A}=\int_{0}^{\infty}\int_{0}^{v_{A}}F_{B}(v+v_{T})-F_{B}(v-v_{T})dvdF_{A} (v_{A}),\]
_with \(v_{T,B}\) defined analogously._
**Proof.** Note that conditional on builder \(A\)'s value for the top-of-block slot being \(v_{A}\), their interim probability of winning the block, having won the OFA, is
\[x_{A}^{\text{win}}(v_{A})=F_{B}(v_{A}+v_{T}),\]
and analogously their probability of winning the block having lost the OFA is
\[x_{A}^{\text{lose}}(v_{A})=F_{B}(v_{A}-v_{T}).\]
Therefore, by the revenue equivalence theorem (see e.g., Proposition 3.1 of Krishna (2009)), the expected surplus of builder \(A\) in the PBS auction, conditional on the outcome of the OFA, with a value of \(v_{A}\) for the top of the block, can be written as
\[s_{A}^{\text{win}}(v_{A}) =\int_{0}^{v_{A}}x_{A}^{\text{win}}(v)dv=\int_{0}^{v_{A}}F_{B}(v+ v_{T})dv,\] \[s_{A}^{\text{lose}}(v_{A}) =\int_{0}^{v_{A}}x_{A}^{\text{lose}}(v)dv=\int_{0}^{v_{A}}F_{B}(v- v_{T})dv\]
Finally, the ex-ante expected surplus from winning can be written as:
\[S_{A}^{\text{win}}=\int_{0}^{\infty}s_{A}^{\text{win}}(v_{A})dF_{A}(v_{A})=\int_{ 0}^{\infty}\int_{0}^{v_{A}}F_{B}(v+v_{T})dvdF_{A}(v_{A}),\]
and expected surplus from losing as,
\[S_{A}^{\text{lose}}=\int_{0}^{\infty}s_{A}^{\text{lose}}(v_{A})dF_{A}(v_{A})= \int_{0}^{\infty}\int_{0}^{v_{A}}F_{B}(v-v_{T})dvdF_{A}(v_{A}).\]
Therefore the effective valuation of builder \(A\) for winning the transaction in the OFA, \(v_{T,A}\), equals \(S_{A}^{\text{win}}-S_{A}^{\text{lose}}\). Analogously, the valuation of builder \(B\) for the transaction in the OFA equals \(S_{B}^{\text{win}}-S_{B}^{\text{lose}}\).
Note that
\[v_{T,A} =S_{A}^{\text{win}}-S_{A}^{\text{lose}},\] \[=\int_{0}^{\infty}\int_{0}^{v_{A}}F_{B}(v+v_{T})-F_{B}(v-v_{T})dvdF _{A}(v_{A}),\]
and, analogously,
\[v_{T,B} =S_{B}^{\text{win}}-S_{B}^{\text{lose}},\] \[=\int_{0}^{\infty}\int_{0}^{v_{B}}F_{A}(v+v_{T})-F_{A}(v-v_{T})dvdF _{B}(v_{A}),\]
as desired.
Finally, note that under various assumptions, it can be shown that \(v_{T,A}>v_{T,B}\).
**Corollary 1**.: _Suppose \(v_{T}\) is small enough so that a Taylor series approximation is appropriate. Then \(v_{T,A}\geq v_{T,B}\)._
**Proof.** To see this note that
\[v_{T,A} =\int_{0}^{\infty}\int_{0}^{v_{A}}F_{B}(v+v_{T})-F_{B}(v-v_{T})dvdF _{A}(v_{A}),\] \[\approx\int_{0}^{\infty}\int_{0}^{v_{A}}2v_{T}f_{B}(v)dvdF_{A}(v _{A})\] \[=2v_{T}\int_{0}^{\infty}F_{B}(v_{A})f_{A}(v_{A})dv_{A}.\]
By an analogous argument,
\[v_{T,B}\approx 2v_{T}\int_{0}^{\infty}F_{A}(v_{B})f_{B}(v_{B})dv_{B}.\]
Since \(F_{A}\succ_{\text{FOSD}}F_{B}\), we have that for all \(v\), \(F_{A}(v)\leq F_{B}(v)\). Therefore we have that,
\[v_{T,A}\approx 2v_{T}\int_{0}^{\infty}F_{B}(v_{A})f_{A}(v_{A})dv_{A}\]
\[\geq 2v_{T}\int_{0}^{\infty}F_{A}(v_{A})f_{A}(v_{A})dv_{A} \text{(since $F_{A}\succ_{\text{FOSD}}F_{B}$)},\] \[\geq 2v_{T}\int_{0}^{\infty}F_{A}(v_{A})f_{B}(v_{A})dv_{A} \text{(since $F_{A}\succ_{\text{FOSD}}F_{B}$)},\] \[\approx v_{T,B}. \blacksquare\]
Note that this corollary already implies that even though the top-of-block opportunities \(v_{A}\) and \(v_{B}\) are stochastic, builder \(A\)_always_ wins the OFA, since it expects better (stochastic) top-of-block opportunities. Using this, we can compare winning probabilities and builder profits across the two scenarios. We summarize our results with the following theorem:
**Theorem 4**.: _Under Scenario 1, builder \(A\) wins the block with probability \(\int_{0}^{\infty}F_{B}(v_{A})f_{A}(v_{A})dv_{A}\); whereas under Scenario 2, with OFAs and private transactions, builder \(A\)'s winning probability increases to \(\int_{0}^{\infty}F_{B}(v_{A}+v_{T})f_{A}(v_{A})dv_{A}\)._
_Under Scenario 1, the total expected profit of builder \(A\) is_
\[\int_{0}^{\infty}\int_{0}^{v_{A}}F_{B}(v)dvdF_{A}(v_{A}).\]
_Under Scenario 2, the total expected profit of builder \(A\) is_
\[(v_{T,A}-v_{T,B})+\int_{0}^{\infty}\int_{0}^{v_{A}}F_{B}(v+v_{T})dvdF_{A}(v_{ A}).\]
By observation, the profits of builder \(A\) have gone up: firstly, they make a positive profit in the OFA, since they value the transaction more than the second-price bid they pay. Secondly, having won the OFA, they are advantaged in the PBS auction (since they have access to the private transaction to increase their value for the block, and builder \(B\) does not). Further results require us to make a functional form assumption on \(F_{A}\) and \(F_{B}\), which we do in the next section.
### An Analytic Example
To better understand the effects on surplus and profits, we use the formulas above in an analytic example that admits simple comparative statics. To that end, suppose both \(v_{A}\) and \(v_{B}\) are exponentially distributed, with parameters \(\lambda_{A}\) and \(\lambda_{B}\) respectively. Since, by assumption, \(A\) is the stronger builder in terms of first-order stochastic dominance of top-of-block opportunities, we must have \(\lambda_{A}<\lambda_{B}\).
Note that, for each \(x\in\{A,B\}\)
\[F_{x}(v) =1-\exp\{-\lambda_{x}v\},\] \[f_{x}(v) =\lambda_{x}\exp\{-\lambda_{x}v\}.\]
Therefore, substituting in, we have that:
\[v_{T,A} =\int_{0}^{\infty}\int_{0}^{v_{A}}F_{B}(v+v_{T})-F_{B}(v-v_{T})dvdF_ {A}(v_{A}),\] \[=\int_{0}^{\infty}H_{B}(v_{A})dF_{A}(v_{A}),\]
where
\[H_{B}(v_{A})=\begin{cases}\int_{v_{T}}^{v_{A}}F_{B}(v+v_{T})-F_{B}(v-v_{T})dv+ \int_{0}^{v_{T}}F_{B}(v+v_{T})dv&\text{ if }v_{A}>v_{T}\\ \int_{0}^{v_{A}}F_{B}(v+v_{T})dv&\text{ o.w.}\end{cases}\]
A mechanical but involved calculation delivers that:
\[v_{T,A}=\frac{\lambda_{A}(1-\exp(-v_{T}\lambda_{B}))+\lambda_{B}(1-\exp(-v_{T} \lambda_{A}))}{(\lambda_{A}^{2}+\lambda_{A}\lambda_{B})}\]
The expression for \(v_{T,B}\) is analogous, with the roles of \(\lambda_{A}\) and \(\lambda_{B}\) exchanged. Further, it is straightforward to verify that \(v_{T,A}>v_{T,B}\) (since \(\lambda_{A}<\lambda_{B}\) by assumption), as desired.
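As a sanity check on this calculation, the closed form can be compared against direct numerical integration of the double integral defining \(v_{T,A}\); the following Python sketch does so, with illustrative (hypothetical) parameter values.

```python
import numpy as np
from scipy import integrate

lam_a, lam_b, v_t = 1.0, 2.0, 1.0   # illustrative values with lam_a < lam_b

F_b = lambda v: (v > 0) * (1 - np.exp(-lam_b * np.maximum(v, 0.0)))
f_a = lambda v: lam_a * np.exp(-lam_a * v)

# inner integral over v for a fixed realization v_a, then outer expectation over v_a
inner = lambda va: integrate.quad(lambda v: F_b(v + v_t) - F_b(v - v_t), 0.0, va)[0]
numeric = integrate.quad(lambda va: inner(va) * f_a(va), 0.0, 50.0)[0]

closed = (lam_a * (1 - np.exp(-v_t * lam_b))
          + lam_b * (1 - np.exp(-v_t * lam_a))) / (lam_a**2 + lam_a * lam_b)

print(numeric, closed)  # both ≈ 0.7096
```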
Substituting into the formulas in Theorem 4, we have that the probability of \(A\) winning rises to
\[1-\frac{\exp\{-v_{T}\lambda_{B}\}\lambda_{A}}{\lambda_{A}+\lambda_{B}}>\frac{ \lambda_{B}}{\lambda_{A}+\lambda_{B}}\]
where the right hand side is the probability of \(A\) winning in Scenario 1.
Finally, note that under Scenario 1, the total expected profit of builder \(A\) is
\[\int_{0}^{\infty}\int_{0}^{v_{A}}F_{B}(v)dvdF_{A}(v_{A})=\frac{ \lambda_{B}}{\lambda_{A}(\lambda_{A}+\lambda_{B})}.\]
By comparison, under Scenario 2, the total expected profit of builder \(A\) is:
\[(v_{T,A}-v_{T,B})+\int_{0}^{\infty}\int_{0}^{v_{A}}F_{B}(v+v_{T} )dvdF_{A}(v_{A}),\] \[= \frac{(\lambda_{B}-\lambda_{A})(\lambda_{A}(1-\exp(-v_{T}\lambda _{B}))+\lambda_{B}(1-\exp(-v_{T}\lambda_{A})))}{\lambda_{A}\lambda_{B}( \lambda_{A}+\lambda_{B})}+\frac{\lambda_{B}+\lambda_{A}(1-\exp(-v_{T}\lambda _{B}))}{\lambda_{A}(\lambda_{A}+\lambda_{B})}.\]
Therefore the difference in profit between the two scenarios is:
\[\frac{(\lambda_{B}-\lambda_{A})(\lambda_{A}(1-\exp(-v_{T}\lambda _{B}))+\lambda_{B}(1-\exp(-v_{T}\lambda_{A})))}{\lambda_{A}\lambda_{B}( \lambda_{A}+\lambda_{B})}+\frac{(1-\exp(-v_{T}\lambda_{B}))}{(\lambda_{A}+ \lambda_{B})}\]
Note that since each of the terms is positive, so is the sum; that is, builder \(A\)'s total profit increases in Scenario 2 relative to Scenario 1.
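The winning-probability and profit comparisons of Theorem 4 can likewise be evaluated in closed form; a small sketch (again with hypothetical \(\lambda\) values, and \(v_{T}\) normalized to 1) is below.

```python
import numpy as np

lam_a, lam_b, v_t = 1.0, 2.0, 1.0   # hypothetical parameters

p_win_s1 = lam_b / (lam_a + lam_b)                              # Scenario 1
p_win_s2 = 1 - np.exp(-v_t * lam_b) * lam_a / (lam_a + lam_b)   # Scenario 2

num = lam_a * (1 - np.exp(-v_t * lam_b)) + lam_b * (1 - np.exp(-v_t * lam_a))
profit_gap = ((lam_b - lam_a) * num / (lam_a * lam_b * (lam_a + lam_b))
              + (1 - np.exp(-v_t * lam_b)) / (lam_a + lam_b))

print(p_win_s1, p_win_s2, profit_gap)  # 0.667, 0.955, 0.643
```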
These comparative statics are illustrated in Figure 3. We normalize \(v_{T}\) to 1, and capture the advantage of builder \(A\) by varying \(\frac{\lambda_{A}}{\lambda_{A}+\lambda_{B}}\) holding \(\lambda_{A}+\lambda_{B}\) fixed; the smaller this ratio, the larger is builder \(A\)'s advantage in top-of-block extraction. Figure 3 demonstrates how even small advantages can be discontinuously magnified by private OFAs: even if the advantaged builder has only a small edge in top-of-block extraction, e.g., \(\lambda_{B}=\lambda_{A}+\varepsilon\) for \(\varepsilon\) small, it enjoys a discontinuous jump in its probability of winning the PBS auction in Scenario 2 relative to Scenario 1. This is because even a small advantage in top-of-block extraction leads to the advantaged builder always winning the OFA in Scenario 2, which in turn gives it a discontinuous advantage in the PBS auction.
## 5. Discussion
Our empirical results show that a small group of integrated builder-searchers have a demonstrable advantage in top-of-block extraction capability.
Our theoretical model then shows that builders with superior top-of-block capabilities are likely to dominate OFAs and subsequently use the private order flow obtained in these OFAs to dominate the PBS auction. Put simply, top-of-block and block-body opportunities are complementary because the block is sold wholesale. Therefore, builders who earn more from the top of the block will be willing to pay more for private order flow. This complementarity is a strong centralizing force that threatens to suffocate small builders and upset the currently somewhat pluralistic builder equilibrium.
Asking order flow originators not to participate in OFAs is futile because it is in their own best interest to do so. Similarly, builders cannot be barred from participating in OFAs. The only solution then is to modify PBS itself.
Our results suggest that unbundling the PBS auction would be a step in the right direction. By this we mean selling the top of the block and the block body separately. Implementing such a mechanism would reduce the HFT advantage and allow strategies other than integrated builder-searching to compete for the right to build blocks.
Figure 3. How winning probability (left) and expected profit (right) vary across scenarios 1 and 2 as the relative advantage of Builder \(A\) varies. |
2305.16263 | Unified Modeling of Multi-Talker Overlapped Speech Recognition and
Diarization with a Sidecar Separator | Multi-talker overlapped speech poses a significant challenge for speech
recognition and diarization. Recent research indicated that these two tasks are
inter-dependent and complementary, motivating us to explore a unified modeling
method to address them in the context of overlapped speech. A recent study
proposed a cost-effective method to convert a single-talker automatic speech
recognition (ASR) system into a multi-talker one, by inserting a Sidecar
separator into the frozen well-trained ASR model. Extending on this, we
incorporate a diarization branch into the Sidecar, allowing for unified
modeling of both ASR and diarization with a negligible overhead of only 768
parameters. The proposed method yields better ASR results compared to the
baseline on LibriMix and LibriSpeechMix datasets. Moreover, without
sophisticated customization on the diarization task, our method achieves
acceptable diarization results on the two-speaker subset of CALLHOME with only
a few adaptation steps. | Lingwei Meng, Jiawen Kang, Mingyu Cui, Haibin Wu, Xixin Wu, Helen Meng | 2023-05-25T17:18:37Z | http://arxiv.org/abs/2305.16263v1 | # Unified Modeling of Multi-Talker Overlapped Speech Recognition
###### Abstract
Multi-talker overlapped speech poses a significant challenge for speech recognition and diarization. Recent research indicated that these two tasks are inter-dependent and complementary, motivating us to explore a unified modeling method to address them in the context of overlapped speech.
A recent study proposed a cost-effective method to convert a single-talker automatic speech recognition (ASR) system into a multi-talker one, by inserting a _Sidecar_ separator into the frozen well-trained ASR model. Extending on this, we incorporate a diarization branch into the Sidecar, allowing for unified modeling of both ASR and diarization with a negligible overhead of only 768 parameters. The proposed method yields better ASR results compared to the baseline on LibriMix and LibriSpeechMix datasets. Moreover, without sophisticated customization on the diarization task, our method achieves acceptable diarization results on the two-speaker subset of CALLHOME with only a few adaptation steps.
Lingwei Meng\({}^{1}\), Jiawen Kang\({}^{1}\), Mingyu Cui\({}^{1}\), Haibin Wu\({}^{2}\), Xixin Wu\({}^{1}\), Helen Meng\({}^{1}\)\({}^{1}\) Dept. of Systems Engineering & Engineering Management, The Chinese University of Hong Kong
\({}^{2}\) Graduate Institute of Communication Engineering, National Taiwan University
{lmeng, jwkang, mycui, wuxx, hmmeng}@se.cuhk.edu.hk, [email protected]
**Index Terms**: multi-talker speech recognition, end-to-end speech recognition, domain adaptation, speaker diarization
## 1 Introduction
Multi-talker (or multi-speaker) overlapped speech has presented significant challenges for many speech processing tasks, such as automatic speech recognition (ASR) and diarization [1]. Although recent years have seen progress in addressing this scenario, these studies tend to be conducted independently, isolated within their respective fields.
In the field of multi-talker overlapped speech recognition, two dominant paradigms have emerged: cascade architectures and fully end-to-end models, both with their drawbacks [1]. The cascade architectures, where an ASR module follows a speech separation module, require joint training [2, 3], and may cause performance degradation in the modules' original domains. In contrast, the carefully customized end-to-end models typically necessitate extensive training efforts from scratch [4, 5, 6, 7], and fail to capitalize on significant achievements made in common single-talker ASR. To overcome the limitations of current multi-talker ASR paradigms, a novel **Sidecar** approach has been recently proposed [8]. It involves loosely coupling with well-trained single-talker ASR models, which are then efficiently adapted for multi-talker scenarios, without altering the original model's parameters. Specifically, the Sidecar approach converts a single-talker wav2vec 2.0-based ASR model into a multi-talker ASR model, by plugging a Sidecar separator between two lower encoder layers. This integration utilizes speech separation techniques to effectively disentangle overlapped acoustic embeddings from multiple speakers, thus equipping a standard ASR system to manage multi-talker ASR at a minimal cost. This research suggests that the representations hierarchically extracted by the ASR encoder [9, 10] can be leveraged to handle multiple tasks in a cost-effective manner.
In the diarization field, end-to-end modeling methods have emerged as a promising alternative in recent years [11, 12, 13, 14], demonstrating superiority in handling overlapped speech compared to conventional pipelines based on speaker embedding clustering [15, 16, 17]. Although limited in number, some recent studies have investigated joint multi-talker speech recognition and diarization modeling, and indicated that the two tasks are inter-dependent and complementary [18]. Methods such as [19, 20], and speaker-attributed ASR [21, 22] predict ASR transcriptions alongside sentence-level speaker identification. However, these works do not explicitly output timestamps for speaker activity boundaries. [23] proposes to iteratively apply an external speaker embedding extractor and a target-speaker ASR model; and similarly the pipeline in [24] requires an external pre-trained speaker embedding extractor. They showed promising results in ASR with additional timestamps, but their systems are intricate and not truly unified in modeling the ASR and diarization tasks. RPN-JOINT [25] is a cascade architecture consisting of a diarization module followed by an ASR module with shared lower blocks. The modules are pre-trained in their respective domains and subsequently fine-tuned jointly. However, since the majority of the two modules remain separate, the overall architecture can still be quite cumbersome.
Encouraged by the capability of the Sidecar in separating embeddings, we aim to extend its application to low-cost, end-to-end unified modeling of ASR and diarization. Instead of deploying individual ASR and diarization modules, we hypothesize that a unified backbone can foster knowledge sharing between the two tasks. As shown in Figure 1, building upon the Sidecar approach, we incorporate a diarization branch with merely 768 additional parameters, thereby enabling the unified modeling of both ASR and diarization tasks with negligible computational overhead. The total number of trainable parameters is 8.7 M (8.4% of all parameters) for the two-speaker model and 8.8 M (8.5% of all parameters) for the three-speaker model. The contributions of the proposed method are threefold:
* We propose a pioneering framework for unified modeling of multi-talker ASR and diarization tasks. Exploiting a frozen well-trained ASR model, this approach contains only a small number of trainable parameters and is hence easy to implement.
* The proposed method yields better ASR results than the baseline Sidecar approach on two- and three-speaker overlapped speech recognition tasks. We demonstrate that this strategy is not only feasible with a wav2vec 2.0 backbone, but also with data2vec 2.0 [26, 27], yielding even better results for ASR.
* Furthermore, without any sophisticated customization on the diarization task, our proposed method achieves acceptable performance on the two-speaker subset of CALLHOME with only a few adaptation steps.
We believe that the proposed method holds the potential for a cost-effective solution for the unified modeling of multi-talker overlapped speech recognition and diarization.
## 2 Unified Modeling of Multi-Talker Speech Recognition and Diarization with Sidecar
The proposed approach comprises three main components: a well-trained single-talker ASR model with its parameters frozen, a Sidecar separator with a diarization branch, and the training objective. Figure 1 illustrates that the Sidecar with the diarization branch is inserted between two ASR encoder layers, aided by one convolutional layer on each side, creating a unified multi-talker ASR and diarization modeling system. The model is optimized with permutation invariant training (PIT) [28] on the connectionist temporal classification (CTC) loss [29], with the same permutation reused for the diarization loss.
No lexicons or language models are involved in this work.
### Well-trained single-talker ASR model
An end-to-end ASR model typically comprises an encoder that converts waveform or acoustic features into high-level representations and a decoder that models these representations into language tokens. However, training such a model from scratch can be time-consuming and challenging, especially in multi-talker environments. As indicated in [8], a Sidecar separator can re-purpose existing single-talker models for multi-talker overlapped speech recognition with low cost.
As a well-known pre-trained speech representation model based on self-supervised learning (SSL), wav2vec 2.0 [30] has gained significant attention in the field of ASR. To adhere to a widely accepted speech representation model, we utilize a well-trained wav2vec 2.0 base-based ASR model, as used in the original Sidecar paper [8]. Additionally, to validate the feasibility and generality of the Sidecar approach, we also employ a data2vec 2.0 base-based ASR model as another backbone for comparison [27]. The two models differ primarily in two ways: (1) they are pre-trained using different protocols, resulting in variations in the encoded representations, and (2) data2vec 2.0 biases the query-key attention scores with a penalty proportional to their distance.
Both ASR models are well-trained and comprise a CNN feature extractor, a Transformer encoder, and a fully-connected layer as the decoder. Specifically, the model takes waveform as input and extracts acoustic features using a seven-layer CNN feature extractor. The extracted features are then fed into the 12-layer Transformer encoder to generate high-level representations. Following the paradigm outlined in [30] and [27], we use only a fully-connected layer as the decoder for letter-level prediction. We directly utilize fairseq's officially released model parameters [31], and denote the models as _W2V-CTC_ and _D2V-CTC_ throughout subsequent sections.
### Sidecar separator
Motivated by the findings that the ASR encoder captures more acoustic information in its lower layers and more linguistic information in the upper layers [9, 10], a recent study proposes using a Sidecar separator to address multi-talker speech recognition, drawing on methodologies from speech separation [8].
The Sidecar separator is a temporal convolutional network that comprises stacked 1-D dilated convolutional blocks similar to Conv-TasNet [32]. This design allows the Sidecar to model long-term dependencies of acoustic embeddings while maintaining a small size. As shown in Figure 1, a 3-kernel-size 1-D convolutional layer is employed on each side of the Sidecar to filter the input-mixed and output-separated embeddings. Following the design proposed in [8], we plug the Sidecar between the second and the third encoder layers as a compromise in semantics.
During the forward process, the mixed speech embedding generated by the preceding layer is filtered through a convolutional layer and then fed into the Sidecar to synthesize speaker-dependent masks. These masks are used to element-wise multiply the filtered mixed speech embedding, and the resulting product is further adjusted with another convolutional layer to obtain separated embeddings. These embeddings, corresponding to different speakers, are concatenated onto the batch dimension for parallel processing and transcription into text.
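As a concrete illustration of this forward pass, the following PyTorch sketch (our own minimal reconstruction, not the authors' released code; the `sidecar` mask predictor, the channel count, and the ordering of the \(B\times S\) batch dimension are assumptions) wires the pieces together:

```python
import torch
import torch.nn as nn

class SidecarForward(nn.Module):
    # Minimal sketch of the Sidecar insertion point: filter the mixed embedding,
    # predict speaker-dependent masks, apply them element-wise, filter again.
    def __init__(self, sidecar: nn.Module, channels: int = 768, num_speakers: int = 2):
        super().__init__()
        self.sidecar = sidecar                 # maps (B, C, T) -> (B*S, C, T) masks
        self.num_speakers = num_speakers
        self.conv_in = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv_out = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, mixed: torch.Tensor) -> torch.Tensor:
        h = self.conv_in(mixed)                                  # filtered mixed embedding
        masks = self.sidecar(h)                                  # speaker-dependent masks
        sep = masks * h.repeat_interleave(self.num_speakers, 0)  # element-wise masking
        return self.conv_out(sep)                                # (B*S, C, T) to next layer
```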
### The diarization branch and processes for diarization
Furthermore, we incorporate a diarization branch into the Sidecar to enable the unified modeling of speech recognition and diarization. As illustrated in Figure 1, the main component of the branch is a point-wise 2-D convolutional layer.
Figure 1: The proposed strategy plugs a Sidecar separator with a diarization branch into a frozen well-trained single-talker ASR model, enabling unified modeling for multi-talker overlapped speech recognition and diarization. The Sidecar has a Conv-TasNet-like architecture.

In the forward process, the speaker-dependent masks generated by the Sidecar, as shown in Figure 1, possess a tensor shape of \((B\times S,C,T)\), where \(B\) denotes batch size, \(S\) the number of speakers, \(C\) the number of channels, and \(T\) the number of time frames. Each time frame spans a duration of 20 ms. Within the diarization branch, these masks are first reshaped to \((B,S,C,T)\) and then transposed to \((B,C,S,T)\). The reshaped-and-transposed masks then go through a point-wise 2-D convolutional layer, which is the only trainable layer in the branch, having \(C\) parameters, to generate a tensor with a shape of \((B,1,S,T)\). After squeezing on the second dimension and applying a sigmoid activation function, frame-wise predictions for each speaker's speech activities \(D\) are synthesized with a shape of \((B,S,T)\). For every speaker \(s\) and each time frame \(t\), if the element value is greater than 0.5, we consider speaker \(s\) to be active at frame \(t\). This yields the results for diarization.
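A minimal PyTorch sketch of this branch (our reconstruction from the description above, not the authors' code; `bias=False` is an assumption made so that the layer has exactly \(C=768\) parameters) is:

```python
import torch
import torch.nn as nn

class DiarizationBranch(nn.Module):
    def __init__(self, num_channels: int = 768):
        super().__init__()
        # point-wise 2-D convolution: C input channels -> 1 output channel
        self.pointwise = nn.Conv2d(num_channels, 1, kernel_size=(1, 1), bias=False)

    def forward(self, masks: torch.Tensor, num_speakers: int) -> torch.Tensor:
        bs, c, t = masks.shape                       # (B*S, C, T) speaker masks
        b = bs // num_speakers
        x = masks.reshape(b, num_speakers, c, t)     # (B, S, C, T)
        x = x.transpose(1, 2)                        # (B, C, S, T)
        x = self.pointwise(x)                        # (B, 1, S, T)
        return torch.sigmoid(x.squeeze(1))           # (B, S, T) frame-wise activities

branch = DiarizationBranch(768)
masks = torch.randn(2 * 2, 768, 150)                 # B=2, S=2, 3 s of 20 ms frames
activity = branch(masks, num_speakers=2)             # values > 0.5 mark active frames
```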
During the training phase on LibriMix and LibriSpeechMix datasets, the model is fed with complete utterances. However, during the adaptation and inference for diarization on CALLHOME (Section 4.2), we segment the utterances to ensure the alignment with real-world diarization scenarios and guarantee its practicality. As depicted in Figure 2, we divide each utterance into several 30-second segments that share a common 15-second interval between every two adjacent segments. For shared parts, we calculate Euclidean distance for different speaker permutations between segment tensors \(D\) and select one with minimum distance to modify the speaker arrangement in subsequent segments. Afterward, we average the element values of adjacent segments' shared parts. Note that this segmenting-and-permuting process is not utilized in the experiments mentioned in Section 4.1.
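A minimal sketch of this segment-stitching step (our own illustration; the helper name and NumPy representation are not from the paper) could look as follows.

```python
import itertools
import numpy as np

def stitch(prev_tail: np.ndarray, seg: np.ndarray) -> np.ndarray:
    """Align a new segment's speaker order with the previous segment.

    prev_tail: (S, L) activities of the previous segment on the shared 15 s part.
    seg:       (S, T) activities of the current 30 s segment, whose first L
               frames overlap prev_tail.
    """
    S, L = prev_tail.shape
    # choose the permutation of seg's speaker rows minimizing Euclidean distance
    best = min(itertools.permutations(range(S)),
               key=lambda p: np.linalg.norm(seg[list(p), :L] - prev_tail))
    seg = seg[list(best)]
    # average the element values on the shared interval
    seg[:, :L] = 0.5 * (seg[:, :L] + prev_tail)
    return seg
```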
### Training and adaptation objectives
During training on LibriMix and LibriSpeechMix datasets, both CTC loss and diarization loss require a permutation for the speaker order to be assigned to address the label ambiguity issue [28]. To explicitly construct the inter-dependence between the two tasks, the permutation is determined by permutation invariant training (PIT) based on CTC loss, and then is assigned for diarization loss. The diarization loss is to calculate the mean squared error (MSE) between the predicted speaker activities \(D\) and the diarization ground truth. At last, the final objective function is the sum of PIT-CTC loss and corresponding diarization loss multiplied by a coefficient \(\lambda\).
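A compact sketch of this joint objective (our PyTorch reconstruction; the tensor layouts, per-speaker CTC log-probabilities of shape \((S,T,B,V)\) and targets of shape \((S,B,L)\), are assumptions rather than the paper's actual code) is:

```python
import itertools
import torch.nn.functional as F

def joint_loss(log_probs, targets, in_lens, tgt_lens, diar_pred, diar_ref, lam=0.01):
    # log_probs: (S, T, B, V) per-speaker CTC log-probs; targets: (S, B, L);
    # tgt_lens: (S, B); diar_pred, diar_ref: (B, S, T'). Returns PIT-CTC + lam * MSE.
    S = log_probs.shape[0]
    best_loss, best_perm = None, None
    for perm in itertools.permutations(range(S)):
        # assign predicted channel perm[s] to reference speaker s
        ctc = sum(F.ctc_loss(log_probs[perm[s]], targets[s], in_lens, tgt_lens[s],
                             zero_infinity=True) for s in range(S))
        if best_loss is None or ctc < best_loss:
            best_loss, best_perm = ctc, perm
    # reuse the CTC-optimal permutation for the diarization MSE loss
    diar = F.mse_loss(diar_pred[:, list(best_perm)], diar_ref)
    return best_loss + lam * diar
```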
However, when we adapt the model for CALLHOME, we solely employ diarization loss and determine speaker permutation relying on the strategy outlined in Section 2.3.
## 3 Experimental Setup
### Datasets
The experiments are performed on two benchmark datasets for multi-talker ASR (LibriMix [33] and LibriSpeechMix [4]), and the two-speaker subset of a real-world dataset for diarization (CALLHOME). Although LibriMix and LibriSpeechMix datasets were not specifically designed for the diarization task, we also present diarization results on them.
**LibriSpeechMix**. The utterances are simulated with the mixtures of two or three speakers from LibriSpeech. Only standard official dev and test sets are published. Our training set is home-made from the 960-hour LibriSpeech training dataset (LS-960) referring to the protocol established in [4]. LibriSpeechMix randomly samples a delay time for the second and the third utterances, so the mixture is partially overlapping.
**LibriMix**. The dataset simulates audio mixtures using a combination of two or three speakers sourced from the LibriSpeech-clean corpus. We focus on its two-speaker-clean subset _Libri2Mix-clean_ and three-speaker-clean subset _Libri3Mix-clean_. The mixtures are made in a left-aligned style. Thus, the shorter source speech will be entirely overlapped by the longer one from the start, which challenges the model more than LibriSpeechMix in separating overlaps.
**CALLHOME**. We evaluate the proposed method on the diarization task with CALLHOME, which is a benchmark dataset consisting of spontaneous multilingual telephone conversations, as one part of the 2000 NIST Speaker Recognition Evaluation (LDC2001S97). We take its two-speaker subset and split it into an adaptation set of 155 recordings and a test set of 148 recordings, following the same partition protocol as EEND [11, 12] and Kaldi1[34]. The average duration is 73.1 seconds.
Footnote 1: [https://github.com/kaldi-asr/kaldi/tree/master/egs/callhome_diarization/v2](https://github.com/kaldi-asr/kaldi/tree/master/egs/callhome_diarization/v2)
### Model settings
**Well-trained single-talker ASR model**. We utilize the well-trained W2V-CTC and D2V-CTC as the backbone, respectively. To ensure consistency with [30] and [27], we directly employ the officially released model weights from fairseq2[31]. Both models are pre-trained on unlabeled LS-960 data and subsequently fine-tuned on labeled LS-960 using CTC loss. The resulting well-trained models are then frozen for use in our experiments.
Footnote 2: [https://github.com/facebookresearch/fairseq](https://github.com/facebookresearch/fairseq)
**Sidecar separator**. Drawing inspiration from [32], the Sidecar separator implements a sequence of \(K\) temporal convolutional blocks with dilation rates spanning from 1 to \(2^{K-1}\), repeated up to \(K\) times. To align with the protocol in [8], we set \(K=8\) and \(R=3\), remove skip-connection paths within the convolutional blocks, and substitute the final sigmoid activation with ReLU. The Sidecar has 128 bottleneck channels and 768 input/output channels. The Sidecar is plugged between the second and the third transformer layers, which mirrors that of [8].
**Diarization branch**. As shown in Figure 1, the speaker-dependent masks predicted by the Sidecar will be fed into the diarization branch. The main component of the branch is a point-wise 2-D convolutional layer with an input-channel size of 768 and an output-channel size of 1, featuring a kernel size of \((1,1)\) with stride 1. As such, this branch only requires an additional 768 parameters. As is typical, we use a collar of 250 ms when evaluating the DER performance [11, 12].
**Training settings**. With W2V-CTC or D2V-CTC frozen, the models only have 8.7 M trainable parameters (8.4% of all parameters) for the two-speaker experiments and 8.8 M trainable parameters (8.5% of all parameters) for the three-speaker experiments. We set the coefficient of diarization loss \(\lambda\) to 0.01. Adhering to the settings in [8], we optimize the proposed models using a 2e-4 learning rate with a three-stage schedule and Adam optimizer, for at most 100 k updates. It takes about 7 hours for two-speaker models and 9 hours for three-speaker models with 8 NVIDIA V100 GPUs, thanks to Sidecar's small size and the ejection start provided by the well-trained ASR model.
In the following, we denote the proposed models as _W2V-Sidecar_, _W2V-Sidecar-DB_, _D2V-Sidecar_, _D2V-Sidecar-DB_, where "-_DB_" denotes "with diarization branch". Note that _W2V-Sidecar_ is identical to the implementation in [8], serving as the baseline in this work. Permutations with minimum errors are used to compute word error rates (WERs).
Figure 2: The pre-process for diarization on CALLHOME. The interval between the green and orange dashed lines is the shared interval for the two segments.
## 4 Results and Discussion
### Results on LibriMix and LibriSpeechMix datasets
The models (b) and (d) in Table 1 are optimized with CTC and diarization loss, enabling the unified modeling of multi-talker ASR and diarization.
**ASR.** The ASR results of the four models on two- and three-speaker LibriMix and LibriSpeechMix datasets are presented in Table 1. Aligning with our previous hypothesis, the enhanced model (b) with a diarization branch consistently outperforms the baseline (a) [8] across all four datasets, benefiting from the inter-dependence and complementarity of the two tasks. Additionally, models utilizing D2V as a backbone generally outperform those with W2V. We attribute the boost in performance of models (c) and (d) to the better representations learned in data2vec 2.0's pre-training phase, as illustrated in [27]. The diarization branch's performance enhancement on LibriSpeechMix-2spk is relatively modest, and we contend that this is due to the dataset being relatively simple for ASR owing to its lower two-speaker overlap rate. This study achieves state-of-the-art performance on Libri2Mix, and it is the first to report ASR results on Libri3Mix in the field. However, WERs on Libri3Mix remain high due to shorter source speeches being completely overlapped by longer ones from the outset, rendering it more challenging for multi-talker ASR.
**Diarization.** Although the LibriMix and LibriSpeechMix datasets were not created for diarization purposes, we include the diarization results on them in Table 2 to offer a more comprehensive perspective. Mirroring the trend in Table 1, the D2V backbone achieves better diarization performance than W2V. Note that while the left-aligned LibriMix dataset poses greater challenges for ASR than LibriSpeechMix, it proves easier for the diarization task, since only the speaker activity timestamps on the right side need to be predicted.
### Diarization results on CALLHOME dataset
To demonstrate its practicality, we evaluate the proposed method on the real-world CALLHOME dataset under realistic settings. Specifically, we adapt W2V-Sidecar-DB and D2V-Sidecar-DB, which have been trained on Libri2Mix, to the two-speaker subset of CALLHOME. During adaptation and inference, the segmenting-and-permuting process is employed to align with real-world diarization scenarios, as discussed in Section 2.3. As shown in Table 3, in comparison to EEND models that are carefully designed for diarization and trained on datasets tailored for this purpose, our method delivers satisfactory performance with just 8.7 M trainable parameters and a few adaptation steps, demonstrating the effectiveness and flexibility of our approach with limited resources.
We observed that the proposed method can achieve better or comparable performance on MI and FA, which are metrics related to voice activity detection, but falls behind in CF. We argue that this is because the Libri2Mix dataset used for training is clean and designed for ASR purposes, and contains only 1,172 speakers, far fewer than the 5,743 speakers of the carefully crafted dataset used by EEND. We anticipate significant improvement from training with data simulated specifically for diarization purposes, as done by [11, 12].
### Limitations and future work
While this work is innovative in unifying the modeling of speech recognition and diarization, several limitations remain. Firstly, due to considerations of simplicity and comparability with previous work [8], the model's performance on diarization is acceptable but restricted by its training strategy. However, access to more suitable datasets and training schemes may yield improved performance. Secondly, the system requires pre-defining the maximum number of speakers addressed, which could potentially be resolved by maintaining a speaker embedding bank in the future. Lastly, the current model does not tackle the "who spoke when and what" issue. Nevertheless, we believe this work has pointed the direction toward a potential solution to it.
## 5 Conclusion
A recent study proposed a low-cost approach for converting a single-talker ASR system to a multi-talker one, by plugging a Sidecar separator into a fixed well-trained common ASR model. Extending on this approach, we incorporate a diarization branch into the Sidecar with only 768 additional parameters, allowing for unified modeling of both multi-talker overlapped speech recognition and diarization tasks with a low training cost.
With very few parameters (8.7 M for the two-speaker model, and 8.8 M for the three-speaker model) requiring tuning, the proposed approach outperforms the original Sidecar scheme on ASR tasks for the LibriMix and LibriSpeechMix datasets. Furthermore, without sophisticated customization on the diarization task, the proposed method achieves promising diarization results on the two-speaker subset of the real-world CALLHOME dataset, with only a few adaptation steps.
## 6 Acknowledgements
This research is partially supported by the HKSARG Research Grants Council's Theme-based Research Grant Scheme (Project No. T45-407/19N) and by the CUHK Stanley Ho Big Data Decision Analytics Research Centre.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**LibriMix**} & \multicolumn{2}{c}{**LibriSpeechMix**} \\ \cline{2-5}
**System** & 2spk & 3spk & 2spk & 3spk \\ \hline (a) W2V-Sidecar\({}^{\dagger}\)[8] & 10.36 & 35.22 & 7.56 & 13.87 \\ (b) W2V-Sidecar-DB & 9.88 & 34.38 & 7.53 & 12.93 \\ \hline (c) D2V-Sidecar & 10.11 & 34.84 & 7.61 & 12.56 \\ (d) D2V-Sidecar-DB & **9.69** & **33.91** & **7.49** & **11.94** \\ \hline \hline \end{tabular}
\end{table}
Table 1: ASR performance on the test sets of LibriMix and LibriSpeechMix. Evaluated by WER (%). “-DB” refers to “with diarization branch”
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**LibriMix**} & \multicolumn{2}{c}{**LibriSpeechMix**} \\ \cline{2-5}
**System** & 2spk & 3spk & 2spk & 3spk \\ \hline W2V-Sidecar-DB & 0.97 & 2.35 & 2.20 & 3.65 \\ D2V-Sidecar-DB & 0.91 & 2.14 & 2.12 & 3.47 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Diarization performance on the test sets of LibriMix and LibriSpeechMix. Evaluated by DER (%).
Table 3: Detailed diarization results on CALLHOME, evaluated by DER (%), which is the sum of misses (MI), false alarms (FA), and speaker confusions (CF) errors.
2307.01166 | Strategic Distribution Shift of Interacting Agents via Coupled Gradient
Flows | We propose a novel framework for analyzing the dynamics of distribution shift
in real-world systems that captures the feedback loop between learning
algorithms and the distributions on which they are deployed. Prior work largely
models feedback-induced distribution shift as adversarial or via an overly
simplistic distribution-shift structure. In contrast, we propose a coupled
partial differential equation model that captures fine-grained changes in the
distribution over time by accounting for complex dynamics that arise due to
strategic responses to algorithmic decision-making, non-local endogenous
population interactions, and other exogenous sources of distribution shift. We
consider two common settings in machine learning: cooperative settings with
information asymmetries, and competitive settings where a learner faces
strategic users. For both of these settings, when the algorithm retrains via
gradient descent, we prove asymptotic convergence of the retraining procedure
to a steady-state, both in finite and in infinite dimensions, obtaining
explicit rates in terms of the model parameters. To do so we derive new results
on the convergence of coupled PDEs that extends what is known on multi-species
systems. Empirically, we show that our approach captures well-documented forms
of distribution shifts like polarization and disparate impacts that simpler
models cannot capture. | Lauren Conger, Franca Hoffmann, Eric Mazumdar, Lillian Ratliff | 2023-07-03T17:18:50Z | http://arxiv.org/abs/2307.01166v3 | # Coupled Gradient Flows for Strategic Non-Local Distribution Shift
###### Abstract
We propose a novel framework for analyzing the dynamics of distribution shift in real-world systems that captures the feedback loop between learning algorithms and the distributions on which they are deployed. Prior work largely models feedback-induced distribution shift as adversarial or via an overly simplistic distribution-shift structure. In contrast, we propose a coupled partial differential equation model that captures fine-grained changes in the distribution over time by accounting for complex dynamics that arise due to strategic responses to algorithmic decision-making, non-local endogenous population interactions, and other exogenous sources of distribution shift. We consider two common settings in machine learning: cooperative settings with information asymmetries, and competitive settings where a learner faces strategic users. For both of these settings, when the algorithm retrains via gradient descent, we prove asymptotic convergence of the retraining procedure to a steady-state, both in finite and in infinite dimensions, obtaining explicit rates in terms of the model parameters. To do so we derive new results on the convergence of coupled PDEs that extends what is known on multi-species systems. Empirically, we show that our approach captures well-documented forms of distribution shifts like polarization and disparate impacts that simpler models cannot capture.
## 1 Introduction
In many machine learning tasks, there are commonly sources of exogenous and endogenous distribution shift, necessitating that the algorithm be retrained repeatedly over time. Some of these shifts occur without the influence of an algorithm; for example, individuals influence each other to become more or less similar in their attributes, or benign forms of distributional shift occur [11]. Other shifts, however, are in response to algorithmic decision-making. Indeed, the very use of a decision-making algorithm can incentivize individuals to change or mis-report their data to achieve desired outcomes-- a phenomenon known in economics as Goodhart's law. Such phenomena have been empirically observed, a well-known example being in [1], where researchers observed a population in Colombia strategically mis-reporting data to game a poverty index score used for distributing government assistance. Works such as [12; 13], which investigate the effects of distribution shift over time on a machine learning algorithm, point toward the need for evaluating the robustness of algorithms to distribution shifts. Many existing approaches for modeling distribution shift focus on simple metrics like optimizing over moments or covariates [1; 14; 15]. Other methods consider worst-case scenarios, as in distributionally robust optimization [1; 17; 18]. However, when humans respond to algorithms, these techniques may not be sufficient to holistically capture the impact an algorithm has on a population. For example, an
algorithm that takes into account shifts in a distribution's mean might inadvertently drive polarization, rendering a portion of the population disadvantaged.
Motivated by the need for a more descriptive model, we present an alternative perspective which allows us to fully capture complex dynamics that might drive distribution shifts in real-world systems. Our approach is general enough to capture various sources of exogenous and endogenous distribution shift including the feedback loop between algorithms and data distributions studied in the literature on performative prediction [14, 15, 16, 17, 18, 19], the strategic interactions studied in strategic classification [1, 13], and also endogenous factors like intra-population dynamics and distributional shifts. Indeed, while previous works have studied these phenomena in isolation, our method allows us to capture all of them as well as their interactions. For example, in [15], the authors investigate the effects of dynamics in strategic classification problems--but the model they analyze does not capture individual interactions in the population. In [15], the authors model the interaction between a population that repeatedly responds to algorithmic decision-making by shifting its mean. Additionally, [16] study settings in which the population has both exogenous and endogenous distribution shifts due to feedback, but much like the other cited work, the focus remains on average performance. Each of these works fails to account for diffusion or intra-population interactions that can result in important qualitative changes to the distribution.
**Contributions.** Our approach to this problem relies on a detailed non-local PDE model of the data distribution which captures each of these factors. One term driving the evolution of the distribution over time captures the response of the population to the deployed algorithm, another draws on models used in the PDE literature for describing non-local effects and consensus in biological systems to model intra-population dynamics, and the last captures a background source of distribution shift. This is coupled with an ODE, lifted to a PDE, which describes the training of a machine learning algorithm; the result is a coupled PDE system which we analyze to better understand the behaviors that can arise among these interactions.
In one subcase, our model exhibits a joint gradient flow structure, where both PDEs can be written as gradient flows with respect to the same joint energy, but considering infinite dimensional gradients with respect to the different arguments. This mathematical structure provides powerful tools for analysis and has been an emerging area of study with a relatively small body of prior work, none of which relates to distribution shifts in societal systems, and a general theory for multi-species gradient flows is still lacking. We give a brief overview of the models that are known to exhibit this joint gradient flow structure: in [10] the authors consider a two-species tumor model with coupling through Brinkman's Law. A number of works consider coupling via convolution kernels [13, 14, 15, 16, 17, 18] and cross-diffusion [15, 16, 17], with applications in chemotaxis among other areas. In the models we consider here, the way the interaction between the two populations manifests is neither via cross-diffusion, nor via the non-local self-interaction term. A related type of coupling has recently appeared in [19, 18], however in the setting of graphs. Recent work [13] provides particle-based methods to approximately compute the solution to a minimax problem where the optimization space is over measures; following that work, [18] provides another particle-based method using mirror descent-ascent to solve a similar problem. Other recent work [16] proves that a mean-field gradient ascent-descent scheme with an entropy annealing schedule converges to the solution of a minimax optimization problem with a timescale separation parameter that is also time-varying; in contrast, our work considers a fixed timescale-separation setting. [12] show that the mean-field description of a particle method for solving minimax problems has provable convergence guarantees in the Wasserstein-Fisher-Rao metric. Each of these references considers an energy functional that is linear in the distribution of each species respectively; our energy includes nonlinearities in the distributions via a self-interaction term as well as diffusion for the population. Moreover, the above works introduce a gradient flow dynamic as a tool for obtaining and characterizing the corresponding steady states, whereas in our setting we seek to capture the time-varying behavior that models distribution shifts. In the other subcase, we prove exponential convergence in two competitive, timescale-separated settings where the algorithm and strategic population have conflicting objectives. We show numerically that retraining in a competitive setting leads to polarization in the population, illustrating the importance of fine-grained modeling.
## 2 Problem Formulation
Machine learning algorithms that are deployed into the real world for decision-making often become part of complex feedback loops with the data distributions and data sources with which they interact. In an effort to model these interactions, consider a machine learning algorithm that has loss given by \(L(z,x)\) where \(x\in\mathbb{R}^{d}\) are the algorithm parameters and \(z\in\mathbb{R}^{d}\) are the population attributes, and the goal is to solve
\[\operatorname*{argmin}_{x\in\mathcal{X}}\operatorname*{\mathbb{E}}_{z\sim \rho}L(z,x),\]
where \(\mathcal{X}\) is the class of model parameters and \(\rho(z)\) is the population distribution. Individuals have an objective given by \(J(z,x)\) in response to a model parameterized by \(x\), and they seek to solve
\[\operatorname*{argmin}_{z\in\mathbb{R}^{d}}J(z,x).\]
When individuals in the population and the algorithm have access to gradients, we model the optimization process as a gradient-descent-type process. Realistically, individuals in the population will have nonlocal information and influences, as well as external perturbations, the effects of which we seek to capture in addition to just minimization. To address this, we propose a partial differential equation (PDE) model for the population that is able to capture nonlocal interactions between individuals on the level of a collective population. To analyze how the population evolves over time, a notion of derivative in infinite dimensions is needed. A natural, and in this context physically meaningful, way of measuring the dissipation mechanism for probability distributions is the Wasserstein-2 metric (see Definition 4). The following expression appears when computing the gradient of an energy functional with respect to the Wasserstein-2 topology.
**Definition 1**.: _[First Variation] For a map \(G:\mathcal{P}(\mathbb{R}^{d})\mapsto\mathbb{R}\) and fixed probability distribution \(\rho\in\mathcal{P}(\mathbb{R}^{d})\), the first variation of \(G\) at the point \(\rho\) is denoted by \(\delta_{\rho}G[\rho]:\mathbb{R}^{d}\to\mathbb{R}\), and is defined via the relation_
\[\int\delta_{\rho}G[\rho](z)\psi(z)\mathrm{d}z=\lim_{\epsilon\to 0}\frac{1}{ \epsilon}(G(\rho+\epsilon\psi)-G(\rho))\]
_for all \(\psi\in C_{c}^{\infty}(\mathbb{R}^{d})\) such that \(\int\psi\,\mathrm{d}z=0\), assuming that \(G\) is regular enough for all quantities to exist._
Here, \(\mathcal{P}(\mathbb{R}^{d})\) denotes the space of probability measures on the Borel sigma algebra. Using the first variation, we can express the gradient in Wasserstein-2 space, see for example [20, Exercise 8.8].
**Lemma 1**.: _The gradient of an energy \(G:\mathcal{P}_{2}(\mathbb{R}^{d})\to\mathbb{R}\) in the Wasserstein-2 space is given by_
\[\nabla_{W_{2}}G(\rho)=-\mathrm{div}\left(\rho\nabla\delta_{\rho}G[\rho]\right)\.\]
Here, \(\mathcal{P}_{2}(\mathbb{R}^{d})\) denotes the set of probability measures with bounded second moments, also see Appendix A.2. As a consequence, the infinite dimensional steepest descent in Wasserstein-2 space can be expressed as the PDE
\[\partial_{t}\rho=-\nabla_{W_{2}}G(\rho)=\mathrm{div}\left(\rho\nabla\delta_{ \rho}G[\rho]\right). \tag{1}\]
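For instance, taking \(G\) to be the entropy \(G(\rho)=\int\rho\log\rho\,\mathrm{d}z\) gives \(\delta_{\rho}G[\rho]=\log\rho+1\), so that \(\rho\nabla\delta_{\rho}G[\rho]=\nabla\rho\) and (1) reduces to the heat equation

\[\partial_{t}\rho=\mathrm{div}\left(\nabla\rho\right)=\Delta\rho\,,\]

recovering the classical fact that the heat flow is the Wasserstein-2 gradient flow of the entropy.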
All the coupled gradient flows considered in this work have this Wasserstein-2 structure. In particular, when considering that individuals minimize their own loss, we can capture these dynamics via a gradient flow in the Wasserstein-2 metric on the level of the distribution of the population. Then for given algorithm parameters \(x\in\mathbb{R}^{d}\), the evolution for this strategic population is given by
\[\partial_{t}\rho=\mathrm{div}\left(\rho\nabla\delta_{\rho}\Big{[}\operatorname* {\mathbb{E}}_{z\sim\rho}J(z,x)+E(\rho)\Big{]}\right), \tag{2}\]
where \(E(\rho)\) is a functional including terms for internal influences and external perturbations. In real-world deployment of algorithms, decision makers update their algorithm over time, leading to an interaction between the two processes. We also consider the algorithm dynamics over time, which we model as
\[\dot{x}=-\nabla_{x}\big{[}\operatorname*{\mathbb{E}}_{z\sim\rho}L(z,x)\big{]}. \tag{3}\]
In this work, we analyze the behavior of the dynamics under the following model. The algorithm suffers a cost \(f_{1}(z,x)\) for a data point \(z\) under model parameters \(x\) in the strategic population, and a cost \(f_{2}(z,x)\) for a data point in a fixed, non-strategic population. The strategic population is denoted by \(\rho\in\mathcal{P}\), and the non-strategic population by \(\bar{\rho}\in\mathcal{P}\). The algorithm aims to minimize
\[\mathop{\mathbb{E}}_{z\sim\rho}L(z,x)=\int f_{1}(z,x)\mathrm{d}\rho(z)+\int f _{2}(z,x)\mathrm{d}\bar{\rho}(z)+\frac{\beta}{2}\left\|x-x_{0}\right\|^{2}\,,\]
where \(\left\|x\right\|^{2}=\left\langle x,x\right\rangle\) is the squared Euclidean norm and \(\beta>0\) weights the cost of moving the model parameters away from their initial value \(x_{0}\).
We consider two settings: \((i)\) aligned objectives, and \((ii)\) competing objectives. Case \((i)\) captures the setting in which the strategic population minimization improves the performance of the algorithm, subject to a cost for deviating from a reference distribution \(\tilde{\rho}\in\mathcal{P}\). This cost stems from effort required to manipulate features, such as a loan applicant adding or closing credit cards. On the other hand, Case \((ii)\) captures the setting in which the strategic population minimization worsens the performance of the algorithm, again incurring cost from distributional changes.
### Case (i): Aligned Objectives
In this setting, we consider the case where the strategic population and the algorithm have aligned objectives. This occurs in examples such as recommendation systems, where users and algorithm designers both seek to develop accurate recommendations for the users. This corresponds to the population cost
\[\mathop{\mathbb{E}}_{z\sim\rho,x\sim\mu}J(z,x)=\iint f_{1}(z,x)\mathrm{d} \rho(z)\mathrm{d}\mu(x)+\alpha KL(\rho\,|\,\tilde{\rho}),\]
where \(KL(\cdot\,|\cdot)\) denotes the Kullback-Leibler divergence. Note that the KL divergence introduces diffusion to the dynamics for \(\rho\). The weight \(\alpha>0\) parameterizes the cost of distribution shift to the population. To account for nonlocal information and influence among members of the population, we include a kernel term \(E(\rho)=\frac{1}{2}\int\rho W*\rho\,\mathrm{d}z\), where \((W*\rho)(z)=\int W(z-\bar{z})\mathrm{d}\rho(\bar{z})\) is a convolution integral and \(W\) is a suitable interaction potential.
### Case (ii): Competing Objectives
In settings such as online internet forums, where algorithms and users have used manipulative strategies for marketing [1], the strategic population may be incentivized to modify or mis-report their attributes. The algorithm has a competitive objective, in that it aims to maintain performance against a population whose dynamics cause the algorithm performance to suffer. When the strategic population seeks an outcome contrary to the algorithm, we model strategic population cost as
\[\mathop{\mathbb{E}}_{z\sim\rho,x\sim\mu}J(z,x)=-\iint f_{1}(z,x)\mathrm{d} \rho(z)\mathrm{d}\mu(x)+\alpha KL(\rho\,|\,\tilde{\rho}).\]
A significant factor in the dynamics for the strategic population is the timescale separation between the two "species"--i.e., the population and the algorithm. In our analysis, we will consider two cases: one, where the population responds much faster than the algorithm, and two, where the algorithm responds much faster than the population. We illustrate the intermediate case in a simulation example.
## 3 Results
We are interested in characterizing the long-time asymptotic behavior of the population distribution, as it depends on the decision-maker's actions over time. The structure of the population distribution gives us insights about how the decision-maker's actions influence the entire population of users. For instance, as noted in the preceding sections, different behaviors such as bimodal distributions or large tails or variance might emerge, and such effects are not captured by simply looking at average performance. To understand this intricate interplay, one would like to characterize the behavior of both the population and the algorithm over large times. Our main contribution towards this goal is a novel analytical framework as well as analysis of the long-time asymptotics.
A key observation is that the dynamics in (2) and (3) can be re-formulated as a gradient flow; we lift \(x\) to a probability distribution \(\mu\) by representing it as a Dirac delta \(\mu\) sitting at the point \(x\). As a result, the evolution of \(\mu\) will be governed by a PDE, and combined with the PDE for the population, we obtain a system of coupled PDEs,
\[\partial_{t}\rho =\operatorname{div}\left(\rho\nabla_{z}\delta_{\rho}\big{[}\operatorname {\mathbb{E}}_{z\sim\rho,x\sim\mu}J(z,x)+E(\rho)\big{]}\right)\] \[\partial_{t}\mu =\operatorname{div}\left(\mu\nabla_{x}\delta_{\mu}\big{[} \operatorname{\mathbb{E}}_{z\sim\rho,x\sim\mu}L(z,x)\big{]}\right),\]
where \(\delta_{\rho}\) and \(\delta_{\mu}\) are first variations with respect to \(\rho\) and \(\mu\) according to Definition 1. The natural candidates for the asymptotic profiles of this coupled system are its steady states, which - thanks to the gradient flow structure - can be characterized as ground states of the corresponding energy functionals. In this work, we show existence and uniqueness of minimizers (maximizers) for the functionals under suitable conditions on the dynamics. We also provide criteria for convergence and explicit convergence rates. We begin with the case where the interests of the population and algorithm are aligned, and follow with analogous results in the competitive setting. We show convergence in energy, which in turn ensures convergence in a product Wasserstein metric. For convergence in energy, we use the notion of relative energy and prove that the relative energy converges to zero as time increases.
**Definition 2** (Relative Energy).: _The relative energy of a functional \(G\) is given by \(G(\gamma|\gamma_{\infty})=G(\gamma)-G(\gamma_{\infty})\), where \(G(\gamma_{\infty})\) is the energy at the steady state._
Since we consider the joint evolution of two probability distributions, we define a distance metric \(\overline{W}\) on the product space of probability measures with bounded second moment.
**Definition 3** (Joint Wasserstein Metric).: _The metric over \(\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathcal{P}_{2}(\mathbb{R}^{d})\) is called \(\overline{W}\) and is given by_
\[\overline{W}((\rho,\mu),(\tilde{\rho},\tilde{\mu}))^{2}=W_{2}(\rho,\tilde{\rho })^{2}+W_{2}(\mu,\tilde{\mu})^{2}\]
_for all pairs \((\rho,\mu),(\tilde{\rho},\tilde{\mu})\in\mathcal{P}_{2}(\mathbb{R}^{d})\times \mathcal{P}_{2}(\mathbb{R}^{d})\), and where \(W_{2}\) denotes the Wasserstein-2 metric (see Definition 4). We denote by \(\overline{\mathcal{W}}(\mathbb{R}^{d}):=(\mathcal{P}_{2}(\mathbb{R}^{d}) \times\mathcal{P}_{2}(\mathbb{R}^{d}),\overline{W})\) the corresponding metric space._
### Gradient Flow Structure
In the case where the objectives of the algorithm and population are _aligned_, we can write the dynamics as a gradient flow by using the same energy functional for both species. Let \(G_{a}(\rho,\mu):\mathcal{P}(\mathbb{R}^{d})\times\mathcal{P}(\mathbb{R}^{d}) \mapsto[0,\infty]\) be the energy functional given by
\[G_{a}(\rho,\mu) =\iint f_{1}(z,x)\mathrm{d}\rho(z)\mathrm{d}\mu(x)+\iint f_{2}(z,x)\mathrm{d}\bar{\rho}(z)\mathrm{d}\mu(x)+\alpha KL(\rho|\tilde{\rho})+ \frac{1}{2}\int\rho W*\rho\] \[\quad+\frac{\beta}{2}\int\left\|x-x_{0}\right\|^{2}\mathrm{d}\mu (x).\]
This expression is well-defined as the relative entropy \(KL(\rho\,|\,\tilde{\rho})\) can be extended to the full set \(\mathcal{P}(\mathbb{R}^{d})\) by setting \(G_{a}(\rho,\mu)=+\infty\) in case \(\rho\) is not absolutely continuous with respect to \(\tilde{\rho}\).
In the _competitive_ case we define \(G_{c}(\rho,x):\mathcal{P}(\mathbb{R}^{d})\times\mathbb{R}^{d}\mapsto[-\infty,\infty]\) by
\[G_{c}(\rho,x)=\int f_{1}(z,x)\mathrm{d}\rho(z)+\int f_{2}(z^{\prime},x) \mathrm{d}\bar{\rho}(z^{\prime})-\alpha KL(\rho|\tilde{\rho})-\frac{1}{2} \int\rho W*\rho+\frac{\beta}{2}\left\|x-x_{0}\right\|^{2}.\]
In settings like recommender systems, the population and algorithm have aligned objectives; they seek to minimize the same cost but are subject to different dynamic constraints and influences, modeled by the regularizer and convolution terms. In the case where the objectives are aligned, the dynamics are given by
\[\partial_{t}\rho =\operatorname{div}\left(\rho\nabla_{z}\delta_{\rho}G_{a}[\rho, \mu]\right) \tag{4}\] \[\partial_{t}\mu =\operatorname{div}\left(\mu\nabla_{x}\delta_{\mu}G_{a}[\rho, \mu]\right).\]
Note that (4) is a joint gradient flow, because the dynamics can be written in the form
\[\partial_{t}\gamma=\operatorname{div}\left(\gamma\nabla\delta_{\gamma}G_{a}( \gamma)\right)\,,\]
where \(\gamma=(\rho,\mu)\) and where the gradient and divergence are taken in both variables \((z,x)\). We discuss the structure of the dynamics (4) as well as the meaning of the different terms appearing in the energy functional \(G_{a}\) in Appendix A.1.
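To make this structure concrete, the following sketch (our own illustration, not from the paper; the one-dimensional quadratic losses, quadratic interaction kernel, and standard Gaussian reference \(\tilde{\rho}\) are hypothetical choices) simulates the coupled flow by an interacting-particle discretization of \(\rho\): the entropic part of the \(KL\) term acts as Brownian noise on the particles, and since \(\mu\) remains a Dirac mass, its equation reduces to the ODE (3).

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical ingredients: f1(z, x) = (z - x)^2 / 2, f2(z, x) = (x - z)^2 / 2,
# W(z) = z^2 / 2, and rho_tilde the standard Gaussian (so (log rho_tilde)' = -z)
alpha, beta, dt, x0 = 0.5, 1.0, 1e-2, 0.0
grad_f1_z = lambda z, x: z - x
grad_f1_x = lambda z, x: x - z
grad_f2_x = lambda zb, x: x - zb
grad_log_ref = lambda z: -z

z = rng.normal(2.0, 1.0, size=500)      # particles approximating rho
zbar = rng.normal(-1.0, 0.5, size=500)  # fixed sample from the non-strategic rho_bar
x = 1.0                                 # algorithm parameter (mu is a Dirac at x)

for _ in range(2000):
    # transport along -grad of the first variation; for quadratic W the
    # interaction term (grad W * rho)(z_i) is z_i - mean(z)
    drift = -grad_f1_z(z, x) + alpha * grad_log_ref(z) - (z - z.mean())
    z = z + dt * drift + np.sqrt(2 * alpha * dt) * rng.normal(size=z.shape)
    # gradient step for x, i.e. the lifted equation for mu collapses to (3)
    x = x - dt * (grad_f1_x(z, x).mean() + grad_f2_x(zbar, x).mean() + beta * (x - x0))
```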
In other settings, such as credit score reporting, the objectives of the population are competitive with respect to the algorithm. Here we consider two scenarios; one, where the algorithm responds quickly relative to the population, and two, where the population responds quickly relative to the algorithm. In the case where the algorithm can immediately adjust optimally (best-respond) to the distribution, the dynamics are given by
\[\begin{split}\partial_{t}\rho&=-\mathrm{div}\left( \rho\left(\nabla_{z}\delta_{\rho}G_{c}[\rho,x]\right)\left|{}_{x=b(\rho)} \right)\right.,\\ b(\rho)&\coloneqq\operatorname*{argmin}_{\bar{x}}G _{c}(\rho,\bar{x})\,.\end{split} \tag{5}\]
Next we can consider the population immediately responding to the algorithm, which has dynamics
\[\begin{split}\frac{\mathrm{d}}{\mathrm{d}t}x&=- \nabla_{x}G_{c}(\rho,x)|_{\rho=r(x)}\,,\\ r(x)&\coloneqq\operatorname*{argmin}_{\hat{\rho}\in \mathcal{P}}-G_{c}(\hat{\rho},x)\,.\end{split} \tag{6}\]
In this time-scale separated setting, model (5) represents a dynamic maximization of \(G_{c}\) with respect to \(\rho\) in Wasserstein-2 space, and an instantaneous minimization of \(G_{c}\) with respect to the algorithm parameters \(x\). Model (6) represents an instantaneous maximization of \(G_{c}\) with respect to \(\rho\) and a dynamic minimization of \(G_{c}\) with respect to the algorithm parameters \(x\). The key results on existence and uniqueness of a ground state as well as the convergence behavior of solutions depend on convexity (concavity) of \(G_{a}\) and \(G_{c}\). The notion of convexity that we will employ for energy functionals in the Wasserstein-2 geometry is _(uniform) displacement convexity_, which is analogous to (strong) convexity in Euclidean spaces. One can think of displacement convexity for an energy functional defined on \(\mathcal{P}_{2}\) as convexity along the shortest path in the Wasserstein-2 metric (linear interpolation in the Wasserstein-2 space) between any two given probability distributions. For a detailed definition of (uniform) displacement convexity and concavity, see Section A.2. In fact, suitable convexity properties of the input functions \(f_{1},f_{2},W\) and \(\tilde{\rho}\) will ensure (uniform) displacement convexity of the resulting energy functionals appearing in the gradient flow structure, see for instance [23, Chapter 5.2].
We make the following assumptions in both the competitive case and aligned interest cases. Here, \(\mathrm{I}_{d}\) denotes the \(d\times d\) identity matrix, \(\mathrm{Hess}\left(f\right)\) denotes the Hessian of \(f\) in all variables, while \(\nabla_{x}^{2}f\) denotes the Hessian of \(f\) in the variable \(x\) only.
**Assumption 1** (Convexity of \(f_{1}\) and \(f_{2}\)).: _The functions \(f_{1},f_{2}\in C^{2}(\mathbb{R}^{d}\times\mathbb{R}^{d};[0,\infty))\) satisfy for all \((z,x)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\) the following:_
* _There exist constants_ \(\lambda_{1},\lambda_{2}\geq 0\) _such that_ \(\mathrm{Hess}\left(f_{1}\right)\succeq\lambda_{1}\mathrm{I}_{2d}\) _and_ \(\nabla_{x}^{2}f_{2}\succeq\lambda_{2}\mathrm{I}_{d}\)_;_
* _There exist constants_ \(a_{i}>0\) _such that_ \(x\cdot\nabla_{x}f_{i}(z,x)\geq-a_{i}\) _for_ \(i=1,2\)_;_
**Assumption 2** (Reference Distribution Shape).: _The reference distribution \(\tilde{\rho}\in\mathcal{P}(\mathbb{R}^{d})\cap L^{1}(\mathbb{R}^{d})\) satisfies \(\log\tilde{\rho}\in C^{2}(\mathbb{R}^{d})\) and \(\nabla_{z}^{2}\log\tilde{\rho}(z)\preceq-\tilde{\lambda}\mathrm{I}_{d}\) for some \(\tilde{\lambda}>0\)._
**Assumption 3** (Convex Interaction Kernel).: _The interaction kernel \(W\in C^{2}(\mathbb{R}^{d};[0,\infty))\) is convex, symmetric \(W(-z)=W(z)\), and for some \(D>0\) satisfies_
\[z\cdot\nabla_{z}W(z)\geq-D,\quad|\nabla_{z}W(z)|\leq D(1+|z|)\quad\forall\,z \in\mathbb{R}^{d}\,.\]
We make the following observations regarding the assumptions above:
* The convexity in Assumption 3 can be relaxed without affecting the results outlined below by following a more detailed analysis analogous to the approach in [13].
* If \(f_{1}\) and \(f_{2}\) are strongly convex, the provable convergence rate increases, but without strict or strong convexity of \(f_{1}\) and \(f_{2}\), the regularizers \(KL(\rho|\tilde{\rho})\) and \(\int\left\lVert{x-x_{0}}\right\rVert_{2}^{2}\mathrm{d}\mu(x)\) provide the convexity guarantees necessary for convergence.
For concreteness, one can consider the following classical choices of input functions to the evolution; a direct check of these claims is sketched after the list:
* Using the log-loss function for \(f_{1}\) and \(f_{2}\) satisfies Assumption 1.
* Taking the reference measure \(\tilde{\rho}\) to be the normal distribution satisfies Assumption 2, which ensures the distribution is not too flat.
* Taking quadratic interactions \(W(z)=\frac{1}{2}|z|^{2}\) satisfies Assumption 3.
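A direct check of these claims, assuming a Gaussian reference \(\tilde{\rho}=\mathcal{N}(m,\sigma^{2}\mathrm{I}_{d})\): since \(\log\tilde{\rho}(z)=-\frac{|z-m|^{2}}{2\sigma^{2}}+\mathrm{const}\),

\[\nabla_{z}^{2}\log\tilde{\rho}(z)=-\sigma^{-2}\mathrm{I}_{d}\preceq-\tilde{\lambda}\mathrm{I}_{d}\quad\text{with }\tilde{\lambda}=\sigma^{-2}\,,\]

while \(W(z)=\frac{1}{2}|z|^{2}\) gives \(z\cdot\nabla_{z}W(z)=|z|^{2}\geq 0\geq-D\) and \(|\nabla_{z}W(z)|=|z|\leq D(1+|z|)\) for any \(D\geq 1\); the log-loss computation is spelled out in the numerical examples below.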
**Remark 1** (Cauchy-Problem).: _To complete the arguments on convergence to equilibrium, we require sufficient regularity of solutions to the PDEs under consideration._
_In fact, it is sufficient if we can show that equations (4), (5), and (6) can be approximated by equations with smooth solutions. Albeit tedious, these are standard techniques in the regularity theory for partial differential equations, see for example [15, Proposition 2.1 and Appendix A], [14, Chapter 9], and the references therein. Similar arguments as in [14] are expected to apply to the coupled gradient flows considered here, guaranteeing existence of smooth solutions with fast enough decay at infinity, and we leave a detailed proof for future work._
### Analysis of Case (i): Aligned Objectives
The primary technical contribution of this setting consists of lifting the algorithm dynamics from an ODE to a PDE, which allows us to model the system as a joint gradient flow on the product space of probability measures. The coupling occurs in the potential function, rather than as cross-diffusion or non-local interaction as more commonly seen in the literature for multi-species systems.
**Theorem 2**.: _Suppose that Assumptions 1-3 are satisfied and let \(\lambda_{a}:=\lambda_{1}+\min(\lambda_{2}+\beta,\alpha\tilde{\lambda})>0\). Consider solutions \(\gamma_{t}:=(\rho_{t},\mu_{t})\) to the dynamics (4) with initial conditions satisfying \(\gamma_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathcal{P}_{2}(\mathbb{R}^ {d})\) and \(G_{a}(\gamma_{0})<\infty\). Then the following hold:_
* _There exists a unique minimizer_ \(\gamma_{\infty}=(\rho_{\infty},\mu_{\infty})\) _of_ \(G_{a}\)_, which is also a steady state for equation (_4_). Moreover,_ \(\rho_{\infty}\in L^{1}(\mathbb{R}^{d})\)_, has the same support as_ \(\tilde{\rho}\)_, and its density is continuous._
* _The solution_ \(\gamma_{t}\) _converges exponentially fast in_ \(G_{a}(\cdot\,|\,\gamma_{\infty})\) _and_ \(\overline{W}\)_,_ \[G_{a}(\gamma_{t}\,|\,\gamma_{\infty})\leq e^{-2\lambda_{a}t}G_{a}(\gamma_{0} \,|\,\gamma_{\infty})\quad\text{ and }\quad\overline{W}(\gamma_{t},\gamma_{ \infty})\leq ce^{-\lambda_{a}t}\quad\text{ for all }t\geq 0\,,\] _where_ \(c>0\) _is a constant only depending on_ \(\gamma_{0}\)_,_ \(\gamma_{\infty}\) _and the parameter_ \(\lambda_{a}\)_._
Proof.: (Sketch) For existence and uniqueness, we leverage classical techniques in the calculus of variations. To obtain convergence to equilibrium in energy, our key result is a new HWI-type inequality, providing as a consequence generalizations of the log-Sobolev inequality and the Talagrand inequality. Together, these inequalities relate the energy (classically denoted by \(H\) in the case of the Boltzmann entropy), the metric (classically denoted by \(W\) in the case of the Wasserstein-2 metric) and the energy dissipation (classically denoted by \(I\) in the case of the Fisher information)1. Combining these inequalities with Gronwall's inequality allows us to deduce convergence both in energy and in the metric \(\overline{W}\).
Footnote 1: Hence the name HWI inequalities.
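For orientation, the classical prototypes of these inequalities, for a \(\lambda\)-uniformly displacement convex energy \(E\) with minimizer \(\rho_{\infty}\) and dissipation \(I(\rho)=\int\rho|\nabla\delta_{\rho}E|^{2}\), read

\[E(\rho\,|\,\rho_{\infty})\leq W_{2}(\rho,\rho_{\infty})\sqrt{I(\rho)}-\frac{\lambda}{2}W_{2}^{2}(\rho,\rho_{\infty})\,,\qquad E(\rho\,|\,\rho_{\infty})\leq\frac{1}{2\lambda}I(\rho)\,,\qquad W_{2}(\rho,\rho_{\infty})\leq\sqrt{\frac{2}{\lambda}E(\rho\,|\,\rho_{\infty})}\,,\]

and along the flow \(\frac{\mathrm{d}}{\mathrm{d}t}E(\rho_{t}\,|\,\rho_{\infty})=-I(\rho_{t})\leq-2\lambda E(\rho_{t}\,|\,\rho_{\infty})\), so Gronwall's inequality yields exponential decay of the energy and, via the Talagrand inequality, of the metric; the proof of Theorem 2 follows this pattern with \(E=G_{a}\), \(\lambda=\lambda_{a}\), and \(W_{2}\) replaced by \(\overline{W}\).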
### Analysis of Case (ii): Competing Objectives
In this setting, we consider the case where the algorithm and the strategic population have goals in opposition to each other; specifically, the population benefits from being classified incorrectly. First, we will show that when the algorithm instantly best-responds to the population, then the distribution of the population converges exponentially in energy and in \(W_{2}\). Then we will show a similar result for the case where the population instantly best-responds to the algorithm.
In both cases, we begin by proving two Danskin-type results (see [10, 1]) which will be used for the main convergence theorem, including convexity (concavity) results. To this end, we make the following assumption ensuring that the regularizing component in the evolution of \(\rho\) is able to control the concavity introduced by \(f_{1}\) and \(f_{2}\).
**Assumption 4** (Upper bounds for \(f_{1}\) and \(f_{2}\)).: _There exists a constant \(\Lambda_{1}>0\) such that_
\[\nabla_{z}^{2}f_{1}(z,x)\preceq\Lambda_{1}I_{d}\qquad\text{ for all }(z,x)\in\mathbb{R}^{d}\times\mathbb{R}^{d}\,,\]
_and for any \(R>0\) there exists a constant \(c_{2}=c_{2}(R)\in\mathbb{R}\) such that_
\[\sup_{x\in B_{R}(0)}\int f_{2}(z,x)\mathrm{d}\tilde{\rho}(z)<c_{2}\,.\]
Equipped with Assumption 4, we state the result for a best-responding algorithm.
**Theorem 3**.: _Suppose Assumptions 1-4 are satisfied with \(\alpha\tilde{\lambda}>\Lambda_{1}\). Let \(\lambda_{b}\coloneqq\alpha\tilde{\lambda}-\Lambda_{1}\). Define \(G_{b}(\rho)\coloneqq G_{c}(\rho,b(\rho))\). Consider a solution \(\rho_{t}\) to the dynamics (5) with initial condition \(\rho_{0}\in\mathcal{P}_{2}(\mathbb{R}^{d})\) such that \(G_{b}(\rho_{0})<\infty\). Then the following hold:_
* _There exists a unique maximizer_ \(\rho_{\infty}\) _of_ \(G_{b}(\rho)\)_, which is also a steady state for equation (_5_). Moreover,_ \(\rho_{\infty}\in L^{1}(\mathbb{R}^{d})\)_, has the same support as_ \(\tilde{\rho}\)_, and its density is continuous._
* _The solution_ \(\rho_{t}\) _converges exponentially fast to_ \(\rho_{\infty}\) _with rate_ \(\lambda_{b}\) _in_ \(G_{b}(\cdot\,|\,\rho_{\infty})\) _and_ \(W_{2}\)_,_ \[G_{b}(\rho_{t}\,|\,\rho_{\infty})\leq e^{-2\lambda_{b}t}G_{b}(\rho_{0}\,|\, \rho_{\infty})\quad\text{ and }\quad W_{2}(\rho_{t},\rho_{\infty})\leq ce^{- \lambda_{b}t}\quad\text{ for all }t\geq 0\,,\] _where_ \(c>0\) _is a constant only depending on_ \(\rho_{0}\)_,_ \(\rho_{\infty}\) _and the parameter_ \(\lambda_{b}\)_._
Proof.: (Sketch) The key addition in this setting as compared with Theorem 2 is proving that \(G_{b}(\rho)\) is bounded below, uniformly displacement concave and guaranteeing its smoothness via Berge's Maximum Theorem. This is non-trivial as it uses the properties of the best response \(b(\rho)\). A central observation for our arguments to work is that \(\delta_{\rho}G_{b}[\rho]=(\delta_{\rho}G_{c}[\rho,x])\,|_{x=b(\rho)}\). We can then conclude using the direct method in the calculus of variations and the HWI method.
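The identity \(\delta_{\rho}G_{b}[\rho]=(\delta_{\rho}G_{c}[\rho,x])\,|_{x=b(\rho)}\) is an envelope-theorem (Danskin) computation: perturbing \(\rho\) along a curve \(\rho_{\varepsilon}\) and assuming \(b\) is differentiable along it,

\[\frac{\mathrm{d}}{\mathrm{d}\varepsilon}G_{b}(\rho_{\varepsilon})=\int\delta_{\rho}G_{c}[\rho_{\varepsilon},x]\big{|}_{x=b(\rho_{\varepsilon})}\,\partial_{\varepsilon}\rho_{\varepsilon}+\underbrace{\nabla_{x}G_{c}(\rho_{\varepsilon},x)\big{|}_{x=b(\rho_{\varepsilon})}}_{=0\ \text{by optimality of }b}\cdot\,\frac{\mathrm{d}}{\mathrm{d}\varepsilon}b(\rho_{\varepsilon})\,,\]

so the dependence of \(b\) on \(\rho\) drops out at first order.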
Here, the condition that \(\alpha\tilde{\lambda}\) must be large enough corresponds to the statement that the system must be subjected to a strong enough regularizing effect.
In the opposite case, where \(\rho\) instantly best-responds to the algorithm, we show Danskin-like results for derivatives through the best response function, together with convexity of the resulting energy in \(x\), which allows us to deduce convergence.
**Theorem 4**.: _Suppose Assumptions 1-4 are satisfied with \(\alpha\tilde{\lambda}>\Lambda_{1}\). Define \(G_{d}(x)\coloneqq G_{c}(r(x),x)\). Then it holds:_
* _There exists a unique minimizer_ \(x_{\infty}\) _of_ \(G_{d}(x)\) _which is also a steady state for (_6_)._
* _The vector_ \(x(t)\) _solving the dynamics (_6_) with initial condition_ \(x(0)\in\mathbb{R}^{d}\) _converges exponentially fast to_ \(x_{\infty}\) _with rate_ \(\lambda_{d}:=\lambda_{1}+\lambda_{2}+\beta>0\) _in_ \(G_{d}\) _and in the Euclidean norm:_ \[\|x(t)-x_{\infty}\| \leq e^{-\lambda_{d}t}\|x(0)-x_{\infty}\|\,,\] \[G_{d}(x(t))-G_{d}(x_{\infty}) \leq e^{-2\lambda_{d}t}\left(G_{d}(x(0))-G_{d}(x_{\infty})\right)\] _for all_ \(t\geq 0\)_._
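The Euclidean rate is the standard strong-convexity argument: since (by the Danskin-type results) the dynamics reduce to \(\frac{\mathrm{d}}{\mathrm{d}t}x=-\nabla G_{d}(x)\) with \(G_{d}\) \(\lambda_{d}\)-strongly convex and \(\nabla G_{d}(x_{\infty})=0\),

\[\frac{\mathrm{d}}{\mathrm{d}t}\frac{1}{2}\|x-x_{\infty}\|^{2}=-\big{\langle}\nabla G_{d}(x)-\nabla G_{d}(x_{\infty}),\,x-x_{\infty}\big{\rangle}\leq-\lambda_{d}\|x-x_{\infty}\|^{2}\,,\]

and Gronwall's inequality gives the first bound; the energy bound follows similarly from \(\frac{\mathrm{d}}{\mathrm{d}t}(G_{d}(x)-G_{d}(x_{\infty}))=-\|\nabla G_{d}(x)\|^{2}\leq-2\lambda_{d}(G_{d}(x)-G_{d}(x_{\infty}))\), using the Polyak-Lojasiewicz consequence of strong convexity.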
**Remark 2**.: _In the proof, we use that the best response of \(\rho\) given a particular \(x\) is differentiable with respect to \(x\). This can be ensured by the condition outlined in Lemma 27. In Lemma 28, we provide examples of additional assumptions guaranteeing that this condition holds, making sure the best response function is in fact differentiable. Another approach is to show suitable bounds on the second derivative of \(G_{d}(x)\) following arguments in [11]. A more detailed analysis of this condition is an interesting direction for future research._
These two theorems illustrate that, under sufficient convexity conditions on the cost functions, we expect the distribution \(\rho\) and the algorithm \(x\) to converge to a steady state. In practice, when the distributions are close enough to the steady state there is no need to retrain the algorithm.
While we have proven results for the extreme timescale cases, we anticipate convergence to the same equilibrium in the intermediate cases. Indeed, it is well known [1] (especially for systems in Euclidean space) that two-timescale stochastic approximations of dynamical systems, with appropriate stepsize choices, converge asymptotically, and that finite-time high probability concentration bounds can also be obtained. These results have been leveraged in strategic classification [16] and Stackelberg games [13, 14, 15]. We leave this intricate analysis to future work.
In the following section we show numerical results in the case of a best-responding \(x\), best-responding \(\rho\), and in between where \(x\) and \(\rho\) evolve on a similar timescale. Note that in these settings, the dynamics do not have a gradient flow structure due to a sign difference in the energies, requiring conditions to ensure that one species does not dominate the other.
## 4 Numerical Examples
We illustrate numerical results for the case of a classifier, which is used in scenarios such as loan or government aid applications [11], school admissions [23], residency match [10], and recommendation algorithms [15], all of which have some population which is incentivized to submit data that will result in a desirable classification. For all examples, we select classifiers of the form \(x\in\mathbb{R}\), so that a data point \(z\in\mathbb{R}\) is assigned a label of \(1\) with probability \(q(z,x)=(1+\exp\left(-b^{\top}z+x\right))^{-1}\) where \(b>0\). Let \(f_{1}\) and \(f_{2}\) be given by
\[f_{1}(z,x)=-\log(1-q(z,x))\,,\qquad f_{2}(z,x)=-\log q(z,x).\]
Note that \(\operatorname{Hess}\left(f_{1}\right)\succeq 0\) and \(\nabla_{x}^{2}f_{2}\succeq 0\), so \(\lambda_{1}=\lambda_{2}=0\). Here, the strictness of the convexity of the functional comes from the regularizers, not the cost functions, with \(\tilde{\rho}\) a scaled normal distribution. We show numerical results for two scenarios, with additional settings in the appendix. First we illustrate competitive interests under three different timescale settings. Then we simulate the classifier taking an even more naive strategy than gradient descent and discuss the results. The PDEs were implemented based on the finite volume method from [11].
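Both costs are softplus functions of affine arguments, which makes these Hessian claims transparent:

\[f_{1}(z,x)=\log\!\big{(}1+e^{b^{\top}z-x}\big{)}\,,\qquad f_{2}(z,x)=\log\!\big{(}1+e^{-b^{\top}z+x}\big{)}\,,\]

so with \(s(t)=\log(1+e^{t})\) and \(s''=\sigma(1-\sigma)\in(0,1/4]\) for the sigmoid \(\sigma\), we get \(\operatorname{Hess}(f_{1})=s''(b^{\top}z-x)\,vv^{\top}\succeq 0\) for \(v=(b,-1)\) and \(\nabla_{x}^{2}f_{2}=s''(-b^{\top}z+x)>0\); since \(s''\) vanishes at infinity, no uniform \(\lambda_{1},\lambda_{2}>0\) exist.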
### Competitive Objectives
In the setting with competitive objectives, we utilize \(G_{c}(\rho,x)\) with \(W=0\), \(f_{1}\) and \(f_{2}\) as defined above with \(b=3\) fixed as it only changes the steepness of the classifier for \(d=1\), and \(\alpha=0.1\) and \(\beta=0.05\). In Figure 1, we simulate two extremes of the timescale setting; first when \(\rho\) is nearly best-responding and then when \(x\) is best-responding. The simulations have the same initial conditions and end with the same distribution shape; however, the behavior of the strategic population differs in the intermediate stages.
When \(\rho\) is nearly best-responding, we see that the distribution quickly shifts mass over the classifier threshold. Then the classifier shifts right, correcting for the shift in \(\rho\), which then incentivizes \(\rho\) to shift more mass back to the original mode. In contrast, when \(x\) best-responds, the right-hand mode slowly increases in size until the system converges.
Figure 2 shows simulation results from the setting where \(\rho\) and \(x\) evolve on the same timescale. We observe that the distribution shift in \(\rho\) appears to fall between the two extreme timescale cases, as we expect. We highlight two important observations for the competitive case. One, a single-mode distribution becomes bimodal, which would not be captured using simplistic metrics such as the mean and variance. This split can be seen as polarization in the population, a phenomenon that a mean-based strategic classification model would not capture. Two, the timescale on which the classifier updates significantly impacts the intermediate behavior of the distribution. In our example, when \(x\) updated slowly relative to the strategic population, the shifts in the population were greater than in the other two cases. This suggests that understanding the effects of timescale separation is important for minimizing volatility of the coupled dynamics.
Figure 1: When \(x\) versus \(\rho\) best-responds, we observe the same final state but different intermediate states. Modes appear in the strategic population which simpler models cannot capture.
### Naive Behavior
In this example, we explore the results of the classifier adopting a non-gradient-flow strategy, where the classifier chooses an initially-suboptimal value for \(x\) and does not move, allowing the strategic population to respond.
All functions and parameters are the same as in the previous example. When comparing with the gradient descent strategy, we observe that while the initial loss for the classifier is worse for the naive strategy, the final cost is better. While this result is not surprising, since one can view this as a general-sum game where the best response to a fixed decision may be better than the equilibrium, it illustrates how our method provides a framework for evaluating how different training strategies perform in the long run against a strategic population.
## 5 Future Directions and Limitations
Our work presents a method for evaluating the robustness of an algorithm to a strategic population, and investigating other notions of robustness using our techniques opens a range of future research directions. Our application suggests many questions relevant to the PDE literature, such as: (1) Does convergence still hold with the gradient replaced by an estimated gradient? (2) Can we prove convergence in between the two timescale extremes? (3) How do multiple dynamic populations respond to an algorithm, or multiple algorithms? In the realm of learning algorithms, our framework can be extended to other learning update strategies and presents a way to model how we can design these update strategies to induce desired behaviors in the population.
A challenge in our method is that numerically solving high-dimensional PDEs is computationally expensive and possibly infeasible. Here we note that in many applications, agents in the population do not alter more than a few features due to the cost of manipulation. We are encouraged by the recent progress using deep learning to solve PDEs, which could be used in our application.
Figure 3: Although the classifier starts with a larger cost by taking the naive strategy, the final loss is better. This illustrates how our model can be used to compare robustness of different strategies against a strategic population.
Figure 2: In this experiment the population and classifier have similar rates of change, and the distribution change for \(\rho\) exhibits behaviors from both the fast \(\rho\) and fast \(x\) simulations; the right-hand mode does not peak as high as in the fast \(\rho\) case, but it does exceed its final height before returning to the equilibrium.
## Acknowledgments and Disclosure of Funding
LC is supported by an NDSEG fellowship from the Air Force Office of Scientific Research. FH is supported by start-up funds at the California Institute of Technology. LR is supported by ONR YIP N00014-20-1-2571 P00003 and NSF Awards CAREER 1844729 and CPS 1931718. EM acknowledges support from NSF Award 2240110. We are grateful for helpful discussions with Jose A. Carrillo.
|
2307.06609 | Well-posedness of regular solutions for 3-D full compressible
Navier-Stokes equations with degenerate viscosities and heat conductivity | For the degenerate viscous and heat conductive compressible fluids, the
momentum equations and the energy equation are degenerate both in the time
evolution and spatial dissipation when vacuum appears, and then the physical
entropy S behaves singularly, which make it challenging to study the
corresponding well-posedness of regular solutions with high order regularities
of S near the vacuum. In this paper, for the physically important case that the
coefficients of viscosities and heat conductivity depend on the absolute
temperature \theta in a power law of Chapman-Enskog, we identify a class of
initial data admitting a local-in-time regular solution with far field vacuum
to the Cauchy problem of the 3-D full CNS, and such a solution possesses the
uniformly high order regularities for S near the vacuum. The key idea here is
to study the vacuum problem in terms of the mass density \rho, velocity u and S
instead of (\rho, u,\theta), which makes it possible to compare the orders of
the degeneracy of the time evolution and the spatial dissipations near the
vacuum in terms of the powers of \rho. However, for heat conductive fluids,
both a degenerate spatial dissipation and a source term related to \triangle
\rho^{\gamma-1}, will appear in the time evolution equation for S, which makes
it formidable to study the propagation of regularities of S. Fortunately, based
on some elaborate analysis of the intrinsic degenerate-singular structures of
the 3-D full CNS, we can choose proper weights to control the behaviors of
(\rho, u,S) by introducing an enlarged reformulated system, which includes a
singular parabolic system for u, and one degenerate-singular parabolic equation
for S. Then one can carry out a series of weighted energy estimates carefully
designed for this reformulated system, which provides an effective propagation
mechanism for S's high order regularities near the vacuum. | Qin Duan, Zhouping Xin, Shengguo Zhu | 2023-07-13T08:14:48Z | http://arxiv.org/abs/2307.06609v2 | Well-posedness of the three-dimensional heat conductive compressible Navier-Stokes equations with degenerate viscosities and far field vacuum
###### Abstract.
For the degenerate viscous and heat conductive compressible fluids, the momentum equations and the energy equation are degenerate both in the time evolution and spatial dissipation structures when vacuum appears, and then the physical entropy \(S\) behaves singularly, which make it challenging to study the corresponding well-posedness of regular solutions with high order regularities of \(S\) near the vacuum. In this paper, when the coefficients of viscosities and heat conductivity depend on the absolute temperature \(\theta\) in a power law (\(\theta^{\nu}\) with \(\nu>0\)) of Chapman-Enskog, by some elaborate analysis of the intrinsic degenerate-singular structures of the full compressible Navier-Stokes equations (**CNS**), we identify a class of initial data admitting a local-in-time regular solution with far field vacuum to the Cauchy problem of the three-dimensional (3-D) **CNS** in terms of the mass density \(\rho\), velocity \(u\) and \(S\). Furthermore, it is shown that within its life span of such a regular solution, \(u\) stays in an inhomogeneous Sobolev space, i.e., \(u\in H^{3}(\mathbb{R}^{3})\), \(S\) has uniformly finite lower and upper bounds in \(\mathbb{R}^{3}\), and the laws of conservation of total mass, momentum and total energy are all satisfied. The key idea for proving the existence is to introduce an enlarged system by considering some new variables, which includes a singular parabolic system for \(u\), and one degenerate-singular parabolic equation for \(S\). It is worth pointing out that this reformulation can transfer part of the degeneracies of the full **CNS** to some singular source terms, and then one can carry out a series of singular or degenerate weighted energy estimates carefully designed for this reformulated system, which provides successfully an effective propagation mechanism for \(S^{\prime}s\) high order regularities along with the time.
Key words and phrases:Compressible Navier-Stokes equations, three-dimensions, degenerate viscosities and heat conductivity, far field vacuum, well-posedness, asymptotic behavior 2010 Mathematics Subject Classification: 35Q30, 35A09, 35A01, 35B44, 35B40, 76N10
###### Contents
* 1 Introduction
* 2 Reformulation and main strategy
* 2.1 A reformulation
* 2.2 Main strategy
* 3 Local-in-time well-posedness with far field vacuum
* 3.1 Linearization away from the vacuum with artificial dissipations
* 3.2 Uniform a priori estimates
* 3.3 Vanishing of the artificial dissipations
* 3.4 Nonlinear approximation solutions away from vacuum
* 3.5 Limit to the flow with far field vacuum
where \((\alpha,\beta,\bar{\kappa},\nu)\) are all constants satisfying
\[\alpha>0,\quad 2\alpha+3\beta\geq 0,\quad\bar{\kappa}>0\quad\text{and}\quad 0<\delta=( \gamma-1)\nu<1. \tag{1.7}\]
In terms of \((\rho,u,S)\), it follows from (1.2)-(1.3) and (1.6) that (1.1) can be rewritten as the following system which does not explicitly contain negative powers of \(\rho\):
\[\begin{cases}&\rho_{t}+\text{div}(\rho u)=0,\\ &\underbrace{\rho(u_{t}+u\cdot\nabla u)}_{\Delta}+\nabla P=\underbrace{A^{\nu} R^{-\nu}\text{div}(\rho^{\delta}e^{\frac{S}{c_{v}}\nu}Q(u))}_{\Diamond},\\ &\underbrace{P\big{(}S_{t}+u\cdot\nabla S\big{)}}_{\Delta}-\underbrace{\digamma \rho^{\delta+\gamma-1}e^{\frac{S}{c_{v}}\nu}\triangle e^{\frac{S}{c_{v}}}}_{\Diamond}\\ =&A^{\nu}R^{1-\nu}\rho^{\delta}e^{\frac{S}{c_{v}}\nu}H(u)+\underbrace{ \digamma\rho^{\delta}e^{\frac{S}{c_{v}}(\nu+1)}\triangle\rho^{\gamma-1}}_{ \star}+\Lambda(\rho,S),\end{cases} \tag{1.8}\]
where \(\digamma>0\) is a constant determined by \(\bar{\kappa}\), \(A\), \(R\) and \(\nu\).
One of the main issues is to understand the dynamics of \(\left(u,\theta,S\right)\) near the vacuum. Note that near the vacuum, in the sense that the corresponding equations do not explicitly contain negative powers of \(\rho\), the equation \((1.1)_{3}\) for \(\theta\) is degenerate only in the time evolution, while the equation \((1.8)_{3}\) for \(S\) is degenerate both in the time evolution and spatial dissipation operators even for the case \(\nu=0\), which makes the behavior of \(S\) more singular than that of \(\theta\), and the study of the regularities of \(S\) challenging. Thus, most of the well-posedness theories on the full **CNS** with vacuum states developed in the existing literature are established without regard to \(S\). It is worth pointing out that, in the presence of vacuum, the full **CNS** formulated in terms of \(\left(\rho,u,\theta\right)\) is not equivalent to the one formulated in terms of \(\left(\rho,u,S\right)\), since the boundedness and regularities of \(\left(\rho,\theta\right)\) do not yield any information for \(S\) near the vacuum. Actually, for general initial data containing vacuum, the local well-posedness of strong solutions to the Cauchy problem of the 3-D full **CNS** was obtained by Cho-Kim [9] in terms of \(\left(\rho,u,\theta\right)\), and the corresponding global well-posedness theories with small total energy have been established by Huang-Li [21] with non-vanishing \(\left(\rho,\theta\right)\) at far fields, and Wen-Zhu [50] with vanishing \(\left(\rho,\theta\right)\) at far fields by extending the corresponding studies on the isentropic case by Huang-Li-Xin [22]. It should be noticed that the solutions obtained in [9, 21, 50] are in some homogeneous space, that is, \(\sqrt{\rho}u\) rather than \(u\) itself has the \(L^{\infty}([0,T];L^{2})\) regularity. In fact, one cannot expect that the strong solutions to the full **CNS** lie in the inhomogeneous Sobolev spaces if the initial density has compact support or even decays to zero in the far field rapidly, see Li-Wang-Xin [30] and Li-Xin [33]. Moreover, it follows from Xin-Yan [52] that the global solutions in [21, 50] must have unbounded \(S\) if initially there is an isolated mass group surrounded by the vacuum region. However, when the initial density vanishes only at far fields with a slow decay rate, recently in Li-Xin [31, 32], for the Cauchy problem of the full **CNS**, it is shown that the uniform boundedness of \(S\) and the \(L^{2}\) regularity of \(u\) can be propagated within the solution's life span. For specific pressure laws excluding (1.2), the global existence of so-called "variational" solutions with vacuum in dimension \(d\geq 2\) has been established by Feireisl in [13, 14] (see also Poul [43] for the **CNS**-Poisson system), where \(\theta\) satisfies an inequality. We also refer the readers to [10, 11, 15, 19, 20, 25, 27, 38, 40, 51] and the references therein for some related progress.
In contrast to the fruitful development in the classical case \(\nu=0\) in (1.6), the corresponding progress for the degenerate case \(\nu>0\) in (1.6) is very limited due to the strong degeneracy and nonlinearity both in viscosity and heat conductivity besides the degeneracy in the time evolution near the vacuum. Recently, the degenerate isentropic **CNS** (**DICNS**), i.e., \((1.1)_{1}\)-\((1.1)_{2}\) with \(S(t,x)\) being constant and \((1.1)_{3}\) ignored, has received extensive attention, in which the viscosity vanishes at vacuum:
\[\underbrace{\rho(u_{t}+u\cdot\nabla u)}_{\Delta}+\nabla P=\underbrace{\text{ div}(\rho^{\delta}Q(u))}_{\Diamond}. \tag{1.13}\]
By making use of the B-D entropy in [2, 3], some significant achievements on weak solutions with vacuum for the **DICNS** and related models have been obtained, cf. [1, 4, 5, 18, 34, 41, 49]. On the other hand, only a few results are available for strong (or smooth) solutions with finite energy. In particular, for the case \(0<\delta<1\), by introducing an elaborate elliptic approach on the singularly weighted regularity estimates for \(u\) and a symmetric hyperbolic system with singularities for some
quantities involving \(\rho\) and its derivatives, Xin-Zhu [54] identifies a class of initial data admitting one unique 3-D local regular solution with far field vacuum to the Cauchy problem of \((1.1)_{1}\) and (1.13) in some inhomogeneous Sobolev spaces, which has been extended to global-in-time solutions with large data in \(\mathbb{R}\) by Cao-Li-Zhu [6]. The related progress for the cases \(\delta\geq 1\) on smooth solutions with vacuum can also be found in [16, 36, 37, 53]. Since the coefficients of the time evolution and \(Q(u)\) are powers of \(\rho\), it is easy to compare the order of the degeneracy of these two operators near the vacuum, which enables one to select the dominant operator to control the behavior of \(u\), and leads to the "hyperbolic-strong singular elliptic" structure in [6, 54] and the "quasi-symmetric hyperbolic"-"degenerate elliptic" structure in [16, 37, 53]. Some other related progress can also be found in [8, 17, 29, 39] and the references therein.
Since \(e\), \(\theta\) and \(S\) are all fundamental states for viscous compressible fluids, it is of great importance to study their dynamics for the full **CNS**, which is a subtle and difficult problem in the presence of vacuum. Indeed, in the studies of the well-posedness of classical solutions with vacuum to the full **CNS** (1.1)-(1.3) with degenerate viscosities and heat conductivity of the form (1.4)-(1.5), the structures of the coefficients for the time evolution and the spatial dissipation operators are different, and \(S\) plays important roles but behaves more singularly than \(\theta\) near the vacuum, which causes substantial difficulties in the analysis and makes it difficult to adapt the approaches for the isentropic case in [6, 16, 36, 37, 54]. It should be pointed out that, due to the physical requirements on \(\theta\) and \(S\) near the vacuum, it is more advantageous to formulate the **CNS** (1.1)-(1.3) in terms of \((\rho,u,S)\) instead of \((\rho,u,\theta)\), in contrast to [9, 21, 50], as illustrated below. Since
\[\theta=AR^{-1}\rho^{\gamma-1}e^{S/c_{v}}, \tag{1.14}\]
for \(\rho>0\), one may rewrite \((1.1)_{2}\)-\((1.1)_{3}\) as \((1.8)_{2}\)-\((1.8)_{3}\), which do not contain explicitly negative powers of \(\rho\). Thus, if \(S\) has uniform boundedness and high enough regularities, then it is still possible to compare the orders of the degeneracy of the time evolution and the spatial dissipation operators near the vacuum by the powers of \(\rho\), and then to choose proper weights to control the behaviors of the physical quantities. However, no matter for the case \(\nu=0\) or the case \(\nu>0\), due to the degeneracy in both the time evolution and the spatial dissipation in \((1.8)_{3}\), the physical entropy for polytropic gases behaves singularly near the vacuum, and it is thus a challenge to study its regularities. Indeed, even for the case of constant viscosities and heat conductivity, i.e., \(\nu=0\) in (1.6), only the boundedness of \(S\) has been achieved in Li-Xin [31, 32], and yet the higher regularities of \(S\) near the vacuum have not been established in the existing literature. Furthermore, it seems difficult to adapt the approach for \(\nu=0\) in [31, 32] to the case \(\nu>0\) due to the stronger degeneracy in spatial dissipations. Recently, for the case \(\nu>0\) and \(\bar{\kappa}=0\) in \((1.1)_{3}\), we have shown in [12] that the following equation:
\[S_{t}+u\cdot\nabla S=A^{\nu-1}R^{1-\nu}\rho^{\delta-\gamma}e^{\frac{S}{c_{v}}( \nu-1)}H(u) \tag{1.15}\]
can provide an effective propagation mechanism for regularities of \(S\) in \(D^{1}_{*}\cap D^{3}\) in short time, and the corresponding analysis depends essentially on the transport structure of (1.15). However, for the case \(\nu>0\) and \(\bar{\kappa}>0\), the emergence of the degenerate dissipation term \(\digamma\rho^{\delta+\gamma-1}e^{\frac{S}{c_{v}}\nu}\triangle e^{\frac{S}{c_{ v}}}\) and the source term \(\digamma\rho^{\delta}e^{\frac{S}{c_{v}}(\nu+1)}\triangle\rho^{\gamma-1}\)
makes the propagation of the regularities of \(S\) very subtle, which implies that some of the key arguments used in [12] for the case \(\bar{\kappa}=0\) fail here and leads to some essential difficulties in establishing high order regularities of \(S\) near the vacuum.
In order to overcome these difficulties, under the assumptions (1.6)-(1.7), we reformulate the equations (1.8)\({}_{2}\)-(1.8)\({}_{3}\) as
\[\left\{\begin{aligned} & u_{t}+u\cdot\nabla u+\frac{A\gamma}{ \gamma-1}e^{\frac{S}{c_{v}}}\nabla\rho^{\gamma-1}+A\rho^{\gamma-1}\nabla e^{ \frac{S}{c_{v}}}+\underbrace{A^{\nu}R^{-\nu}\rho^{\delta-1}e^{\frac{S}{c_{v}} \nu}Lu}_{\Box}\\ =& A^{\nu}R^{-\nu}\frac{\delta}{\delta-1}\nabla\rho^ {\delta-1}\cdot Q(u)e^{\frac{S}{c_{v}}\nu}+\underbrace{A^{\nu}R^{-\nu}\rho^{ \delta-1}\nabla e^{\frac{S}{c_{v}}\nu}\cdot Q(u)}_{\sim}\\ &\quad\quad\underbrace{\rho^{\frac{1-\delta}{2}}(S_{t}+u\cdot \nabla S)}_{\triangle}-\underbrace{\digamma A^{-1}\rho^{\frac{\delta-1}{2}}e^ {\frac{S}{c_{v}}(\nu-1)}\triangle e^{\frac{S}{c_{v}}}}_{\Box}\\ =& A^{\nu-1}R^{1-\nu}\rho^{\frac{1+\delta-2\gamma}{ 2}}e^{\frac{S}{c_{v}}(\nu-1)}H(u)\\ &+\underbrace{\digamma A^{-1}\rho^{\frac{1+\delta-2\gamma}{2}}e^ {\frac{S}{c_{v}}\nu}\triangle\rho^{\gamma-1}}_{\sim}+A^{-1}\rho^{\frac{1- \delta-2\gamma}{2}}e^{-\frac{S}{c_{v}}}\Lambda(\rho,S),\end{aligned}\right. \tag{1.16}\]
where \(\Box\) denotes the singular dissipation, \(\backsim\) the strong singular source term, and \(L\) the Lame operator defined by
\[Lu\triangleq-\alpha\triangle u-(\alpha+\beta)\nabla\mathtt{div}u.\]
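As a consistency check on the weights appearing above, note that the pressure terms in \((1.16)_{1}\) are exactly \(\rho^{-1}\nabla P\) for \(P=A\rho^{\gamma}e^{\frac{S}{c_{v}}}\); multiplying \((1.8)_{3}\) by \(\rho^{\frac{1-\delta}{2}}P^{-1}\) likewise turns the viscous heating term into

\[A^{\nu}R^{1-\nu}\rho^{\delta}e^{\frac{S}{c_{v}}\nu}H(u)\cdot\frac{\rho^{\frac{1-\delta}{2}}}{A\rho^{\gamma}e^{\frac{S}{c_{v}}}}=A^{\nu-1}R^{1-\nu}\rho^{\frac{1+\delta-2\gamma}{2}}e^{\frac{S}{c_{v}}(\nu-1)}H(u)\,,\]

since \(\delta+\frac{1-\delta}{2}-\gamma=\frac{1+\delta-2\gamma}{2}\); the other weights in \((1.16)_{2}\) follow from the same bookkeeping.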
It should be noted that the key to (1.16) is the choice of the degenerate weight \(\rho^{\frac{1-\delta}{2}}\) in front of \(S_{t}+u\cdot\nabla S\) in the equation for the entropy, which is inspired by the competition of different terms in the system (1.8) for weights in singular weighted energy estimates. In fact, if \(\rho\) decays to zero in the far field, the coefficient \(\rho^{\frac{\delta-1}{2}}\) of the elliptic operator in \((1.16)_{2}\) will tend to \(\infty\) as \(\rho\to 0\), which makes it highly non-trivial to show that \(\rho^{\frac{\delta-1}{2}}e^{\frac{S}{c_{v}}(\nu-1)}\triangle e^{\frac{S}{c_{v}}}\) is well defined in some Sobolev space. Moreover, how to utilize the smoothing effect of this singular elliptic operator when \(\rho\) loses its strictly positive lower bound is also a tricky issue.
Last but more importantly, the time evolution equation \((1.16)_{2}\) for \(S\) also contains some strong singularities such as: \(\rho^{\frac{1+\delta-2\gamma}{2}}e^{\frac{S}{c_{v}}(\nu-1)}H(u),\quad\rho^{ \frac{1-\delta-2\gamma}{2}}e^{-\frac{S}{c_{v}}}\Lambda(\rho,S)\quad\text{and} \quad\rho^{\frac{1+\delta-2\gamma}{2}}e^{\frac{S}{c_{v}}\nu}\triangle\rho^{\gamma-1}.\) It is worth pointing out that the appearance of \(\rho^{\frac{1+\delta-2\gamma}{2}}e^{\frac{S}{c_{v}}\nu}\triangle\rho^{\gamma-1}\) makes it difficult to show \(S\in D^{4}\) for \(t>0\). In fact, it follows from \((1.16)_{2}\) and the regularity theory of elliptic equations that \(|S|_{D^{4}}\) can be controlled by \(|\rho^{\frac{1+\delta-2\gamma}{2}}e^{\frac{S}{c_{v}}\nu}\triangle\rho^{\gamma -1}|_{D^{2}}\), which seems impossible in the current \(H^{3}\) framework. Such singularities thus become some of the main obstacles to obtaining the uniform boundedness and high order regularities of \(S\), and their analysis becomes extremely crucial. The dissipative term \(\rho^{\frac{\delta-1}{2}}e^{\frac{S}{c_{v}}(\nu-1)}\triangle e^{\frac{S}{c_{v}}}\) does not provide a substantial contribution to the treatment of \(\rho^{\frac{1+\delta-2\gamma}{2}}e^{\frac{S}{c_{v}}\nu}\triangle\rho^{\gamma-1}\), so we need to conduct a very detailed analysis of some quantities related to the derivatives of \(\rho\).
Therefore, the following quantities will play significant roles in our analysis:
\[(\rho^{\gamma-1},\ \nabla\rho^{\delta-1},\ \rho^{\delta-1}Lu,\ e^{\frac{S}{c_{ v}}},\ \rho^{\frac{\delta-1}{2}}\triangle e^{\frac{S}{c_{v}}}).\]
Due to this observation, we first introduce a proper class of solutions called regular solutions to the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11) as follows.
**Definition 1.1**.: _Let \(T>0\) be a finite constant. The triple \((\rho,u,S)\) is called a regular solution to the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11) in \([0,T]\times\mathbb{R}^{3}\) if \((\rho,u,S)\) solves this problem in the sense of distributions and\(:\)_
1. \(\rho>0,\ \rho^{\gamma-1}\in C([0,T];D^{1}_{*}\cap D^{3}),\ \nabla\rho^{\delta-1}\in C([0,T];L^{ \infty}\cap D^{1,3}\cap D^{2})\)_;_ \(\nabla\rho^{\frac{3(\delta-1)}{4}}\in C([0,T];D^{1}_{*}),\ \nabla\rho^{\frac{3(\delta-1)}{8}}\in C([0,T];L^{4})\)_;_
2. \(u\in C([0,T];H^{3})\cap L^{2}([0,T];D^{4}),\quad\rho^{\frac{\delta-1}{2}} \nabla u\in C([0,T];L^{2}),\)__ \(\rho^{\delta-1}\nabla^{2}u\in L^{\infty}([0,T];H^{1})\cap L^{2}([0,T];D^{2})\)_;_
3. \(S-\bar{S}\in C([0,T];D^{1}_{*}\cap D^{3}),\quad e^{\frac{S}{c_{v}}}-e^{\frac{ \bar{S}}{c_{v}}}\in C([0,T];D^{1}_{*}\cap D^{3})\)_,_ \(\rho^{\frac{\delta-1}{4}}\nabla e^{\frac{S}{c_{v}}}\in L^{\infty}([0,T];L^{2})\)_,_ \(\rho^{\frac{\delta-1}{2}}\nabla^{2}e^{\frac{S}{c_{v}}}\in L^{\infty}([0,T];H^{ 1})\)_,_ \(\rho^{\delta+\gamma-1}e^{\frac{S}{c_{v}}(\nu+1)}\in L^{2}([0,T];D^{1}\cap D^{4})\)_._
**Remark 1.1**.: _First, it follows from Definition 1.1 that \(\nabla\rho^{\delta-1}\in L^{\infty}\), which implies that the vacuum can occur only in the far field._
_Second, denote by \(m(t)\), \(\mathbb{P}(t)\), \(E_{k}(t)\), \(E_{p}(t)\), and \(E(t)=E_{k}(t)+E_{p}(t)\) the total mass, momentum, total kinetic energy, the potential energy, and the total energy respectively. It then can be checked easily (see Lemma 3.17) that regular solutions defined in Definition 1.1 satisfy the conservation of \(m(t)\), \(\mathbb{P}(t)\) and \(E(t)\), which is not clear for strong solutions in the case of constant viscosities and heat conductivity obtained in \([9,21,50]\), cf. [12, 44, 54]. In this sense, the definition of regular solutions above is consistent with the physical background of the **CNS**._
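For instance, the conservation of the momentum \(\mathbb{P}(t)\) follows formally from the conservative form of \((1.8)_{1}\)-\((1.8)_{2}\), assuming the decay encoded in Definition 1.1 justifies the integration by parts:

\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{P}(t)=\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{3}}\rho u\,\mathrm{d}x=-\int_{\mathbb{R}^{3}}\operatorname{div}\Big{(}\rho u\otimes u+P\,\mathbb{I}_{3}-A^{\nu}R^{-\nu}\rho^{\delta}e^{\frac{S}{c_{v}}\nu}Q(u)\Big{)}\,\mathrm{d}x=0\,,\]

and \(\frac{d}{dt}m(t)=0\) follows in the same way from \((1.8)_{1}\).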
The regular solutions select \((\rho,u,S)\) in a physically reasonable way when far field vacuum appears. Then finding a regular solution to (1.8) will be further reformulated into solving an enlarged system consisting of (up to leading order): a transport equation for \(\rho^{\gamma-1}\), a singular parabolic system for \(u\), a degenerate-singular parabolic equation for \(e^{\frac{S}{c_{v}}}\), and a symmetric hyperbolic system for \(\nabla\rho^{\delta-1}\), which makes the original problem tractable. The first main result in this paper can be stated as follows.
**Theorem 1.1**.: _Let the parameters \((\gamma,\delta=\nu(\gamma-1),\alpha,\beta,\bar{\kappa})\) satisfy (1.17), and assume that the initial data \((\rho_{0},u_{0},S_{0})\) satisfy the conditions (1.18) and the compatibility conditions (1.19). Then there exist a time \(T_{*}>0\) and a unique regular solution \((\rho,u,S)\), in the sense of Definition 1.1, in \([0,T_{*}]\times\mathbb{R}^{3}\) to the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11)._
**Remark 1.2**.: _(1.18)-(1.19) identify a class of admissible initial data that provide unique solvability to (1.8) with (1.2) and (1.9)-(1.11). Such initial data include_
\[\rho_{0}(x)=\frac{1}{(1+|x|^{2})^{\varkappa}},\quad u_{0}(x)\in C_{0}^{3}( \mathbb{R}^{3}),\quad S_{0}=\bar{S}+f(x), \tag{1.21}\]
_for any \(f(x)\in D_{*}^{1}\cap D^{3}\), where_
\[\frac{1}{4(\gamma-1)}<\varkappa<\min\left\{\frac{1-3/q}{2(1-\delta)},\frac{1}{ 3(1-\delta)}\right\}\quad\text{and}\quad\frac{7}{4}+\frac{\delta}{4}<\gamma+ \delta\leq 2. \tag{1.22}\]
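The range for \(\varkappa\) in (1.22) can be checked heuristically from the decay rates of (1.21) at infinity. Since \(\nabla\rho_{0}^{\gamma-1}\sim|x|^{-2\varkappa(\gamma-1)-1}\) as \(|x|\to\infty\),

\[\int_{1}^{\infty}\big{|}\nabla\rho_{0}^{\gamma-1}\big{|}^{2}r^{2}\,\mathrm{d}r<\infty\iff 4\varkappa(\gamma-1)>1\iff\varkappa>\frac{1}{4(\gamma-1)}\,,\]

while \(\nabla\rho_{0}^{\delta-1}\sim|x|^{2\varkappa(1-\delta)-1}\) belongs to \(L^{q}\) near infinity precisely when \(q\big{(}1-2\varkappa(1-\delta)\big{)}>3\), i.e., \(\varkappa<\frac{1-3/q}{2(1-\delta)}\), and \(\nabla\rho_{0}^{\frac{3(\delta-1)}{4}}\in D_{*}^{1}\) similarly forces \(\varkappa<\frac{1}{3(1-\delta)}\).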
**Remark 1.3**.: _The compatibility conditions (1.19) are important for the existence of regular solutions \((\rho,u,S)\) obtained in Theorem 1.1. Indeed,_
* \(\nabla u_{0}=\rho_{0}^{\frac{1-\delta}{2}}g_{1}\)__(resp., \(\nabla e^{\frac{S_{0}}{c_{v}}}=\rho_{0}^{\frac{1-\delta}{4}}g_{4}\)) plays a key role in the derivation of \(\rho^{\frac{\delta-1}{2}}\nabla u\in L^{\infty}([0,T_{*}];L^{2})\) (resp., \(\rho^{\frac{\delta-1}{4}}\nabla S\in L^{\infty}([0,T_{*}];L^{2})\));
* \(Lu_{0}=\rho_{0}^{1-\delta}g_{2}\)__(resp., \(\triangle e^{\frac{S_{0}}{c_{v}}}=\rho_{0}^{\frac{3(1-\delta)}{4}}g_{5}\)) is crucial in the derivation of \(u_{t}\in L^{\infty}([0,T_{*}];L^{2})\) (resp., \(S_{t}\in L^{\infty}([0,T_{*}];L^{2})\)), which will be used in the estimate for \(|u|_{D^{2}}\) (resp., \(|S|_{D^{2}}\));
* and \(\nabla(\rho_{0}^{\delta-1}Lu_{0})=\rho_{0}^{\frac{1-\delta}{2}}g_{3}\) (resp., \(\nabla(\rho_{0}^{\frac{\delta-1}{2}}\triangle e^{\frac{S_{0}}{c_{v}}})=\rho_ {0}^{\frac{3(1-\delta)}{4}}g_{6}\)) is used in the derivation of \(\rho^{\frac{\delta-1}{2}}\nabla u_{t}\in L^{\infty}([0,T_{*}];L^{2})\) (resp., \(\rho^{\frac{\delta-1}{4}}\nabla S_{t}\in L^{\infty}([0,T_{*}];L^{2})\)), which leads to some desired estimate for \(|u|_{D^{3}}\) (resp., \(|S|_{D^{3}}\)).
**Remark 1.4**.: _It should be pointed out that due to the requirement \(\gamma+\delta\leq 2\) on \((\gamma,\delta=(\gamma-1)\nu)\) in (1.17), Theorem 1.1 applies to the monatomic gas, for which, \((\gamma,\nu)=(\frac{5}{3},\frac{1}{2})\)._
**Remark 1.5**.: _Note that for the regular solution \((\rho,u,S)\) obtained in Theorem 1.1, \(u\) stays in the inhomogeneous Sobolev space \(H^{3}\) instead of the homogeneous one \(D_{*}^{1}\cap D^{2}\) in \([9,50]\) for flows with constant viscosity and heat conductivity coefficients._
_In [33], it is shown that for the case of constant viscosities and heat conductivity, the specific entropy becomes not uniformly bounded immediately after the initial time, as long as the initial density decays to zero in the far field rapidly. Compared with the conclusions obtained in Theorem 1.1 and the discussion in Remark 1.2, there is a natural question whether the conclusion mentioned above can be applied to the degenerate system considered here. Due to strong degeneracy near the vacuum in \(\eqref{eq:1}_{2}\)-\(\eqref{eq:1}_{3}\), such questions are not easy and will be discussed in our future work._
**Remark 1.6**.: _It is worth pointing out that in the current \(H^{3}\) framework, although the regular solution \((\rho,u,S)\) obtained in Theorem 1.1 is not a classical one to the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11) due to the appearance of the second order source term \(\digamma\rho^{\delta}e^{\frac{S}{c_{v}}(\nu+1)}\triangle\rho^{\gamma-1}\), the corresponding \((\rho,u,\theta=AR^{-1}\rho^{\gamma-1}e^{S/c_{v}})\) solves the problem (1.1)-(1.3) with (1.6) and (1.10)-(1.11) classically._
A natural question is whether the local solution obtained in Theorem 1.1 can be extended globally in time. In contrast to the classical theory [27, 40, 50], we show the following somewhat surprising phenomenon that such an extension is impossible if \(u\) decays to zero as \(t\to\infty\), the laws of conservation of \(m(t)\) and \(\mathbb{P}(t)\) are both satisfied, and \(\mathbb{P}(0)\) is non-zero. To this end, we need the following definition.
**Definition 1.2**.: _Let \(T>0\) be a positive time. For the Cauchy problem (1.1)-(1.3) with (1.6) and (1.10)-(1.11), a classical solution \((\rho,u,\theta)\) in \((0,T]\times\mathbb{R}^{3}\) is said to be in \(D(T)\) if \((\rho,u,\theta)\) satisfies the following conditions:_
* \(m(t)\)_,_ \(\mathbb{P}(t)\) _and_ \(E_{k}(t)\) _all belong to_ \(L^{\infty}([0,T])\)_;_
* _The total mass is conserved, i.e.,_ \(\frac{d}{dt}m(t)=0\) _for any_ \(t\in[0,T]\)_;_
* _The momentum is conserved, i.e.,_ \(\frac{d}{dt}\mathbb{P}(t)=0\) _for any_ \(t\in[0,T]\)_._
Then one has:
**Theorem 1.2**.: _Assume that \(m(0)>0\), \(|\mathbb{P}(0)|>0\), and \((\gamma,\mu,\lambda,\kappa)\) satisfy_
\[\gamma\geq 1,\quad\mu\geq 0,\quad 2\mu+3\lambda\geq 0,\quad\kappa\geq 0. \tag{1.23}\]
_Then for the Cauchy problem (1.1)-(1.3) with (1.6) and (1.10)-(1.11), there is no classical solution \((\rho,u,\theta)\in D(\infty)\) with_
\[\limsup_{t\to\infty}|u(t,\cdot)|_{\infty}=0. \tag{1.24}\]
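The mechanism behind Theorem 1.2 can be seen from an elementary estimate: for any solution in \(D(\infty)\), the conservation of mass and momentum gives

\[|\mathbb{P}(0)|=|\mathbb{P}(t)|=\Big{|}\int_{\mathbb{R}^{3}}\rho u\,\mathrm{d}x\Big{|}\leq|u(t,\cdot)|_{\infty}\,m(t)=m(0)\,|u(t,\cdot)|_{\infty}\quad\text{for all }t\geq 0\,,\]

so (1.24) would force \(\mathbb{P}(0)=0\), contradicting \(|\mathbb{P}(0)|>0\).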
An immediate consequence of Theorems 1.1-1.2 is
**Corollary 1.1**.: _For the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11), if one assumes \(0<m(0)<\infty\) and \(|\mathbb{P}(0)|>0\) additionally, then there is no global regular solution \((\rho,u,S)\) with the regularities in Theorem 1.1 satisfying (1.24)._
**Remark 1.7**.: _The framework established in this paper is applicable to other physical dimensions with some minor modifications._
The rest of this paper is organised as follows. In §2, we first reformulate the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11) into a specifically chosen enlarged system, which makes the problem tractable through an elaborate linearization and approximation process. Then we outline the main strategy to establish the well-posedness theory. §3 is devoted to proving the local-in-time well-posedness theory stated in Theorem 1.1, which can be achieved in five steps:
1. construct global approximate solutions away from the vacuum for a specially designed linearized problem with an artificial viscosity \(\sqrt{\rho^{\delta-1}+\epsilon^{2}}e^{\frac{S}{c_{v}}\nu}Lu\) in the momentum equations, an artificial heat conductivity \((\rho^{\delta-1}+\epsilon^{2})^{\frac{1}{4}}e^{\frac{S}{c_{v}}\nu}\triangle e ^{\frac{S}{c_{v}}}\) in the entropy equation, and \(\inf_{x\in\mathbb{R}^{3}}\rho_{0}^{\gamma-1}=\frac{\gamma-1}{A\gamma}\eta\) for some positive constants \(\epsilon>0\) and \(\eta>0\);
2. establish the a priori estimates independent of both \(\epsilon\) and \(\eta\) on the approximate solutions;
3. then pass to the limit \(\epsilon\to 0\) to recover the solution of the corresponding linearized problem away from the vacuum with only physical viscosities;
4. prove the unique solvability away from the vacuum of the reformulated nonlinear problem through a standard iteration process;
5. finally take the limit \(\eta\to 0\) to recover the solution of the reformulated nonlinear problem with physical viscosities and far field vacuum.
The global non-existence results stated in Theorem 1.2 and Corollary 1.1 are proved in §4. Finally, for convenience of readers, we list some basic facts which have been used frequently in this paper in the appendix.
## 2. Reformulation and main strategy
In this section, we first reformulate the highly degenerate system (1.8) into an enlarged tractable system, and then sketch the main strategies of our analysis.
### A reformulation
Denote \(\delta=(\gamma-1)\nu\). In terms of
\[\phi=\frac{A\gamma}{\gamma-1}\rho^{\gamma-1},\quad l=e^{\frac{S}{c_{v}}},\quad \psi=\frac{\delta}{\delta-1}\nabla\rho^{\delta-1},\quad n=\rho^{2-\delta- \gamma}, \tag{2.1}\]
the problem (1.8) with (1.2) and (1.9)-(1.11) implies that
\[\begin{cases}\quad\phi_{t}+u\cdot\nabla\phi+(\gamma-1)\phi\mathrm{ div}u=0,\\ \quad u_{t}+u\cdot\nabla u+a_{1}\phi\nabla l+l\nabla\phi+a_{2}l^{\nu}\phi^{2\iota }Lu\\ =&a_{2}\phi^{2\iota}\nabla l^{\nu}\cdot Q(u)+a_{3}l^{\nu}\psi\cdot Q(u),\\ \quad\phi^{-\iota}(l_{t}+u\cdot\nabla l)-a_{4}\phi^{\iota}l^{\nu}\triangle l \\ =&a_{5}l^{\nu}n\phi^{3\iota}H(u)+a_{6}l^{\nu+1}\phi^{-\iota}\mathrm{ div}\psi+\Theta(\phi,l,\psi),\\ \quad\psi_{t}+\sum_{k=1}^{3}A_{k}(u)\partial_{k}\psi+B(u)\psi+\delta a \phi^{2\iota}\nabla\mathrm{div}u=0,\end{cases} \tag{2.2}\]
where
\[\Theta(\phi,l,\psi)= a_{7}l^{\nu+1}\phi^{-3\iota}\psi\cdot\psi+a_{8}l^{\nu}\phi^{- \iota}\nabla l\cdot\psi+a_{9}l^{\nu-1}\phi^{\iota}\nabla l\cdot\nabla l, \tag{2.3}\]
and
\[\begin{split} a_{1}=&\frac{\gamma-1}{\gamma},\quad a _{2}=a\Big{(}\frac{A}{R}\Big{)}^{\nu},\quad a_{3}=\Big{(}\frac{A}{R}\Big{)}^{ \nu},\quad a_{4}=\digamma\frac{a}{Ac_{v}},\\ a_{5}=&\frac{A^{\nu-1}a^{2}(\gamma-1)}{R^{\nu}}, \quad a_{6}=\digamma\frac{(\gamma-1)}{Ac_{v}\delta},\quad a_{7}=\digamma\frac {\gamma(\gamma-1)}{aAc_{v}\delta^{2}},\\ a_{8}=& 2\digamma\frac{1+\nu}{Ac_{v}\nu},\quad a_{9}= \digamma\frac{a\nu}{Ac_{v}},\quad\iota=\frac{\delta-1}{2(\gamma-1)},\quad a =\Big{(}\frac{A\gamma}{\gamma-1}\Big{)}^{\frac{1-\delta}{\gamma-1}},\end{split} \tag{2.4}\]
\(A_{k}(u)=(a_{ij}^{k})_{3\times 3}\) for \(i\), \(j\), \(k=1\), \(2\), \(3\), are symmetric with
\[a_{ij}^{k}=u^{(k)}\quad\text{for $i=j$};\quad\text{otherwise $a_{ij}^{k}=0$},\]
and \(B(u)=(\nabla u)^{\top}+(\delta-1)\mathrm{div}u\mathbb{I}_{3}\).
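As a quick consistency check between (2.2) and (1.16), the definitions in (2.1) and (2.4) give

\[\phi^{2\iota}=\Big{(}\frac{A\gamma}{\gamma-1}\Big{)}^{\frac{\delta-1}{\gamma-1}}\rho^{\delta-1}=a^{-1}\rho^{\delta-1}\,,\qquad\text{so}\qquad a_{2}l^{\nu}\phi^{2\iota}Lu=A^{\nu}R^{-\nu}\rho^{\delta-1}e^{\frac{S}{c_{v}}\nu}Lu\,,\]

which recovers the singular viscous term in \((1.16)_{1}\).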
The initial data for (2.2) are given by
\[\begin{split}&(\phi,u,l,\psi)|_{t=0}=(\phi_{0},u_{0},l_{0},\psi_ {0})\\ =&\Big{(}\frac{A\gamma}{\gamma-1}\rho_{0}^{\gamma-1} (x),u_{0}(x),e^{S_{0}(x)/c_{v}},\frac{\delta}{\delta-1}\nabla\rho_{0}^{\delta -1}(x)\Big{)}\quad\text{for}\quad x\in\mathbb{R}^{3}.\end{split} \tag{2.5}\]
\((\phi,u,l,\psi)\) are required to satisfy the following far filed behavior:
\[(\phi,u,l,\psi)\to(0,0,\bar{l},0)\,\,\,\text{as}\,\,\,|x|\to\infty\,\,\,\text{ for}\quad t\geq 0, \tag{2.6}\]
with \(\bar{l}>0\) being a constant.
Note that the enlarged system (2.2) consists of (up to leading order)
* one _scalar transport_ equation \((2.2)_{1}\) for \(\phi\);
* one _singular parabolic_ system \((2.2)_{2}\) for the velocity \(u\);
* one _degenerate (time evolution operator)-singular (elliptic operator) parabolic_ equation \((2.2)_{3}\) with several singular source terms for \(l\);
* one _symmetric hyperbolic_ system \((2.2)_{4}\) but with several singular source terms for \(\psi\),
such a structure will enable us to establish the following main theorem.
**Theorem 2.1**.: _Let (1.17) hold. Assume that the initial data \((\phi_{0},u_{0},l_{0},\psi_{0})\) satisfy:_
\[\begin{split}&\phi_{0}>0,\quad\phi_{0}\in D_{*}^{1}\cap D^{3}, \quad\nabla\phi_{0}^{\frac{3}{2}\iota}\in D_{*}^{1},\quad\nabla\phi_{0}^{\frac {3}{4}\iota}\in L^{4},\quad u_{0}\in H^{3},\\ & l_{0}-\bar{l}\in D_{*}^{1}\cap D^{3},\quad\inf_{x\in\mathbb{R} ^{3}}l_{0}>0,\quad\psi_{0}\in L^{q}\cap D^{1,3},\ \ \phi_{0}^{\frac{1}{2}\iota}\nabla^{2}\psi_{0}\in L^{2},\end{split} \tag{2.7}\]
_for some \(q\in(3,\infty)\), and the following compatibility conditions:_
\[\begin{split}&\nabla u_{0}=\phi_{0}^{-\iota}g_{1},\quad Lu_{0}=\phi_{0}^{-2\iota}g_{2},\quad\nabla(\phi_{0}^{2\iota}Lu_{0})=\phi_{0}^{-\iota}g_{3},\\ &\nabla l_{0}=\phi_{0}^{-\frac{\iota}{2}}g_{4},\quad\triangle l_{0}=\phi_{0}^{-\frac{3}{2}\iota}g_{5},\quad\nabla(\phi_{0}^{\iota}\triangle l_{0})=\phi_{0}^{-\frac{3}{2}\iota}g_{6},\end{split} \tag{2.8}\]
_for some \((g_{1},g_{2},g_{3},g_{4},g_{5},g_{6})\in L^{2}\). Then there exist a time \(T_{*}>0\) and a unique strong solution \((\phi,u,l,\psi=\frac{a\delta}{\delta-1}\nabla\phi^{2\iota})\) in \([0,T_{*}]\times\mathbb{R}^{3}\) to the Cauchy problem (2.2)-(2.6), such that \(\phi(t,x)>0\) in \([0,T_{*}]\times\mathbb{R}^{3}\), \(\inf_{(t,x)\in[0,T_{*}]\times\mathbb{R}^{3}}l>0\) and_
\[\begin{split}&\phi\in C([0,T_{*}];D_{*}^{1}\cap D^{3}),\quad \nabla\phi^{\frac{3}{2}\iota}\in C([0,T_{*}];D_{*}^{1}),\quad\nabla\phi^{\frac {3}{4}\iota}\in C([0,T_{*}];L^{4}),\\ &\psi\in C([0,T_{*}];L^{q}\cap D^{1,3}\cap D^{2}),\quad\phi_{t} \in C([0,T_{*}];H^{2}),\quad\psi_{t}\in C([0,T_{*}];H^{1}),\\ &\phi_{tt}\in C([0,T_{*}];L^{2})\cap L^{2}([0,T_{*}];D_{*}^{1}), \quad u\in C([0,T_{*}];H^{3})\cap L^{2}([0,T_{*}];H^{4}),\\ & u_{t}\in C([0,T_{*}];H^{1}),\quad\phi^{2\iota}\nabla^{2}u\in L^ {\infty}([0,T_{*}];H^{1})\cap L^{2}([0,T_{*}];D^{2}),\\ &(\phi^{\iota}\nabla u,t^{\frac{1}{2}}\phi^{2\iota}\nabla^{4}u, \phi^{\iota}\nabla u_{t},t^{\frac{1}{2}}\phi^{2\iota}\nabla^{2}u_{t})\in L^{ \infty}([0,T_{*}];L^{2}),\\ &(\psi_{tt},\phi^{2\iota}\nabla^{2}u_{t},t^{\frac{1}{2}}\phi^{2 \iota}\nabla^{3}u_{t},u_{tt},t^{\frac{1}{2}}\phi^{\iota}\nabla u_{tt})\in L^{ 2}([0,T_{*}];L^{2}),\\ & t^{\frac{1}{2}}u_{tt}\in L^{\infty}([0,T_{*}];L^{2})\cap L^{2}( [0,T_{*}];D_{*}^{1}),\quad l-\bar{l}\in C([0,T_{*}];D_{*}^{1}\cap D^{3}),\\ &(\phi^{\frac{\iota}{2}}\nabla l,\phi^{\iota}\nabla^{2}l,\phi^{ \iota}\nabla^{3}l,\phi^{-\frac{\iota}{2}}l_{t},t^{\frac{1}{2}}\phi^{\iota} \nabla^{2}l_{t},t^{\frac{1}{2}}\phi^{-\frac{\iota}{2}}l_{tt})\in L^{\infty}([0,T_{*}];L^{2}),\\ & l_{t}\in C([0,T^{*}];D_{*}^{1})\cap L^{2}([0,T_{*}];D^{2}),\quad \phi^{\frac{\iota}{2}}l_{t}\in L^{\infty}([0,T_{*}];D_{*}^{1}),\\ &\phi^{\iota}l_{t}\in L^{2}([0,T_{*}];D^{2}),\quad(\phi^{-\frac{ \iota}{2}}l_{tt},t^{\frac{1}{2}}\phi^{\frac{\iota}{2}}\nabla l_{tt})\in L^{2}( [0,T_{*}];L^{2}).\end{split} \tag{2.9}\]
**Remark 2.1**.: _In Theorem 2.1, \((\phi,u,l,\psi=\frac{a\delta}{\delta-1}\nabla\phi^{2\iota})\) in \([0,T_{*}]\times\mathbb{R}^{3}\) is called a strong solution to the Cauchy problem (2.2)-(2.6), if it satisfies (2.2)-(2.6) in the sense of distributions, and satisfies the equations (2.2)-(2.4) for a.e. \((t,x)\in(0,T_{*}]\times\mathbb{R}^{3}\)._
### Main strategy
Now we sketch the main strategy to prove Theorem 2.1.
#### 2.2.1. A priori weighted energy estimates
We now formally indicate how to obtain closed energy estimates based on the degenerate-singular structure described above.
Note first that \(\phi\) satisfies a _scalar transport_ equation \((2.2)_{1}\). Then \(\phi\) can be estimated by classical arguments.
Second, \(u\) is governed by the following _singular parabolic_ equations:
\[u_{t}+u\cdot\nabla u+a_{1}\phi\nabla l+l\nabla\phi+\underbrace{a_{2}\phi^{2\iota}l^{\nu}Lu}_{\Box}=\underbrace{a_{2}\phi^{2\iota}\nabla l^{\nu}\cdot Q(u)+a_{3}l^{\nu}\psi\cdot Q(u)}_{\backsim_{1}},\]
where \(\backsim_{1}\) represents the source terms containing first order derivatives of \((\rho,u,S)\) that are singular near the vacuum. \(S\) is expected to be bounded below uniformly, so that \(l=e^{\frac{S}{c_{v}}}\) and \(\phi^{2\iota}\) with \(\iota<0\) have uniformly positive lower bounds in the whole space. For this quasi-linear parabolic system, one finds formally that, even though the coefficient \(a_{2}\phi^{2\iota}l^{\nu}\) in front of the Lamé operator \(Lu\) tends to \(\infty\) as \(\rho\to 0\) in the far field, this structure gives a better a priori estimate on \(u\) in \(H^{3}\) than those of [9, 36, 37, 53], provided one can control \(l-\bar{l}\) in \(D_{*}^{1}\cap D^{3}\), \(\psi\) in \(L^{q}\cap D^{1,3}\cap D^{2}\), and the term \(\phi^{2\iota}\nabla l^{\nu}\cdot Q(u)\) with its singular coefficient in proper spaces. In fact, \((2.2)_{2}\) can be regarded as the following inhomogeneous Lamé equations:
\[a_{2}L(\phi^{2\iota}u)= l^{-\nu}\mathcal{H}(\phi,u,l,\psi)-\frac{\delta-1}{\delta}\Big{(}\frac{A}{R}\Big{)}^{\nu}G(\psi,u)=W,\]
where
\[\mathcal{H}(\phi,u,l,\psi)= -u_{t}-u\cdot\nabla u-l\nabla\phi-a_{1}\phi\nabla l+a_{2}\phi^{2\iota}\nabla l^{\nu}\cdot Q(u)+a_{3}l^{\nu}\psi\cdot Q(u),\] \[G(\psi,u)= \alpha\psi\cdot\nabla u+\alpha\mathrm{div}(u\otimes\psi)+(\alpha+\beta)\big{(}\psi\mathrm{div}u+\psi\cdot\nabla u+u\cdot\nabla\psi\big{)}.\]
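For the reader's convenience, we indicate where \(G\) comes from (a sketch, assuming the Lamé operator takes the usual form \(Lu=-\alpha\triangle u-(\alpha+\beta)\nabla\mathrm{div}u\)): since \(\nabla\phi^{2\iota}=\frac{\delta-1}{a\delta}\psi\) with a constant coefficient, the product rule gives

\[L(\phi^{2\iota}u)-\phi^{2\iota}Lu=-\frac{\delta-1}{a\delta}\Big{(}\alpha\big{(}2(\psi\cdot\nabla)u+u\,\mathrm{div}\psi\big{)}+(\alpha+\beta)\big{(}\nabla(u\cdot\psi)+\psi\,\mathrm{div}u\big{)}\Big{)}=-\frac{\delta-1}{a\delta}G(\psi,u),\]

and multiplying by \(a_{2}=a\big{(}\frac{A}{R}\big{)}^{\nu}\), together with \(a_{2}\phi^{2\iota}Lu=l^{-\nu}\mathcal{H}\) (which is \((2.2)_{2}\) rearranged), yields the stated formula for \(W\).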
Then it holds that
\[|\phi^{2\iota}u|_{D^{2}}\leq C|W|_{2}\quad\text{and}\quad|\phi^{2\iota}u|_{D^{3}}\leq C|W|_{D^{1}}, \tag{2.10}\]
for some constant \(C>0\) independent of the lower bound of \(\phi\) provided that
\[\phi^{2\iota}u\to 0\qquad\text{as}\qquad|x|\to\infty,\]
which can be verified by a non-vacuum approximation. Based on (2.10), one has
\[|\phi^{2\iota}\nabla^{2}u|_{2}\leq C(|W|_{2}+|\psi|_{\infty}|\nabla u|_{2}+|\nabla\psi|_{3}|u|_{6}), \tag{2.11}\] \[|\phi^{2\iota}\nabla^{3}u|_{2}\leq C(|W|_{D^{1}}+|\psi|_{\infty}|\nabla^{2}u|_{2}+|\nabla\psi|_{3}|\nabla u|_{6}+|\nabla^{2}\psi|_{2}|u|_{\infty}),\] \[|\phi^{2\iota}\nabla^{2}u|_{D^{1}}\leq C(|\phi^{2\iota}\nabla^{3}u|_{2}+|\psi|_{\infty}|\nabla^{2}u|_{2})\] \[\leq C(|W|_{D^{1}}+|\psi|_{\infty}|\nabla^{2}u|_{2}+|\nabla\psi|_{3}|\nabla u|_{6}+|\nabla^{2}\psi|_{2}|u|_{\infty}).\]
Similarly, the estimate of \(u\) in \(D^{4}\) follows from the following elliptic structure:
\[a_{2}L(\phi^{2\iota}\nabla^{\varsigma}u)=\phi^{2\iota}\nabla^{\varsigma}\big{(}\phi^{-2\iota}l^{-\nu}\mathcal{H}\big{)}-\frac{\delta-1}{\delta}\Big{(}\frac{A}{R}\Big{)}^{\nu}G(\psi,\nabla^{\varsigma}u)\quad\text{with}\quad|\varsigma|=2. \tag{2.12}\]
Next we show how to treat \(l\) and \(\psi\). Note that \(l\) can be controlled by the following _degenerate-singular parabolic_ equations:
\[\underbrace{\phi^{-\iota}(l_{t}+u\cdot\nabla l)}_{\triangle}- \underbrace{a_{4}\phi^{\iota}l^{\nu}\triangle l}_{\square}\] \[= \underbrace{a_{5}l^{\nu}n\phi^{3\iota}H(u)+\Theta(\phi,l,\psi)}_ {\backsim_{1}}+\underbrace{a_{6}l^{\nu+1}\phi^{-\iota}\text{div}\psi}_{ \backsim_{2}},\]
where \(\backsim_{2}\) denotes the source term with second order derivatives of \(\rho\), which may be singular near the vacuum. It follows from \((2.2)_{3}\) that
\[-a_{4}\triangle(\phi^{\iota}(l-\bar{l}))= l^{-\nu}\mathcal{E}(\phi,u,l,\psi)-a_{4}F(\nabla\phi^{\iota},l-\bar{l})=V,\]
where
\[\mathcal{E}(\phi,u,l,\psi)= -\phi^{-\iota}(l_{t}+u\cdot\nabla l)+a_{5}l^{\nu}n\phi^{3\iota}H( u)+a_{6}l^{\nu+1}\phi^{-\iota}\text{div}\psi\] \[+\Theta(\phi,l,\psi),\] \[F(\nabla\phi^{\iota},l-\bar{l})= (l-\bar{l})\triangle\phi^{\iota}+2\nabla\phi^{\iota}\cdot\nabla l.\]
Then the standard elliptic regularity theory yields
\[|\phi^{\iota}(l-\bar{l})|_{D^{2}}\leq C|V|_{2}\quad\text{and}\quad|\phi^{ \iota}(l-\bar{l})|_{D^{3}}\leq C|V|_{D^{1}}, \tag{2.13}\]
for some constant \(C>0\) independent of the lower bound of \(\phi\) provided that
\[\phi^{\iota}(l-\bar{l})\to 0\qquad\text{as}\qquad|x|\to\infty,\]
which can be verified by a non-vacuum approximation. Based on (2.13), one has
\[|\phi^{\iota}\nabla^{2}l|_{2}\leq C(|V|_{2}+|\nabla\phi^{\iota}|_{\infty}|\nabla l|_{2}+|l- \bar{l}|_{\infty}|\nabla^{2}\phi^{\iota}|_{2}), \tag{2.14}\] \[|\phi^{\iota}\nabla^{3}l|_{2}\leq C(|V|_{D^{1}}+|\nabla\phi^{\iota}|_{\infty}|\nabla^{2}l|_{2}+| \nabla^{2}\phi^{\iota}|_{2}|\nabla l|_{\infty}+|\nabla^{3}\phi^{\iota}|_{2}|l -\bar{l}|_{\infty}).\]
It should be noted here that this analysis does not yield \(l\in L^{2}([0,T_{*}];D^{4})\), due to the appearance of the term \(a_{6}l^{\nu+1}\phi^{-\iota}\text{div}\psi\) in \((2.2)_{3}\) or \(\mathcal{E}\): \(|l|_{D^{4}}\) would have to be controlled by \(|l^{\nu+1}\phi^{-\iota}\text{div}\psi|_{D^{2}}\), which seems impossible in the current \(H^{3}\) framework. What we can show is that \(\theta^{\nu+1}\in L^{2}([0,T_{*}];D^{4})\), which is enough to conclude that the solution obtained here is indeed a classical one to the original system (1.1). The singular term \(\phi^{-\iota}\text{div}\psi\) satisfies a scalar transport equation with singular source terms involving third order derivatives, and the desired estimates then follow from this structure and the estimates shown in (2.11).
Next, we turn to the estimates on \(l^{\nu}n\phi^{3\iota}H(u)\), which are more complicated and depend on the estimates of \(n\) and \(\phi^{3\iota}|\nabla u|^{2}\). An observation used here is that the initial assumption (1.18) and the definition of \(n\) in (2.1) imply that
\[n(0,x)\in L^{\infty}\cap D^{1,q}\cap D^{1,4}\cap D^{1,6}\cap D^{2}\cap D^{3}. \tag{2.15}\]
It is easy to check that \(n\) solves the following transport equation:
\[n_{t}+u\cdot\nabla n+(2-\delta-\gamma)n\text{div}u=0, \tag{2.16}\]
which, along with the expected regularities of \(u\) and \(\gamma+\delta\leq 2\) in (2.1), implies that
\[n(t,x)\in L^{\infty}\cap D^{1,q}\cap D^{1,4}\cap D^{1,6}\cap D^{2}\cap D^{3}\]
within the life span of the solution. Meanwhile, \(\phi^{3\iota}|\nabla u|^{2}\) can be controlled by using the weighted estimates on \(u\), including \(|\phi^{\iota}\nabla u|_{2}\), \(|\phi^{2\iota}\mathrm{div}u|_{\infty}\), \(|\phi^{2\iota}\nabla^{2}u|_{D^{2}}\) and so on. The arguments used here can also be applied to deal with the term \(\Theta(\phi,l,\psi)\).
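For completeness, we record the quick check of (2.16) (a sketch, again assuming the mass equation takes the standard form \(\rho_{t}+\mathrm{div}(\rho u)=0\)):

\[n_{t}+u\cdot\nabla n=(2-\delta-\gamma)\rho^{1-\delta-\gamma}\big{(}\rho_{t}+u\cdot\nabla\rho\big{)}=-(2-\delta-\gamma)\rho^{1-\delta-\gamma}\rho\,\mathrm{div}u=-(2-\delta-\gamma)n\,\mathrm{div}u.\]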
Note further that \((2.2)_{4}\) implies that the subtle term \(\psi\) solves a symmetric hyperbolic system with a singular term \(\delta a\phi^{2\iota}\nabla\mathrm{div}u\). Then the estimates (2.11)-(2.12) for \(\phi^{2\iota}\nabla\mathrm{div}u\) help to close the desired estimates.
#### 2.2.2. A linearized problem
To prove Theorem 2.1, it is crucial to carry out the strategy of energy estimates discussed above for suitably chosen approximate solutions, which are constructed by an elaborate linear scheme. In §3.1, we design a linearized problem (3.1) for the nonlinear one (2.2)-(2.6), based on a careful analysis of the structure of the nonlinear system (2.2) with \(\phi(0,x)=\phi_{0}\) having positive lower bound \(\eta\). The linearization requires care due to the appearance of the far field vacuum: some necessary structures must be preserved in order to obtain the desired a priori estimates mentioned above. For the problem (2.2)-(2.6), a key step is to estimate \(\psi\). According to the analysis in the above paragraphs, it is crucial to keep the two factors \(\phi^{2\iota}\) and \(\nabla\mathrm{div}u\) of the source term \(\delta a\phi^{2\iota}\nabla\mathrm{div}u\) in \((2.2)_{4}\) at the same step of the approximation. Then let \(v=(v^{(1)},v^{(2)},v^{(3)})^{\top}\in\mathbb{R}^{3}\) be a known vector, and let \(g\) and \(w\) be known real (scalar) functions satisfying \((v(0,x),g(0,x),w(0,x))=(u_{0},\phi_{0}^{2\iota},l_{0})\) and (3.3). A natural linearization of the system (2.2) seems to be
\[\left\{\begin{aligned} &\phi_{t}+v\cdot\nabla\phi+(\gamma-1) \phi\mathrm{div}v=0,\\ & u_{t}+v\cdot\nabla v+a_{1}\phi\nabla l+l\nabla\phi+a_{2}\phi^{2 \iota}l^{\nu}Lu\\ &=a_{2}g\nabla l^{\nu}\cdot Q(v)+a_{3}l^{\nu}\psi\cdot Q(v),\\ &\phi^{-\iota}(l_{t}+v\cdot\nabla l)-a_{4}\phi^{\iota}w^{\nu} \triangle l\\ &=a_{5}w^{\nu}ng^{\frac{3}{2}}H(v)+a_{6}w^{\nu+1}\phi^{-\iota} \mathrm{div}\psi+a_{7}w^{\nu+1}\phi^{-3\iota}\psi\cdot\psi\\ &\quad+a_{8}w^{\nu}\phi^{-\iota}\nabla l\cdot\psi+a_{9}w^{\nu-1}g ^{\frac{1}{2}}\nabla w\cdot\nabla w,\\ &\quad\psi_{t}+\sum_{k=1}^{3}A_{k}(v)\partial_{k}\psi+B(v)\psi+ \delta ag\nabla\mathrm{div}v=0.\end{aligned}\right. \tag{2.17}\]
However, it should be noted that, in (2.17), the important relationship
\[\psi=\frac{a\delta}{\delta-1}\nabla\phi^{2\iota} \tag{2.18}\]
between \(\psi\) and \(\phi\) cannot be guaranteed due to the term \(g\nabla\mathrm{div}v\) in (2.17)\({}_{4}\). Then one would encounter the following difficulty in deriving the weighted \(L^{2}\) estimate for \(\nabla l\):
\[\begin{aligned} &\frac{a_{4}}{2}\frac{d}{dt}|\phi^{-\frac{1}{2} \iota}\nabla l|_{2}^{2}+|w^{-\frac{\nu}{2}}\phi^{-\frac{1}{2}\iota}l_{t}|_{2}^ {2}+\int w^{-\nu}\phi^{-\iota}v\cdot\nabla ll_{t}\\ =&-a_{4}\int\underbrace{\nabla\phi^{\iota}}_{\neq \frac{\delta-1}{2a\delta}\phi^{-\iota}\psi}\cdot\nabla ll_{t}+\frac{a_{4}}{2} \int(\phi^{-\iota})_{t}|\nabla l|^{2}+\int\big{(}a_{5}ng^{\frac{3}{2}}H(v)\\ &+a_{6}w\phi^{-\iota}\mathrm{div}\psi+a_{7}w\phi^{-3\iota}\psi \cdot\psi+a_{8}\phi^{-\iota}\nabla l\cdot\psi+a_{9}w^{-1}\sqrt{g}\nabla w\cdot \nabla w\big{)}l_{t}.\end{aligned} \tag{2.19}\]
Since \(\nabla\phi^{2\iota}\) does not coincide with \(\frac{\delta-1}{a\delta}\psi\) in (2.17), it seems difficult to control the term \(-a_{4}\nabla\phi^{\iota}\cdot\nabla ll_{t}\) in (2.19). This difficulty is caused by the absence of (2.18), and it also arises in the \(L^{2}\) estimates for \(u\) based on (2.17)\({}_{2}\).
In order to overcome this difficulty, in (3.1), we first linearize the equation for \(h=\phi^{2\iota}\) as:
\[h_{t}+v\cdot\nabla h+(\delta-1)g\mathrm{div}v=0, \tag{2.20}\]
and then use \(h\) to define \(\psi=\frac{a\delta}{\delta-1}\nabla h\) again. Here, it should be pointed out that, due to the term \((\delta-1)g\mathrm{div}v\) in (2.20), the relation \(h=\phi^{2\iota}\) between \(h\) and \(\phi\) no longer holds in the linear problem.
On the one hand, the linear equations for \(u\) will be chosen as
\[u_{t}+v\cdot\nabla v+a_{1}\phi\nabla l+l\nabla\phi+a_{2}\sqrt{h^ {2}+\epsilon^{2}}l^{\nu}Lu\] \[= a_{2}g\nabla l^{\nu}\cdot Q(v)+a_{3}l^{\nu}\psi\cdot Q(v),\]
for any positive constant \(\epsilon>0\). Here \(\epsilon\) is added to compensate for the lack of a positive lower bound of \(h\).
Note also that in order to linearize (2.2)\({}_{3}\) for the entropy, one has to define \(n\), since the relation \(h=\phi^{2\iota}\) does not hold for the linearized scheme above. Here, in order to make full use of the estimates on \(\psi\) and the singular weighted estimates on \(u\), we will define \(n\) as
\[n=(ah)^{\frac{2-\delta-\gamma}{\delta-1}}.\]
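This definition is consistent with (2.1): when \(h=\phi^{2\iota}\), the choice of \(a\) and \(\iota\) in (2.4) gives

\[ah=a\phi^{2\iota}=\Big{(}\frac{A\gamma}{\gamma-1}\Big{)}^{\frac{1-\delta}{\gamma-1}}\Big{(}\frac{A\gamma}{\gamma-1}\Big{)}^{\frac{\delta-1}{\gamma-1}}\rho^{\delta-1}=\rho^{\delta-1},\quad\text{so that}\quad(ah)^{\frac{2-\delta-\gamma}{\delta-1}}=\rho^{2-\delta-\gamma}=n,\]

and, for the same reason, \(\frac{a\delta}{\delta-1}\nabla h=\frac{\delta}{\delta-1}\nabla\rho^{\delta-1}\) coincides with \(\psi\) in (2.1).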
Similarly, we need to redefine \(\phi^{-\iota}\), \(\phi^{-3\iota}\) and \(\phi^{\iota}\) in the equation of \(l\) as follows
\[\phi^{-\iota}=h^{-\frac{1}{2}},\quad\phi^{-3\iota}=h^{-\frac{3}{2}},\quad\phi^ {\iota}=\sqrt{h}.\]
Based on the above considerations, the linear equation for \(l\) is chosen as
\[h^{-\frac{1}{2}}(l_{t}+v\cdot\nabla l)-a_{4}(h^{2}+\epsilon^{2} )^{\frac{1}{4}}w^{\nu}\triangle l\] \[= a_{5}w^{\nu}ng^{\frac{3}{2}}H(v)+a_{6}w^{\nu+1}h^{-\frac{1}{2}} \mathrm{div}\psi+\Pi(h,l,\psi,w,g),\]
for any positive constant \(\epsilon>0\), where
\[\Pi(h,l,\psi,w,g)= a_{7}w^{\nu+1}h^{-\frac{3}{2}}\psi\cdot\psi+a_{8}w^{\nu}h^{- \frac{1}{2}}\nabla l\cdot\psi+a_{9}w^{\nu-1}\sqrt{g}\nabla w\cdot\nabla w. \tag{2.21}\]
Finally, it follows from (2.20) and the relation \(\psi=\frac{a\delta}{\delta-1}\nabla h\) that
\[\psi_{t}+\sum_{k=1}^{3}A_{k}(v)\partial_{k}\psi+(\nabla v)^{\top}\psi+a\delta \big{(}g\nabla\mathrm{div}v+\nabla g\mathrm{div}v\big{)}=0,\]
which turns out to be the appropriate structure to ensure the desired estimates on \(\psi\).
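In detail, the above equation is obtained by applying \(\frac{a\delta}{\delta-1}\nabla\) to (2.20): since the matrices \(A_{k}(v)\) are diagonal,

\[\frac{a\delta}{\delta-1}\nabla\big{(}v\cdot\nabla h\big{)}=\sum_{k=1}^{3}A_{k}(v)\partial_{k}\psi+(\nabla v)^{\top}\psi,\qquad\frac{a\delta}{\delta-1}\nabla\big{(}(\delta-1)g\,\mathrm{div}v\big{)}=a\delta\big{(}g\nabla\mathrm{div}v+\nabla g\,\mathrm{div}v\big{)}.\]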
Then in §3.2, the uniform a priori estimates independent of \((\epsilon,\eta)\) for the solutions \((\phi,u,l,h)\) to the linearized problem (3.1) (see §3.1) are established. Based on these uniform estimates, one can first pass to the limit \(\epsilon\to 0\) in (3.1) to recover the solution of the corresponding linearized problem away from the vacuum with only physical viscosities. Then the unique solvability away from the vacuum of the corresponding Cauchy problem (3.144) (see §3.4) of the nonlinear system (2.2) can be established through a standard iteration process. Finally, the local-in-time well-posedness of the regular solution with far field vacuum to the Cauchy problem (2.2)-(2.6) can be obtained by passing to the limit \(\eta\to 0\) in (3.144).
## 3. Local-in-time well-posedness with far field vacuum
In this section, the proofs for Theorems 1.1 and 2.1 will be given.
### Linearization away from the vacuum with artificial dissipations
Let \(T\) be some positive time. To solve the nonlinear problem (2.2)-(2.6), we start with the following linearized problem for \((\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta})\) in \([0,T]\times\mathbb{R}^{3}\):
\[\begin{cases}&\phi^{\epsilon,\eta}_{t}+v\cdot\nabla\phi^{\epsilon,\eta}+(\gamma -1)\phi^{\epsilon,\eta}\mathrm{div}v=0,\\ &u^{\epsilon,\eta}_{t}+v\cdot\nabla v+a_{1}\phi^{\epsilon,\eta}\nabla l^{ \epsilon,\eta}+l^{\epsilon,\eta}\nabla\phi^{\epsilon,\eta}+a_{2}(l^{\epsilon,\eta})^{\nu}\sqrt{(h^{\epsilon,\eta})^{2}+\epsilon^{2}}Lu^{\epsilon,\eta}\\ =&a_{2}g\nabla(l^{\epsilon,\eta})^{\nu}\cdot Q(v)+a_{3}(l^{\epsilon,\eta})^{ \nu}\psi^{\epsilon,\eta}\cdot Q(v),\\ &(h^{\epsilon,\eta})^{-\frac{1}{2}}(l^{\epsilon,\eta}_{t}+v\cdot\nabla l^{ \epsilon,\eta})-a_{4}w^{\nu}((h^{\epsilon,\eta})^{2}+\epsilon^{2})^{\frac{1} {4}}\triangle l^{\epsilon,\eta}\\ =&a_{5}w^{\nu}n^{\epsilon,\eta}g^{\frac{3}{2}}H(v)+a_{6}w^{\nu+1}(h^{ \epsilon,\eta})^{-\frac{1}{2}}\mathrm{div}\psi^{\epsilon,\eta}+\Pi(l^{ \epsilon,\eta},h^{\epsilon,\eta},w,g),\\ &h^{\epsilon,\eta}_{t}+v\cdot\nabla h^{\epsilon,\eta}+(\delta-1)g\mathrm{ div}v=0,\\ &(\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta})|_{t=0}=(\phi^{\eta}_{0},u^{\eta}_{0},l^{ \eta}_{0},h^{\eta}_{0})\\ =&(\phi_{0}+\eta,u_{0},l_{0},(\phi_{0}+\eta)^{2\iota})\quad\text{for}\quad x \in\mathbb{R}^{3},\\ &(\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta}) \rightarrow(\eta,0,\bar{l},\eta^{2\iota})\quad\text{as}\ \ |x|\rightarrow\infty\quad\text{for}\quad\text{t}\geq 0, \end{cases} \tag{3.1}\]
where \(\epsilon\) and \(\eta\) are any given positive constants,
\[\begin{split}&\psi^{\epsilon,\eta}=\frac{a\delta}{\delta-1} \nabla h^{\epsilon,\eta},\quad n^{\epsilon,\eta}=(ah^{\epsilon,\eta})^{b}, \quad b=\frac{2-\delta-\gamma}{\delta-1}\leq 0,\\ &\Pi(l^{\epsilon,\eta},h^{\epsilon,\eta},w,g)=a_{7}w^{\nu+1}(h^{ \epsilon,\eta})^{-\frac{3}{2}}\psi^{\epsilon,\eta}\cdot\psi^{\epsilon,\eta}+a _{8}w^{\nu}(h^{\epsilon,\eta})^{-\frac{1}{2}}\nabla l^{\epsilon,\eta}\cdot \psi^{\epsilon,\eta}\\ &\qquad\qquad\qquad\qquad\qquad\qquad+a_{9}w^{\nu-1}\sqrt{g} \nabla w\cdot\nabla w,\end{split} \tag{3.2}\]
\(v=(v^{(1)},v^{(2)},v^{(3)})^{\top}\in\mathbb{R}^{3}\) is a given vector, \(g\) and \(w\) are given real functions satisfying \(w>0\), \((v(0,x),g(0,x),w(0,x))=(u_{0}(x),h_{0}(x)=(\phi^{\eta}_{0})^{2\iota}(x),l_{0} (x))\) and:
\[\begin{split}& g\in L^{\infty}\cap C([0,T]\times\mathbb{R}^{3}), \quad\nabla g\in C([0,T];L^{q}\cap D^{1,3}\cap D^{2}),\\ &\nabla g^{\frac{3}{4}}\in C([0,T];D^{1}_{*}),\ \ \nabla g^{\frac{3}{8}}\in C([0,T];L^{4}),\quad g_{t}\in C([0,T];H^{2}),\\ &(\nabla g_{tt},v_{tt},w_{tt})\in L^{2}([0,T];L^{2}),\quad v\in C ([0,T];H^{3})\cap L^{2}([0,T];H^{4}),\\ & t^{\frac{1}{2}}v\in L^{\infty}([0,T];D^{4}),\quad v_{t}\in C([0, T];H^{1})\cap L^{2}([0,T];D^{2}),\\ & t^{\frac{1}{2}}v_{t}\in L^{\infty}([0,T];D^{2})\cap L^{2}([0,T];D ^{3}),\quad w-\bar{l}\in C([0,T];D^{1}_{*}\cap D^{3}),\\ & t^{\frac{1}{2}}(v_{tt},w_{tt})\in L^{\infty}([0,T];L^{2})\cap L ^{2}([0,T];D^{1}_{*}),\quad\inf_{(t,x)\in[0,T]\times\mathbb{R}^{3}}w>0,\\ & w_{t}\in C([0,T];D^{1}_{*})\cap L^{2}([0,T];D^{2}),\quad t^{\frac{1 }{2}}w_{t}\in L^{\infty}([0,T];D^{2}).\end{split} \tag{3.3}\]
It follows from the standard theory [28] that the problem (3.1) admits a global classical solution, as stated in the following lemma.
**Lemma 3.1**.: _Assume that \(\eta\) and \(\epsilon\) are given positive constants, (1.17) holds, and the initial data \((\phi_{0},u_{0},l_{0},h_{0})\) satisfy (2.7)-(2.8). Then for any time \(T>0\), there
exists a unique classical solution \((\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta})\) to (3.1) in \([0,T]\times\mathbb{R}^{3}\) such that_
\[(\phi^{\epsilon,\eta}-\eta,l^{\epsilon,\eta}-\bar{l})\in C([0,T];D^{1}_{*}\cap D^{3}),\quad(\phi^{\epsilon,\eta}_{t},\nabla h^{\epsilon,\eta},h^{\epsilon,\eta}_{t})\in C([0,T];H^{2}),\] \[h^{\epsilon,\eta}\in L^{\infty}\cap C([0,T]\times\mathbb{R}^{3}),\quad u^{\epsilon,\eta}\in C([0,T];H^{3})\cap L^{2}([0,T];H^{4}),\] \[(u^{\epsilon,\eta}_{t},l^{\epsilon,\eta}_{t})\in C([0,T];H^{1})\cap L^{2}([0,T];D^{2}),\quad(u^{\epsilon,\eta}_{tt},l^{\epsilon,\eta}_{tt})\in L^{2}([0,T];L^{2}), \tag{3.4}\] \[t^{\frac{1}{2}}u^{\epsilon,\eta}\in L^{\infty}([0,T];D^{4}),\quad t^{\frac{1}{2}}u^{\epsilon,\eta}_{t}\in L^{\infty}([0,T];D^{2})\cap L^{2}([0,T];D^{3}),\] \[t^{\frac{1}{2}}(u^{\epsilon,\eta}_{tt},l^{\epsilon,\eta}_{tt})\in L^{\infty}([0,T];L^{2})\cap L^{2}([0,T];D^{1}_{*}),\quad t^{\frac{1}{2}}l^{\epsilon,\eta}_{t}\in L^{\infty}([0,T];D^{2}).\]
The next key analysis is to derive the uniform a priori estimates independent of \((\epsilon,\eta)\) for the unique solution \((\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta})\) to (3.1) obtained in Lemma 3.1.
### Uniform a priori estimates
Note that for any fixed \(\eta\in(0,1]\),
\[(\phi^{\eta}_{0},u^{\eta}_{0},l^{\eta}_{0},h^{\eta}_{0})=(\phi_{0}+\eta,u_{0}, l_{0},(\phi_{0}+\eta)^{2\iota}),\]
with \((\phi_{0},u_{0},l_{0},h_{0})\) satisfying (2.7)-(2.8) and \(\psi_{0}=\frac{a\delta}{\delta-1}\nabla\phi_{0}^{2\iota}\), there exists a constant \(c_{0}>0\) independent of \(\eta\) such that
\[2+\eta+\bar{l}+\|\phi^{\eta}_{0}-\eta\|_{D^{1}_{*}\cap D^{3}}+ \|u^{\eta}_{0}\|_{3}+\|\nabla h^{\eta}_{0}\|_{L^{q}\cap D^{1,3}\cap D^{2}} \tag{3.5}\] \[+|(h^{\eta}_{0})^{\frac{1}{4}}\nabla^{3}h^{\eta}_{0}|_{2}+|\nabla (h^{\eta}_{0})^{\frac{3}{4}}|_{D^{1}_{*}}+|\nabla(h^{\eta}_{0})^{\frac{3}{8}} |_{4}+|(h^{\eta}_{0})^{-1}|_{\infty}+|g^{\eta}_{1}|_{2}+|g^{\eta}_{2}|_{2}\] \[+|g^{\eta}_{3}|_{2}+|g^{\eta}_{4}|_{2}+|g^{\eta}_{5}|_{2}+|g^{ \eta}_{6}|_{2}+\|l^{\eta}_{0}-\bar{l}\|_{D^{1}_{*}\cap D^{3}}+|(l^{\eta}_{0})^ {-1}|_{\infty}\leq c_{0},\]
where
\[g^{\eta}_{1}=(\phi^{\eta}_{0})^{\iota}\nabla u^{\eta}_{0},\quad g^{\eta}_{2}=(\phi^{\eta}_{0})^{2\iota}Lu^{\eta}_{0},\quad g^{\eta}_{3}=(\phi^{\eta}_{0})^{\iota}\nabla((\phi^{\eta}_{0})^{2\iota}Lu^{\eta}_{0}),\] \[g^{\eta}_{4}=(\phi^{\eta}_{0})^{\frac{\iota}{2}}\nabla l^{\eta}_{0},\quad g^{\eta}_{5}=(\phi^{\eta}_{0})^{\frac{3}{2}\iota}\triangle l^{\eta}_{0},\quad g^{\eta}_{6}=(\phi^{\eta}_{0})^{\frac{3}{2}\iota}\nabla((\phi^{\eta}_{0})^{\iota}\triangle l^{\eta}_{0}).\]
**Remark 3.1**.: _First, it follows from the definition of \(g^{\eta}_{2}\) and \(\phi^{\eta}_{0}>\eta\) that_
\[\begin{cases}L((\phi^{\eta}_{0})^{2\iota}u^{\eta}_{0})=g^{\eta}_{2}-\frac{ \delta-1}{a\delta}G(\psi^{\eta}_{0},u^{\eta}_{0}),\\ (\phi^{\eta}_{0})^{2\iota}u^{\eta}_{0}\longrightarrow 0\ \ \text{as}\ \ |x| \longrightarrow\infty,\end{cases} \tag{3.6}\]
_where \(\psi^{\eta}_{0}=\frac{a\delta}{\delta-1}\nabla(\phi^{\eta}_{0})^{2\iota}=\frac {a\delta}{\delta-1}\nabla h^{\eta}_{0}\) and_
\[G=\alpha\psi^{\eta}_{0}\cdot\nabla u^{\eta}_{0}+\alpha\text{div}(u^{\eta}_{0} \otimes\psi^{\eta}_{0})+(\alpha+\beta)(\psi^{\eta}_{0}\text{div}u^{\eta}_{0}+ \psi^{\eta}_{0}\cdot\nabla u^{\eta}_{0}+u^{\eta}_{0}\cdot\nabla\psi^{\eta}_{0}). \tag{3.7}\]
_Then the standard elliptic theory and (3.6) yield that_
\[|(\phi^{\eta}_{0})^{2\iota}u^{\eta}_{0}|_{D^{2}}\leq C(|g^{\eta}_{2}|_{2}+|G(\psi^{\eta}_{0},u^{\eta}_{0})|_{2})\leq C_{1}, \tag{3.8}\] \[|(\phi^{\eta}_{0})^{2\iota}\nabla^{2}u^{\eta}_{0}|_{2}\leq C(|(\phi^{\eta}_{0})^{2\iota}u^{\eta}_{0}|_{D^{2}}+|\nabla\psi^{\eta}_{0}|_{3}|u^{ \eta}_{0}|_{6}+|\psi^{\eta}_{0}|_{\infty}|\nabla u^{\eta}_{0}|_{2})\leq C_{1},\]
_where \(C\) and \(C_{1}\) are generic positive constants independent of \((\epsilon,\eta)\). Due to \(\nabla^{2}\phi^{2\iota}_{0}\in L^{3}\) and (3.5), it holds that_
\[|(\phi^{\eta}_{0})^{\iota}\nabla^{2}\phi^{\eta}_{0}|_{2}+|(\phi^{\eta}_{0})^{ \iota}\nabla(\psi^{\eta}_{0}\cdot Q(u^{\eta}_{0}))|_{2}\leq C_{1}, \tag{3.9}\]
_where one has used the fact that_
\[|\phi^{\iota}_{0}\nabla^{2}\phi_{0}|_{2}\leq C_{1}(|\phi_{0}|_{6}|\phi_{0}|_{\infty}^{-\iota}|\nabla^{2}\phi^{2\iota}_{0}|_{ 3}+|\nabla\phi^{\iota}_{0}|_{6}|\nabla\phi_{0}|_{3})\leq C_{1},\]
\[|(\phi^{\eta}_{0})^{\iota}\nabla^{2}\phi^{\eta}_{0}|_{2}= \Big{|}\phi^{\iota}_{0}\nabla^{2}\phi_{0}\frac{\phi^{-\iota}_{0}}{( \phi_{0}+\eta)^{-\iota}}\Big{|}_{2}\leq|\phi^{\iota}_{0}\nabla^{2}\phi_{0}|_{2} \leq C_{1}.\]
_Second, the initial compatibility condition_
\[\nabla((\phi_{0}^{\eta})^{2\iota}Lu_{0}^{\eta})=(\phi_{0}^{\eta})^{-\iota}g_{3}^{ \eta}\in L^{2}\]
_implies formally that_
\[\begin{cases}L((\phi_{0}^{\eta})^{2\iota}u_{0}^{\eta})=\triangle^{-1}\text{div }((\phi_{0}^{\eta})^{-\iota}g_{3}^{\eta})-\frac{\delta-1}{a\delta}G(\psi_{0}^{ \eta},u_{0}^{\eta}),\\ (\phi_{0}^{\eta})^{2\iota}u_{0}^{\eta}\longrightarrow 0\ \ \text{as}\ \ |x| \longrightarrow\infty.\end{cases} \tag{3.10}\]
_Thus the standard elliptic theory yields_
\[\begin{split}|(\phi_{0}^{\eta})^{2\iota}u_{0}^{\eta}|_{D^{3}}\leq& C(|(\phi_{0}^{\eta})^{-\iota}g_{3}^{\eta}|_{2}+|G(\psi_{0}^{\eta},u_{0}^{\eta})|_{D^{1}})\leq C_{1}<\infty,\\ |(\phi_{0}^{\eta})^{2\iota}\nabla^{3}u_{0}^{\eta}|_{2}\leq& C(|(\phi_{0}^{\eta})^{2\iota}u_{0}^{\eta}|_{D^{3}}+|\nabla\psi_{0}^{\eta}|_{3}|\nabla u_{0}^{\eta}|_{6}\\ &+|\psi_{0}^{\eta}|_{\infty}|\nabla^{2}u_{0}^{\eta}|_{2}+|\nabla^{2}\psi_{0}^{\eta}|_{2}|u_{0}^{\eta}|_{\infty})\leq C_{1}.\end{split} \tag{3.11}\]
_Similarly, the definition of \(g_{5}^{\eta}\) and \(\phi_{0}^{\eta}>\eta\) imply that_
\[\begin{cases}\triangle((\phi_{0}^{\eta})^{\frac{3}{2}\iota}(l_{0}^{\eta}-\bar{l}))=g_{5}^{\eta}+2\nabla(\phi_{0}^{\eta})^{\frac{3}{2}\iota}\cdot\nabla l_{0}^{\eta}+(l_{0}^{\eta}-\bar{l})\triangle(\phi_{0}^{\eta})^{\frac{3}{2}\iota},\\ (\phi_{0}^{\eta})^{\frac{3}{2}\iota}(l_{0}^{\eta}-\bar{l})\longrightarrow 0\ \ \text{as}\ \ |x|\longrightarrow\infty,\end{cases} \tag{3.12}\]
_which, together with (3.5), yields that_
\[\begin{split}|(\phi_{0}^{\eta})^{\frac{3}{2}\iota}(l_{0}^{\eta}-\bar{l})|_{D^{2}}\leq& C(|g_{5}^{\eta}|_{2}+|(\phi_{0}^{\eta})^{-\frac{1}{2}\iota}|_{\infty}|\psi_{0}^{\eta}|_{\infty}|\nabla l_{0}^{\eta}|_{2}\\ &+|l_{0}^{\eta}-\bar{l}|_{\infty}|\nabla^{2}(h_{0}^{\eta})^{\frac{3}{4}}|_{2})\leq C_{1}<\infty,\\ |(\phi_{0}^{\eta})^{\frac{3}{2}\iota}\nabla^{2}l_{0}^{\eta}|_{2}\leq& C(|(\phi_{0}^{\eta})^{\frac{3}{2}\iota}(l_{0}^{\eta}-\bar{l})|_{D^{2}}+|(\phi_{0}^{\eta})^{-\frac{1}{2}\iota}|_{\infty}|\psi_{0}^{\eta}|_{\infty}|\nabla l_{0}^{\eta}|_{2}\\ &+|l_{0}^{\eta}-\bar{l}|_{\infty}|\nabla^{2}(h_{0}^{\eta})^{\frac{3}{4}}|_{2})\leq C_{1}<\infty.\end{split} \tag{3.13}\]
_Finally, the initial compatibility condition_
\[\nabla((\phi_{0}^{\eta})^{\iota}\triangle l_{0}^{\eta})=(\phi_{0}^{\eta})^{-\frac{3}{2}\iota}g_{6}^{\eta}\in L^{2}\]
_implies formally that_
\[\begin{cases}\triangle((\phi_{0}^{\eta})^{\frac{5}{2}\iota}(l_{0}^{\eta}-\bar{l}))=\triangle^{-1}\text{div}(g_{6}^{\eta}+\nabla(\phi_{0}^{\eta})^{\frac{3}{2}\iota}\cdot(\phi_{0}^{\eta})^{\iota}\triangle l_{0}^{\eta})\\ \qquad+2\nabla(\phi_{0}^{\eta})^{\frac{5}{2}\iota}\cdot\nabla l_{0}^{\eta}+(l_{0}^{\eta}-\bar{l})\triangle(\phi_{0}^{\eta})^{\frac{5}{2}\iota},\\ (\phi_{0}^{\eta})^{\frac{5}{2}\iota}(l_{0}^{\eta}-\bar{l})\longrightarrow 0\ \ \text{as}\ \ |x|\longrightarrow\infty,\end{cases} \tag{3.14}\]
_which yields_
\[\begin{split}|(\phi_{0}^{\eta})^{\frac{5}{2}\iota}(l_{0}^{\eta}-\bar{l})|_{D^{3}}\leq& C(|g_{6}^{\eta}|_{2}+|\psi_{0}^{\eta}|_{\infty}|(h_{0}^{\eta})^{-1}|_{\infty}^{\frac{1}{2}}|g_{5}^{\eta}|_{2}+\aleph)\leq C_{1},\\ |(\phi_{0}^{\eta})^{\frac{5}{2}\iota}\nabla^{3}l_{0}^{\eta}|_{2}\leq& C(|(\phi_{0}^{\eta})^{\frac{5}{2}\iota}(l_{0}^{\eta}-\bar{l})|_{D^{3}}+\aleph)\leq C_{1},\end{split} \tag{3.15}\]
_where_
\[\begin{split}\aleph=&|\psi_{0}^{\eta}|_{\infty}|(\phi_{0}^{\eta})^{\frac{1}{2}\iota}\nabla^{2}l_{0}^{\eta}|_{2}+|\nabla l_{0}^{\eta}|_{3}(|\nabla(\phi_{0}^{\eta})^{\frac{7}{4}\iota}|_{\infty}|\nabla(h_{0}^{\eta})^{\frac{3}{8}}|_{6}+|(h_{0}^{\eta})^{\frac{1}{4}}\nabla^{2}h_{0}^{\eta}|_{6})\\ &+|l_{0}^{\eta}-\bar{l}|_{\infty}(|(h_{0}^{\eta})^{\frac{1}{4}}\nabla^{3}h_{0}^{\eta}|_{2}+|\nabla(\phi_{0}^{\eta})^{\frac{1}{2}\iota}|_{6}|\nabla^{2}(\phi_{0}^{\eta})^{2\iota}|_{3}+|\psi_{0}^{\eta}|_{\infty}|\nabla(\phi_{0}^{\eta})^{\frac{1}{4}\iota}|_{4}^{2}).\end{split}\]
_Actually, the rigorous verifications of (3.10) and (3.14) can be obtained by a standard smoothing process of the initial data, and details are omitted here._
Now let \(T\) be a positive fixed constant, and assume that there exist some time \(T^{*}\in(0,T]\) and constants \(c_{i}\)\((i=1,\cdots,5)\) such that
\[1<c_{0}\leq c_{1}\leq c_{2}\leq c_{3}\leq c_{4}\leq c_{5}, \tag{3.16}\]
and
\[\sup_{0\leq t\leq T^{*}}(\|\nabla g\|^{2}_{L^{\infty}\cap L^{q} \cap D^{1,3}\cap D^{2}}+|\nabla g^{\frac{3}{4}}|^{2}_{D^{1}_{*}}+|\nabla g^{ \frac{3}{8}}|^{2}_{4})(t)\leq c_{1}^{2},\] \[\inf_{[0,T^{*}]\times\mathbb{R}^{3}}w(t,x)\geq c_{1}^{-1},\quad \inf_{[0,T^{*}]\times\mathbb{R}^{3}}g(t,x)\geq c_{1}^{-1},\] \[\sup_{0\leq t\leq T^{*}}(|w|^{2}_{\infty}+|v|^{2}_{\infty}+|\sqrt {g}\nabla v|^{2}_{2}+\|v\|^{2}_{1})(t)+\int_{0}^{T^{*}}(|v|^{2}_{D^{2}}+|v_{t} |^{2}_{2})\mathrm{d}t\leq c_{1}^{2},\] \[\sup_{0\leq t\leq T^{*}}|g^{\frac{1}{4}}\nabla w(t)|^{2}_{2}+ \int_{0}^{T^{*}}(|g^{-\frac{1}{4}}w_{t}|^{2}_{2}+|\sqrt{g}\nabla^{2}w|^{2}_{2} )\mathrm{d}t\leq c_{2}^{2},\] \[\sup_{0\leq t\leq T^{*}}(|g^{-\frac{1}{4}}w_{t}|^{2}_{2}+|\sqrt{ g}\nabla^{2}w|^{2}_{2})(t)+\int_{0}^{T^{*}}(|g^{\frac{1}{4}}\nabla w_{t}|^{2}_{2} +|\sqrt{g}\nabla^{3}w|^{2}_{2})\mathrm{d}t\leq c_{2}^{2},\] \[\sup_{0\leq t\leq T^{*}}(|g^{\frac{1}{4}}\nabla w_{t}|^{2}_{2}+ |\sqrt{g}\nabla^{3}w|^{2}_{2})(t)+\int_{0}^{T^{*}}(|g^{-\frac{1}{4}}w_{tt}|^{2 }_{2}+|\sqrt{g}\nabla^{2}w_{t}|^{2}_{2})\mathrm{d}t\leq c_{2}^{2}, \tag{3.17}\] \[\text{ess}\sup_{0\leq t\leq T^{*}}t(|\sqrt{g}\nabla^{2}w_{t}|^{2} _{2}+|g^{-\frac{1}{4}}w_{tt}|^{2}_{2})(t)+\int_{0}^{T^{*}}t|g^{\frac{1}{4}}w_{ tt}|^{2}_{D^{1}_{*}}\mathrm{d}t\leq c_{2}^{2},\] \[\sup_{0\leq t\leq T^{*}}(|v|^{2}_{D^{2}}+|v_{t}|^{2}_{2}+|g\nabla ^{2}v|^{2}_{2})(t)+\int_{0}^{T^{*}}(|v|^{2}_{D^{3}}+|v_{t}|^{2}_{D^{1}_{*}}) \mathrm{d}t\leq c_{3}^{2},\] \[\sup_{0\leq t\leq T^{*}}(|v|^{2}_{D^{3}}+|\sqrt{g}\nabla v_{t}|^{ 2}_{2}+|g_{t}|^{2}_{D^{1}_{*}})(t)+\int_{0}^{T^{*}}(|v|^{2}_{D^{4}}+|v_{t}|^{2 }_{D^{2}}+|v_{tt}|^{2}_{2})\mathrm{d}t\leq c_{4}^{2},\] \[\sup_{0\leq t\leq T^{*}}(|g\nabla^{2}v|^{2}_{D^{1}_{*}}+|g_{t}|^{ 2}_{\infty})(t)+\int_{0}^{T^{*}}(|(g\nabla^{2}v)_{t}|^{2}_{2}+|g\nabla^{2}v|^{ 2}_{D^{2}})\mathrm{d}t\leq c_{4}^{2},\] \[\text{ess}\sup_{0\leq t\leq T^{*}}t(|v|^{2}_{D^{4}}+|g\nabla^{2}v _{t}|^{2}_{2})(t)+\int_{0}^{T^{*}}|g_{tt}|^{2}_{D^{1}_{*}}\mathrm{d}t\leq c_{5} ^{2},\] \[\text{ess}\sup_{0\leq t\leq T^{*}}t|v_{tt}(t)|^{2}_{2}+\int_{0}^{ T^{*}}t(|v_{tt}|^{2}_{D^{1}_{*}}+|\sqrt{g}v_{tt}|^{2}_{D^{1}_{*}}+|v_{t}|^{2}_{D^{3}}) \mathrm{d}t\leq c_{5}^{2}.\]
\(T^{*}\) and \(c_{i}\)\((i=1,\cdots,5)\) will be determined later, and depend only on \(c_{0}\) and the fixed constants \((A,R,c_{v},\alpha,\beta,\gamma,\delta,T)\). In the rest of §3.2, \(M(c)\geq 1\) will denote a generic continuous and increasing function on \([0,\infty)\), and \(C\geq 1\) will denote a generic positive constant. Both \(M(c)\) and \(C\) depend only on the fixed constants \((A,R,c_{v},\alpha,\beta,\gamma,\delta,T)\), and may be different from line to line. Moreover, in the rest of §3.2, without ambiguity, we simply drop the superscripts \(\epsilon\) and \(\eta\) in \((\phi_{0}^{\eta},u_{0}^{\eta},l_{0}^{\eta},h_{0}^{\eta},\psi_{0}^{\eta})\), \((\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta},\psi^{\epsilon,\eta})\), and \((g_{1}^{\eta},g_{2}^{\eta},g_{3}^{\eta},g_{4}^{\eta},g_{5}^{\eta},g_{6}^{\eta})\).
#### 3.2.1. A priori estimates for \(\phi\)
In the rest of §3.2, let \((\phi,u,l,h)\) be the unique classical solution to (3.1) in \([0,T]\times\mathbb{R}^{3}\) obtained in Lemma 3.1.
**Lemma 3.2**.: _For \(T_{1}=\min\{T^{*},(1+Cc_{4})^{-6}\}\) and \(t\in[0,T_{1}]\), it holds that_
\[\begin{split}&\|\phi(t)-\eta\|_{D_{*}^{1}\cap D^{3}}\leq Cc_{0},\quad|\phi_{t}(t)|_{2}\leq Cc_{0}c_{1},\quad|\phi_{t}(t)|_{D_{*}^{1}}\leq Cc_{0}c_{3},\\ &|\phi_{t}(t)|_{D^{2}}\leq Cc_{0}c_{4},\quad|\phi_{tt}(t)|_{2}\leq Cc_{4}^{3},\quad\int_{0}^{t}\|\phi_{ss}\|_{1}^{2}ds\leq Cc_{0}^{2}c_{4}^{2}.\end{split} \tag{3.18}\]
Proof.: First, it follows directly from \((3.1)_{1}\) that, for \(0\leq t\leq T_{1}\),
\[|\phi|_{\infty}\leq|\phi_{0}|_{\infty}\exp\big{(}C\int_{0}^{t}|\mathrm{div}v|_ {\infty}\mathrm{d}s\big{)}\leq Cc_{0}. \tag{3.19}\]
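Indeed, along the particle path \(X(t;x)\) defined below in (3.32), \((3.1)_{1}\) reduces to an ODE whose explicit solution gives (3.19) with \(C=\gamma-1\):

\[\frac{d}{dt}\phi(t,X(t;x))=-(\gamma-1)\phi\,\mathrm{div}v,\quad\text{so}\quad\phi(t,X(t;x))=\phi_{0}(x)\exp\Big{(}-(\gamma-1)\int_{0}^{t}\mathrm{div}v(s,X(s;x))\mathrm{d}s\Big{)}.\]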
Second, standard energy estimates for transport equations, (3.17) and (3.19) yield that, for \(0\leq t\leq T_{1}\),
\[\begin{split}\|\phi(t)-\eta\|_{D_{*}^{1}\cap D^{3}}\leq& C(\|\phi_{0}-\eta\|_{D_{*}^{1}\cap D^{3}}+\eta\int_{0}^{t}\|\nabla v\|_{3}\mathrm{d}s)\exp\Big{(}\int_{0}^{t}C\|v\|_{4}\mathrm{d}s\Big{)}\leq Cc_{0},\end{split}\]
which, together with \((3.1)_{1}\) and (3.17), implies that for \(0\leq t\leq T_{1}\),
\[\begin{split}|\phi_{t}(t)|_{2}\leq& C(|v|_{3}|\nabla\phi|_{6}+|\phi|_{\infty}|\nabla v|_{2})\leq Cc_{0}c_{1},\\ |\phi_{t}(t)|_{D_{*}^{1}}\leq& C(|v|_{\infty}|\nabla^{2}\phi|_{2}+|\nabla\phi|_{6}|\nabla v|_{3}+|\phi|_{\infty}|\nabla^{2}v|_{2})\leq Cc_{0}c_{3},\\ |\phi_{t}(t)|_{D^{2}}\leq& C\|v\|_{3}(\|\nabla\phi\|_{2}+|\phi|_{\infty})\leq Cc_{0}c_{4},\\ |\phi_{tt}|_{2}\leq& C(|v_{t}|_{3}|\nabla\phi|_{6}+|v|_{\infty}|\nabla\phi_{t}|_{2}+|\nabla v|_{\infty}|\phi_{t}|_{2}+|\phi|_{\infty}|\nabla v_{t}|_{2})\leq Cc_{4}^{3},\\ \int_{0}^{t}\|\phi_{ss}\|_{1}^{2}\mathrm{d}s\leq& \int_{0}^{t}(\|(v\cdot\nabla\phi)_{s}\|_{1}+\|(\phi\mathrm{div}v)_{s}\|_{1})^{2}\mathrm{d}s\leq Cc_{0}^{2}c_{4}^{2}.\end{split}\]
The proof of Lemma 3.2 is complete.
#### 3.2.2. A priori estimates for \(\psi\)
The following estimates for \(\psi\) are needed to deal with the degenerate elliptic operators in (3.1).
**Lemma 3.3**.: _For \(t\in[0,T_{1}]\) and \(q>3\), it holds that_
\[\begin{split}&|\psi(t)|_{\infty}^{2}+\|\psi(t)\|_{L^{q}\cap D^{1,3}\cap D^{2}}^{2}\leq Cc_{0}^{2},\ \ |\psi_{t}(t)|_{2}\leq Cc_{3}^{2},\\ &|h_{t}(t)|_{\infty}^{2}\leq Cc_{3}^{3}c_{4},\quad|\psi_{t}(t)|_{D_{*}^{1}}^{2}+\int_{0}^{t}(|\psi_{ss}|_{2}^{2}+|h_{ss}|_{6}^{2})\mathrm{d}s\leq Cc_{4}^{4}.\end{split} \tag{3.20}\]
Proof.: It follows from \(\psi=\frac{a\delta}{\delta-1}\nabla h\) and \((3.1)_{4}\) that
\[\psi_{t}+\sum_{k=1}^{3}A_{k}(v)\partial_{k}\psi+B^{*}(v)\psi+a\delta(g\nabla \mathrm{div}v+\nabla g\mathrm{div}v)=0, \tag{3.21}\]
with \(B^{*}(v)=(\nabla v)^{\top}\) and \(A_{k}(v)\) defined in (2.2).
First, multiplying (3.21) by \(q\psi|\psi|^{q-2}\) and integrating over \(\mathbb{R}^{3}\) yield that
\[\begin{split}\frac{d}{dt}|\psi|_{q}^{q}\leq& C(|\nabla v|_{\infty}|\psi|_{q}^{q}+|\mathrm{div}v|_{ \infty}|\nabla g|_{q}|\psi|_{q}^{q-1}+|g\nabla^{2}v|_{q}|\psi|_{q}^{q-1})\\ \leq& C(|\nabla v|_{\infty}|\psi|_{q}^{q}+|\mathrm{ div}v|_{\infty}|\nabla g|_{q}|\psi|_{q}^{q-1}+\|g\nabla^{2}v\|_{2}|\psi|_{q}^{q-1}). \end{split} \tag{3.22}\]
According to (3.17), one can obtain that
\[\int_{0}^{t}\|g\nabla^{2}v\|_{2}\mathrm{d}s\leq t^{\frac{1}{2}}\big{(}\int_{0}^{t}\| g\nabla^{2}v\|_{2}^{2}\mathrm{d}s\big{)}^{\frac{1}{2}}\leq c_{4}t^{\frac{1}{2}},\]
which, together with (3.22) and Gronwall's inequality, yields that
\[|\psi(t)|_{q}\leq Cc_{0}\quad\text{for}\quad 0\leq t\leq T_{1}.\]
Second, set \(\varsigma=(\varsigma_{1},\varsigma_{2},\varsigma_{3})^{\top}\) (\(|\varsigma|=1\) and \(\varsigma_{i}=0,1\)). Applying \(\partial_{x}^{\varsigma}\) to (3.21), multiplying by \(3|\partial_{x}^{\varsigma}\psi|\partial_{x}^{\varsigma}\psi\) and then integrating over \(\mathbb{R}^{3}\), one can get
\[\frac{d}{dt}|\partial_{x}^{\varsigma}\psi|_{3}^{3}\leq \Big{(}\sum_{k=1}^{3}|\partial_{k}A_{k}(v)|_{\infty}+|B^{*}(v)|_{ \infty}\Big{)}|\partial_{x}^{\varsigma}\psi|_{3}^{3}+C|\Theta_{\varsigma}|_{3} |\partial_{x}^{\varsigma}\psi|_{3}^{2}, \tag{3.23}\]
where
\[\Theta_{\varsigma}=\partial_{x}^{\varsigma}(B^{*}\psi)-B^{*}\partial_{x}^{ \varsigma}\psi+\sum_{k=1}^{3}\big{(}\partial_{x}^{\varsigma}(A_{k}\partial_{k }\psi)-A_{k}\partial_{k}\partial_{x}^{\varsigma}\psi\big{)}+a\delta\partial_{ x}^{\varsigma}\big{(}g\nabla\mathrm{div}v+\nabla g\mathrm{div}v\big{)}.\]
On the other hand, for \(|\varsigma|=2\) and \(\varsigma_{i}=0,1,2\), applying \(\partial_{x}^{\varsigma}\) to (3.21), multiplying by \(2\partial_{x}^{\varsigma}\psi\) and then integrating over \(\mathbb{R}^{3}\) lead to
\[\frac{d}{dt}|\partial_{x}^{\varsigma}\psi|_{2}^{2}\leq \Big{(}\sum_{k=1}^{3}|\partial_{k}A_{k}(v)|_{\infty}+|B^{*}(v)|_{ \infty}\Big{)}|\partial_{x}^{\varsigma}\psi|_{2}^{2}+C|\Theta_{\varsigma}|_{2 }|\partial_{x}^{\varsigma}\psi|_{2}. \tag{3.24}\]
For \(|\varsigma|=1\), it is easy to obtain
\[|\Theta_{\varsigma}|_{3}\leq C\big{(}|\nabla^{2}v|_{3}(|\psi|_{\infty}+|\nabla g|_{\infty})+| \nabla v|_{\infty}(|\nabla\psi|_{3}+|\nabla^{2}g|_{3})+|\nabla(g\nabla^{2}v) |_{3}\big{)}. \tag{3.25}\]
Similarly, for \(|\varsigma|=2\), one has
\[|\Theta_{\varsigma}|_{2}\leq C\big{(}|\nabla v|_{\infty}(|\nabla^{2}\psi|_{2}+|\nabla^{3}g|_{2} )+|\nabla^{2}v|_{6}(|\nabla\psi|_{3}+|\nabla^{2}g|_{3})\big{)} \tag{3.26}\] \[+C|\nabla^{3}v|_{2}(|\psi|_{\infty}+|\nabla g|_{\infty})+C|g \nabla\mathrm{div}v|_{D^{2}}.\]
It follows from (3.23)-(3.26) and the Gagliardo-Nirenberg inequality
\[|\psi|_{\infty}\leq C|\psi|_{q}^{\Xi}|\nabla\psi|_{6}^{1-\Xi}\leq C|\psi|_{q}^{\Xi}| \nabla^{2}\psi|_{2}^{1-\Xi}\quad\text{with}\quad\Xi=\frac{q}{6+q},\]
that
\[\frac{d}{dt}\|\psi(t)\|_{D^{1,3}\cap D^{2}}\leq Cc_{4}\|\psi(t)\|_{D^{1,3} \cap D^{2}}+C|g\nabla\mathrm{div}v|_{D^{2}}+Cc_{4}^{2},\]
which, along with Gronwall's inequality, implies that for \(0\leq t\leq T_{1}\),
\[\|\psi(t)\|_{D^{1,3}\cap D^{2}}\leq \Big{(}c_{0}+Cc_{4}^{2}t+C\int_{0}^{t}|g\nabla\mathrm{div}v|_{D^{ 2}}\mathrm{d}s\Big{)}\exp(Cc_{4}t)\leq Cc_{0}. \tag{3.27}\]
Next, due to (3.21), it holds that for \(0\leq t\leq T_{1}\),
\[|\psi_{t}(t)|_{2}\leq C\big{(}|\nabla v|_{2}|\psi|_{D^{1,3}}+|\nabla v|_{2}|\psi|_{ \infty}+|g\nabla^{2}v|_{2}+|\nabla g|_{\infty}|\nabla v|_{2}\big{)}\leq Cc_{ 3}^{2},\] \[|\nabla\psi_{t}(t)|_{2}\leq C\big{(}\|v\|_{3}(\|\psi\|_{L^{q}\cap D^{1,3}\cap D^{2}}+ \|\nabla g\|_{L^{q}\cap D^{1,3}\cap D^{2}})+|g\nabla^{2}v|_{D_{*}^{1}}\big{)} \leq Cc_{4}^{2},\] \[\int_{0}^{t}|\psi_{ss}|_{2}^{2}\mathrm{d}s\leq C\int_{0}^{t}\Big{(}|\nabla(v\cdot\psi)_{s}|_{2}^{2}+|(g \nabla\mathrm{div}v)_{s}|_{2}^{2}+|(\nabla g\mathrm{div}v)_{s}|_{2}^{2}\Big{)} \mathrm{d}s\leq Cc_{4}^{4}.\]
Finally, it follows from the Gagliardo-Nirenberg inequality and (3.17) that
\[|g\mathrm{div}v|_{\infty}\leq C|g\mathrm{div}v|_{D^{1}}^{\frac{1}{2}}|g\mathrm{div}v|_{D^{2}}^ {\frac{1}{2}}\leq C\big{(}|\nabla g|_{\infty}|\nabla v|_{2}+|g\nabla^{2}v|_{2} \big{)}^{\frac{1}{2}} \tag{3.28}\] \[\cdot\big{(}|\nabla^{2}g|_{2}|\nabla v|_{\infty}+|\nabla g|_{ \infty}|\nabla^{2}v|_{2}+|g\nabla^{2}v|_{D_{*}^{1}}\big{)}^{\frac{1}{2}}\leq Cc _{3}^{\frac{3}{2}}c_{4}^{\frac{1}{2}}.\]
This, together with \((3.1)_{4}\), yields that for \(0\leq t\leq T_{1}\),
\[|h_{t}(t)|_{\infty}\leq C(|v|_{\infty}|\psi|_{\infty}+|g\mathrm{div}v|_{\infty})\leq Cc_{3}^{\frac{3}{2}}c_{4}^{\frac{1}{2}},\] \[\int_{0}^{t}|h_{ss}|_{6}^{2}\mathrm{d}s\leq C\int_{0}^{t}\big{(}|v|_{\infty}|\psi_{s}|_{6}+|v_{s}|_{6}|\psi|_{\infty}+|g_{s}|_{\infty}|\nabla v|_{6}+|g\nabla v_{s}|_{6}\big{)}^{2}\mathrm{d}s\] \[\leq Cc_{4}^{4},\]
where one has used the fact that
\[|g\nabla v_{t}|_{6}\leq C\big{(}|\nabla g|_{\infty}|\nabla v_{t}|_{2}+|g\nabla^{2}v_{t}| _{2}) \tag{3.29}\] \[\leq C\big{(}|\nabla g|_{\infty}|\nabla v_{t}|_{2}+|(g\nabla^{2}v)_{t }|_{2}+|g_{t}|_{\infty}|\nabla^{2}v|_{2}).\]
The proof of Lemma 3.3 is complete.
#### 3.2.3. The equivalence of \(g\) and \(h\) in short time
Set \(\varphi=h^{-1}\).
**Lemma 3.4**.: _It holds that for \((t,x)\in[0,T_{1}]\times\mathbb{R}^{3}\),_
\[\frac{2}{3}\eta^{-2\iota}<\varphi(t,x)\leq 2c_{0},\quad h(t,x)>\frac{1}{2c_{0}},\quad\widetilde{C}^{-1} \leq gh^{-1}(t,x)\leq\widetilde{C}, \tag{3.30}\]
_where \(\widetilde{C}\) is a suitable constant independent of \((\epsilon,\eta)\) and \(c_{i}\)\((i=1,2,...,5)\)._
Proof.: Note that
\[\varphi_{t}+v\cdot\nabla\varphi-(\delta-1)g\varphi^{2}\mathrm{div}v=0. \tag{3.31}\]
Let \(X(t;x)\) be the particle path defined by
\[\begin{cases}\frac{d}{ds}X(t;x)=v(s,X(t;x)),&0\leq t\leq T;\\ X(0;x)=x,&x\in\mathbb{R}^{3}.\end{cases} \tag{3.32}\]
Then
\[\varphi(t,X(t;x))=\varphi_{0}(x)\Big{(}1+(1-\delta)\varphi_{0}(x)\int_{0}^{t }g\mathrm{div}v(s,X(s;x))\mathrm{d}s\Big{)}^{-1}. \tag{3.33}\]
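The formula (3.33) follows by solving the Riccati-type ODE that (3.31) induces along the particle path:

\[\frac{d}{dt}\varphi(t,X(t;x))=(\delta-1)g\varphi^{2}\mathrm{div}v\quad\Longrightarrow\quad\frac{d}{dt}\frac{1}{\varphi(t,X(t;x))}=(1-\delta)g\,\mathrm{div}v,\]

so that integrating from \(0\) to \(t\) and inverting gives the displayed expression.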
The formula (3.33), along with (3.28), implies that
\[\frac{2}{3}\eta^{-2\iota}<\varphi(t,x)<2|\varphi_{0}|_{\infty}\leq 2c_{0}\quad\text{for}\quad(t,x)\in[0,T_{1}]\times\mathbb{R}^{3}. \tag{3.34}\]
Set \(gh^{-1}=y(t,x)\). Then
\[y_{t}+yh^{-1}h_{t}=g_{t}\varphi;\quad y(0,x)=1. \tag{3.35}\]
Thus
\[y(t,x)=\exp\big{(}-\int_{0}^{t}h_{s}h^{-1}\mathrm{d}s\big{)}\big{(}1+\int_{0}^ {t}g_{s}\varphi\exp\big{(}\int_{0}^{s}h_{\tau}h^{-1}\mathrm{d}\tau\big{)} \mathrm{d}s\big{)}, \tag{3.36}\]
which, along with Lemma 3.3, (3.17) and (3.34), yields (3.30).
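For clarity, (3.36) is just the integrating-factor solution of the linear ODE (3.35): multiplying (3.35) by \(\exp\big{(}\int_{0}^{t}h_{s}h^{-1}\mathrm{d}s\big{)}\) gives

\[\frac{d}{dt}\Big{(}y\exp\Big{(}\int_{0}^{t}h_{s}h^{-1}\mathrm{d}s\Big{)}\Big{)}=g_{t}\varphi\exp\Big{(}\int_{0}^{t}h_{s}h^{-1}\mathrm{d}s\Big{)},\]

which, integrated with \(y(0,x)=1\), yields (3.36).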
The proof of Lemma 3.4 is complete.
#### 3.2.4. A priori estimates for h-related auxiliary variables
Set
\[\xi=\nabla h^{\frac{3}{4}},\quad\zeta=\nabla h^{\frac{3}{8}},\quad n=(ah)^{b}=a^{b }h^{\frac{2-\delta-\gamma}{\delta-1}}.\]
**Lemma 3.5**.: _For \(t\in[0,T_{1}]\) and \(q>3\), it holds that_
\[|\xi(t)|_{D_{*}^{1}}+|\zeta(t)|_{4}+|h^{-\frac{1}{4}}\nabla^{2}h( t)|_{2}\leq M(c_{0}), \tag{3.37}\] \[\|n(t)\|_{L^{\infty}\cap D^{1,q}\cap D^{1,4}\cap D^{1,6}\cap D^{ 2}\cap D^{3}}\leq M(c_{0}),\quad|n_{t}(t)|_{2}\leq M(c_{0})c_{1},\] \[|n_{t}(t)|_{\infty}+|\nabla n_{t}(t)|_{2}+|\nabla n_{t}(t)|_{6} \leq M(c_{0})c_{4}^{2},\ \ |n_{tt}(t)|_{2}\leq M(c_{0})c_{4}^{3}.\]
Proof.: We start with the estimate on \(\xi\). It follows from \((3.1)_{4}\) that
\[\xi_{t}+\sum_{k=1}^{3}A_{k}(v)\partial_{k}\xi+B^{*}(v)\xi+\frac{3}{4}(\delta-1 )\nabla(h^{-\frac{1}{4}}g\mathrm{div}v)=0. \tag{3.38}\]
Set \(\varsigma=(\varsigma_{1},\varsigma_{2},\varsigma_{3})^{\top}\) (\(|\varsigma|=1\) and \(\varsigma_{i}=0,1\)). Applying \(\partial_{x}^{\varsigma}\) to (3.38), multiplying by \(2\partial_{x}^{\varsigma}\xi\) and then integrating over \(\mathbb{R}^{3}\), one can get
\[\frac{d}{dt}|\partial_{x}^{\varsigma}\xi|_{2}^{2}\leq C(|\nabla v|_{\infty}+|g\mathrm{div}v|_{\infty}|\varphi|_{\infty})| \partial_{x}^{\varsigma}\xi|_{2}^{2}+C|\nabla^{2}v|_{3}|\xi|_{6}|\partial_{x}^ {\varsigma}\xi|_{2}\] \[+C\big{(}|\varphi|_{\infty}^{\frac{5}{4}}(|gh^{-1}|_{\infty}| \psi|_{\infty}^{2}|\nabla v|_{2}+|\nabla g|_{\infty}|\nabla v|_{2}|\psi|_{ \infty}+|g\nabla^{2}v|_{2}|\psi|_{\infty})\] \[+|\varphi|_{\infty}^{\frac{1}{4}}(|\nabla^{2}g|_{3}|\nabla v|_{6 }+|\nabla g|_{\infty}|\nabla^{2}v|_{2}+|g\nabla^{3}v|_{2})\big{)}|\partial_{x }^{\varsigma}\xi|_{2},\]
which, along with (3.17), Lemmas 3.3-3.4 and Gronwall's inequality, yields that
\[|\nabla\xi(t)|_{2}\leq Cc_{0}\quad\text{for}\quad 0\leq t\leq T_{1}. \tag{3.39}\]
Similarly, (3.1)\({}_{4}\) implies that
\[\zeta_{t}+\sum_{k=1}^{3}A_{k}(v)\partial_{k}\zeta+B^{*}(v)\zeta+\frac{3}{8}( \delta-1)\nabla(h^{-\frac{5}{8}}g\mathrm{div}v)=0. \tag{3.40}\]
Then multiplying (3.40) by \(4|\zeta|^{2}\zeta\) and integrating with respect to \(x\) over \(\mathbb{R}^{3}\) yield
\[\frac{d}{dt}|\zeta|_{4}^{4}\leq C|\nabla v|_{\infty}|\zeta|_{4}^{4}+C\big{(}|g\nabla^{2}v|_{4}+| \nabla g|_{\infty}|\nabla v|_{4}+|gh^{-1}|_{\infty}|\nabla v|_{4}|\psi|_{ \infty}\big{)}|\varphi|_{\infty}^{\frac{5}{8}}|\zeta|_{4}^{3},\]
which, along with (3.17), Lemmas 3.3-3.4 and Gronwall's inequality, yields that
\[|\zeta(t)|_{4}\leq Cc_{0}\quad\text{for}\quad 0\leq t\leq T_{1}. \tag{3.41}\]
Combining (3.39) with (3.41) yields that
\[|h^{-\frac{1}{4}}\nabla^{2}h(t)|_{2}\leq C(|\nabla\xi(t)|_{2}+|\zeta(t)|_{4}^ {2})\leq M(c_{0})\quad\text{for}\quad 0\leq t\leq T_{1}. \tag{3.42}\]
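Behind (3.42) is the pointwise identity obtained directly from \(\xi=\frac{3}{4}h^{-\frac{1}{4}}\nabla h\) and \(\zeta=\frac{3}{8}h^{-\frac{5}{8}}\nabla h\):

\[h^{-\frac{1}{4}}\partial_{ij}h=\frac{4}{3}\partial_{j}\xi_{i}+\frac{1}{4}h^{-\frac{5}{4}}\partial_{i}h\partial_{j}h=\frac{4}{3}\partial_{j}\xi_{i}+\frac{16}{9}\zeta_{i}\zeta_{j},\]

together with \(|\zeta_{i}\zeta_{j}|_{2}\leq|\zeta|_{4}^{2}\).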
Finally, note that \(n=(ah)^{b}\) satisfies
\[n_{t}+v\cdot\nabla n+(2-\delta-\gamma)a^{b}h^{b-1}g\mathrm{div}v=0. \tag{3.43}\]
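Indeed, (3.43) follows directly from \((3.1)_{4}\) and \(b(\delta-1)=2-\delta-\gamma\):

\[n_{t}+v\cdot\nabla n=ba^{b}h^{b-1}\big{(}h_{t}+v\cdot\nabla h\big{)}=-b(\delta-1)a^{b}h^{b-1}g\,\mathrm{div}v=-(2-\delta-\gamma)a^{b}h^{b-1}g\,\mathrm{div}v.\]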
Then it follows from Lemmas 3.3-3.4, (3.17), (3.39) and (3.41) that for \(0\leq t\leq T_{1}\),
\[|n|_{\infty}\leq a^{b}|\varphi|_{\infty}^{-b}\leq M(c_{0}),\quad| \nabla n|_{q}=a^{b}|bh^{b-1}\nabla h|_{q}\leq M(c_{0}),\] \[|\nabla n|_{6}=\frac{4}{3}a^{b}|bh^{b-\frac{3}{4}}\nabla h^{\frac{ 3}{4}}|_{6}\leq M(c_{0}),\quad|\nabla n|_{4}=\frac{8}{3}a^{b}|bh^{b-\frac{3}{8 }}\nabla h^{\frac{3}{8}}|_{4}\leq M(c_{0}),\] \[|\nabla^{2}n|_{2}\leq C(|h^{b-\frac{3}{4}}\nabla^{2}h^{\frac{3}{ 4}}|_{2}+|h^{b-\frac{3}{4}}\nabla h^{\frac{3}{8}}\cdot\nabla h^{\frac{3}{8}}|_ {2})\leq M(c_{0}),\] \[|\nabla^{3}n|_{2}\leq C(|h^{b-1}\nabla^{3}h|_{2}+|h^{b-\frac{7}{ 4}}\nabla^{2}h\cdot\nabla h^{\frac{3}{4}}|_{2}+|h^{b-\frac{9}{4}}|\nabla h^{ \frac{3}{4}}|^{3}|_{2})\leq M(c_{0}),\] \[|n_{t}|_{\infty}\leq C(|v|_{\infty}|\nabla n|_{\infty}+|\varphi^{1 -b}|_{\infty}|g\mathrm{div}v|_{\infty})\leq M(c_{0})c_{4}^{2},\] \[|n_{t}|_{2}\leq C(|v|_{3}|\nabla n|_{6}+|h^{b}|_{\infty}|gh^{-1}| _{\infty}|\mathrm{div}v|_{2})\leq M(c_{0})c_{1},\] \[|\nabla n_{t}|_{2}\leq C(|\nabla(v\cdot\nabla n)|_{2}+|\nabla(h^{ b-1}g\mathrm{div}v)|_{2})\leq M(c_{0})c_{4}^{2},\] \[|\nabla n_{t}|_{6}\leq C(|\nabla(v\cdot\nabla n)|_{6}+|\nabla(h^{ b-1}g\mathrm{div}v)|_{6})\leq M(c_{0})c_{4}^{2},\] \[|n_{tt}|_{2}\leq C(|(v\cdot\nabla n)_{t}|_{2}+|(h^{b-1}g\mathrm{div }v)_{t}|_{2})\leq M(c_{0})c_{4}^{3}.\]
The proof of Lemma 3.5 is complete.
#### 3.2.5. A priori estimates for \(l\)
Recall that
\[H(v)=2\alpha\sum_{i=1}^{3}(\partial_{i}v_{i})^{2}+\beta(\mathrm{div}v)^{2}+ \alpha\sum_{i\neq j}^{3}(\partial_{i}v_{j})^{2}+2\alpha\sum_{i>j}(\partial_{i }v_{j})(\partial_{j}v_{i}).\]
**Lemma 3.6**.: _For \(T_{2}=\min\{T_{1},(1+Cc_{4})^{-12-2\nu}\}\) and \(t\in[0,T_{2}]\), it holds that_
\[|\nabla l(t)|_{2}^{2}+|h^{\frac{1}{4}}\nabla l(t)|_{2}^{2}+\int_{ 0}^{t}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{s}|_{2}^{2}\text{ds}\leq M(c_{0}), \tag{3.44}\] \[\int_{0}^{t}(|h^{-\frac{1}{4}}l_{s}|_{2}^{2}+|\sqrt{h}\nabla^{2} l|_{2}^{2}+|\nabla^{2}l|_{2}^{2})\text{ds}\leq M(c_{0})c_{1}^{3\nu}.\]
Proof.: It follows from (3.1)\({}_{3}\) that
\[w^{-\nu}h^{-\frac{1}{2}}(l_{t}+v\cdot\nabla l)-a_{4}(h^{2}+\epsilon ^{2})^{\frac{1}{4}}\triangle l \tag{3.45}\] \[= a_{5}ng^{\frac{3}{2}}H(v)+a_{6}wh^{-\frac{1}{2}}\mathrm{div}\psi+ w^{-\nu}\Pi(l,h,w,g).\]
Multiplying (3.45) by \(l_{t}\) and integrating over \(\mathbb{R}^{3}\), one can obtain, by integration by parts, Hölder's inequality, (3.17), Lemmas 3.3-3.5 and Young's inequality, that
\[\frac{a_{4}}{2}\frac{d}{dt}|(h^{2}+\epsilon^{2})^{\frac{1}{8}} \nabla l|_{2}^{2}+|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t}|_{2}^{2}\] \[= -a_{4}\int\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}}\cdot\nabla ll_ {t}-\int w^{-\nu}h^{-\frac{1}{2}}v\cdot\nabla ll_{t}+a_{5}\int ng^{\frac{3}{2} }H(v)l_{t}\] \[+a_{6}\int wh^{-\frac{1}{2}}\mathrm{div}\psi l_{t}+\int w^{-\nu} \Pi(l,h,w,g)l_{t}+\frac{1}{4}\int a_{4}\frac{hh_{t}}{(h^{2}+\epsilon^{2})^{ \frac{3}{4}}}|\nabla l|^{2}\] \[\leq C(|w^{\frac{\nu}{2}}|_{\infty}|\psi|_{\infty}+|w^{-\frac{\nu}{2} }|_{\infty}|v|_{\infty})|\varphi|_{\infty}^{\frac{1}{6}}|h^{\frac{1}{4}}\nabla l |_{2}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t}|_{2}\] \[+C\big{(}|w^{\frac{\nu}{2}}|_{\infty}|n|_{\infty}|g^{\frac{3}{4}} \nabla v|_{3}|g\nabla v|_{6}|hg^{-1}|_{\infty}^{\frac{1}{4}}+|w^{1+\frac{\nu}{2} }|_{\infty}(|h^{-\frac{1}{4}}\nabla^{2}h|_{2}+|\nabla h^{\frac{3}{8}}|_{4}^{2})\]
\[+|w^{\nu+1+\frac{\nu}{2}}|_{\infty}|hg^{-1}|^{\frac{1}{4}}_{\infty}|g^{ \frac{1}{4}}\nabla w|_{3}|\sqrt{g}\nabla w|_{6}\big{)}|w^{-\frac{\nu}{2}}h^{- \frac{1}{4}}l_{t}|_{2}\] \[+C|\varphi|_{\infty}|h_{t}|_{\infty}|h^{\frac{1}{4}}\nabla l|_{2}^ {2}\] \[\leq M(c_{0})c_{1}^{\nu}c_{4}^{2}|h^{\frac{1}{4}}\nabla l|_{2}^{2}+M(c_ {0})c_{1}^{2+\nu}c_{4}^{10}+\frac{1}{2}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t} |_{2}^{2},\]
which, along with (3.5),
\[|g^{\frac{3}{4}}\nabla v|_{3}\leq C|\sqrt{g}\nabla v|_{2}^{\frac{1}{2}}|g \nabla v|_{6}^{\frac{1}{2}}, \tag{3.46}\]
and Gronwall's inequality, yields that for \(0\leq t\leq\min\{T_{1},(1+Cc_{4})^{-12-\nu}\}\),
\[|h^{\frac{1}{4}}\nabla l|_{2}^{2}+\int_{0}^{t}|w^{-\frac{\nu}{2}}h^{-\frac{1} {4}}l_{s}|_{2}^{2}\mathrm{d}s\leq M(c_{0}). \tag{3.47}\]
This, together with (3.17) and Lemma 3.4, leads to
\[|\nabla l|_{2}^{2}\leq M(c_{0}),\quad\int_{0}^{t}|h^{-\frac{1}{4}}l_{s}|_{2}^{2 }\mathrm{d}s\leq M(c_{0})c_{1}^{\nu}. \tag{3.48}\]
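The interpolation (3.46) used above is a direct consequence of Hölder's inequality with the pair of exponents \((\frac{4}{3},4)\):

\[|g^{\frac{3}{4}}\nabla v|_{3}^{3}=\int\big{(}\sqrt{g}|\nabla v|\big{)}^{\frac{3}{2}}\big{(}g|\nabla v|\big{)}^{\frac{3}{2}}\leq|\sqrt{g}\nabla v|_{2}^{\frac{3}{2}}|g\nabla v|_{6}^{\frac{3}{2}}.\]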
On the other hand, (3.45) implies that
\[-a_{4}\triangle((h^{2}+\epsilon^{2})^{\frac{1}{4}}(l-\bar{l})) =-a_{4}(h^{2}+\epsilon^{2})^{\frac{1}{4}}\triangle l-a_{4}F(\nabla (h^{2}+\epsilon^{2})^{\frac{1}{4}},l-\bar{l}) \tag{3.49}\] \[=w^{-\nu}\mathcal{E}-a_{4}F(\nabla(h^{2}+\epsilon^{2})^{\frac{1}{ 4}},l-\bar{l}),\]
where
\[\mathcal{E}= -h^{-\frac{1}{2}}(l_{t}+v\cdot\nabla l)+a_{5}w^{\nu}ng^{\frac{3}{ 2}}H(v)+a_{6}w^{\nu+1}h^{-\frac{1}{2}}\mathrm{div}\psi+\Pi(l,h,w,g), \tag{3.50}\] \[F= F(\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}},l-\bar{l})=(l-\bar{l })\triangle(h^{2}+\epsilon^{2})^{\frac{1}{4}}+2\nabla(h^{2}+\epsilon^{2})^{ \frac{1}{4}}\cdot\nabla l.\]
To derive the \(L^{2}\) estimate of \(\nabla^{2}l\) from (3.49), one starts with the \(L^{2}\) estimates of
\[(\mathcal{E},F=F(\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}},l-\bar{l})).\]
It follows from (3.17), (3.28), (3.48) and Lemmas 3.3-3.5 that
\[|\mathcal{E}|_{2}\leq C(|\varphi|^{\frac{1}{4}}_{\infty}|h^{-\frac{1}{4}}l_{t}|_{2}+| \varphi|^{\frac{1}{4}}_{\infty}|v|_{\infty}|\nabla l|_{2}+|w^{\nu}|_{\infty}| n|_{\infty}|g^{\frac{3}{2}}\nabla v\cdot\nabla v|_{2} \tag{3.51}\] \[+|w^{\nu+1}|_{\infty}|\varphi|^{\frac{1}{4}}_{\infty}|h^{-\frac{1} {4}}\nabla^{2}h|_{2}+|w^{\nu+1}|_{\infty}|\varphi|^{\frac{1}{4}}_{\infty}| \nabla h^{\frac{3}{8}}|_{4}^{2}\] \[+|w^{\nu}|_{\infty}|\varphi|^{\frac{1}{4}}_{\infty}|\psi|_{\infty }|\nabla l|_{2}+|w^{\nu-1}|_{\infty}|\sqrt{g}\nabla w\cdot\nabla w|_{2})\] \[\leq M(c_{0})(c_{1}^{\nu+1}+|h^{-\frac{1}{4}}l_{t}|_{2}),\] \[|F|_{2}\leq C\big{(}|\nabla^{2}(h^{2}+\epsilon^{2})^{\frac{1}{4}}|_{3}|l-\bar{l }|_{6}+|\varphi|^{\frac{1}{4}}_{\infty}|\psi|_{\infty}|\nabla l|_{2}\big{)} \leq M(c_{0}),\]
where one has used (3.5) and the facts that for \(0\leq t\leq\min\{T_{1},(1+Cc_{4})^{-12-\nu}\}\),
\[\|v\|_{2}\leq \|u_{0}\|_{2}+t^{\frac{1}{2}}\Big{(}\int_{0}^{t}\|v_{s}\|_{2}^{2} \mathrm{d}s\Big{)}^{\frac{1}{2}}\leq M(c_{0})(1+c_{4}t^{\frac{1}{2}})\leq M(c_{0}), \tag{3.52}\]
\[|g^{\frac{3}{2}}\nabla v\cdot\nabla v|_{2}\leq |\sqrt{g}(0,x)\nabla u_{0}|_{3}|g(0,x)\nabla u_{0}|_{6}+t^{\frac{1}{ 2}}\Big{(}\int_{0}^{t}|(g^{\frac{3}{2}}\nabla v\cdot\nabla v)_{s}|_{2}^{2} \mathrm{d}s\Big{)}^{\frac{1}{2}}\] \[\leq |\sqrt{g}(0,x)\nabla u_{0}|_{3}|g(0,x)\nabla u_{0}|_{6}\] \[+Ct^{\frac{1}{2}}\Big{(}\int_{0}^{t}\big{(}|g_{s}|_{\infty}^{2}| \sqrt{g}\nabla v|_{2}^{2}|\nabla v|_{\infty}^{2}+|\sqrt{g}\nabla v_{s}|_{2}^{2 }|g\nabla v|_{\infty}^{2}\big{)}\mathrm{d}s\Big{)}^{\frac{1}{2}}\] \[\leq M(c_{0})(1+c_{4}^{3}t)\leq M(c_{0}), \tag{3.53}\] \[|g^{\frac{1}{2}}\nabla w\cdot\nabla w|_{2}\leq |\sqrt{g}(0,x)\nabla l_{0}|_{6}|\nabla l_{0}|_{3}+t^{\frac{1}{2}} \Big{(}\int_{0}^{t}|(\sqrt{g}\nabla w\cdot\nabla w)_{s}|_{2}^{2}\mathrm{d}s \Big{)}^{\frac{1}{2}}\] \[\leq |\sqrt{h_{0}}\nabla l_{0}|_{6}|\nabla l_{0}|_{3}\] \[+Ct^{\frac{1}{2}}\Big{(}\int_{0}^{t}\big{(}|(\sqrt{g})_{s}|_{ \infty}^{2}|\nabla w|_{6}^{2}|\nabla w|_{3}^{2}+|g^{\frac{1}{4}}\nabla w_{s}|_{ 2}^{2}|g^{\frac{1}{4}}\nabla w|_{\infty}^{2}\big{)}\mathrm{d}s\Big{)}^{\frac{1 }{2}}\] \[\leq M(c_{0})(1+c_{4}^{5}t)\leq M(c_{0}),\] \[|\nabla^{2}(h^{2}+\epsilon^{2})^{\frac{1}{4}}|_{3}\leq C(|\varphi|_{\infty}^{\frac{1}{2}}|\nabla\psi|_{3}+|\varphi|_{\infty}| \nabla h^{\frac{3}{4}}|_{6}^{2})\leq M(c_{0}).\]
Then it follows from (3.49)-(3.51), Lemma 4.3 and Lemmas 3.3-3.5 that for \(0\leq t\leq\min\{T_{1},(1+Cc_{4})^{-12-\nu}\}\),
\[|(h^{2}+\epsilon^{2})^{\frac{1}{4}}(l-\bar{l})|_{D^{2}}\leq C(|w^{-\nu}\mathcal{E}|_{2}+|F(\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4} },l-\bar{l})|_{2}) \tag{3.54}\] \[\leq M(c_{0})(c_{1}^{2\nu+1}+c_{1}^{\nu}|h^{-\frac{1}{4}}l_{t}|_{2}),\] \[|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla^{2}l|_{2}\leq C(|(h^{2}+\epsilon^{2})^{\frac{1}{4}}(l-\bar{l})|_{D^{2}}+|\nabla^{2}(h^{2 }+\epsilon^{2})^{\frac{1}{4}}(l-\bar{l})|_{2}\] \[+|\nabla l\cdot\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}}|_{2})\] \[\leq C|(h^{2}+\epsilon^{2})^{\frac{1}{4}}(l-\bar{l})|_{D^{2}}+|\nabla^ {2}(h^{2}+\epsilon^{2})^{\frac{1}{4}}|_{3}|l-\bar{l}|_{6}\] \[+|\varphi|_{\infty}^{\frac{1}{2}}|\psi|_{\infty}|\nabla l|_{2}) \leq M(c_{0})(c_{1}^{2\nu+1}+c_{1}^{\nu}|h^{-\frac{1}{4}}l_{t}|_{2}).\]
Consequently, this, together with (3.48) and Lemma 3.5, shows that \((3.44)_{2}\) holds for \(0\leq t\leq T_{2}=\min\{T_{1},(1+Cc_{4})^{-12-2\nu}\}\).
**Lemma 3.7**.: _For \(T_{3}=\min\{T_{2},(1+Cc_{4})^{-20-8\nu}\}\) and \(t\in[0,T_{3}]\), it holds that_
\[|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t}(t)|_{2}^{2}+\int_{0}^{t}(|h^{\frac{1} {4}}\nabla l_{s}|_{2}^{2}+|\nabla l_{s}|_{2}^{2})\text{ds}\leq M(c_{0}),\]
\[|h^{-\frac{1}{4}}l_{t}(t)|_{2}+|\sqrt{h}\nabla^{2}l(t)|_{2}+|l(t)|_{D^{2}}\leq M(c_{0})c_{1}^{2\nu+1}, \tag{3.55}\] \[\int_{0}^{t}(|\sqrt{h}\nabla^{3}l|_{2}^{2}+|\sqrt{h}\nabla^{2}l|_ {D_{*}^{1}}^{2}+|l|_{D^{3}}^{2})\text{ds}\leq M(c_{0})c_{1}^{2\nu+2}.\]
Proof.: Applying \(\partial_{t}\) to (3.1)\({}_{3}\) yields
\[h^{-\frac{1}{2}}l_{tt}-a_{4}w^{\nu}(h^{2}+\epsilon^{2})^{\frac{1 }{4}}\triangle l_{t} \tag{3.56}\] \[= -(h^{-\frac{1}{2}})_{t}l_{t}-(h^{-\frac{1}{2}}v\cdot\nabla l)_{t}+a _{4}w_{t}^{\nu}(h^{2}+\epsilon^{2})^{\frac{1}{4}}\triangle l+a_{4}w^{\nu}(h^{2} +\epsilon^{2})^{\frac{1}{4}}_{t}\triangle l\] \[+a_{5}(w^{\nu}ng^{\frac{3}{2}}H(v))_{t}+a_{6}(w^{\nu+1}h^{-\frac{ 1}{2}}\mathrm{div}\psi)_{t}+\Pi(l,h,w,g)_{t}.\]
Multiplying (3.56) by \(w^{-\nu}l_{t}\), integrating over \(\mathbb{R}^{3}\) and integration by parts lead to
\[\begin{split}&\frac{1}{2}\frac{d}{dt}|w^{-\frac{\nu}{2}}h^{-\frac{ 1}{4}}l_{t}|_{2}^{2}+a_{4}|(h^{2}+\epsilon^{2})^{\frac{1}{8}}\nabla l_{t}|_{2} ^{2}\\ =&-\int\big{(}(h^{-\frac{1}{2}})_{t}l_{t}+(h^{-\frac {1}{2}}v\cdot\nabla l)_{t}\big{)}w^{-\nu}l_{t}\\ &+a_{4}\int(w_{t}^{\nu}(h^{2}+\epsilon^{2})^{\frac{1}{4}}\triangle l +w^{\nu}(h^{2}+\epsilon^{2})^{\frac{1}{4}}_{t}\triangle l)w^{-\nu}l_{t}\\ &+\int\big{(}a_{5}(w^{\nu}ng^{\frac{3}{2}}H(v))_{t}+a_{6}(w^{\nu +1}h^{-\frac{1}{2}}\text{div}\psi)_{t}+\Pi(l,h,w,g)_{t}\big{)}w^{-\nu}l_{t}\\ &-a_{4}\int\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla l_{t}l_ {t}+\frac{1}{2}\int(w^{-\nu}h^{-\frac{1}{2}})_{t}|l_{t}|^{2}=\sum_{i=1}^{6}J_ {i},\end{split} \tag{3.57}\]
where \(J_{i}\), \(i=1,2,\cdots,6\), are given and estimated as follows:
\[\begin{split} J_{1}=&-\int\big{(}(h^{-\frac{1}{2}} )_{t}l_{t}+(h^{-\frac{1}{2}}v\cdot\nabla l)_{t}\big{)}w^{-\nu}l_{t}\\ \leq& C|\varphi|_{\infty}|h_{t}|_{\infty}|w^{-\frac{ \nu}{2}}h^{-\frac{1}{4}}l_{t}|_{2}^{2}+C|w^{-\frac{\nu}{2}}|_{\infty}(|\varphi |_{\infty}^{\frac{5}{4}}|h_{t}|_{\infty}|v|_{\infty}|\nabla l|_{2}\\ &+|\varphi|_{\infty}^{\frac{1}{4}}|v_{t}|_{3}|\nabla l|_{6}+| \varphi|_{\infty}^{\frac{1}{2}}|v|_{\infty}|h^{\frac{1}{4}}\nabla l_{t}|_{2} )|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t}|_{2},\\ J_{2}=& a_{4}\int(w_{t}^{\nu}(h^{2}+\epsilon^{2})^{ \frac{1}{4}}\triangle l+w^{\nu}(h^{2}+\epsilon^{2})^{\frac{1}{4}}_{t}\triangle l )w^{-\nu}l_{t}\\ \leq& C(|w^{-1+\frac{\nu}{2}}|_{\infty}|hg^{-1}|_{ \infty}^{\frac{1}{4}}|g^{\frac{1}{4}}w_{t}|_{\infty}|(h^{2}+\epsilon^{2})^{ \frac{1}{4}}\nabla^{2}l|_{2}\\ &+|w^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{\frac{3}{4}}|h _{t}|_{\infty}|\sqrt{h}\nabla^{2}l|_{2})|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_ {t}|_{2},\\ J_{3}=& a_{5}\int(w^{\nu}ng^{\frac{3}{2}}H(v))_{t}w ^{-\nu}l_{t}\\ \leq& C|hg^{-1}|_{\infty}^{\frac{1}{4}}\Big{(}(|w^{-1+ \frac{\nu}{2}}|_{\infty}|n|_{\infty}|w_{t}|_{6}|g\nabla v|_{\infty}+|w^{\frac {\nu}{2}}|_{\infty}|n_{t}|_{\infty}|g\nabla v|_{6})|g^{\frac{3}{4}}\nabla v|_{ 3}\\ &+|w^{\frac{\nu}{2}}|_{\infty}|n|_{\infty}(|g_{t}|_{6}|g^{\frac{3 }{4}}\nabla v|_{3}|\nabla v|_{\infty}+|g\nabla v|_{6}|g^{\frac{3}{4}}\nabla v _{t}|_{3})\Big{)}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t}|_{2},\\ J_{4}=& a_{6}\int(w^{\nu+1}h^{-\frac{1}{2}}\text{ div}\psi)_{t}w^{-\nu}l_{t}\\ \leq& C\big{(}|w^{\frac{\nu}{2}}|_{\infty}|\varphi|_{ \infty}^{\frac{1}{4}}|\nabla\psi|_{3}|w_{t}|_{6}+|w^{1+\frac{\nu}{2}}|_{\infty }(|\varphi|_{\infty}|h^{-\frac{1}{4}}\nabla^{2}h|_{2}|h_{t}|_{\infty}\\ &+|\varphi|_{\infty}^{\frac{1}{4}}|\nabla\psi_{t}|_{2})\big{)}|w ^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t}|_{2},\\ J_{5}=&\int\Pi(l,h,w,g)_{t}w^{-\nu}l_{t}\\ \leq& C\big{(}|w^{\frac{\nu}{2}}|_{\infty}|\varphi|_{ \infty}|hg^{-1}|_{\infty}^{\frac{1}{4}}|g^{\frac{1}{4}}w_{t}|_{6}|\nabla h^{ \frac{3}{4}}|_{6}^{2}\\ &+|w^{1+\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}(|h_{t}|_{\infty }|\nabla h^{\frac{3}{8}}|_{4}^{2}+|\nabla h^{\frac{3}{4}}|_{6}|\psi_{t}|_{3}) \big{)}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t}|_{2}\\ &+C\big{(}|w^{-1+\frac{\nu}{2}}|_{\infty}|\nabla l|_{6}|w_{t}|_{6 }|\nabla h^{\frac{3}{4}}|_{6}+|w^{\frac{\nu}{2}}|_{\infty}(|\varphi|_{\infty}^{ \frac{3}{2}}|\psi|_{\infty}|h^{\frac{1}{4}}\nabla l|_{2}|h_{t}|_{\infty}\\ &+|\varphi|_{\infty}^{\frac{1}{2}}|\psi|_{\infty}|h^{\frac{1}{4}} \nabla l|_{2}+|\varphi|_{\infty}^{\frac{3}{4}}|\psi_{t}|_{3}|\sqrt{h}\nabla l|_{6} )\end{split} \tag{3.58}\]
\[+|b_{0}^{2b_{0}}|_{\infty}|\phi_{0}^{\frac{3}{2}\epsilon}\nabla u_{0} |_{3}|\phi_{0}^{2\epsilon}\nabla u_{0}|_{6})+C|l_{0}^{1+\frac{\nu}{2}}|_{\infty}(| \nabla^{2}\phi_{0}^{\frac{3}{2}\epsilon}|_{2}+|\nabla\phi_{0}^{\frac{3}{4} \epsilon}|_{4}^{2})\] \[+C|l_{0}^{\frac{\nu}{2}}|_{\infty}|\nabla\phi_{0}^{\frac{3}{2} \epsilon}|_{6}|\nabla l_{0}|_{3}+C|l_{0}^{\frac{\nu}{2}-1}|_{\infty}|\phi_{0}^{ \frac{3}{2}\epsilon}\nabla l_{0}|_{6}|\nabla l_{0}|_{3}\leq M(c_{0}).\]
Letting \(\tau\to 0\) in (3.61) and using Gronwall's inequality give that for \(0\leq t\leq\min\{T_{2},(1+Cc_{4})^{-20-4\nu}\}\),
\[|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t}(t)|_{2}^{2}+\int_{0}^{t}(|h^{\frac{1 }{4}}\nabla l_{s}|_{2}^{2}+|\nabla l_{s}|_{2}^{2})\mathrm{d}s\leq M(c_{0}), \tag{3.63}\]
which, along with (3.54), yields that for \(0\leq t\leq\min\{T_{2},(1+Cc_{4})^{-20-4\nu}\}\),
\[|(h^{2}+\epsilon^{2})^{\frac{1}{4}}l(t)|_{D^{2}}+|\sqrt{h}\nabla^{2}l(t)|_{2}+ |l(t)|_{D^{2}}\leq M(c_{0})c_{1}^{2\nu+1}. \tag{3.64}\]
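Here and in what follows, Gronwall's inequality is used in its differential form: if \(y\geq 0\) satisfies \(y'(t)\leq A(t)y(t)+B(t)\) with \(A,B\geq 0\) integrable, then

\[y(t)\leq\Big(y(0)+\int_{0}^{t}B(s)\mathrm{d}s\Big)\exp\Big(\int_{0}^{t}A(s)\mathrm{d}s\Big).\]

In the step leading to (3.63) it is applied with \(y=|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{t}|_{2}^{2}\), after the terms \(J_{i}\) have been estimated and absorbed via Young's inequality.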
Next, to derive the \(L^{2}\)-estimates of \(\nabla^{3}l\), one considers the \(L^{2}\)-estimates of
\[\big{(}\nabla\mathcal{E},\nabla\widetilde{F}=\nabla F(\nabla(h^{2}+\epsilon^{2})^ {\frac{1}{4}},l-\bar{l})\big{)}.\]
Using Lemmas 3.3-3.6, (3.17) and (3.52)-(3.53), one can get that
\[|\mathcal{E}|_{D^{1}_{*}}\leq C\big{(}|\varphi|_{\infty}^{\frac{1}{2}}|\nabla l_{t}|_{2}+| \varphi|_{\infty}^{\frac{5}{4}}|\psi|_{\infty}|h^{-\frac{1}{4}}l_{t}|_{2}+| \varphi|_{\infty}^{\frac{3}{2}}|\psi|_{\infty}|v|_{3}|\nabla l|_{6} \tag{3.65}\] \[+|\varphi|_{\infty}^{\frac{1}{2}}(|v|_{\infty}|\nabla^{2}l|_{2}+ |\nabla v|_{3}|\nabla l|_{6})+|w^{\nu-1}|_{\infty}|n|_{\infty}|\nabla w|_{6}|g ^{\frac{3}{2}}\nabla v\cdot\nabla v|_{3}\] \[+|w^{\nu}|_{\infty}(|\nabla n|_{\infty}|g^{\frac{3}{2}}\nabla v \cdot\nabla v|_{2}+|n|_{\infty}|\nabla g|_{\infty}|\varphi|_{\infty}^{\frac{ 1}{2}}|\nabla v|_{3}|g\nabla v|_{6}\] \[+|n|_{\infty}|g^{\frac{3}{2}}\nabla v\cdot\nabla^{2}v|_{2}+| \varphi|_{\infty}^{\frac{1}{2}}|\nabla w|_{6}|\nabla\psi|_{3})\] \[+|w^{\nu+1}|_{\infty}(|\varphi|_{\infty}^{\frac{5}{4}}|\nabla h^{ \frac{3}{4}}|_{6}|\nabla\psi|_{3}+|\varphi|_{\infty}^{\frac{1}{2}}|\nabla^{2} \psi|_{2})\] \[+|w^{\nu}|_{\infty}|\varphi|_{\infty}^{\frac{3}{2}}|\psi|_{\infty }^{2}|\nabla w|_{2}+|w^{\nu+1}|_{\infty}(|\varphi|_{\infty}^{\frac{5}{4}}|\psi |_{\infty}|\nabla h^{\frac{3}{8}}|_{4}^{2}\] \[+|\varphi|_{\infty}^{\frac{5}{4}}|\psi|_{\infty}|h^{-\frac{1}{4} }\nabla^{2}h|_{2})+|w^{\nu-1}|_{\infty}|\varphi|_{\infty}^{\frac{1}{2}}|\psi |_{\infty}|\nabla l|_{6}|\nabla w|_{3}\] \[+|w^{\nu}|_{\infty}(|\varphi|_{\infty}^{\frac{3}{2}}|\psi|_{ \infty}^{2}|\nabla l|_{2}+|\varphi|_{\infty}^{\frac{1}{2}}|\psi|_{\infty}| \nabla^{2}l|_{2}\] \[+|\varphi|_{\infty}^{\frac{1}{2}}|\nabla\psi|_{3}|\nabla l|_{6}) +|w^{\nu-2}|_{\infty}|\sqrt{g}\nabla w|_{6}|\nabla w|_{6}^{2}\] \[+|w^{\nu-1}|_{\infty}(|g^{-\frac{1}{2}}|_{\infty}|\nabla g|_{ \infty}|\nabla w|_{4}^{2}+|\sqrt{g}\nabla w\cdot\nabla^{2}w|_{2})\big{)}\] \[\leq M(c_{0})(|\nabla l_{t}|_{2}+c_{1}^{3\nu+2}),\] \[|\widetilde{F}|_{D^{1}_{*}}\leq C(|\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}}|_{\infty}|\nabla^{2}l|_{2}+| \nabla^{2}(h^{2}+\epsilon^{2})^{\frac{1}{4}}|_{3}|\nabla l|_{6}\] \[+|\nabla^{3}(h^{2}+\epsilon^{2})^{\frac{1}{4}}|_{2}|l-\bar{l}|_{ \infty})\leq M(c_{0})c_{1}^{2\nu+1},\]
where one has used (3.5) and the facts that
\[\begin{split}|g\nabla v|_{6}\leq&|h_{0}\nabla u_{0}|_{6}+t^{\frac{1}{2}}\Big(\int_{0}^{t}|(g\nabla v)_{s}|_{6}^{2}\mathrm{d}s\Big)^{\frac{1}{2}}\\ \leq& C(|h_{0}\nabla^{2}u_{0}|_{2}+|\psi_{0}|_{\infty}|\nabla u_{0}|_{2})\\ &+t^{\frac{1}{2}}\Big(\int_{0}^{t}(|g_{s}|_{\infty}|\nabla v|_{6}+|g\nabla^{2}v_{s}|_{2}+|\nabla g|_{\infty}|\nabla v_{s}|_{2})^{2}\mathrm{d}s\Big)^{\frac{1}{2}}\\ \leq& M(c_{0})(1+c_{4}^{2}(t+t^{\frac{1}{2}}))\leq M(c_{0}),\\ |g^{\frac{3}{2}}\nabla v\cdot\nabla^{2}v|_{2}\leq&|\sqrt{h_{0}}\nabla u_{0}|_{3}|h_{0}\nabla^{2}u_{0}|_{6}+t^{\frac{1}{2}}\Big(\int_{0}^{t}|(g^{\frac{3}{2}}\nabla v\cdot\nabla^{2}v)_{s}|_{2}^{2}\mathrm{d}s\Big)^{\frac{1}{2}}\\ \leq&|\sqrt{h_{0}}\nabla u_{0}|_{3}|h_{0}\nabla^{2}u_{0}|_{6}+Ct^{\frac{1}{2}}\Big(\int_{0}^{t}\big(|g_{s}|_{\infty}^{2}|g^{-1}|_{\infty}|\nabla v|_{\infty}^{2}|g\nabla^{2}v|_{2}^{2}\\ &+|g\nabla v_{s}|_{6}^{2}|\sqrt{g}\nabla^{2}v|_{3}^{2}+|g\nabla^{2}v_{s}|_{2}^{2}|\sqrt{g}\nabla v|_{\infty}^{2}\big)\mathrm{d}s\Big)^{\frac{1}{2}}\\ \leq& M(c_{0})(1+c_{4}^{4}(t+t^{\frac{1}{2}}))\leq M(c_{0}),\end{split} \tag{3.66}\]
\[|\sqrt{g}\nabla w|_{6}\leq |\sqrt{h_{0}}\nabla l_{0}|_{6}+t^{\frac{1}{2}}\Big{(}\int_{0}^{t}|( \sqrt{g}\nabla w)_{s}|_{6}^{2}\mathrm{d}s\Big{)}^{\frac{1}{2}} \tag{3.67}\] \[\leq C(|\sqrt{h_{0}}\nabla^{2}l_{0}|_{2}+|(h_{0})^{-1}|_{\infty}^{ \frac{1}{2}}|\psi_{0}|_{\infty}|\nabla l_{0}|_{2})\] \[+t^{\frac{1}{2}}\Big{(}\int_{0}^{t}(|g^{-1}|_{\infty}^{\frac{1}{ 2}}|g_{s}|_{\infty}|\nabla w|_{6}\] \[+|\sqrt{g}\nabla^{2}w_{s}|_{2}+|g^{-1}|_{\infty}^{\frac{1}{2}}| \nabla g|_{\infty}|\nabla w_{s}|_{2})^{2}\mathrm{d}s\Big{)}^{\frac{1}{2}}\] \[\leq M(c_{0})(1+c_{4}^{3}(t+t^{\frac{1}{2}}))\leq M(c_{0}),\] \[|g^{\frac{3}{2}}\nabla v\cdot\nabla v|_{3}\leq C|g^{\frac{3}{2}}\nabla v\cdot\nabla v|_{2}^{\frac{1}{2}}|g^{ \frac{3}{2}}\nabla v\cdot\nabla v|_{6}^{\frac{1}{2}}\] \[\leq M(c_{0})(|g^{\frac{3}{2}}\nabla v\cdot\nabla v|_{2}+|\nabla g|_{ \infty}|\varphi|_{\infty}^{\frac{1}{2}}|\nabla v|_{3}|g\nabla v|_{6}\] \[+|g^{\frac{3}{2}}\nabla v\cdot\nabla^{2}v|_{2})^{\frac{1}{2}} \leq M(c_{0})c_{1},\] \[|\sqrt{g}\nabla w\cdot\nabla^{2}w|_{2}\leq |\nabla l_{0}|_{\infty}|\sqrt{h_{0}}\nabla^{2}l_{0}|_{2}+t^{\frac{ 1}{2}}\Big{(}\int_{0}^{t}|(\sqrt{g}\nabla w\cdot\nabla^{2}w)_{s}|_{2}^{2} \mathrm{d}s\Big{)}^{\frac{1}{2}}\] \[\leq |\nabla l_{0}|_{\infty}|\sqrt{h_{0}}\nabla^{2}l_{0}|_{2}+Ct^{ \frac{1}{2}}\Big{(}\int_{0}^{t}\big{(}|g_{s}|_{\infty}^{2}|g^{-1}|_{\infty}| \nabla w|_{\infty}^{2}|\nabla^{2}w|_{2}^{2}\] \[+|\sqrt{g}\nabla w_{s}|_{6}^{2}|\nabla^{2}w|_{3}^{2}+|\sqrt{g} \nabla^{2}w_{s}|_{2}^{2}|\nabla w|_{\infty}^{2}\big{)}\mathrm{d}s\Big{)}^{ \frac{1}{2}}\] \[\leq M(c_{0})(1+c_{4}^{4}(t+t^{\frac{1}{2}}))\leq M(c_{0}),\] \[\|w\|_{D^{1}\cap D^{2}}\leq \|l_{0}\|_{D^{1}\cap D^{2}}+t^{\frac{1}{2}}\Big{(}\int_{0}^{t}\|w _{s}\|_{D^{1}\cap D^{2}}^{2}\mathrm{d}s\Big{)}^{\frac{1}{2}}\] \[\leq M(c_{0})(1+c_{4}^{2}(t+t^{\frac{1}{2}}))\leq M(c_{0}),\] \[|\nabla^{3}(h^{2}+\epsilon^{2})^{\frac{1}{4}}|_{2}\leq C(|\varphi|_{\infty}^{\frac{1}{2}}|\nabla^{3}h|_{2}+|\varphi|_{ \infty}^{\frac{5}{4}}|\psi|_{\infty}|h^{-\frac{1}{4}}\nabla^{2}h|_{2}+| \varphi|_{\infty}^{\frac{7}{4}}|\nabla h^{\frac{3}{4}}|_{6}^{3})\] \[\leq M(c_{0}),\]
for \(0\leq t\leq\min\{T_{2},(1+Cc_{4})^{-20-4\nu}\}\).
It follows from (3.49), (3.64)-(3.65), Lemma 4.3 and Lemmas 3.3-3.6 that
\[\begin{split}|(h^{2}+\epsilon^{2})^{\frac{1}{4}}(l-\bar{l})(t)|_{D^{3}}\leq& C(|w^{-\nu}\mathcal{E}|_{D^{1}_{*}}+|F(\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}},l-\bar{l})|_{D^{1}_{*}})\\ \leq& C(|w^{-\nu}|_{\infty}|\mathcal{E}|_{D^{1}_{*}}+|\nabla w^{-\nu}|_{3}|\mathcal{E}|_{6}+|\widetilde{F}|_{D^{1}_{*}})\\ \leq& M(c_{0})(c_{1}^{\nu+1}|\nabla l_{t}|_{2}+c_{1}^{4\nu+3}),\\ |(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla^{3}l(t)|_{2}\leq& C(|(h^{2}+\epsilon^{2})^{\frac{1}{4}}(l(t)-\bar{l})|_{D^{3}}+|\varphi|_{\infty}^{2}(|\nabla^{2}\psi|_{2}\\ &+|\nabla\psi|_{3}|\nabla h^{\frac{3}{4}}|_{6}+|\psi|_{\infty}^{2}+|\nabla h^{\frac{3}{4}}|_{6}^{3})(1+\|\nabla l\|_{1}))\\ \leq& M(c_{0})(c_{1}^{\nu+1}|\nabla l_{t}|_{2}+c_{1}^{4\nu+3}).\end{split} \tag{3.68}\]
Finally, one gets from (3.63), (3.68) and Lemma 3.4 that
\[\int_{0}^{t}(|\sqrt{h}\nabla^{3}l|_{2}^{2}+|\sqrt{h}\nabla^{2}l|_{D^{1}_{*}}^{ 2}+|l|_{D^{3}}^{2})\mathrm{d}s\leq M(c_{0})c_{1}^{2\nu+2}, \tag{3.69}\]
for \(0\leq t\leq T_{3}=\min\{T_{2},(1+Cc_{4})^{-20-8\nu}\}\).
The proof of Lemma 3.7 is complete.
**Lemma 3.8**.: _For \(T_{4}=\min\{T_{3},(1+Cc_{4})^{-40-10\nu}\}\) and \(t\in[0,T_{4}]\), it holds that_
\[|h^{\frac{1}{4}}\nabla l_{t}(t)|_{2}^{2}+|\nabla l_{t}(t)|_{2}^{2 }+\int_{0}^{t}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{ss}|_{2}^{2}ds\leq M(c_{0}), \tag{3.70}\] \[|\sqrt{h}\nabla^{3}l(t)|_{2}+|\sqrt{h}\nabla^{2}l(t)|_{D_{*}^{1}} +|l(t)|_{D^{3}}\leq M(c_{0})c_{1}^{4\nu+3},\] \[\int_{0}^{t}(|\sqrt{h}\nabla^{2}l_{s}|_{2}^{2}+|\nabla^{2}l_{s}|_ {2}^{2})ds\leq M(c_{0})c_{1}^{3\nu}.\]
Proof.: Multiplying (3.56) by \(w^{-\nu}l_{tt}\) and integrating over \(\mathbb{R}^{3}\) yield
\[\frac{a_{4}}{2}\frac{d}{dt}|(h^{2}+\epsilon^{2})^{\frac{1}{8}}\nabla l_{t}|_{ 2}^{2}+|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}^{2}=\sum_{i=7}^{12}J_{i}, \tag{3.71}\]
where \(J_{i}\), \(i=7,8,\cdots,12\), are given and estimated as follows:
\[J_{7}= -\int\big{(}(h^{-\frac{1}{2}})_{t}l_{t}+(h^{-\frac{1}{2}}v\cdot \nabla l)_{t}\big{)}w^{-\nu}l_{tt} \tag{3.72}\] \[\leq C|w^{-\frac{\nu}{2}}|_{\infty}(|\varphi|_{\infty}|h_{t}|_{\infty }|h^{-\frac{1}{4}}l_{t}|_{2}+|\varphi|_{\infty}^{\frac{3}{2}}|h_{t}|_{\infty}| v|_{\infty}|h^{\frac{1}{4}}\nabla l|_{2}\] \[+|\varphi|_{\infty}^{\frac{1}{2}}|v_{t}|_{3}|\nabla l|_{6}+| \varphi|_{\infty}^{\frac{1}{2}}|v|_{\infty}|h^{\frac{1}{4}}\nabla l_{t}|_{2} )|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2},\] \[J_{8}= a_{4}\int(w_{t}^{\nu}(h^{2}+\epsilon^{2})^{\frac{1}{4}}\triangle l +w^{\nu}(h^{2}+\epsilon^{2})^{\frac{1}{4}}_{t}\triangle l)w^{-\nu}l_{tt}\] \[\leq C|hg^{-1}|_{\infty}^{\frac{1}{4}}(|w^{-1+\frac{\nu}{2}}|_{\infty }|g^{\frac{1}{4}}w_{t}|_{\infty}|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla^{2} l|_{2}\] \[+|w^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{\frac{3}{4}}|h_ {t}|_{\infty}|\sqrt{h}\nabla^{2}l|_{2})|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{ tt}|_{2},\] \[J_{9}= a_{5}\int(w^{\nu}ng^{\frac{3}{2}}H(v))_{t}w^{-\nu}l_{tt}\] (3.73) \[\leq C|hg^{-1}|_{\infty}^{\frac{1}{4}}\Big{(}(|w^{-1+\frac{\nu}{2}} |_{\infty}|n|_{\infty}|w_{t}|_{6}+|w^{\frac{\nu}{2}}|_{\infty}|n_{t}|_{6})|g \nabla v|_{\infty}|g^{\frac{3}{4}}\nabla v|_{3}\] \[+|w^{\frac{\nu}{2}}|_{\infty}|n|_{\infty}(|\nabla v|_{\infty}|g_{ t}|_{6}|g^{\frac{3}{4}}\nabla v|_{3}+|g\nabla v|_{6}|g^{\frac{3}{4}}\nabla v _{t}|_{3})\Big{)}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2},\] \[J_{10}= a_{6}\int(w^{\nu+1}h^{-\frac{1}{2}}\mathrm{div}\psi)_{t}w^{-\nu}l _{tt}\] \[\leq C(|w^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{\frac{1}{4}}| \nabla\psi|_{3}|w_{t}|_{6}+|w^{1+\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}|h^ {-\frac{1}{4}}\nabla^{2}h|_{2}|h_{t}|_{\infty}\] \[+|w^{1+\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{\frac{1}{4}}| \nabla\psi_{t}|_{2})|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2},\] \[J_{11}= \int\Pi(l,h,w,g)_{t}w^{-\nu}l_{tt}\] \[\leq C\big{(}|w^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}|hg^{-1}|_{ \infty}^{\frac{1}{4}}|g^{\frac{1}{4}}w_{t}|_{6}|\nabla h^{\frac{3}{4}}|_{6}^{2}\] \[+|w^{1+\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}(|h_{t}|_{\infty} |\nabla h^{\frac{3}{8}}|_{4}^{2}+|\nabla h^{\frac{3}{4}}|6|\psi_{t}|_{3})\] \[+|w^{-1+\frac{\nu}{2}}|_{\infty}|\nabla l|_{6}w_{t}|_{6}|\nabla h^{ \frac{3}{4}}|_{6}+|w^{\frac{\nu}{2}}|_{\infty}(|\varphi|_{\infty}^{\frac{3}{ 2}}|\psi|_{\infty}|h^{\frac{1}{4}}\nabla l|_{2}|h_{t}|_{\infty}\]
\[+|l_{0}^{\nu}|_{\infty}|\phi_{0}^{-\iota}|_{\infty}|\phi_{0}^{\frac{ \frac{5}{4}}{4}}\nabla^{3}l_{0}|_{2}+|\nabla l_{0}^{\nu}|_{3}|\phi_{0}^{\frac{5 }{4}}\nabla^{2}l_{0}|_{6}+|\nabla l_{0}^{\nu}|_{\infty}|\phi_{0}^{\frac{3}{4}} \nabla^{2}l_{0}|_{2}\] \[+|l_{0}^{\nu}|_{\infty}|\psi_{0}|_{\infty}(|\phi_{0}^{-\iota}|_{ \infty}|\phi_{0}^{\frac{3}{4}}\nabla^{2}l_{0}|_{2}+|\phi_{0}^{-\frac{\iota}{2} }|_{\infty}|\nabla^{2}l_{0}|_{2})\] \[+|\phi_{0}^{2b\iota}|_{\infty}(|l_{0}^{\nu-1}|_{\infty}|\phi_{0}^{ \frac{\iota}{2}}\nabla l_{0}|_{2}|\phi_{0}^{2\iota}\nabla u_{0}|_{\infty}^{2} +|l_{0}^{\nu}|_{\infty}|\phi_{0}^{\frac{3}{4}}\nabla u_{0}|_{3}|\phi_{0}^{3 \iota}\nabla^{2}u_{0}|_{6}) \tag{3.76}\] \[+|l_{0}^{\nu}|_{\infty}(|\phi_{0}^{(2b-1)\iota}|_{\infty}|\nabla \phi_{0}^{\frac{3}{4}}|_{6}|\phi_{0}^{2\iota}\nabla u_{0}|_{6}^{2}+|\nabla \psi_{0}|_{3}|\phi_{0}^{\frac{\iota}{2}}\nabla l_{0}|_{6})\] \[+|l_{0}^{\nu+1}|_{\infty}|h_{0}^{\frac{1}{4}}\nabla^{3}h_{0}|_{2} +|l_{0}^{\nu}|_{\infty}|\phi_{0}^{-2\iota}|_{\infty}|\phi_{0}^{\frac{\iota}{2} }\nabla l_{0}|_{2}\] \[+|l_{0}^{\nu+1}|_{\infty}(|\phi_{0}^{-2\iota}|_{\infty}|\nabla \phi_{0}^{\frac{3}{2}\iota}|_{6}^{3}+|\nabla\phi_{0}^{\frac{3}{4}\iota}|_{6}| \nabla\psi_{0}|_{3}|\phi_{0}^{-\iota}|_{\infty})\] \[+|l_{0}^{\nu-1}|_{\infty}|\phi_{0}^{\frac{\iota}{2}}\nabla l_{0}| _{2}|\nabla l_{0}|_{\infty}|\psi_{0}|_{\infty}+|l_{0}^{\nu-2}|_{\infty}|\phi_ {0}^{-2\iota}|_{\infty}|\phi_{0}^{\frac{3}{4}\iota}\nabla l_{0}|_{6}^{3}\] \[+|l_{0}^{\nu}|_{\infty}(|\psi_{0}|_{\infty}|\phi_{0}^{\frac{\iota} {2}}\nabla^{2}l_{0}|_{2}+|\nabla\psi_{0}|_{3}|\phi_{0}^{\frac{\iota}{2}}\nabla l _{0}|_{6})\] \[+|l_{0}^{\nu-1}|_{\infty}(|\psi_{0}|_{\infty}|\phi_{0}^{\frac{ \iota}{2}}\nabla l_{0}|_{2}|\nabla l_{0}|_{\infty}|+|\nabla l_{0}|_{3}|\phi_{ 0}^{\frac{5}{2}\iota}\nabla^{2}l_{0}|_{6})\big{)}\leq M(c_{0}),\] \[\limsup_{\tau\to 0}|\epsilon^{\frac{1}{4}}\nabla l_{t}(\tau)|_{2} \leq\limsup_{\tau\to 0}\epsilon^{\frac{1}{4}}|\varphi|_{\infty}^{\frac{1}{4}}|h ^{\frac{1}{4}}\nabla l_{t}(\tau)|_{2}\leq M(c_{0}).\]
Letting \(\tau\to 0\), one gets from (3.74) and Gronwall's inequality that for \(0\leq t\leq T_{4}=\min\{T_{3},(1+Cc_{4})^{-10\nu-40}\}\),
\[\begin{split}|h^{\frac{1}{4}}\nabla l_{t}(t)|_{2}^{2}+|\nabla l_{t }(t)|_{2}^{2}+\int_{0}^{t}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{ss}|_{2}^{2} \mathrm{d}s\\ \leq& M(c_{0})(1+c_{4}^{5\nu+20}(t+t^{\frac{1}{2}})) \exp(M(c_{0})c_{4}^{\nu+4}t)\leq M(c_{0}),\end{split} \tag{3.77}\]
which, along with (3.68), yields
\[|(h^{2}+\epsilon^{2})^{\frac{1}{4}}(l-\bar{l})|_{D^{3}}+|\sqrt{h}\nabla^{3}l|_{2}+|\sqrt{h}\nabla^{2}l|_{D_{*}^{1}}+|\nabla^{3}l|_{2}\leq M(c_{0})c_{1}^{4\nu+3}. \tag{3.78}\]
Note that (3.56) gives
\[\begin{split}-a_{4}\triangle\big{(}(h^{2}+\epsilon^{2})^{\frac{ 1}{4}}l_{t}\big{)}&=-a_{4}(h^{2}+\epsilon^{2})^{\frac{1}{4}} \triangle l_{t}-a_{4}F(\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}},l_{t})\\ &=w^{-\nu}\mathcal{B}-a_{4}F(\nabla(h^{2}+\epsilon^{2})^{\frac{1} {4}},l_{t}),\end{split} \tag{3.79}\]
with
\[\begin{split}\mathcal{B}=&-h^{-\frac{1}{2}}l_{tt}-( h^{-\frac{1}{2}})_{t}l_{t}-(h^{-\frac{1}{2}}v\cdot\nabla l)_{t}+a_{4}(w^{\nu}(h^{2 }+\epsilon^{2})^{\frac{1}{4}})_{t}\triangle l\\ &+a_{5}(w^{\nu}ng^{\frac{3}{2}}H(v))_{t}+a_{6}(w^{\nu+1}h^{-\frac{ 1}{2}}\mathrm{div}\psi)_{t}+\Pi(l,h,w,g)_{t}.\end{split} \tag{3.80}\]
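The commutator \(F\) appearing in (3.79) has an explicit form: by the Leibniz rule, for smooth scalar functions \(f\) and \(g\),

\[\triangle(fg)=f\triangle g+2\nabla f\cdot\nabla g+g\triangle f,\qquad\text{so that}\qquad F(\nabla f,g)=2\nabla f\cdot\nabla g+g\triangle f.\]

With \(f=(h^{2}+\epsilon^{2})^{\frac{1}{4}}\) and \(g=l_{t}\), the terms \(2\nabla f\cdot\nabla g\) and \(g\triangle f\) account precisely for the \(|\psi|_{\infty}|\nabla l_{t}|_{2}\) and \(|\nabla\psi|_{3}\), \(|\psi|_{\infty}^{2}\) contributions in the bound on \(\hat{F}\) below.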
Next, to derive the \(L^{2}\)-estimates of \(\nabla^{2}l_{t}\), one first deals with the \(L^{2}\)-estimates of
\[\big{(}\mathcal{B},\hat{F}=F(\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}},l_{t}) \big{)}\]
by using (3.17) and Lemmas 3.3-3.7 as follows:
\[\begin{split}|\mathcal{B}|_{2}\leq& C\big{(}|\varphi|^{ \frac{1}{4}}_{\infty}|h^{-\frac{1}{4}}l_{tt}|_{2}+|\varphi|^{\frac{5}{4}}_{ \infty}|h_{t}|_{\infty}|h^{-\frac{1}{4}}l_{t}|_{2}\\ &+\|\nabla l\|_{1}(|\varphi|^{\frac{3}{2}}_{\infty}|h_{t}|_{ \infty}|v|_{\infty}+|\varphi|^{\frac{1}{2}}_{\infty}|v_{t}|_{3})+|\varphi|^{ \frac{3}{4}}_{\infty}|v|_{\infty}|h^{\frac{1}{4}}\nabla l_{t}|_{2}\\ &+|(w^{\nu})_{t}|_{6}|(h^{2}+\epsilon^{2})^{\frac{1}{4}} \triangle l_{3}+|w^{\nu}|_{\infty}|((h^{2}+\epsilon^{2})^{\frac{1}{4}})_{t} \triangle l|_{2}\\ &+|w^{\nu}|_{\infty}|n_{t}|_{\infty}|\sqrt{g}\nabla v|_{2}|g \nabla v|_{\infty}+|w^{\nu-1}|_{\infty}|n|_{\infty}|w_{t}|_{6}|g\nabla v|_{ \infty}|\sqrt{g}\nabla v|_{3}\\ &+|w^{\nu}|_{\infty}|n|_{\infty}(|\sqrt{g}\nabla v|_{2}|g_{t}|_{ \infty}|\nabla v|_{\infty}+|g\nabla v|_{\infty}|\sqrt{g}\nabla v_{t}|_{2})\\ &+|w^{\nu}|_{\infty}|\varphi|^{\frac{1}{2}}_{\infty}|w_{t}|_{6}| \nabla\psi|_{3}+|w^{\nu+1}|_{\infty}(|\varphi|^{\frac{3}{2}}_{\infty}|h_{t}| _{6}|\nabla\psi|_{3}+|\varphi|^{\frac{1}{2}}_{\infty}|\nabla\psi_{t}|_{2})\\ &+|w^{\nu}|_{\infty}|\varphi|_{\infty}|w_{t}|_{6}|\nabla h^{ \frac{3}{2}}|_{6}^{2}+|w^{1+\nu}|_{\infty}(|\varphi|^{\frac{5}{4}}_{\infty}|h_ {t}|_{\infty}|\nabla h^{\frac{3}{8}}|_{4}^{2}\\ &+|\varphi|^{\frac{3}{2}}_{\infty}|\psi|_{\infty}|\psi_{t}|_{2})+ |w^{-1+\nu}|_{\infty}|\nabla l|_{3}|w_{t}|_{6}|\varphi|^{\frac{1}{2}}_{\infty} |\psi|_{\infty}\\ &+|w^{\nu}|_{\infty}(|\varphi|^{\frac{3}{2}}_{\infty}|\psi|_{ \infty}|\nabla l|_{2}|h_{t}|_{\infty}+|\varphi|^{\frac{1}{2}}_{\infty}|\psi|_{ \infty}|\nabla l_{t}|_{2}+|\varphi|^{\frac{1}{2}}_{\infty}|\psi|_{t}|_{2}| \nabla l|_{\infty})\\ &+|\sqrt{g}\nabla w|_{\infty}(|w^{-2+\nu}|_{\infty}|\nabla w|_{3 }|w_{t}|_{6}+|w^{-1+\nu}|_{\infty}|g^{-1}|_{\infty}|g_{t}|_{\infty}|\nabla w|_{ 2})\\ &+|w^{-1+\nu}|_{\infty}|g^{-1}|^{\frac{1}{4}}_{\infty}|g^{\frac{1} {4}}\nabla w_{t}|_{2}|\sqrt{g}\nabla w|_{\infty}),\end{split}\]
\[\begin{split}|\hat{F}|_{2}\leq& C\big{(}|\varphi|^{ \frac{5}{4}}_{\infty}|\psi|^{2}_{\infty}|h^{-\frac{1}{4}}l_{t}|_{2}+|\varphi|^{ \frac{1}{2}}_{\infty}(|l_{t}|_{6}|\nabla\psi|_{3}+|\psi|_{\infty}|\nabla l_{t}|_{ 2})\big{)},\end{split}\]
which, along with (3.56), (3.77)-(3.79), Lemma 4.3 and Lemmas 3.3-3.7, implies that
\[\begin{split}|(h^{2}+\epsilon^{2})^{\frac{1}{4}}l_{t}|_{D^{2}}\leq& M(c_{0})(c_{1}^{\nu}|h^{-\frac{1}{4}}l_{tt}|_{2}+c_{4}^{5\nu+10}),\\ |(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla^{2}l_{t}|_{2}\leq& M(c_{0})(|(h^{2}+\epsilon^{2})^{\frac{1}{4}}l_{t}|_{D^{2}}+|\varphi|_{\infty}^{\frac{5}{4}}|\psi|_{\infty}^{2}|h^{-\frac{1}{4}}l_{t}|_{2}\\ &+|\varphi|_{\infty}^{\frac{1}{2}}|l_{t}|_{6}|\nabla\psi|_{3}+|\varphi|_{\infty}^{\frac{1}{2}}|\psi|_{\infty}|\nabla l_{t}|_{2})\\ \leq& M(c_{0})(c_{1}^{\nu}|h^{-\frac{1}{4}}l_{tt}|_{2}+c_{4}^{5\nu+10}).\end{split} \tag{3.81}\]
Then it follows from (3.77) and (3.81) that for \(0\leq t\leq T_{4}\), \((3.70)_{3}\) holds.
The proof of Lemma 3.8 is complete.
Finally, we derive the time weighted estimates for \(l\), which will be used to show that the regular solution is actually a classical one. For simplicity, set
\[H^{t}(v)= 4\alpha\sum_{i=1}^{3}\partial_{i}v_{i}\partial_{itt}v_{i}+2\beta \text{div}v\text{div}v_{tt}+2\alpha\sum_{i\neq j}^{3}\partial_{i}v_{j} \partial_{itt}v_{j}\] \[+2\alpha\sum_{i>j}(\partial_{itt}v_{j}\partial_{j}v_{i}+\partial_ {i}v_{j}\partial_{jtt}v_{i}).\]
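Assuming, consistently with the structure of \(H^{t}(v)\), that \(H(v)\) denotes the quadratic dissipation functional fixed earlier,

\[H(v)=2\alpha\sum_{i=1}^{3}(\partial_{i}v_{i})^{2}+\beta(\text{div}v)^{2}+\alpha\sum_{i\neq j}^{3}(\partial_{i}v_{j})^{2}+2\alpha\sum_{i>j}\partial_{i}v_{j}\partial_{j}v_{i},\]

then \(H^{t}(v)\) collects exactly those terms of \((H(v))_{tt}\) in which both time derivatives fall on \(v\); the remaining terms, quadratic in \(\nabla v_{t}\), are estimated separately.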
**Lemma 3.9**.: _For \(T_{5}=\min\{T_{4},(1+M(c_{0})c_{5})^{-40-10\nu}\}\) and \(t\in[0,T_{5}]\), it holds that_
\[t^{\frac{1}{2}}|l_{t}(t)|_{D^{2}}+t^{\frac{1}{2}}|\sqrt{h}\nabla ^{2}l_{t}(t)|_{2}+t^{\frac{1}{2}}|h^{-\frac{1}{4}}l_{tt}(t)|_{2}\leq M(c_{0}) c_{1}^{\frac{\nu}{2}}, \tag{3.82}\] \[\int_{0}^{t}s(|l_{ss}|_{D_{*}^{1}}^{2}+|h^{\frac{1}{4}}l_{ss}|_{ D_{*}^{1}}^{2})\text{ds}\leq M(c_{0}),\] \[\frac{1}{2}c_{0}^{-1}\leq l(t,x)\leq\frac{3}{2}c_{0}\quad\text{ for}\quad(t,x)\in[0,T_{5}]\times\mathbb{R}^{3}.\]
Proof.: Applying \(\partial_{t}\) to (3.56) yields
\[h^{-\frac{1}{2}}l_{ttt}-a_{4}w^{\nu}(h^{2}+\epsilon^{2})^{\frac{ 1}{4}}\triangle l_{tt}+2(h^{-\frac{1}{2}})_{t}l_{tt}+(h^{-\frac{1}{2}})_{tt}l_ {t}+(h^{-\frac{1}{2}}v\cdot\nabla l)_{tt} \tag{3.83}\] \[= 2a_{4}(w^{\nu}(h^{2}+\epsilon^{2})^{\frac{1}{4}})_{t}\triangle l _{t}+2a_{4}(w^{\nu})_{t}((h^{2}+\epsilon^{2})^{\frac{1}{4}})_{t}\triangle l\] \[+a_{4}(w^{\nu})_{tt}(h^{2}+\epsilon^{2})^{\frac{1}{4}}\triangle l +a_{4}w^{\nu}((h^{2}+\epsilon^{2})^{\frac{1}{4}})_{tt}\triangle l\] \[+a_{5}(w^{\nu}ng^{\frac{3}{2}}H(v))_{tt}+a_{6}(w^{\nu+1}h^{-\frac{ 1}{2}}\text{div}\psi)_{tt}+\Pi(l,h,w,g)_{tt}.\]
Multiplying (3.83) by \(w^{-\nu}l_{tt}\), integrating over \(\mathbb{R}^{3}\), and integrating by parts lead to
\[\frac{1}{2}\frac{d}{dt}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}| _{2}^{2}+a_{4}|(h^{2}+\epsilon^{2})^{\frac{1}{8}}\nabla l_{tt}|_{2}^{2} \tag{3.84}\] \[= -\int\big{(}2(h^{-\frac{1}{2}})_{t}l_{tt}+(h^{-\frac{1}{2}})_{tt}l _{t}+(h^{-\frac{1}{2}}v\cdot\nabla l)_{tt}\big{)}w^{-\nu}l_{tt}\] \[+\int(2a_{4}(w^{\nu}(h^{2}+\epsilon^{2})^{\frac{1}{4}})_{t} \triangle l_{t}+2a_{4}(w^{\nu})_{t}((h^{2}+\epsilon^{2})^{\frac{1}{4}})_{t} \triangle l)w^{-\nu}l_{tt}\] \[+\int a_{4}(w^{\nu})_{tt}(h^{2}+\epsilon^{2})^{\frac{1}{4}} \triangle lw^{-\nu}l_{tt}\] \[+\int(a_{4}w^{\nu}((h^{2}+\epsilon^{2})^{\frac{1}{4}})_{tt} \triangle l+a_{5}(w^{\nu}ng^{\frac{3}{2}}H(v))_{tt})w^{-\nu}l_{tt}\] \[+\int\big{(}a_{6}(w^{\nu+1}h^{-\frac{1}{2}}\text{div}\psi)_{tt}+ \Pi(l,h,w,g)_{tt}\big{)}w^{-\nu}l_{tt}\]
\[-a_{4}\int\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}}\cdot\nabla l_{tt}l_{tt}+ \frac{1}{2}\int(w^{-\nu}h^{-\frac{1}{2}})_{t}|l_{tt}|^{2}=\sum_{i=13}^{20}J_{i}, \tag{3.85}\]
where \(J_{i}\), \(i=13,14,\cdots,20\), are given and estimated as follows:
\[J_{13}= -\int\big{(}2(h^{-\frac{1}{2}})_{t}l_{tt}+(h^{-\frac{1}{2}})_{tt} l_{t}+(h^{-\frac{1}{2}}v\cdot\nabla l)_{tt}\big{)}w^{-\nu}l_{tt} \tag{3.86}\] \[\leq C|\varphi|_{\infty}|h_{t}|_{\infty}|w^{-\frac{\nu}{2}}h^{-\frac{ 1}{4}}l_{tt}|_{2}^{2}+C\big{(}|\varphi|_{\infty}^{2}|h_{t}|_{\infty}^{2}|w^{- \frac{\nu}{2}}|_{\infty}|h^{-\frac{1}{4}}l_{t}|_{2}\] \[+|\varphi|_{\infty}|w^{-\frac{\nu}{2}}|_{\infty}|h_{tt}|_{6}|h^{- \frac{1}{4}}l_{3}+|\varphi|_{\infty}^{\frac{5}{2}}|h_{t}|_{\infty}^{2}|w^{- \frac{\nu}{2}}|_{\infty}|h^{\frac{1}{4}}\nabla l|_{2}|v|_{\infty}\] \[+|w^{-\frac{\nu}{2}}|_{\infty}|\nabla l|_{3}|\varphi|_{\infty}^{ \frac{5}{4}}|v|_{\infty}|h_{tt}|_{6}+|\varphi|_{\infty}^{\frac{3}{2}}|h_{t}|_{ \infty}|w^{-\frac{\nu}{2}}|_{\infty}|v|_{\infty}|h^{\frac{1}{4}}\nabla l_{t}|_ {2}\] \[+|w^{-\frac{\nu}{2}}|_{\infty}|\nabla l|_{\infty}(|\varphi|_{ \infty}^{\frac{5}{4}}|h_{t}|_{\infty}|v_{t}|_{2}+|\varphi^{\frac{1}{4}}|_{ \infty}|v_{tt}|_{2}))|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}\] \[+C|\varphi|_{\infty}|w^{-\nu}|_{\infty}|v_{t}|_{3}|h^{\frac{1}{4 }}\nabla l_{t}|_{2}|h^{\frac{1}{4}}l_{tt}|_{6}\] \[+C|\varphi|_{\infty}^{\frac{1}{2}}|w^{-\frac{\nu}{2}}|_{\infty}|v |_{\infty}|h^{\frac{1}{4}}\nabla l_{tt}|_{2}|w^{-\frac{\nu}{2}}h^{-\frac{1}{ 4}}l_{tt}|_{2},\] \[J_{14}= \int\big{(}2a_{4}(w^{\nu}(h^{2}+\epsilon^{2})^{\frac{1}{4}})_{t} \triangle l_{t}+2a_{4}(w^{\nu})_{t}((h^{2}+\epsilon^{2})^{\frac{1}{4}})_{t} \triangle l\] \[+a_{4}(w^{\nu})_{tt}(h^{2}+\epsilon^{2})^{\frac{1}{4}}\triangle l +a_{4}w^{\nu}((h^{2}+\epsilon^{2})^{\frac{1}{4}})_{tt}\triangle l\big{)}w^{- \nu}l_{tt}\] \[\leq C\big{(}|\varphi|_{\infty}^{\frac{3}{4}}|h_{t}|_{\infty}|w^{ \frac{\nu}{2}}|_{\infty}|\sqrt{h}\nabla^{2}l_{t}|_{2}+|w^{\frac{\nu}{2}-1}|_{ \infty}|\varphi|_{\infty}^{\frac{1}{4}}|w_{t}|_{6}|h_{t}|_{\infty}|\nabla^{2} l|_{3}\] \[+|w^{\frac{\nu}{2}}|_{\infty}(|\varphi|_{\infty}^{\frac{1}{4}}|h_{ t}|_{\infty}^{2}|\nabla^{2}l_{2}|+|\varphi|_{\infty}^{\frac{1}{4}}|h_{tt}|_{6}| \nabla^{2}l|_{3})\] \[+|w^{\frac{\nu}{2}-2}|_{\infty}|hg^{-1}|_{\infty}^{\frac{1}{4}}|g ^{\frac{1}{4}}w_{t}|_{6}|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla^{2}l|_{6})| w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}\] \[+C|w^{-1}|_{\infty}(|gh^{-1}|_{\infty}^{\frac{1}{4}}|g^{-\frac{1} {4}}w_{tt}|_{2}|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla^{2}l|_{3}\] \[+|\varphi|_{\infty}^{\frac{1}{4}}|w_{t}|_{3}|(h^{2}+\epsilon^{2} )^{\frac{1}{4}}\nabla^{2}l_{t}|_{2})|h^{\frac{1}{4}}l_{tt}|_{6},\] \[J_{15}= \int a_{5}(w^{\nu}ng^{\frac{3}{2}}H(v))_{tt}w^{-\nu}l_{tt}\] \[\leq C\big{(}|n|_{\infty}|hg^{-1}|_{\infty}^{\frac{1}{4}}|g\nabla v|_{ \infty}^{2}(|w^{\frac{\nu}{2}-2}|_{\infty}|w_{t}|_{6}|g^{-\frac{1}{4}}w_{t}| _{3}+|w^{\frac{\nu}{2}-1}|_{\infty}|g^{-\frac{1}{4}}w_{tt}|_{2})\] \[+|w^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{\frac{1}{4}}|hg^{ -1}|_{\infty}^{\frac{1}{2}}(|n_{tt}|_{2}|g\nabla v|_{\infty}^{2}+|n|_{\infty}| g_{t}|_{\infty}^{2}|\nabla v|_{4}^{2})\] \[+|w^{\frac{\nu}{2}}|_{\infty}|hg^{-1}|_{\infty}^{\frac{3}{2}}|g \nabla v|_{6}^{2}|\varphi|_{\infty}^{\frac{5}{4}}|n|_{\infty}|g_{tt}|_{6}\] \[+|\varphi|_{\infty}^{\frac{1}{4}}|w^{\frac{\nu}{2}-1}|_{\infty}|w _{t}|_{6}|n_{t}|_{3}hg^{-1}|_{\infty}^{\frac{1}{2}}|g\nabla v|_{\infty}^{2}\] \[+|w^{\frac{\nu}{2}-1}|_{\infty}|g^{-\frac{1}{4}}w_{t}|_{2}|n|_{ \infty}hg^{-1}|_{\infty}^{\frac{1}{4}}|g_{t}|_{\infty}|g\nabla v|_{\infty}| \nabla v|_{\infty}\] \[+|w^{\frac{\nu}{2}}|_{\infty}|n_{t}|_{2}|\varphi|_{\infty}^{\frac{1} {4}}|hg^{-1}|_{\infty}^{\frac{1}{2}}|g_{t}|_{\infty}|g\nabla v|_{\infty}| 
\nabla v|_{\infty})|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}\] \[+\int a_{5}ng^{\frac{3}{2}}H^{t}(v)l_{tt}+C\big{(}|n|_{\infty}|gh^{ -1}|_{\infty}^{\frac{1}{4}}(|g^{\frac{3}{4}}\nabla v_{t}|_{3}|\sqrt{g}\nabla v _{t}|_{2}\] \[+|w^{-1}|_{\infty}|g^{-\frac{1}{4}}w_{t}|_{3}|g\nabla v|_{\infty}| \sqrt{g}\nabla v_{t}|_{2})\] \[+|\varphi|_{\infty}^{\frac{1}{4}}|\sqrt{g}\nabla v_{t}|_{2}(|n_{t} |_{3}|g\nabla v|_{\infty}+|n|_{\infty}|g_{t}|_{\infty}|\nabla v|_{3})\big{)}|h^{ \frac{1}{4}}l_{tt}|_{6},\]
\[J_{16}= \int a_{6}(w^{\nu+1}h^{-\frac{1}{2}}\text{div}\psi)_{tt}w^{-\nu}l_{tt} \tag{3.87}\] \[\leq -\int a_{6}wh^{-\frac{1}{2}}\text{div}\psi_{tt}l_{tt}+C\big{(}|w^{ \frac{\nu}{2}-1}|_{\infty}|\varphi|_{\infty}^{\frac{1}{4}}|w_{t}|_{6}^{2}| \nabla\psi|_{6}\] \[+|w^{\frac{\nu}{2}+1}|_{\infty}|\nabla\psi|_{3}(|\varphi|_{\infty }^{\frac{5}{4}}|h_{tt}|_{6}+|\varphi|_{\infty}^{\frac{9}{4}}|h_{t}|_{\infty}|w_ {t}|_{6})\] \[+|\varphi|_{\infty}^{\frac{3}{2}}|\psi^{\frac{\nu}{2}}|_{\infty}|hg ^{-1}|_{\infty}^{\frac{1}{4}}|g|^{\frac{1}{4}}u_{t}|_{6}|h_{t}|_{\infty}| \nabla\psi|_{3}\] \[+|\varphi|_{\infty}^{\frac{5}{4}}|w^{\frac{\nu}{2}+1}|_{\infty}|h _{t}|_{\infty}|\nabla\psi_{t}|_{2}\big{)}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}} l_{tt}|_{2}\] \[+C|gh^{-1}|_{\infty}^{\frac{1}{4}}(|\varphi|_{\infty}^{\frac{7}{ 4}}|g^{-\frac{1}{4}}w_{tt}|_{2}|\nabla\psi|_{3}+|\varphi|_{\infty}^{\frac{1}{ 2}}|g^{-\frac{1}{4}}w_{t}|_{3}|\nabla\psi_{t}|_{2})|h^{\frac{1}{4}}l_{tt}|_{6},\] \[J_{17}= a_{7}\int(w^{\nu+1}h^{-\frac{3}{2}}\psi\cdot\psi)_{tt}w^{-\nu}l_ {tt}\] \[\leq C\big{(}|gh^{-1}|_{\infty}^{\frac{1}{4}}|\psi|_{\infty}^{2}(|w^{ \frac{\nu}{2}}|_{\infty}|g^{-\frac{1}{4}}w_{tt}|_{2}|\varphi|_{\infty}+|w^{ \frac{\nu}{2}-1}|_{\infty}|\varphi|_{\infty}|w_{t}|_{6}|g^{-\frac{1}{4}}w_{t} |_{3})\] \[+|w^{\frac{\nu}{2}+1}|_{\infty}(|\varphi|_{\infty}^{\frac{7}{4}} |\nabla h^{\frac{3}{4}}|_{6}^{2}|h_{tt}|_{6}+|\varphi|_{\infty}^{2}|h_{t}|_{ \infty}^{2}|\nabla h^{\frac{3}{8}}|_{4}^{2})\] \[+|\varphi|_{\infty}^{\frac{5}{4}}|w^{\frac{\nu}{2}+1}|_{\infty}(| v_{t}|_{3}|\psi_{t}|_{6}+|\psi|_{\infty}|\psi_{tt}|_{2})\] \[+|w^{\frac{\nu}{2}}|_{\infty}(|w_{t}|_{6}|\varphi|_{\infty}^{ \frac{5}{4}}|\psi|_{\infty}|\psi_{t}|_{3}+|g^{-\frac{1}{4}}w_{t}|_{2}|\varphi |_{\infty}^{2}|h_{t}|_{\infty}|\psi|_{\infty}^{2}|gh^{-1}|_{\infty}^{\frac{1} {4}})\] \[+|w^{\frac{\nu}{2}+1}|_{\infty}|\varphi|_{\infty}^{\frac{9}{4}}| h_{t}|_{\infty}|\psi|_{\infty}|\psi_{t}|_{2})|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}} l_{tt}|_{2},\] \[J_{18}= a_{8}\int(w^{\nu}h^{-\frac{1}{2}}\nabla l\cdot\psi)_{tt}w^{-\nu}l_ {tt}\] (3.88) \[\leq C\big{(}|\nabla l|_{\infty}|\psi|_{\infty}|gh^{-1}|_{\infty}^{ \frac{1}{4}}(|w^{\frac{\nu}{2}-1}|_{\infty}|g^{-\frac{1}{4}}w_{tt}|_{2}+|w^{ \frac{\nu}{2}-2}|_{\infty}|g^{-\frac{1}{4}}w_{t}|_{3}|w_{t}|_{6})\] \[+|w^{\frac{\nu}{2}}|_{\infty}|\psi|_{\infty}(|\varphi|_{\infty}^{ \frac{5}{4}}|h_{tt}|_{6}|\nabla l|_{3}+|\varphi|_{\infty}^{\frac{5}{2}}|h_{t}| _{\infty}^{2}|h^{\frac{1}{4}}\nabla l|_{2})\] \[+|w^{\frac{\nu}{2}}|_{\infty}(|\varphi|_{\infty}^{\frac{1}{2}}| \psi|_{\infty}|h^{\frac{1}{4}}\nabla l_{tt}|_{2}+|\varphi|_{\infty}^{\frac{1} {4}}|\nabla l|_{\infty}|\psi_{tt}|_{2})\] \[+|w^{\frac{\nu}{2}-1}|_{\infty}|g^{-\frac{1}{4}}w_{t}|_{3}|gh^{-1 }|_{\infty}^{\frac{1}{4}}|\nabla l|_{\infty}(|\varphi|_{\infty}|h_{t}|_{6}| \psi|_{\infty}+|\psi_{t}|_{6})\] \[+|w^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{\frac{3}{2}}|h_{t }|_{\infty}(|\psi|_{\infty}|h^{\frac{1}{4}}\nabla l_{t}|_{2}+|h^{\frac{1}{4}} \nabla l|_{3}|\psi_{t}|_{6}))|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}\] \[+C(|w^{-1}|_{\infty}|g^{-\frac{1}{4}}w_{t}|_{3}|gh^{-1}|_{\infty}^ {\frac{1}{4}}|\varphi|_{\infty}^{\frac{3}{4}}|h^{\frac{1}{4}}\nabla l_{t}|_{2}| \psi|_{\infty}\] \[+|\varphi|_{\infty}|h^{\frac{1}{4}}\nabla l_{t}|_{2}|\psi_{t}|_{3} )|h^{\frac{1}{4}}l_{tt}|_{6},\] \[J_{19}= a_{9}\int(w^{\nu-1}\sqrt{g}\nabla w\cdot\nabla)_{tt}w^{-\nu}l_{tt}\] \[\leq C(|\sqrt{g}\nabla w|_{\infty}^{2}|hg^{-1}|_{\infty}^{\frac{1}{4}}(|w 
^{\frac{\nu}{2}-2}|_{\infty}|g^{-\frac{1}{4}}w_{tt}|_{2}+|w^{\frac{\nu}{2}-3}|_{ \infty}|g^{-\frac{1}{4}}w_{t}|_{3}|w_{t}|_{6})\] \[+|w^{\frac{\nu}{2}-1}|_{\infty}|hg^{-1}|_{\infty}^{\frac{1}{4}}(|g^ {-\frac{1}{4}}|_{\infty}|g_{tt}|_{6}|\nabla w|_{6}^{2}+|g^{-1}|_{\infty}^{\frac{5} {4}}|g_{t}|_{\infty}^{2}|\nabla w|_{3}|\nabla w|_{6})\] \[+|w^{\frac{\nu}{2}-1}|_{\infty}|hg^{-1}|_{\infty}^{\frac{1}{4}}| \sqrt{g}\nabla w|_{\infty}|g^{\frac{1}{4}}\nabla w_{tt}|_{2}\] \[+|w^{\frac{\nu}{2}-2}|_{\infty}|hg^{-1}|_{\infty}^{\frac{1}{4}}(|g^ {-\frac{1}{4}}w_{t}|_{2}|g_{t}|_{\infty}|\nabla w|_{\infty}^{2}+|g^{\frac{1}{4}} w_{t}|_{6}|\nabla w_{t}|_{3}|\sqrt{g}\nabla w|_{\infty})\] \[+|w^{\frac{\nu}{2}-1}|_{\infty}|hg^{-1}|_{\infty}^{\frac{1}{4}}| \sqrt{g}^{-1}|_{\infty}|g_{t}|_{\infty}|\sqrt{g}\nabla w|_{\infty}|_{\infty} |g^{\frac{1}{4}}\nabla w_{t}|_{2})|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}\] \[+C|w^{-1}|_{\infty}|gh^{-1}|_{\infty}^{\frac{1}{4}}|\sqrt{g}\nabla w _{t}|_{2}|\nabla w_{t}|_{3}|h^{\frac{1}{4}}l_{tt}|_{6},\]
\[J_{20}= -a_{4}\int\nabla(h^{2}+\epsilon^{2})^{\frac{1}{4}}\cdot\nabla l_{tt} l_{tt}+\frac{1}{2}\int(w^{-\nu}h^{-\frac{1}{2}})_{t}|l_{tt}|^{2} \tag{3.88}\] \[\leq C(|\varphi|_{\infty}^{\frac{1}{2}}|w^{\frac{\nu}{2}}|_{\infty}| \psi|_{\infty}|h^{\frac{1}{4}}\nabla l_{tt}|_{2}+|\varphi|_{\infty}|h_{t}|_{ \infty}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}\] \[+|gh^{-1}|_{\infty}^{\frac{1}{4}}|w^{-\frac{\nu}{2}-1}|_{\infty}| g^{-\frac{1}{4}}w_{t}|_{3}|\varphi|_{\infty}^{\frac{1}{4}}|h^{\frac{1}{4}}l_{tt}|_{6} )|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}.\]
To finish the estimates on \(J_{15}\) and \(J_{16}\), one can integrate by parts to get
\[\int ng^{\frac{3}{2}}H^{t}(v)l_{tt}\leq C|w^{\frac{\nu}{2}}|_{\infty}|g\nabla v|_{\infty}|v_{tt}|_{2}(|n|_{ \infty}|\nabla g|_{\infty}|hg^{-1}|_{\infty}^{\frac{1}{4}}|g^{-1}|_{\infty}^{ \frac{1}{4}}\] \[+|\psi|_{\infty}|\varphi|_{\infty}^{\frac{1}{4}-b}|gh^{-1}|_{ \infty}^{\frac{1}{4}})|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}\] \[+C|n|_{\infty}|g^{\frac{1}{4}}v_{tt}|_{3}|gh^{-1}|_{\infty}^{ \frac{1}{4}}(|g\nabla^{2}v|_{2}|h^{\frac{1}{4}}l_{tt}|_{6}+|g\nabla v|_{6}|h^{ \frac{1}{4}}\nabla l_{tt}|_{2}),\] \[\int wh^{-\frac{1}{2}}\text{div}\psi_{tt}l_{tt}= -\int(\nabla wh^{-\frac{1}{2}}+w\nabla h^{-\frac{1}{2}})\cdot \psi_{tt}l_{tt}-\int wh^{-\frac{1}{2}}\psi_{tt}\cdot\nabla l_{tt}\] \[\leq C(|w^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{\frac{1}{4}}| \nabla w|_{\infty}+|w^{\frac{\nu}{2}+1}|_{\infty}|\varphi|_{\infty}^{\frac{5} {4}}|\psi|_{\infty})|\psi_{tt}|_{2}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}| _{2}\] \[+C|w|_{\infty}|\varphi|_{\infty}^{\frac{3}{4}}|\psi_{tt}|_{2}|h^{ \frac{1}{4}}\nabla l_{tt}|_{2}.\]
Multiplying (3.84) by \(t\) and integrating over \((\tau,t)\), one can obtain from the above estimates on \(J_{i}\) (\(i=13,...,20\)), (3.17) and Lemmas 3.3-3.8 that
\[t|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}^{2}+\frac{a_{4}} {4}\int_{\tau}^{t}s|(h^{2}+\epsilon^{2})^{\frac{1}{8}}\nabla l_{ss}|_{2}^{2} \mathrm{d}s \tag{3.89}\] \[\leq \tau|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}(\tau)|_{2}^{2}+M(c_ {0})(c_{5}^{14+8\nu}t+1)\] \[+M(c_{0})c_{4}^{14+3\nu}\int_{\tau}^{t}s|w^{-\frac{\nu}{2}}h^{- \frac{1}{4}}l_{ss}|_{2}^{2}\mathrm{d}s,\]
where one has used the inequality
\[|g^{\frac{1}{4}}v_{tt}|_{3}\leq C|v_{tt}|_{2}^{\frac{1}{2}}|\sqrt{g}v_{tt}|_{6} ^{\frac{1}{2}}.\]
Note that due to (3.74), there exists a sequence \(s_{k}\) such that
\[s_{k}\longrightarrow 0,\quad\text{and}\quad s_{k}|w^{-\frac{\nu}{2}}h^{-\frac{1}{ 4}}l_{tt}(s_{k},x)|_{2}^{2}\longrightarrow 0,\quad\text{as}\quad k \longrightarrow\infty.\]
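Indeed, such a sequence must exist since \(s^{-1}\notin L^{1}(0,T)\): if \(s|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}(s)|_{2}^{2}\geq\delta>0\) for all sufficiently small \(s\), then

\[\int_{0}^{t}|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{ss}|_{2}^{2}\mathrm{d}s\geq\delta\int_{0}^{t}\frac{\mathrm{d}s}{s}=\infty,\]

which contradicts the time-integrated bound provided by (3.74).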
Taking \(\tau=s_{k}\) and letting \(k\rightarrow\infty\) in (3.89), one gets by Gronwall's inequality that
\[t|w^{-\frac{\nu}{2}}h^{-\frac{1}{4}}l_{tt}|_{2}^{2}+\frac{a_{4}}{4}\int_{0}^{t}s|(h^{2}+\epsilon^{2})^{\frac{1}{8}}\nabla l_{ss}|_{2}^{2}\mathrm{d}s+\int_{0}^{t}s|\nabla l_{ss}|_{2}^{2}\mathrm{d}s\leq M(c_{0}), \tag{3.90}\]
for \(0\leq t\leq\min\{T_{4},(1+Cc_{5})^{-40-10\nu}\}\), which, along with (3.81), yields that
\[t^{\frac{1}{2}}|h^{-\frac{1}{4}}l_{tt}(t)|_{2}+t^{\frac{1}{2}}|\nabla^{2}l_{t} (t)|_{2}+t^{\frac{1}{2}}|\sqrt{h}\nabla^{2}l_{t}(t)|_{2}\leq M(c_{0})c_{1}^{ \frac{\nu}{2}}. \tag{3.91}\]
Due to (3.77) and (3.91), \(l\) can be bounded by
\[|l|_{\infty}= |l_{0}+\int_{0}^{t}l_{s}\mathrm{d}s|_{\infty}\leq|l_{0}|_{\infty}+ t|l_{t}|_{\infty}\leq c_{0}+Ct|\nabla l_{t}|_{2}^{\frac{1}{2}}|\nabla^{2}l_{t}|_{2}^{ \frac{1}{2}}\leq\frac{3}{2}c_{0}, \tag{3.92}\] \[l= l_{0}+\int_{0}^{t}l_{s}\mathrm{d}s\geq l_{0}-t|l_{t}|_{\infty}\geq c _{0}^{-1}-Ct|\nabla l_{t}|_{2}^{\frac{1}{2}}|\nabla^{2}l_{t}|_{2}^{\frac{1}{2}} \geq\frac{1}{2}c_{0}^{-1},\]
for \(0\leq t\leq T_{5}=\min\{T_{4},(1+M(c_{0})c_{5})^{-40-10\nu}\}\).
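Here the \(L^{\infty}\) bound \(|l_{t}|_{\infty}\leq C|\nabla l_{t}|_{2}^{\frac{1}{2}}|\nabla^{2}l_{t}|_{2}^{\frac{1}{2}}\) used in (3.92) follows from the Gagliardo-Nirenberg and Sobolev inequalities in \(\mathbb{R}^{3}\):

\[|l_{t}|_{\infty}\leq C|l_{t}|_{6}^{\frac{1}{2}}|\nabla l_{t}|_{6}^{\frac{1}{2}}\leq C|\nabla l_{t}|_{2}^{\frac{1}{2}}|\nabla^{2}l_{t}|_{2}^{\frac{1}{2}},\]

and both factors on the right-hand side are controlled by (3.77) and (3.91).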
The proof of Lemma 3.9 is complete.
#### 3.2.6. A priori estimates for \(u\)
Based on the estimates for \((\phi,h,l)\) obtained above, one can now derive the lower order estimates for \(u\). For simplicity, set
\[\begin{split}\mathcal{K}=& v\cdot\nabla v+a_{1}\phi \nabla l+l\nabla\phi+a_{2}\sqrt{h^{2}+\epsilon^{2}}l^{\nu}Lu-a_{2}\nabla l^{ \nu}\cdot gQ(v)\\ &-a_{3}l^{\nu}\psi\cdot Q(v),\\ \mathcal{H}=&-u_{t}-v\cdot\nabla v-l\nabla\phi-a_{1} \phi\nabla l+a_{2}g\nabla l^{\nu}\cdot Q(v)+a_{3}l^{\nu}\psi\cdot Q(v).\end{split} \tag{3.93}\]
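With these notations, the equation for \(u\) takes the compact forms

\[u_{t}=-\mathcal{K}\qquad\text{and}\qquad a_{2}\sqrt{h^{2}+\epsilon^{2}}Lu=l^{-\nu}\mathcal{H},\]

as one checks directly from (3.93) and (3.95) below; the first identity yields the bound \(|u_{t}(\tau)|_{2}\leq|\mathcal{K}(\tau)|_{2}\) in (3.106), while the second provides the source term of the elliptic equation (3.99).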
**Lemma 3.10**.: _For \(t\in[0,T_{5}]\), it holds that_
\[\begin{split}|\sqrt{h}\nabla u(t)|_{2}^{2}+\|u(t)\|_{1}^{2}+\int _{0}^{t}(\|\nabla u\|_{1}^{2}+|u_{s}|_{2}^{2})\mathrm{d}s\leq& M(c_{0}),\\ (|u|_{D^{2}}^{2}+|h\nabla^{2}u|_{2}^{2}+|u_{t}|_{2}^{2})(t)+\int _{0}^{t}(|u|_{D^{3}}^{2}+|u_{s}|_{D^{1}_{*}}^{2})\mathrm{d}s\leq& M(c_{0}).\end{split} \tag{3.94}\]
Proof.: First, one estimates \(|u|_{2}\). It follows from the equation for \(u\) that
\[\begin{split}& l^{-\nu}(u_{t}+v\cdot\nabla v+a_{1}\phi\nabla l +l\nabla\phi)+a_{2}\sqrt{h^{2}+\epsilon^{2}}Lu\\ =& a_{2}gl^{-\nu}\nabla l^{\nu}\cdot Q(v)+a_{3} \psi\cdot Q(v).\end{split} \tag{3.95}\]
Multiplying (3.95) by \(u\) and integrating over \(\mathbb{R}^{3}\), one can obtain by integration by parts, Lemma 4.1, Hölder's and Young's inequalities that
\[\begin{split}&\frac{1}{2}\frac{d}{dt}|l^{-\frac{\nu}{2}}u|_{2}^{2}+a_{2}\alpha|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla u|_{2}^{2}+a_{2}(\alpha+\beta)|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\mathrm{div}u|_{2}^{2}\\ =&-\int l^{-\nu}(v\cdot\nabla v+a_{1}\phi\nabla l+l\nabla\phi-a_{2}\nabla l^{\nu}\cdot gQ(v)-a_{3}l^{\nu}\psi\cdot Q(v))\cdot u\\ &+\frac{1}{2}\int(l^{-\nu})_{t}|u|^{2}-a_{2}\int\nabla\sqrt{h^{2}+\epsilon^{2}}\cdot Q(u)\cdot u\\ \leq& C\big(|l^{-\frac{\nu}{2}}|_{\infty}(|v|_{\infty}|\nabla v|_{2}+|\nabla l|_{2}|\phi|_{\infty}+|l|_{\infty}|\nabla\phi|_{2})+|l^{\nu-1}|_{\infty}|g\nabla v|_{\infty}|\nabla l|_{2}\\ &+|l^{\nu}|_{\infty}|\psi|_{\infty}|\nabla v|_{2}\big)|l^{-\frac{\nu}{2}}u|_{2}+C|l^{-1}|_{\infty}|l_{t}|_{\infty}|l^{-\frac{\nu}{2}}u|_{2}^{2}\\ &+C|\psi|_{\infty}|l^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{\frac{1}{2}}|\sqrt{h}\nabla u|_{2}|l^{-\frac{\nu}{2}}u|_{2}\\ \leq& M(c_{0})(1+|l_{t}|_{D^{2}}^{\frac{1}{2}})|l^{-\frac{\nu}{2}}u|_{2}^{2}+M(c_{0})c_{4}^{4}+\frac{1}{2}a_{2}\alpha|\sqrt{h}\nabla u|_{2}^{2},\end{split} \tag{3.96}\]
which, along with Gronwall's inequality and Lemma 3.8, yields that for \(0\leq t\leq T_{5}\),
\[\begin{split}&|u|_{2}^{2}+|l^{-\frac{\nu}{2}}u|_{2}^{2}+\int_{0}^{t }|\sqrt{h}\nabla u|_{2}^{2}\mathrm{d}s\\ \leq& M(c_{0})(|u_{0}|_{2}^{2}+c_{4}^{4}t)\exp\Big{(}M (c_{0})\int_{0}^{t}(1+|l_{s}|_{D^{2}}^{\frac{1}{2}})\mathrm{d}s\Big{)}\leq M(c_{0}).\end{split} \tag{3.97}\]
Second, one deals with \(|\nabla u|_{2}\). Multiplying (3.95) by \(u_{t}\) and integrating over \(\mathbb{R}^{3}\), one gets by integration by parts, Lemma 4.1, Hölder's and Young's inequalities that
\[\begin{split}&\frac{1}{2}\frac{d}{dt}(a_{2}\alpha|(h^{2}+\epsilon^{ 2})^{\frac{1}{4}}\nabla u|_{2}^{2}+a_{2}(\alpha+\beta)|(h^{2}+\epsilon^{2})^{ \frac{1}{4}}\mathrm{div}u|_{2}^{2})+|l^{-\frac{\nu}{2}}u_{t}|_{2}^{2}\\ =&-\int l^{-\nu}\big{(}v\cdot\nabla v+a_{1}\phi\nabla l +l\nabla\phi-a_{2}g\nabla l^{\nu}\cdot Q(v)-a_{3}l^{\nu}\psi\cdot Q(v)\big{)} \cdot u_{t}\\ &+\frac{1}{2}\int a_{2}\frac{h}{\sqrt{h^{2}+\epsilon^{2}}}h_{t}( \alpha|\nabla u|^{2}+(\alpha+\beta)|\mathrm{div}u|^{2})\\ &-\int a_{2}\nabla\sqrt{h^{2}+\epsilon^{2}}\cdot Q(u)\cdot u_{t} \\ \leq& C|l^{-\frac{\nu}{2}}|_{\infty}(|v|_{\infty}| \nabla v|_{2}+|\nabla l|_{2}|\phi|_{\infty}+|l|_{\infty}|\nabla\phi|_{2}+|g \nabla v|_{\infty}|l^{\nu-1}|_{\infty}|\nabla l|_{2}\\ &+|\psi|_{\infty}|l^{\nu}|_{\infty}|\nabla v|_{2})|l^{-\frac{\nu} {2}}u_{t}|_{2}+C|h_{t}|_{\infty}|\varphi|_{\infty}|\sqrt{h}\nabla u|_{2}^{2} \\ &+C|l^{\frac{\nu}{2}}|_{\infty}|\psi|_{\infty}|l^{-\frac{\nu}{2}} u_{t}|_{2}|\varphi|_{\infty}^{\frac{1}{2}}|\sqrt{h}\nabla u|_{2}\\ \leq& M(c_{0})c_{4}^{2}|\sqrt{h}\nabla u|_{2}^{2}+ M(c_{0})c_{4}^{4}+\frac{1}{2}|l^{-\frac{\nu}{2}}u_{t}|_{2}^{2},\end{split}\]
which, along with Gronwall's inequality and (3.5), implies that for \(0\leq t\leq T_{5}\),
\[\begin{split}&|\sqrt{h}\nabla u|_{2}^{2}+|\nabla u|_{2}^{2}+ \int_{0}^{t}\big{(}|l^{-\frac{\nu}{2}}u_{s}|_{2}^{2}+|u_{s}|_{2}^{2}\big{)} \mathrm{d}s\\ \leq& M(c_{0})(1+c_{4}^{4}t)\exp{(M(c_{0})c_{4}^{2}t )}\leq M(c_{0}).\end{split} \tag{3.98}\]
Notice that \(u\) solves the following elliptic equation
\[a_{2}L(\sqrt{h^{2}+\epsilon^{2}}u)=l^{-\nu}\mathcal{H}-a_{2}G(\nabla\sqrt{h^{2 }+\epsilon^{2}},u). \tag{3.99}\]
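Here \(G\) is the first-order commutator between \(L\) and multiplication by \(f=\sqrt{h^{2}+\epsilon^{2}}\). Assuming, consistently with the energy identity (3.96), that \(L\) is the Lamé operator \(Lu=-\alpha\triangle u-(\alpha+\beta)\nabla\text{div}u\), a direct computation gives

\[G(\nabla f,u)=fLu-L(fu)=\alpha\big(2(\nabla f\cdot\nabla)u+u\triangle f\big)+(\alpha+\beta)\big(\nabla f\,\text{div}u+\nabla(u\cdot\nabla f)\big),\]

so that every term of \(G\) contains at most one derivative of \(u\) and at most two derivatives of \(f\), which explains the bound on \(\widetilde{G}\) in (3.100) below.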
Thus to derive the \(L^{2}\) estimate of \(\nabla^{2}u\), it is sufficient to get the \(L^{2}\) estimates of
\[(\mathcal{H},\widetilde{G}=G(\nabla\sqrt{h^{2}+\epsilon^{2}},u)),\]
which can be obtained from (3.7), (3.17), (3.52)-(3.53), (3.66), (3.93), (3.98) and Lemmas 3.2-3.9 as
\[\begin{split}|\mathcal{H}|_{2}\leq& C(|u_{t}|_{2}+|v|_{6}|\nabla v|_{3}+|l|_{\infty}|\nabla\phi|_{2}+|\phi|_{\infty}|\nabla l|_{2}+|\nabla l|_{3}|l^{\nu-1}|_{\infty}|g\nabla v|_{6}\\ &+|l^{\nu}|_{\infty}|\psi|_{\infty}|\nabla v|_{2})\leq M(c_{0})(|u_{t}|_{2}+1),\\ |\widetilde{G}|_{2}\leq& C(|\nabla\sqrt{h^{2}+\epsilon^{2}}|_{\infty}|\nabla u|_{2}+|\nabla^{2}\sqrt{h^{2}+\epsilon^{2}}|_{3}|u|_{6})\leq M(c_{0}),\end{split} \tag{3.100}\]
where one also has used the facts that
\[\begin{split}\|l\|_{D^{2}}\leq\|l_{0}\|_{D^{2}}+t^{\frac{1}{2}} \Big{(}\int_{0}^{t}\|l_{s}\|_{D^{2}}^{2}\mathrm{d}s\Big{)}^{\frac{1}{2}}\leq& M(c_{0})(1+c_{1}^{2\nu}t^{\frac{1}{2}})\leq& M(c_{0}),\\ |\nabla^{2}\sqrt{h^{2}+\epsilon^{2}}|_{3}\leq& C(| \varphi|_{\infty}^{\frac{1}{2}}|\nabla h^{\frac{3}{4}}|_{6}^{2}+|\nabla\psi|_{3 })\leq& M(c_{0}).\end{split} \tag{3.101}\]
Then it follows from (3.97)-(3.100), Lemma 4.3 and Lemmas 3.3-3.4 that
\[\begin{split}|\sqrt{h^{2}+\epsilon^{2}}u|_{D^{2}}\leq& C(|l^{-\nu}\mathcal{H}|_{2}+|G(\nabla\sqrt{h^{2}+\epsilon^{2}},u)|_{2}) \leq M(c_{0})(|u_{t}|_{2}+1),\\ |\sqrt{h^{2}+\epsilon^{2}}\nabla^{2}u|_{2}\leq& C(| \sqrt{h^{2}+\epsilon^{2}}u|_{D^{2}}+|\nabla\psi|_{3}|u|_{6}+|\psi|_{\infty}| \nabla u|_{2}\\ &+|\psi|_{\infty}^{2}|u|_{2}|\varphi|_{\infty})\leq C|\sqrt{h^{2}+ \epsilon^{2}}u|_{D^{2}}+M(c_{0}),\end{split} \tag{3.102}\]
which, along with (3.97)-(3.98), yields \((3.94)_{1}\).
Next one estimates \(|u|_{D^{2}}\). Applying \(\partial_{t}\) to the equation for \(u\) yields
\[\begin{split}& u_{tt}+a_{2}l^{\nu}\sqrt{h^{2}+\epsilon^{2}}Lu_{t}+( v\cdot\nabla v)_{t}+(l\nabla\phi)_{t}+a_{1}(\phi\nabla l)_{t}\\ =&-a_{2}(l^{\nu}\sqrt{h^{2}+\epsilon^{2}})_{t}Lu+( a_{2}g\nabla l^{\nu}\cdot Q(v)+a_{3}l^{\nu}\psi\cdot Q(v))_{t}.\end{split} \tag{3.103}\]
Multiplying (3.103) by \(l^{-\nu}u_{t}\), integrating over \(\mathbb{R}^{3}\), and integrating by parts lead to
\[\begin{split}&\frac{1}{2}\frac{d}{dt}|l^{-\frac{\nu}{2}}u_{t}|_{2}^ {2}+a_{2}\alpha|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla u_{t}|_{2}^{2}+a_{2} (\alpha+\beta)|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\mathrm{div}u_{t}|_{2}^{2}\\ =&\int l^{-\nu}\Big{(}-(v\cdot\nabla v)_{t}-(l\nabla \phi)_{t}-a_{1}(\phi\nabla l)_{t}-a_{2}(l^{\nu}\sqrt{h^{2}+\epsilon^{2}})_{t} Lu\\ &+(a_{2}g\nabla l^{\nu}\cdot Q(v)+a_{3}l^{\nu}\psi\cdot Q(v))_{t }\Big{)}\cdot u_{t}\\ &-\int a_{2}\nabla\sqrt{h^{2}+\epsilon^{2}}\cdot Q(u_{t})\cdot u _{t}+\frac{1}{2}\int(l^{-\nu})_{t}|u_{t}|^{2}\\ \leq& C|l^{-\frac{\nu}{2}}|_{\infty}(|v|_{\infty}| \nabla v_{t}|_{2}+|v_{t}|_{2}|\nabla v|_{\infty}+|l_{t}|_{6}|\nabla\phi|_{3}+| l|_{\infty}|\nabla\phi_{t}|_{2}\\ &+|\nabla l_{t}|_{2}|\phi|_{\infty}+|\phi_{t}|_{\infty}|\nabla l |_{2})|l^{-\frac{\nu}{2}}u_{t}|_{2}+C|l^{-1}|_{\infty}|l_{t}|_{6}|\sqrt{h^{2} +\epsilon^{2}}\nabla^{2}u|_{2}|u_{t}|_{3}\\ &+C|l^{\frac{\nu}{2}}|_{\infty}|h_{t}|_{\infty}|\nabla^{2}u|_{2} |l^{-\frac{\nu}{2}}u_{t}|_{2}+C\Big{(}|l^{\frac{\nu}{2}-2}|_{\infty}|g\nabla v |_{\infty}|l_{t}|_{6}|\nabla l|_{3}\\ &+|l^{\frac{\nu}{2}-1}|_{\infty}(|g_{t}|_{\infty}|\nabla v|_{ \infty}|\nabla l|_{2}+|g\nabla v|_{\infty}|\nabla l_{t}|_{2}+|\psi|_{\infty}| l_{t}|_{6}|\nabla v|_{3})\\ &+|l^{\frac{\nu}{2}}|_{\infty}(|\psi_{t}|_{2}|\nabla v|_{\infty} +|\psi|_{\infty}|\nabla v_{t}|_{2})\Big{)}|l^{-\frac{\nu}{2}}u_{t}|_{2}\\ &+C|l^{-1}|_{\infty}|gh^{-1}|_{\infty}^{\frac{1}{2}}|\sqrt{g} \nabla v_{t}|_{2}|\nabla l|_{3}|\sqrt{h}u_{t}|_{6}\\ &+C|l^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{\frac{1}{2}}| \psi|_{\infty}|\sqrt{h}\nabla u_{t}|_{2}|l^{-\frac{\nu}{2}}u_{t}|_{2}+C|l^{- \frac{\nu}{2}-1}|_{\infty}|l_{t}|_{6}|u_{t}|_{3}|l^{-\frac{\nu}{2}}u_{t}|_{2}. \end{split}\]
Integrating (3.104) over \((\tau,t)\) (\(\tau\in(0,t)\)), one can get by using (3.17), Lemmas 3.2-3.9 and Young's inequality that
\[\begin{split}&\frac{1}{2}|l^{-\frac{\nu}{2}}u_{t}(t)|_{2}^{2}+ \frac{a_{2}\alpha}{2}\int_{\tau}^{t}|\sqrt{h}\nabla u_{s}|_{2}^{2}\mathrm{d}s \\ \leq&\frac{1}{2}|l^{-\frac{\nu}{2}}u_{t}(\tau)|_{2}^{2 }+M(c_{0})c_{4}^{2}\int_{0}^{t}|l^{-\frac{\nu}{2}}u_{s}|^{2}\mathrm{d}s+M(c_{ 0})c_{4}^{4+2\nu}t+M(c_{0}).\end{split} \tag{3.105}\]
Due to the equation for \(u\), it can be checked directly that
\[\begin{split}|u_{t}(\tau)|_{2}\leq&|\mathcal{K}(\tau )|_{2}\leq C(|v|_{\infty}|\nabla v|_{2}+|\phi|_{\infty}|\nabla l|_{2}+|\nabla\phi|_{2 }|l|_{\infty}\\ &+|l|_{\infty}^{\nu}|(h+\epsilon)Lu|_{2}+|l^{\nu-1}|_{\infty}|g \nabla v|_{\infty}|\nabla l|_{2}+|\psi|_{\infty}|l^{\nu}|_{\infty}|\nabla v|_{ 2})(\tau).\end{split} \tag{3.106}\]
It follows from this, (3.3), (3.5), (3.8), (3.11) and Lemma 3.1 that
\[\begin{split}\limsup_{\tau\to 0}|u_{t}(\tau)|_{2}\leq& C(|u_{0}|_{\infty}|\nabla u _{0}|_{2}+|\phi_{0}|_{\infty}|\nabla l_{0}|_{2}+|\nabla\phi_{0}|_{2}|l_{0}|_{ \infty}+|\psi_{0}|_{\infty}|l_{0}^{\nu}|_{\infty}|\nabla u_{0}|_{2}\\ &+|l_{0}^{\nu}|_{\infty}(|g_{2}|_{2}+|Lu_{0}|_{2})+|l_{0}^{\nu-1}| _{\infty}|\phi_{0}^{2\kappa}\nabla u_{0}|_{\infty}|\nabla l_{0}|_{2})\leq M(c_{0}). \end{split}\]
Letting \(\tau\to 0\) in (3.105) and using Gronwall's inequality and Lemma 3.9 give that for \(0\leq t\leq T_{5}\),
\[\begin{split}&|u_{t}(t)|_{2}^{2}+\int_{0}^{t}\big{(}|\sqrt{h} \nabla u_{s}|_{2}^{2}+|\nabla u_{s}|_{2}^{2}\big{)}\mathrm{d}s\\ \leq&(M(c_{0})c_{4}^{4+2\nu}t+M(c_{0}))\exp{(M(c_{0})c_ {4}^{2}t)}\leq M(c_{0}),\end{split} \tag{3.107}\]
which, along with (3.102), yields that for \(0\leq t\leq T_{5}\),
\[|\sqrt{h^{2}+\epsilon^{2}}u(t)|_{D^{2}}+|h\nabla^{2}u(t)|_{2}+|u(t)|_{D^{2}}\leq M (c_{0}). \tag{3.108}\]
Similarly, to estimate \(|\nabla^{3}u|_{2}\), one needs to derive the \(L^{2}\) estimates of
\[(\nabla\mathcal{H},\nabla\widetilde{G}=\nabla G(\nabla\sqrt{h^{2}+\epsilon^{2 }},u)).\]
It follows from (3.7), (3.17), (3.93), (3.97)-(3.98), (3.101), (3.108) and Lemmas 3.2-3.9 that
\[|\mathcal{H}|_{D^{1}_{*}}\leq C(|u_{t}|_{D^{1}_{*}}+|v|_{\infty}|\nabla^{2}v|_{2}+|\nabla v|_{6}| \nabla v|_{3}+|l|_{\infty}|\nabla^{2}\phi|_{2}+|\nabla\phi|_{3}|\nabla l|_{6} \tag{3.109}\] \[+|\phi|_{\infty}|\nabla^{2}l|_{2}+|\nabla g|_{\infty}|\nabla l^{ \nu}|_{\infty}|\nabla v|_{2}+|\nabla^{2}l^{\nu}|_{3}|g\nabla v|_{6}\] \[+|\nabla l^{\nu}|_{\infty}|g\nabla^{2}v|_{2}+|\nabla l^{\nu}|_{ \infty}|\psi|_{\infty}|\nabla v|_{2}+|l^{\nu}|_{\infty}|\nabla\psi|_{3}| \nabla v|_{6}\] \[+|l^{\nu}|_{\infty}|\psi|_{\infty}|\nabla^{2}v|_{2})\leq M(c_{0})(|u_{t}|_{D^{1}_{*}}+c_{3}^{2\nu+3}),\] \[|\widetilde{G}|_{D^{1}_{*}}\leq C(|\nabla\sqrt{h^{2}+\epsilon^{2}}|_{\infty}|\nabla^{2}u|_{2}+| \nabla^{2}\sqrt{h^{2}+\epsilon^{2}}|_{3}|\nabla u|_{6}\] \[+|\nabla^{3}\sqrt{h^{2}+\epsilon^{2}}|_{2}|u|_{\infty})\leq M(c_{0}),\]
where one has used the fact that
\[|\nabla^{3}\sqrt{h^{2}+\epsilon^{2}}|_{2}\leq M(c_{0})(|\nabla h^{\frac{3}{4}}|_{6}^{3}|\varphi|_{\infty}^{\frac{5}{4}}+|h^{-\frac{1}{4}}\nabla^{2}h|_{2}|\psi|_{\infty}|\varphi|_{\infty}^{\frac{3}{4}}+|\nabla^{3}h|_{2})\leq M(c_{0}). \tag{3.110}\]
Hence, one gets from (3.97)-(3.100), (3.108)-(3.109), Lemmas 3.3-3.4 and Lemma 4.3 that
\[|\sqrt{h^{2}+\epsilon^{2}}u(t)|_{D^{3}}\leq C|l^{-\nu}\mathcal{H}|_{D^{1}_{*}}+C|G(\nabla\sqrt{h^{2}+\epsilon^{2}},u )|_{D^{1}_{*}} \tag{3.111}\] \[\leq M(c_{0})(|u_{t}|_{D^{1}_{*}}+c_{3}^{2\nu+3}),\] \[|\sqrt{h^{2}+\epsilon^{2}}\nabla^{3}u(t)|_{2}\leq C(|\sqrt{h^{2}+\epsilon^{2}}u(t)|_{D^{3}}+\|\psi\|_{L^{\infty}\cap D ^{1,3}\cap D^{2}}\|u\|_{2})\] \[+C(1+\|\psi\|_{L^{\infty}\cap D^{1,3}\cap D^{2}}^{3}\|u\|_{1})(1+| \varphi|_{\infty}^{2})\] \[\leq M(c_{0})(|\sqrt{h^{2}+\epsilon^{2}}u(t)|_{D^{3}}+c_{3}^{2\nu+3}),\]
which, along with (3.107) and Lemma 3.5, yields that
\[\int_{0}^{t}(|h\nabla^{3}u|_{2}^{2}+|h\nabla^{2}u|_{D^{1}_{*}}^{2}+|u|_{D^{3}} ^{2})\mathrm{d}s\leq M(c_{0})\quad\text{for}\quad 0\leq t\leq T_{5}. \tag{3.112}\]
The proof of Lemma 3.10 is complete.
We now turn to estimate the higher order derivatives of \(u\).
**Lemma 3.11**.: _For \(t\in[0,T_{5}]\), it holds that_
\[(|u|_{D^{3}}+|h\nabla^{2}u|_{D^{1}_{*}})(t)\leq M(c_{0})c_{3}^{2\nu+3}, \tag{3.113}\] \[|\sqrt{h}\nabla u_{t}|_{2}+|u_{t}|_{D^{1}_{*}}+\int_{0}^{t}(|u_{ss }|_{2}^{2}+|u_{s}|_{D^{2}}^{2})\mathrm{d}s\leq M(c_{0}),\] \[\int_{0}^{t}(|h\nabla^{2}u_{s}|_{2}^{2}+|u|_{D^{4}}^{2}+|h\nabla^{ 2}u|_{D^{2}}^{2}+|(h\nabla^{2}u)_{s}|_{2}^{2})\mathrm{d}s\leq M(c_{0}).\]
Proof.: Multiplying (3.103) by \(l^{-\nu}u_{tt}\) and integrating over \(\mathbb{R}^{3}\) lead to
\[\frac{1}{2}\frac{d}{dt}(a_{2}\alpha|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla u_{ t}|_{2}^{2}+a_{2}(\alpha+\beta)|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\text{div}u_{t}|_{2}^ {2})+|l^{-\frac{\nu}{2}}u_{tt}|_{2}^{2}=\sum_{i=1}^{4}I_{i}, \tag{3.114}\]
where \(I_{i}\), \(i=1,2,3,4\), are given and estimated as follows:
\[\begin{split} I_{1}=&\int l^{-\nu}\Big{(}-(v\cdot \nabla v)_{t}-(l\nabla\phi)_{t}-a_{1}(\phi\nabla l)_{t}\\ &-a_{2}(l^{\nu})_{t}\sqrt{h^{2}+\epsilon^{2}}Lu-a_{2}l^{\nu} \frac{h}{\sqrt{h^{2}+\epsilon^{2}}}h_{t}Lu\Big{)}\cdot u_{tt}\\ \leq& C|l^{-\frac{\nu}{2}}|_{\infty}\big{(}|v|_{ \infty}|\nabla v_{t}|_{2}+|v_{t}|_{2}|\nabla v|_{\infty}+|l_{t}|_{6}|\nabla \phi|_{3}+|\nabla l_{t}|_{2}|\phi|_{\infty}\\ &+|\phi_{t}|_{\infty}|\nabla l_{2}+|l|_{\infty}|\nabla\phi_{t}|_{ 2}\\ &+|l^{\nu-1}|_{\infty}|l_{t}|_{6}|\sqrt{h^{2}+\epsilon^{2}}\nabla ^{2}u|_{3}+|l^{\nu}|_{\infty}|h_{t}|_{\infty}|\nabla^{2}u|_{2}\big{)}|l^{- \frac{\nu}{2}}u_{tt}|_{2},\\ I_{2}=&\int l^{-\nu}\big{(}a_{2}g\nabla l^{\nu} \cdot Q(v)+a_{3}l^{\nu}\psi\cdot Q(v)\big{)}_{t}\cdot u_{tt}\\ \leq& C|l^{-\frac{\nu}{2}}|_{\infty}\big{(}|(\nabla l ^{\nu})_{t}|_{2}|g\nabla v|_{\infty}+|g_{t}|_{\infty}|\nabla l^{\nu}|_{3}| \nabla v|_{6}\\ &+|\sqrt{h}\nabla l^{\nu}|_{\infty}|gh^{-1}|_{\infty}^{\frac{1}{ 2}}|\sqrt{g}\nabla v_{t}|_{2}+|l^{\nu}|_{\infty}|\psi|_{\infty}|\nabla v_{t}| _{2}\\ &+|l^{\nu}|_{\infty}|\psi_{t}|_{2}|\nabla v|_{\infty}+|(l^{\nu})_ {t}|_{6}|\psi|_{\infty}|\nabla v|_{3}\big{)}|l^{-\frac{\nu}{2}}u_{tt}|_{2}, \\ I_{3}+I_{4}=&-\int a_{2}\nabla\sqrt{h^{2}+\epsilon^{2} }Q(u_{t})\cdot u_{tt}\\ &+\frac{1}{2}\int a_{2}\frac{h}{\sqrt{h^{2}+\epsilon^{2}}}h_{t}( \alpha|\nabla u_{t}|^{2}+(\alpha+\beta)|\text{div}u_{t}|^{2})\\ \leq& C(|l^{\frac{\nu}{2}}|_{\infty}|\varphi|_{\infty}^{ \frac{1}{2}}|\psi|_{\infty}|\sqrt{h}\nabla u_{t}|_{2}|l^{-\frac{\nu}{2}}u_{tt }|_{2}+|h_{t}|_{\infty}|\sqrt{h}\nabla u_{t}|_{2}^{2}|\varphi|_{\infty}). \end{split} \tag{3.115}\]
Integrating (3.114) over \((\tau,t)\) and using (3.115) yield that for \(0\leq t\leq T_{5}\),
\[\begin{split}&|\sqrt{h}\nabla u_{t}(t)|_{2}^{2}+\int_{\tau}^{t}|l^{- \frac{\nu}{2}}u_{ss}|_{2}^{2}\text{d}s\\ \leq& C|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla u_{ t}(\tau)|_{2}^{2}+M(c_{0})c_{4}^{2}\int_{0}^{t}|\sqrt{h}\nabla u_{s}|_{2}^{2} \text{d}s+M(c_{0})(c_{4}^{7\nu+6}t+1),\end{split} \tag{3.116}\]
where (3.17), Lemmas 3.2-3.4 and 3.6-3.9 have been used.
It follows from the following fact
\[\begin{split}&\sqrt{h_{0}}l_{0}^{\nu}\Big{(}\sqrt{h_{0}^{2}+ \epsilon^{2}}\nabla Lu_{0}+\frac{h_{0}}{\sqrt{h_{0}^{2}+\epsilon^{2}}}Lu_{0} \otimes\nabla h_{0}\Big{)}\\ =& l_{0}^{\nu}\Big{(}\frac{h_{0}}{\sqrt{h_{0}^{2}+ \epsilon^{2}}}g_{3}+\epsilon^{2}\nabla Lu_{0}\frac{\sqrt{h_{0}}}{\sqrt{h_{0}^{2} +\epsilon^{2}}}\Big{)},\end{split}\]
the equation for \(u\), (3.3), (3.5), Lemma 3.1 and Remark 3.1 that
\[\limsup_{\tau\to 0}|\sqrt{h}\nabla u_{t}(\tau)|_{2}\leq\limsup_{\tau \to 0}|\sqrt{h}\nabla\mathcal{K}(\tau)|_{2} \tag{3.117}\] \[\leq C(|\phi_{0}^{t}u_{0}|_{6}|\nabla^{2}u_{0}|_{3}+|\nabla u_{0}|_{ \infty}|\phi_{0}^{t}\nabla u_{0}|_{2}+|l_{0}|_{\infty}|\phi_{0}^{t}\nabla^{2} \phi_{0}|_{2}\] \[+|\nabla^{2}l_{0}\phi_{0}^{t+1}|_{2}+|\nabla l_{0}|_{3}|\phi_{0}^ {t}\nabla\phi_{0}|_{6}+|l_{0}^{\nu}|_{\infty}(|\nabla\psi_{0}|_{3}|\phi_{0}^{ t}\nabla u_{0}|_{6}\] \[+|\psi_{0}|_{\infty}|\phi_{0}^{t}\nabla^{2}u_{0}|_{2})+|l_{0}^{ \nu-1}|_{\infty}|\psi_{0}|_{\infty}|\phi_{0}^{t}\nabla u_{0}|_{6}|\nabla l_{0}| _{3}\] \[+|l_{0}^{\nu}|_{\infty}|g_{3}|_{2}+|l_{0}^{\nu}|_{\infty}|\varphi _{0}|_{\infty}^{\frac{1}{2}}|\nabla^{3}u_{0}|_{2}+|l_{0}^{\nu-1}|_{\infty}|h_ {0}^{\frac{3}{2}}Lu_{0}|_{6}|\nabla l_{0}|_{3}\] \[+|g_{2}|_{2}|\phi_{0}^{-t}|_{\infty}|l_{0}^{\nu-1}|_{\infty}| \nabla l_{0}|_{\infty}+|\sqrt{h_{0}}\nabla^{2}l_{0}^{\nu}|_{2}|h_{0}\nabla u_ {0}|_{\infty}\] \[+|\sqrt{h_{0}}\nabla l_{0}^{\nu}|_{6}(|h_{0}\nabla^{2}u_{0}|_{3}+ |\psi_{0}|_{\infty}|\nabla u_{0}|_{3}))\leq M(c_{0}),\] \[\limsup_{\tau\to 0}|\sqrt{\epsilon}\nabla u_{t}(\tau)|_{2}\leq \limsup_{\tau\to 0}\sqrt{\epsilon}|\varphi|_{\infty}^{\frac{1}{2}}|\sqrt{h} \nabla u_{t}(\tau)|_{2}\leq M(c_{0}).\]
Letting \(\tau\to 0\) in (3.116), one gets from Gronwall's inequality that for \(0\leq t\leq T_{5}\),
\[|\sqrt{h}\nabla u_{t}(t)|_{2}^{2}+|\nabla u_{t}(t)|_{2}^{2}+\int_ {0}^{t}|u_{ss}|_{2}^{2}\mathrm{d}s \tag{3.118}\] \[\leq M(c_{0})(1+c_{4}^{7\nu+6}t)\exp(M(c_{0})c_{4}^{2}t)\leq M(c_{0}),\]
which, along with (3.111), yields
\[|\sqrt{h^{2}+\epsilon^{2}}u|_{D^{3}}+|\sqrt{h^{2}+\epsilon^{2}}\nabla^{3}u|_{2}+|h\nabla^{2}u|_{D^{1}_{*}}+|\nabla^{3}u|_{2}\leq M(c_{0})c_{3}^{2\nu+3}. \tag{3.119}\]
Next, note that (3.103) gives
\[a_{2}L(\sqrt{h^{2}+\epsilon^{2}}u_{t}) =a_{2}\sqrt{h^{2}+\epsilon^{2}}Lu_{t}-a_{2}G(\nabla\sqrt{h^{2}+ \epsilon^{2}},u_{t}) \tag{3.120}\] \[=l^{-\nu}\mathcal{G}-a_{2}G(\nabla\sqrt{h^{2}+\epsilon^{2}},u_{ t}),\]
with
\[\mathcal{G}= -u_{tt}-(v\cdot\nabla v)_{t}-(l\nabla\phi)_{t}-a_{1}(\phi\nabla l )_{t}-a_{2}(l^{\nu})_{t}\sqrt{h^{2}+\epsilon^{2}}Lu \tag{3.121}\] \[-a_{2}\frac{h}{\sqrt{h^{2}+\epsilon^{2}}}h_{t}l^{\nu}Lu+(a_{2}g \nabla l^{\nu}\cdot Q(v)+a_{3}l^{\nu}\psi\cdot Q(v))_{t}.\]
Thus, to derive the \(L^{2}\) estimates of \((\nabla^{2}u_{t},\nabla^{4}u)\), one needs to estimate the \(L^{2}\) norm of
\[(\mathcal{G},\widehat{G}=G(\nabla\sqrt{h^{2}+\epsilon^{2}},u_{t}),\nabla^{2} \mathcal{H}),\]
which follows from (3.7), (3.17), (3.93), (3.101), (3.118)-(3.119), (3.121) and Lemmas 3.2-3.10 as
\[|\mathcal{G}|_{2}\leq C(|u_{tt}|_{2}+\|v\|_{2}|\nabla v_{t}|_{2}+\|l\|_{L^{\infty}\cap D^{1 }\cap D^{2}}\|\phi_{t}\|_{1}+\|\phi\|_{2}|l_{t}|_{D^{1}} \tag{3.122}\] \[+|(l^{\nu})_{t}|_{6}|\sqrt{h^{2}+\epsilon^{2}}Lu|_{3}+|l^{\nu}|_{ \infty}|h_{t}|_{\infty}|\nabla^{2}u|_{2}+|g_{t}|_{\infty}|\nabla l^{\nu}|_{2}| \nabla v|_{\infty}\] \[+|g\nabla v|_{\infty}|\nabla(l^{\nu})_{t}|_{2}+|l^{\nu-1}|_{\infty }|\sqrt{h}\nabla l|_{\infty}|gh^{-1}|_{\infty}^{\frac{1}{2}}|\sqrt{g}\nabla v_{ t}|_{2}\] \[+|(l^{\nu})_{t}|_{6}|\psi|_{\infty}|\nabla v|_{3}+|l^{\nu}|_{ \infty}|\psi_{t}|_{2}|\nabla v|_{\infty}+|l^{\nu}|_{\infty}|\psi|_{\infty}| \nabla v_{t}|_{2})\] \[\leq M(c_{0})(|u_{tt}|_{2}+c_{4}^{3\nu+3}),\]
\[\begin{split}|\mathcal{H}|_{D^{2}}\leq& C(|u_{t}|_{D^{2}}+\|v\|_{2}\|\nabla v\|_{2}+\|l\|_{L^{\infty}\cap D^{1}\cap D^{3}}\|\nabla\phi\|_{2}\\ &+\|\nabla l^{\nu}\|_{2}(\|g\nabla v\|_{L^{\infty}\cap D^{1}\cap D^{2}}+\|\nabla g\|_{L^{\infty}\cap D^{2}}\|\nabla v\|_{2})\\ &+\|l^{\nu}\|_{L^{\infty}\cap D^{1}\cap D^{2}}\|\psi\|_{L^{\infty}\cap D^{1,3}\cap D^{2}}\|\nabla v\|_{2})\\ \leq& M(c_{0})(|u_{t}|_{D^{2}}+c_{4}^{6\nu+5}),\\ |\widehat{G}|_{2}\leq& C(|\nabla\sqrt{h^{2}+\epsilon^{2}}|_{\infty}|\nabla u_{t}|_{2}+|\nabla^{2}\sqrt{h^{2}+\epsilon^{2}}|_{3}|u_{t}|_{6})\leq M(c_{0}).\end{split} \tag{3.123}\]
It follows from (3.99), (3.100), (3.109), (3.118)-(3.120), (3.122)-(3.123), Lemmas 3.2-3.10 and Lemma 4.3 that
\[\begin{split}|\sqrt{h^{2}+\epsilon^{2}}u_{t}|_{D^{2}}\leq& C|l^{-\nu}\mathcal{G}|_{2}+C|G(\nabla\sqrt{h^{2}+\epsilon^{2}},u_{t})|_{2}\\ \leq& M(c_{0})(|u_{tt}|_{2}+c_{4}^{3\nu+3}),\\ |\sqrt{h^{2}+\epsilon^{2}}\nabla^{2}u_{t}|_{2}\leq& C(|\sqrt{h^{2}+\epsilon^{2}}u_{t}|_{D^{2}}+|\nabla u_{t}|_{2}(|\psi|_{\infty}+|\nabla\psi|_{3})\\ &+|\psi|_{\infty}^{2}|u_{t}|_{2}|\varphi|_{\infty})\leq M(c_{0})(|u_{tt}|_{2}+c_{4}^{3\nu+3}),\\ |(h\nabla^{2}u)_{t}|_{2}\leq& C(|h\nabla^{2}u_{t}|_{2}+|h_{t}|_{\infty}|\nabla^{2}u|_{2})\leq M(c_{0})(|u_{tt}|_{2}+c_{4}^{3\nu+3}),\\ |u|_{D^{4}}\leq& C|(h^{2}+\epsilon^{2})^{-\frac{1}{2}}l^{-\nu}\mathcal{H}|_{D^{2}}\leq M(c_{0})(|u_{t}|_{D^{2}}+c_{4}^{6\nu+5})\\ \leq& M(c_{0})(|u_{tt}|_{2}+c_{4}^{6\nu+5}).\end{split} \tag{3.124}\]
Due to the equation for \(u\), it holds that for multi-index \(\varsigma\in\mathbb{Z}_{+}^{3}\) with \(|\varsigma|=2\),
\[a_{2}L(\sqrt{h^{2}+\epsilon^{2}}\nabla^{\varsigma}u)=a_{2}\sqrt{h ^{2}+\epsilon^{2}}\nabla^{\varsigma}Lu-a_{2}G(\nabla\sqrt{h^{2}+\epsilon^{2}}, \nabla^{\varsigma}u) \tag{3.125}\] \[= \sqrt{h^{2}+\epsilon^{2}}\nabla^{\varsigma}\big{[}\big{(}\sqrt{h^ {2}+\epsilon^{2}})^{-1}l^{-\nu}\mathcal{H}\big{]}-a_{2}G(\nabla\sqrt{h^{2}+ \epsilon^{2}},\nabla^{\varsigma}u),\]
which, along with (3.100)-(3.101), (3.109), (3.118)-(3.119), (3.122)-(3.124), Lemmas 3.2-3.10 and Lemma 4.3, implies that
\[|\sqrt{h^{2}+\epsilon^{2}}\nabla^{2}u(t)|_{D^{2}}\leq C|\sqrt{h^{2}+\epsilon^{2}}\nabla^{\varsigma}\big{[}\big{(}\sqrt{h^{2}+ \epsilon^{2}})^{-1}l^{-\nu}\mathcal{H}\big{]}|_{2} \tag{3.126}\] \[+C(|\psi|_{\infty}|u|_{D^{3}}+|\nabla\psi|_{3}|\nabla^{2}u|_{6}+| \nabla^{2}u|_{2}|\psi|_{\infty}^{2}|\varphi|_{\infty})\] \[\leq M(c_{0})(|u_{tt}|_{2}+c_{4}^{6\nu+5}).\]
Finally, it follows from (3.17), (3.118), (3.124), (3.126) and Lemma 3.5 that
\[\int_{0}^{T_{5}}(|h\nabla^{2}u_{t}|_{2}^{2}+|u_{t}|_{D^{2}}^{2}+|u|_{D^{4}}^{2 }+|h\nabla^{2}u|_{D^{2}}^{2}+|(h\nabla^{2}u)_{t}|_{2}^{2})\mathrm{d}t\leq M(c_ {0}). \tag{3.127}\]
The proof of Lemma 3.11 is complete.
Finally, the following time weighted estimates for the velocity \(u\) hold.
**Lemma 3.12**.: _For \(t\in[0,T_{5}]\),_
\[\begin{split}t|u_{t}(t)|_{D^{2}}^{2}+t|h\nabla^{2}u_{t}(t)|_{2}^{2}+t|u_{tt}(t)|_{2}^{2}+t|u(t)|_{D^{4}}^{2}\leq& M(c_{0})c_{4}^{6\nu+4},\\ \int_{0}^{t}s(|u_{ss}|_{D^{1}_{*}}^{2}+|u_{s}|_{D^{3}}^{2}+|\sqrt{h}u_{ss}|_{D^{1}_{*}}^{2})\mathrm{d}s\leq& M(c_{0})c_{4}^{6\nu+4}.\end{split} \tag{3.128}\]
Proof.: Differentiating (3.103) with respect to \(t\) yields
\[\begin{split}& u_{ttt}+a_{2}\sqrt{h^{2}+\epsilon^{2}}l^{\nu}Lu_{tt} \\ =&-(v\cdot\nabla v)_{tt}-a_{1}(\phi\nabla l)_{tt}-(l \nabla\phi)_{tt}+a_{3}(l^{\nu}\psi\cdot Q(v))_{tt}+a_{2}(g\nabla l^{\nu}\cdot Q (v))_{tt}\\ &-a_{2}(\sqrt{h^{2}+\epsilon^{2}}l^{\nu})_{tt}Lu-2a_{2}(l^{\nu} )_{t}\sqrt{h^{2}+\epsilon^{2}}Lu_{t}-2a_{2}\frac{h}{\sqrt{h^{2}+\epsilon^{2}} }h_{t}l^{\nu}Lu_{t}.\end{split} \tag{3.129}\]
Multiplying (3.129) by \(l^{-\nu}u_{tt}\) and integrating over \(\mathbb{R}^{3}\) give
\[\begin{split}\frac{1}{2}\frac{d}{dt}|l^{-\frac{\nu}{2}}u_{tt}|_{2 }^{2}+a_{2}\alpha|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\nabla u_{tt}|_{2}^{2}+a_ {2}(\alpha+\beta)|(h^{2}+\epsilon^{2})^{\frac{1}{4}}\text{div}u_{tt}|_{2}^{2} =\sum_{i=5}^{8}I_{i},\end{split} \tag{3.130}\]
where \(I_{i}\), \(i=5,6,7,8\), are given and estimated as follows.
\[\begin{split} I_{5}=&\int l^{-\nu}\big{(}-(v\cdot \nabla v)_{tt}-a_{1}(\phi\nabla l)_{tt}-(l\nabla\phi)_{tt}\big{)}\cdot u_{tt} \\ \leq& C|l^{-\frac{\nu}{2}}|_{\infty}\big{(}|\nabla v _{t}|_{6}|v_{t}|_{3}+|\nabla v|_{\infty}|v_{tt}|_{2}+|v|_{\infty}|\nabla v_{tt}| _{2}+|\phi|_{\infty}|\nabla l_{tt}|_{2}\\ &+|\phi_{tt}|_{2}|\nabla l|_{\infty}+|\phi_{t}|_{\infty}|\nabla l _{t}|_{2}+|l_{t}|_{6}|\nabla\phi_{t}|_{3}\\ &+|l_{tt}|_{6}|\nabla\phi|_{3}+|l|_{\infty}|\nabla\phi_{tt}|_{2} )|l^{-\frac{\nu}{2}}u_{tt}|_{2},\end{split}\] \[\begin{split} I_{6}=& a_{3}\int l^{-\nu}(l^{ \nu}\psi\cdot Q(v))_{tt}\cdot u_{tt}\\ \leq& C|l^{-\frac{\nu}{2}}|_{\infty}(|l^{\nu}|_{ \infty}|\psi_{tt}|_{2}|\nabla v|_{\infty}+|l^{\nu-1}|_{\infty}|l_{tt}|_{6}| \psi|_{\infty}|\nabla v|_{3}\\ &+|l^{\nu-2}|_{\infty}|l_{t}|_{6}^{2}|\psi|_{\infty}|\nabla v_{6 }+|l^{\nu}|_{\infty}|\psi_{\infty}|\nabla v_{tt}|_{2}+|\psi_{t}|_{3}|(l^{\nu} )_{t}|_{6}|\nabla v|_{\infty}\\ &+|\psi|_{\infty}|(l^{\nu})_{t}|_{6}|\nabla v_{t}|_{3}+|l^{\nu} |_{\infty}|\psi_{t}|_{3}|\nabla v_{t}|_{6})|l^{-\frac{\nu}{2}}u_{tt}|_{2}, \end{split}\] \[\begin{split} I_{7}=& a_{2}\int l^{-\nu}(g \nabla l^{\nu}\cdot Q(v))_{tt}\cdot u_{tt}\\ \leq& C|l^{-\frac{\nu}{2}}|_{\infty}\Big{(}|l^{\nu- 3}|_{\infty}|g\nabla v|_{\infty}|l_{t}|_{6}^{2}|\nabla l|_{6}+|l^{\nu-2}|_{ \infty}|g\nabla v|_{\infty}|l_{tt}|_{6}|\nabla l|_{3}\\ &+l^{\nu-2}|_{\infty}|g\nabla v|_{\infty}|l_{t}|_{6}|\nabla l_{t }|_{3}+|l^{\nu-1}|_{\infty}|\nabla l_{t}|_{2}|g_{t}|_{\infty}|\nabla v|_{ \infty}\\ &+|l^{\nu-2}|_{\infty}|l_{t}|_{6}(|\nabla l|_{3}|g_{t}|_{\infty}| \nabla v|_{\infty}+|\nabla l|_{6}|g\nabla v_{t}|_{6})\Big{)}|l^{-\frac{\nu}{2} }u_{tt}|_{2}\\ &+C|l^{-1}|_{\infty}|\nabla l_{t}|_{2}|g\nabla v_{t}|_{6}|u_{tt}| _{3}+C|l^{\frac{\nu}{2}-1}|_{\infty}|\nabla l|_{3}|g_{tt}|_{6}|\nabla v|_{ \infty}|l^{-\frac{\nu}{2}}u_{tt}|_{2}\\ &+C|l^{\frac{\nu}{2}-1}|_{\infty}|gh^{-1}|_{\infty}^{\frac{1}{2} }|\sqrt{h}\nabla l|_{\infty}|\sqrt{g}\nabla v_{tt}|_{2}|l^{-\frac{\nu}{2}}u_{ tt}|_{2}\\ &+C|l^{\frac{\nu}{2}-1}|_{\infty}(|\nabla l|_{\infty}|g_{t}|_{ \infty}|\nabla v_{t}|_{2}+|g\nabla v|_{\infty}|\nabla l_{tt}|_{2})|l^{-\frac{ \nu}{2}}u_{tt}|_{2},\end{split}\] \[\begin{split} I_{8}=&-a_{2}\int l^{-\nu}\big{(}( \sqrt{h^{2}+\epsilon^{2}}l^{\nu})_{tt}Lu+2(l^{\nu})_{t}\sqrt{h^{2}+\epsilon^ {2}}Lu_{t}\\ &+\frac{2h}{\sqrt{h^{2}+\epsilon^{2}}}h_{t}l^{\nu}Lu_{t}-l^{\nu} \nabla\sqrt{h^{2}+\epsilon^{2}}\cdot Q(u_{tt})\big{)}\cdot u_{tt}+\frac{1}{2} \int(l^{-\nu})_{t}|u_{tt}|^{2}\\ \leq& C|l^{-\frac{\nu}{2}}|_{\infty}\big{(}|(l^{\nu})_{t}|_{6 }|h_{t}|_{\infty}|\nabla^{2}u|_{3}+|l^{\nu-2}|_{\infty}|l_{t}|_{6}^{2}|\sqrt{h^ {2}+\epsilon^{2}}\nabla^{2}u|_{6}\\ &+|l^{\nu}|_{\infty}|h_{tt}|_{6}|\nabla^{2}u|_{3}+|l^{\nu}|_{ \infty}|h_{t}|_{\infty}^{2}|\varphi|_{\infty}|\nabla^{2}u|_{2}\\ &+|l^{\nu}|_{\infty}|h_{t}|_{\infty}|\varphi|_{\infty}|h\nabla^{2}u |_{2}+|l^{\nu}|_{\infty}|\psi|_{\infty}|\sqrt{h}\nabla u_{tt}|_{2}|\varphi|_{ \infty}^{\frac{1}{2}}\big{)}|l^{-\frac{\nu}{2}}u_{tt}|_{2}\end{split}\]
\[+C|l^{-1}|_{\infty}|l_{t}|_{6}|\sqrt{h^{2}+\epsilon^{2}}\nabla^{2}u_{ t}|_{2}|u_{tt}|_{3} \tag{3.132}\] \[+C|l^{-1}|_{\infty}|l_{tt}|_{6}|\sqrt{h^{2}+\epsilon^{2}}\nabla^{2 }u|_{2}|u_{tt}|_{3}+C|l^{-\frac{\nu}{2}-1}|_{\infty}|l_{t}|_{6}|l^{-\frac{\nu}{ 2}}u_{tt}|_{2}|u_{tt}|_{3}.\]
Multiplying (3.130) by \(t\) and integrating over \((\tau,t)\), one can obtain from the estimates on \(I_{i}\) (\(i=5,...,8\)), (3.17) and Lemmas 3.2-3.11 that
\[t|l^{-\frac{\nu}{2}}u_{tt}(t)|_{2}^{2}+\frac{a_{2}\alpha}{4}\int _{\tau}^{t}s|\sqrt{h}\nabla u_{ss}|_{2}^{2}\mathrm{d}s \tag{3.133}\] \[\leq \tau|l^{-\frac{\nu}{2}}u_{tt}(\tau)|_{2}^{2}+M(c_{0})c_{4}^{6\nu+ 4}(1+t)+M(c_{0})c_{5}^{2\nu+8}\int_{\tau}^{t}s|l^{-\frac{\nu}{2}}u_{ss}|_{2}^{ 2}\mathrm{d}s.\]
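The passage from (3.130) to (3.133) uses the standard time-weighting identity; a minimal sketch, writing \(y(t)=|l^{-\frac{\nu}{2}}u_{tt}(t)|_{2}^{2}\):

\[\int_{\tau}^{t}s\,y'(s)\,\mathrm{d}s=t\,y(t)-\tau\,y(\tau)-\int_{\tau}^{t}y(s)\,\mathrm{d}s,\]

so multiplying (3.130) by \(s\) and integrating produces the boundary terms in (3.133), while \(\int_{\tau}^{t}y\,\mathrm{d}s\) is controlled by the unweighted estimates already established, and \(\int_{\tau}^{t}s\sum_{i=5}^{8}I_{i}\,\mathrm{d}s\) is handled by the bounds on \(I_{5},\dots,I_{8}\) above.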
Due to (3.118), there exists a sequence \(s_{k}\) such that
\[s_{k}\longrightarrow 0,\quad\text{and}\quad s_{k}|u_{tt}(s_{k},x)|_{2}^{2} \longrightarrow 0,\quad\text{as}\quad k\longrightarrow\infty.\]
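The existence of such a sequence is a consequence of time integrability near \(t=0\); a minimal sketch, assuming that (3.118) provides \(\int_{0}^{T_{5}}|u_{tt}(s)|_{2}^{2}\,\mathrm{d}s<\infty\):

\[\text{if }\ s\,|u_{tt}(s)|_{2}^{2}\geq c>0\ \text{ for all small }s,\ \text{ then }\ \int_{0}^{\delta}|u_{tt}(s)|_{2}^{2}\,\mathrm{d}s\geq\int_{0}^{\delta}\frac{c}{s}\,\mathrm{d}s=\infty,\]

a contradiction; hence \(\liminf_{s\to 0^{+}}s\,|u_{tt}(s)|_{2}^{2}=0\), and any sequence realizing this liminf may serve as \(s_{k}\).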
Taking \(\tau=s_{k}\) and letting \(k\rightarrow\infty\) in (3.133), one has by Gronwall's inequality that
\[t|u_{tt}(t)|_{2}^{2}+\int_{0}^{t}s|\sqrt{h}\nabla u_{ss}|_{2}^{2}\mathrm{d}s+ \int_{0}^{t}s|\nabla u_{ss}|_{2}^{2}\mathrm{d}s\leq M(c_{0})c_{4}^{6\nu+4}, \tag{3.134}\]
for \(0\leq t\leq T_{5}\).
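For completeness, the Gronwall step takes the following shape, with \(z(t)=t\,|l^{-\frac{\nu}{2}}u_{tt}(t)|_{2}^{2}\), \(A=M(c_{0})c_{4}^{6\nu+4}\) and \(B=M(c_{0})c_{5}^{2\nu+8}\) as in (3.133):

\[z(t)\leq A(1+t)+B\int_{0}^{t}z(s)\,\mathrm{d}s\ \Longrightarrow\ z(t)\leq A(1+t)e^{Bt}.\]

Since \(T_{5}\leq(1+M(c_{0})c_{5})^{-40-10\nu}\), the factor \((1+t)e^{Bt}\) is bounded by an absolute constant and is absorbed into \(M(c_{0})c_{4}^{6\nu+4}\); the boundedness of \(l\) and the lower bound \(h\geq\frac{1}{2c_{0}}\) then give the unweighted terms in (3.134).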
It follows from (3.124) and (3.134) that
\[t^{\frac{1}{2}}|\nabla^{2}u_{t}(t)|_{2}+t^{\frac{1}{2}}|h\nabla^{2}u_{t}(t)|_ {2}+t^{\frac{1}{2}}|\nabla^{4}u(t)|_{2}\leq M(c_{0})c_{4}^{3\nu+2}. \tag{3.135}\]
Next, to derive the \(L^{2}\) estimate of \(\nabla^{3}u_{t}\), one deals with the \(L^{2}\) estimates of
\[(\nabla\mathcal{G},\nabla\widehat{G}=\nabla G(\nabla\sqrt{h^{2}+\epsilon^{2}},u_{t})).\]
It follows from (3.7), (3.101), (3.110), (3.121) and Lemmas 3.2-3.11 that
\[\begin{split}|\mathcal{G}|_{D_{*}^{1}}\leq& C(|u_{tt}|_{D_{*}^{1}}+\|\nabla v\|_{2}|\nabla v_{t}|_{2}+|v|_{\infty}|\nabla^{2}v_{t}|_{2}\\ &+\|l\|_{L^{\infty}\cap D^{1}\cap D^{3}}\|\phi_{t}\|_{2}+\|l_{t}\|_{D_{*}^{1}\cap D^{2}}\|\nabla\phi\|_{2}\\ &+\|l^{\nu-1}\|_{1,\infty}\|l_{t}\|_{L^{\infty}\cap D^{2}}(\|\sqrt{h^{2}+\epsilon^{2}}Lu\|_{1}+|\psi|_{\infty}|\nabla^{2}u|_{2})\\ &+(1+|\psi|_{\infty})(1+|\varphi|_{\infty})\|h_{t}\|_{L^{\infty}\cap D^{2}}\|l^{\nu}\|_{1,\infty}\|\nabla^{2}u\|_{1}\\ &+\|g_{t}\|_{L^{\infty}\cap D^{1}}\|\nabla l^{\nu}\|_{2}\|\nabla v\|_{2}+\|\nabla l^{\nu}\|_{2}(|\nabla g|_{\infty}|\nabla v_{t}|_{2}+|g\nabla^{2}v_{t}|_{2})\\ &+(|g\nabla v|_{\infty}+|\nabla g|_{\infty}\|\nabla v\|_{2}+\|g\nabla^{2}v\|_{1})\|l_{t}\|_{D_{*}^{1}\cap D^{2}}\|l^{\nu-1}\|_{L^{\infty}\cap D^{1}\cap D^{3}}\\ &+\|l^{\nu-1}\|_{1,\infty}\|l_{t}\|_{D_{*}^{1}}\|\psi\|_{L^{\infty}\cap D^{1,3}}\|\nabla v\|_{2}\\ &+\|l^{\nu}\|_{1,\infty}\|\psi_{t}\|_{1}\|\nabla v\|_{2}+\|l^{\nu}\|_{1,\infty}\|\psi\|_{L^{\infty}\cap D^{1,3}}\|\nabla v_{t}\|_{1})\\ \leq& M(c_{0})(|\nabla u_{tt}|_{2}+c_{4}^{4\nu+3}|g\nabla^{2}v_{t}|_{2}+c_{4}^{5\nu+5}|l_{t}|_{D^{2}}+c_{4}^{5\nu+7}),\\ |\widehat{G}|_{D_{*}^{1}}\leq& C(|\nabla\sqrt{h^{2}+\epsilon^{2}}|_{\infty}|\nabla^{2}u_{t}|_{2}+|\nabla^{2}\sqrt{h^{2}+\epsilon^{2}}|_{3}|\nabla u_{t}|_{6}\\ &+|\nabla^{3}\sqrt{h^{2}+\epsilon^{2}}|_{2}|u_{t}|_{\infty})\leq M(c_{0})(|u_{t}|_{D^{2}}+c_{4}^{2\nu+3}).\end{split} \tag{3.136}\]
Hence (3.120), (3.136), the classical theory for elliptic equations and Lemmas 3.2-3.11 yield that for \(0\leq t\leq T_{5}\),
\[|\sqrt{h^{2}+\epsilon^{2}}u_{t}|_{D^{3}}\leq C|l^{-\nu}\mathcal{G}|_{D^{1}_{*}}+C|G(\nabla\sqrt{h^{2}+\epsilon^{2}},u_{ t})|_{D^{1}_{*}}\] \[\leq M(c_{0})(\|u_{tt}\|_{1}+|u_{t}|_{D^{2}}+c_{4}^{6\nu+7}(|g\nabla^{2} v_{t}|_{2}+|l_{t}|_{D^{2}}+1)),\] \[|\sqrt{h^{2}+\epsilon^{2}}\nabla^{3}u_{t}(t)|_{2}\leq C(|\sqrt{h^{2}+\epsilon^{2}}u_{t}|_{D^{3}}+|u_{t}|_{\infty}|\nabla^{2} \psi|_{2}+|\nabla u_{t}|_{6}|\nabla\psi|_{3}\] \[+|\nabla^{2}u_{t}|_{2}|\psi|_{\infty}+|\nabla u_{t}|_{2}\|\psi\| _{L^{\infty}\cap D^{1,3}\cap D^{2}}^{2}|\varphi|_{\infty}+|u_{t}|_{2}|\psi|_{ \infty}^{3}|\varphi|_{\infty}^{2})\] \[\leq C|\sqrt{h^{2}+\epsilon^{2}}u_{t}|_{D^{3}}+M(c_{0})(|u_{t}|_{D^{2 }}+c_{4}^{2\nu+3}),\]
which, along with (3.113), (3.134)-(3.135) and Lemma 3.5, yields \((3.128)_{2}\).
The proof of Lemma 3.12 is complete.
It follows from Lemmas 3.2-3.12 that for \(0\leq t\leq T_{5}=\min\{T^{*},(1+M(c_{0})c_{5})^{-40-10\nu}\}\),
\[\|(\phi-\eta)(t)\|_{D^{1}_{*}\cap D^{3}}^{2}+\|\phi_{t}(t)\|_{2}^ {2}+|\phi_{tt}(t)|_{2}^{2}+\int_{0}^{t}\|\phi_{ss}\|_{1}^{2}\mathrm{d}s\leq Cc_{4}^{6},\] \[\|\psi(t)\|_{L^{q}\cap D^{1,3}\cap D^{2}}^{2}\leq M(c_{0}),\ \ |\psi_{t}(t)|_{2}\leq Cc_{3}^{2},\quad|h_{t}(t)|_{\infty}^{2}\leq Cc_{3}^{3}c_{4},\] \[h(t,x)>\frac{1}{2c_{0}},\ \frac{2}{3}\eta^{-2}<\varphi,\ |\psi_{t}(t)|_{D^{1}_{*}}^{2}+\int_{0}^{t}(|\psi_{ ss}|_{2}^{2}+|h_{ss}|_{6}^{2})\mathrm{d}s\leq Cc_{4}^{4},\] \[\widetilde{C}^{-1}\leq gh^{-1}(t,x)\leq\widetilde{C},\quad|\xi(t)|_{D^{1}_{*}}+|\zeta(t)|_{4}+|h^{-\frac{1}{4}}\nabla^{2}h (t)|_{2}\leq M(c_{0}),\] \[\|n(t)\|_{L^{\infty}\cap D^{1,q}\cap D^{1,4}\cap D^{1,6}\cap D^{2 }\cap D^{3}}\leq M(c_{0}),\quad|n_{t}(t)|_{2}\leq M(c_{0})c_{1},\] \[|n_{t}(t)|_{\infty}+|\nabla n_{t}(t)|_{2}+|\nabla n_{t}(t)|_{6} \leq M(c_{0})c_{4}^{2},\ \ |n_{tt}(t)|_{2}\leq M(c_{0})c_{4}^{3},\] \[|u|_{\infty}^{2}+|\sqrt{h}\nabla u(t)|_{2}^{2}+\|u(t)\|_{1}^{2}+ \int_{0}^{t}\big{(}\|\nabla u\|_{1}^{2}+|u_{s}|_{2}^{2}\big{)}\mathrm{d}s\leq M(c_{0}),\] \[|\nabla l(t)|_{2}^{2}+|h^{\frac{1}{4}}\nabla l(t)|_{2}^{2}+\int_{0 }^{t}(|h^{-\frac{1}{4}}l_{ss}|_{2}^{2}+|\sqrt{h}\nabla^{2}l|_{2}^{2}+|\nabla^{2 }l|_{2}^{2})\mathrm{d}s\leq M(c_{0})c_{1}^{3\nu},\] \[|h^{-\frac{1}{4}}l_{t}(t)|_{2}^{2}+|\sqrt{h}\nabla^{2}l(t)|_{2}^{2 }+\int_{0}^{t}(|h^{\frac{1}{4}}\nabla l_{s}|_{2}^{2}+|\sqrt{h}\nabla^{3}l|_{2} ^{2})\mathrm{d}s\leq M(c_{0})c_{1}^{4\nu+2},\] \[|h^{\frac{1}{4}}\nabla l_{t}(t)|_{2}^{2}+|\sqrt{h}\nabla^{3}l(t)|_{2 }^{2}+\int_{0}^{t}(|h^{-\frac{1}{4}}l_{ss}|_{2}^{2}+|\sqrt{h}\nabla^{2}l_{s}|_ {2}^{2})\mathrm{d}s\leq M(c_{0})c_{1}^{8\nu+6},\] \[t|l_{t}(t)|_{D^{2}}^{2}+t|\sqrt{h}\nabla^{2}l_{t}(t)|_{2}^{2}+t|h ^{-\frac{1}{4}}l_{tt}(t)|_{2}^{2}\leq M(c_{0})c_{1}^{\nu},\] \[\int_{0}^{t}s(|l_{ss}|_{D^{1}_{*}}^{2}+|h^{\frac{1}{4}}l_{ss}|_{D^ {1}_{*}}^{2})\mathrm{d}s\leq M(c_{0}),\quad\frac{1}{2}c_{0}^{-1}\leq l(x,t)\leq \frac{3}{2}c_{0},\] \[(|u|_{D^{2}}^{2}+|h\nabla^{2}u|_{2}^{2}+|u_{t}|_{2}^{2})(t)+\int_{0 }^{t}(|u|_{D^{3}}^{2}+|h\nabla^{2}u|_{D^{1}_{*}}^{2}+|u_{s}|_{D^{1}_{*}}^{2}) \mathrm{d}s\leq M(c_{0}),\] \[(|u_{t}|_{D^{1}_{*}}^{2}+|\sqrt{h}\nabla u_{t}|_{2}^{2}+|u|_{D^{3} }^{2}+|h\nabla^{2}u|_{D^{1}_{*}}^{2})(t)+\int_{0}^{t}|u_{s}|_{D^{2}}^{2}\mathrm{d}s\leq M(c_{0})c_{3}^{2\nu+3},\] \[\int_{0}^{t}(|u_{ss}|_{2}^{2}+|u|_{D^{4}}^{2}+|h\nabla^{2}u|_{D^{2 }}^{2}+|(h\nabla^{2}u)_{s}|_{2}^{2})\mathrm{d}s\leq M(c_{0}),\] \[t|u_{t}(t)|_{D^{2}}^{2}+t|h\nabla^{2}u_{t}(t)|_{2}^{2}+t|u_{tt}(t)|_ {2}^{2}+t|u(t)|_{D^{4}}^{2}\leq M(c_{0})c_{4}^{6\nu+4},\]
\[\int_{0}^{t}s(|u_{ss}|^{2}_{D^{1}_{*}}+|h\nabla^{3}u_{s}|^{2}_{2}+| \sqrt{h}u_{ss}|^{2}_{D^{1}_{*}})\mathrm{d}s\leq c_{1}^{2},\quad c_{1}^{-1}\leq l(t,x)\leq c_{1},\]
\[(|u|^{2}_{D^{2}}+|h\nabla^{2}u|^{2}_{2}+|u_{t}|^{2}_{2})(t)+\int_{0}^{t}(|u|^{2 }_{D^{3}}+|h\nabla^{2}u|^{2}_{D^{4}_{*}}+|u_{s}|^{2}_{D^{1}_{*}})\mathrm{d}s\leq c_{3}^{2},\]
\[(|u|^{2}_{D^{4}_{*}}+|\sqrt{h}\nabla u_{t}|^{2}_{2}+|u|^{2}_{D^{3}}+|h\nabla^{2 }u|^{2}_{D^{4}_{*}})(t)+\int_{0}^{t}|u_{s}|^{2}_{D^{2}}\mathrm{d}s\leq c_{4}^{2},\]
\[\int_{0}^{t}(|u_{ss}|^{2}_{2}+|u|^{2}_{D^{4}}+|h\nabla^{2}u|^{2}_{D^{2}}+|(h \nabla^{2}u)_{s}|^{2}_{2})\mathrm{d}s\leq c_{4}^{2},\]
\[t|u_{t}(t)|^{2}_{D^{2}}+t|h\nabla^{2}u_{t}(t)|^{2}_{2}+t|u_{tt}(t)|^{2}_{2}+t| u(t)|^{2}_{D^{4}}\leq c_{5}^{2},\]
\[\int_{0}^{t}s(|u_{ss}|^{2}_{D^{4}_{*}}+|h\nabla^{3}u_{s}|^{2}_{2}+|\sqrt{h}u_{ ss}|^{2}_{D^{1}_{*}})\mathrm{d}s\leq c_{5}^{2}\]
for \(0\leq t\leq T^{*}\). These are the estimates (3.137), and they are uniformly bounded with respect to both \(\epsilon\) and \(\eta\).
### Vanishing of the artificial dissipations
By the uniform estimates (3.137), one can now obtain the local well-posedness of (3.1) with \(\epsilon=0\) and \(\phi_{0}^{\eta}\geq\eta\) for any constant \(\eta>0\). For simplicity, denote by \(B_{R}\) the ball centered at the origin with radius \(R\).
**Lemma 3.13.** _Let (1.17) hold. Assume that \((\phi_{0},u_{0},l_{0},h_{0})\) satisfy (2.7)-(2.8), and that there exists a constant \(c_{0}>1\) independent of \(\eta\) such that (3.5) holds. Then there exist a time \(T^{*}>0\), independent of \(\eta\), and a unique strong solution \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) in \([0,T^{*}]\times\mathbb{R}^{3}\) to (3.1) with \(\epsilon=0\) satisfying (3.4) with \(T\) replaced by \(T^{*}\). Moreover, the estimates (3.137) hold for \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) uniformly (independent of \(\eta\))._
Proof. The well-posedness of (3.1) with \(\epsilon=0\) can be proved as follows:
**Step 1:** Existence. First, it follows from Lemmas 3.1-3.12 that for every \(\epsilon>0\) and \(\eta>0\), there exist a time \(T^{*}>0\), independent of \((\epsilon,\eta)\), and a unique strong solution \((\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta})( t,x)\) in \([0,T^{*}]\times\mathbb{R}^{3}\) to (3.1) satisfying the estimates in (3.137) which are independent of \((\epsilon,\eta)\).
Second, by using the characteristic method and the standard energy estimates for (3.1)\({}_{4}\), one can show that for \(0\leq t\leq T^{*}\),
\[|h^{\epsilon,\eta}(t)|_{\infty}+|\nabla h^{\epsilon,\eta}(t)|_{2}+|h^{ \epsilon,\eta}_{t}(t)|_{2}\leq C(A,R,c_{v},\digamma,\eta,\alpha,\beta,\gamma, \delta,T^{*},c_{0}). \tag{3.138}\]
Thus, it follows from (3.137)-(3.138) and Lemma 4.2 that for any \(R>0\), there exists a subsequence of solutions, still denoted by \((\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta})\), which converges to a limit \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) as \(\epsilon\to 0\) in the following strong sense:
\[(\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta}) \rightarrow(\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\quad\text{in}\ \ C([0,T^{*}];H^{2}(B_{R})), \tag{3.139}\]
and in the following weak or weak* sense:
\[(\phi^{\epsilon,\eta}-\eta,u^{\epsilon,\eta})\rightharpoonup( \phi^{\eta}-\eta,u^{\eta})\quad\text{weakly*}\ \ \text{in}\ \ L^{\infty}([0,T^{*}];H^{3}),\] \[(\phi^{\epsilon,\eta}_{t},\psi^{\epsilon,\eta},h^{\epsilon,\eta} _{t})\rightharpoonup(\phi^{\eta}_{t},\psi^{\eta},h^{\eta}_{t})\quad\text{ weakly*}\ \ \text{in}\ \ L^{\infty}([0,T^{*}];H^{2}),\] \[u^{\epsilon,\eta}_{t}\rightharpoonup u^{\eta}_{t}\quad\text{ weakly*}\ \ \text{in}\ \ L^{\infty}([0,T^{*}];H^{1}),\] \[t^{\frac{1}{2}}(\nabla^{2}u^{\epsilon,\eta}_{t},\nabla^{4}u^{ \epsilon,\eta})\rightharpoonup t^{\frac{1}{2}}(\nabla^{2}u^{\eta}_{t},\nabla^ {4}u^{\eta})\quad\text{weakly*}\ \ \text{in}\ \ L^{\infty}([0,T^{*}];L^{2}),\] \[(\phi^{\epsilon,\eta}_{tt},t^{\frac{1}{2}}u^{\epsilon,\eta}_{tt}) \rightharpoonup(\phi^{\eta}_{tt},t^{\frac{1}{2}}u^{\eta}_{tt})\quad\text{ weakly*}\ \ \text{in}\ \ L^{\infty}([0,T^{*}];L^{2}),\] \[\nabla u^{\epsilon,\eta}\rightharpoonup\nabla u^{\eta}\quad\text{ weakly}\ \ \text{in}\ \ L^{2}([0,T^{*}];H^{3}),\] \[(u^{\epsilon,\eta}_{t},\nabla l^{\epsilon,\eta})\rightharpoonup(u^{ \eta}_{t},\nabla l^{\eta})\quad\text{weakly}\ \ \text{in}\ \ L^{2}([0,T^{*}];H^{2}),\] \[\phi^{\epsilon,\eta}_{tt}\rightharpoonup\phi^{\eta}_{tt}\quad \text{weakly}\ \ \text{in}\ \ L^{2}([0,T^{*}];H^{1}),\] \[(\psi^{\epsilon,\eta}_{tt},u^{\epsilon,\eta}_{tt})\rightharpoonup( \psi^{\eta}_{tt},u^{\eta}_{tt})\quad\text{weakly}\ \ \ \text{in}\ \ L^{2}([0,T^{*}];L^{2}), \tag{3.140}\] \[t^{\frac{1}{2}}(\nabla u^{\epsilon,\eta}_{tt},\nabla^{3}u^{ \epsilon,\eta}_{t})\rightharpoonup t^{\frac{1}{2}}(\nabla u^{\eta}_{tt},\nabla^ {3}u^{\eta}_{t})\quad\text{weakly}\ \ \text{in}\ \ L^{2}([0,T^{*}];L^{2}),\] \[l^{\epsilon,\eta}-\bar{l}\rightharpoonup l^{\eta}-\bar{l}\quad\text{ weakly*}\ \ \text{in}\ \ L^{\infty}([0,T^{*}];D^{1}_{*}\cap D^{3}),\] \[(\xi^{\epsilon,\eta},l^{\epsilon,\eta}_{t})\rightharpoonup(\nabla(h^ {\eta})^{\frac{3}{4}},l^{\eta}_{t})\quad\text{weakly*}\ \ \text{in}\ \ L^{\infty}([0,T^{*}];D^{1}_{*}),\] \[\zeta^{\epsilon,\eta}\rightharpoonup\nabla(h^{\eta})^{\frac{3}{8}} \quad\text{weakly*}\ \ \text{in}\ \ L^{\infty}([0,T^{*}];L^{4}),\] \[l^{\epsilon,\eta}_{t}\rightharpoonup l^{\eta}_{t}\quad\text{ weakly}\ \ \ \text{in}\ \ L^{2}([0,T^{*}];D^{1}_{*}\cap D^{2}),\] \[t^{\frac{1}{2}}\nabla^{2}l^{\epsilon,\eta}_{t}\rightharpoonup t^{ \frac{1}{2}}\nabla^{2}l^{\eta}_{t}\quad\text{weakly}\ \ \text{in}\ \ L^{\infty}([0,T^{*}];L^{2}),\] \[t^{\frac{1}{2}}\nabla l^{\epsilon,\eta}_{tt}\rightharpoonup t^{ \frac{1}{2}}\nabla l^{\eta}_{tt}\quad\text{weakly}\ \ \text{in}\ \ L^{2}([0,T^{*}];L^{2}).\]
Then it follows from the lower semi-continuity of norms under weak or weak* convergence that \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) also satisfies the estimates in (3.137)-(3.138), except the weighted estimates on \((u^{\eta},l^{\eta})\), which, along with (3.139)-(3.140), yields that as \(\epsilon\to 0\),
\[\begin{array}{ll}\sqrt{h^{\epsilon,\eta}}\nabla u^{\epsilon,\eta}\rightharpoonup \sqrt{h^{\eta}}\nabla u^{\eta}&\text{weakly* \ in \ }L^{\infty}([0,T^{*}];L^{2}),\\ \sqrt{h^{\epsilon,\eta}}\nabla u^{\epsilon,\eta}_{t}\rightharpoonup\sqrt{h^{ \eta}}\nabla u^{\eta}_{t}&\text{weakly* \ in \ }L^{\infty}([0,T^{*}];L^{2}),\\ h^{\epsilon,\eta}\nabla^{2}u^{\epsilon,\eta}\rightharpoonup h^{\eta}\nabla^{2 }u^{\eta}&\text{weakly* \ in \ }L^{\infty}([0,T^{*}];H^{1}),\\ (h^{\epsilon,\eta}\nabla^{2}u^{\epsilon,\eta})_{t}\rightharpoonup(h^{\eta} \nabla^{2}u^{\eta})_{t}&\text{weakly \ in \ }L^{2}([0,T^{*}];L^{2}),\\ h^{\epsilon,\eta}\nabla^{2}u^{\epsilon,\eta}\rightharpoonup h^{\eta}\nabla^{2 }u^{\eta}&\text{weakly \ in \ }L^{2}([0,T^{*}];D^{1}_{*}\cap D^{2}),\\ t^{\frac{1}{2}}h^{\epsilon,\eta}\nabla^{2}u^{\epsilon,\eta}_{t}\rightharpoonup t ^{\frac{1}{2}}h^{\eta}\nabla^{2}u^{\eta}_{t}&\text{weakly* \ in \ }L^{\infty}([0,T^{*}];L^{2}),\\ t^{\frac{1}{2}}h^{\epsilon,\eta}\nabla^{3}u^{\epsilon,\eta}_{t}\rightharpoonup t ^{\frac{1}{2}}h^{\eta}\nabla^{3}u^{\eta}_{t}&\text{weakly \ in \ }L^{2}([0,T^{*}];L^{2}),\\ t^{\frac{1}{2}}h^{\epsilon,\eta}\nabla^{4}u^{\epsilon,\eta}\rightharpoonup t ^{\frac{1}{2}}h^{\eta}\nabla^{4}u^{\eta}&\text{weakly \ in \ }L^{2}([0,T^{*}];L^{2}),\\ t^{\frac{1}{2}}\sqrt{h^{\epsilon,\eta}}\nabla u^{\epsilon,\eta}_{tt}\rightharpoonup t ^{\frac{1}{2}}\sqrt{h^{\eta}}\nabla u^{\eta}_{tt}&\text{weakly \ in \ }L^{2}([0,T^{*}];L^{2}),\\ (h^{\epsilon,\eta})^{\frac{1}{4}}\nabla l^{\epsilon,\eta}\rightharpoonup(h^{ \eta})^{\frac{1}{4}}\nabla l^{\eta}&\text{weakly* \ in \ }L^{\infty}([0,T^{*}];L^{2}),\\ (h^{\epsilon,\eta})^{\frac{1}{4}}\nabla l^{\epsilon,\eta}_{t}\rightharpoonup(h^ {\eta})^{\frac{1}{4}}\nabla l^{\eta}_{t}&\text{weakly* \ in \ }L^{\infty}([0,T^{*}];L^{2}),\\ \sqrt{h^{\epsilon,\eta}}\nabla^{2}l^{\epsilon,\eta}\rightharpoonup\sqrt{h^{ \eta}}\nabla^{2}l^{\eta}&\text{weakly* \ in \ }L^{\infty}([0,T^{*}];H^{1}),\\ (\sqrt{h^{\epsilon,\eta}}\nabla^{2}l^{\epsilon,\eta})_{t}\rightharpoonup( \sqrt{h^{\eta}}\nabla^{2}l^{\eta})_{t}&\text{weakly \ in \ }L^{2}([0,T^{*}];L^{2}),\\ \sqrt{h^{\epsilon,\eta}}\nabla^{2}l^{\epsilon,\eta}\rightharpoonup\sqrt{h^{ \eta}}\nabla^{2}l^{\eta}&\text{weakly \ in \ }L^{2}([0,T^{*}];D^{1}_{*}),\\ (h^{\epsilon,\eta})^{-\frac{1}{4}}l^{\epsilon,\eta}_{t}\rightharpoonup(h^{ \eta})^{-\frac{1}{4}}l^{\eta}_{t}&\text{weakly* \ in \ }L^{\infty}([0,T^{*}];L^{2}),\\ (h^{\epsilon,\eta})^{-\frac{1}{4}}(l^{\epsilon,\eta}_{t},l^{\epsilon,\eta}_{t} )\rightharpoonup(h^{\eta})^{-\frac{1}{4}}(l^{\eta}_{t},l^{\eta}_{tt})&\text{ weakly \ in \ }L^{2}([0,T^{*}];L^{2}),\\ t^{\frac{1}{2}}\sqrt{h^{\epsilon,\eta}}\nabla^{2}l^{\epsilon,\eta}_{t}\rightharpoonup t ^{\frac{1}{2}}\sqrt{h^{\eta}}\nabla^{2}l^{\eta}_{t}&\text{weakly* \ in \ }L^{\infty}([0,T^{*}];L^{2}),\\ t^{\frac{1}{2}}(h^{\epsilon,\eta})^{\frac{1}{4}}\nabla l^{\epsilon,\eta}_{tt} \rightharpoonup t^{\frac{1}{2}}(h^{\eta})^{\frac{1}{4}}\nabla l^{\eta}_{tt}& \text{weakly}&\text{ in \ }L^{2}([0,T^{*}];L^{2}),\\ t^{\frac{1}{2}}(h^{\epsilon,\eta})^{-\frac{1}{4}}l^{\epsilon,\eta}_{tt} \rightharpoonup t^{\frac{1}{2}}(h^{\eta})^{-\frac{1}{4}}l^{\eta}_{tt}&\text{ weakly* \ in \ }L^{\infty}([0,T^{*}];L^{2}).\end{array}\]
This, together with the lower semi-continuity of norms under weak or weak* convergence, implies that \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) also satisfies the uniform weighted estimates on \((u^{\eta},l^{\eta})\).
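The lower semi-continuity invoked here is the standard fact that norms are lower semi-continuous along weakly (and weakly*) convergent sequences; schematically, for each weighted quantity above,

\[f_{\epsilon}\rightharpoonup f\ \ \text{weakly or weakly* in }L^{p}([0,T^{*}];X)\ \Longrightarrow\ \|f\|_{L^{p}([0,T^{*}];X)}\leq\liminf_{\epsilon\to 0}\|f_{\epsilon}\|_{L^{p}([0,T^{*}];X)},\]

so each uniform bound in (3.137) passes from \((\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta})\) to the limit via (3.140)-(3.141).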
Next we show that \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) is a weak solution in the sense of distributions to (3.1) with \(\epsilon=0\). First, multiplying (3.1)\({}_{3}\) by any given \(\mathcal{Y}(t,x)\in C^{\infty}_{c}([0,T^{*})\times\mathbb{R}^{3})\) on both sides, and integrating over \([0,t)\times\mathbb{R}^{3}\) for \(t\in(0,T^{*}]\), one has
\[\begin{split}&\int_{0}^{t}\int_{\mathbb{R}^{3}}\Big{(}l^{\epsilon, \eta}\big{(}(h^{\epsilon,\eta})^{-\frac{1}{2}}\mathcal{Y}\big{)}_{s}-(h^{ \epsilon,\eta})^{-\frac{1}{2}}(v\cdot\nabla)l^{\epsilon,\eta}\mathcal{Y}\Big{)} \mathrm{d}x\mathrm{d}s\\ =&\int(h^{\epsilon,\eta})^{-\frac{1}{2}}l^{\epsilon,\eta} \mathcal{Y}(t,x)-\int(h^{\eta}_{0})^{-\frac{1}{2}}l_{0}\mathcal{Y}(0,x)\\ &-\int_{0}^{t}\int_{\mathbb{R}^{3}}\big{(}a_{4}w^{\nu}((h^{\epsilon, \eta})^{2}+\epsilon^{2})^{\frac{1}{4}}\triangle l^{\epsilon,\eta}+a_{5}w^{ \nu}n^{\epsilon,\eta}g^{\frac{3}{2}}H(v)\big{)}\mathcal{Y}\mathrm{d}x\mathrm{d}s \\ &-\int_{0}^{t}\int_{\mathbb{R}^{3}}\Big{(}a_{6}w^{\nu+1}(h^{ \epsilon,\eta})^{-\frac{1}{2}}\mathrm{div}\psi^{\epsilon,\eta}+\Pi(l^{\epsilon, \eta},h^{\epsilon,\eta},w,g)\Big{)}\mathcal{Y}\mathrm{d}x\mathrm{d}s.\end{split} \tag{3.142}\]
It follows from the uniform estimates obtained above and (3.139)-(3.141) that one can take the limit \(\epsilon\to 0\) in (3.142) to get
\[\begin{split}&\int_{0}^{t}\int_{\mathbb{R}^{3}}\Big{(}l^{\eta} \big{(}(h^{\eta})^{-\frac{1}{2}}\mathcal{Y}\big{)}_{s}-(h^{\eta})^{-\frac{1}{2} }(v\cdot\nabla)l^{\eta}\mathcal{Y}\Big{)}\mathrm{d}x\mathrm{d}s\\ =&\int(h^{\eta})^{-\frac{1}{2}}l^{\eta}\mathcal{Y}(t,x)-\int(h_{0}^{\eta})^{-\frac{1}{2}}l_{0}\mathcal{Y}(0,x)-\int_{0}^{t}\int_{ \mathbb{R}^{3}}\Big{(}a_{4}w^{\nu}\sqrt{h^{\eta}}\triangle l^{\eta}\\ &+a_{5}w^{\nu}n^{\eta}g^{\frac{3}{2}}H(v)+a_{6}w^{\nu+1}(h^{\eta })^{-\frac{1}{2}}\mathrm{div}\psi^{\eta}+\Pi(l^{\eta},h^{\eta},w,g)\Big{)} \mathcal{Y}\mathrm{d}x\mathrm{d}s.\end{split} \tag{3.143}\]
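The limit passage from (3.142) to (3.143) pairs one strongly and one weakly convergent factor in each nonlinear term; a minimal sketch of the mechanism, where \(a_{\epsilon}\) and \(b_{\epsilon}\) are generic placeholders:

\[a_{\epsilon}\to a\ \text{in }C([0,T^{*}];H^{2}(B_{R})),\quad b_{\epsilon}\rightharpoonup b\ \text{weakly in }L^{2}([0,T^{*}];L^{2})\ \Longrightarrow\ \int_{0}^{t}\!\!\int a_{\epsilon}\,b_{\epsilon}\,\mathcal{Y}\,\mathrm{d}x\mathrm{d}s\to\int_{0}^{t}\!\!\int a\,b\,\mathcal{Y}\,\mathrm{d}x\mathrm{d}s,\]

since \(a_{\epsilon}\mathcal{Y}\to a\mathcal{Y}\) strongly in \(L^{2}([0,T^{*}];L^{2})\) for \(\mathcal{Y}\in C_{c}^{\infty}\) supported in \(B_{R}\); this applies, for instance, to \(a_{\epsilon}=((h^{\epsilon,\eta})^{2}+\epsilon^{2})^{\frac{1}{4}}\) and \(b_{\epsilon}=\triangle l^{\epsilon,\eta}\).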
Similarly, one can show that \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) satisfies also the equations (3.1)\({}_{1}\)-(3.1)\({}_{2}\), (3.1)\({}_{4}\) and the initial data in the sense of distributions. Then \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) is a weak solution to (3.1) with \(\epsilon=0\) in the sense of distributions and satisfying
\[\begin{split}&\phi^{\eta}-\eta\in L^{\infty}([0,T^{*}];H^{3}), \quad h^{\eta}\in L^{\infty}([0,T^{*}]\times\mathbb{R}^{3}),\\ &(\nabla h^{\eta},h_{t}^{\eta})\in L^{\infty}([0,T^{*}];H^{2}), \quad u^{\eta}\in L^{\infty}([0,T^{*}];H^{3})\cap L^{2}([0,T^{*}];H^{4}),\\ & u_{t}^{\eta}\in L^{\infty}([0,T^{*}];H^{1})\cap L^{2}([0,T^{*} ];D^{2}),\ \ u_{tt}^{\eta}\in L^{2}([0,T^{*}];L^{2}),\\ & t^{\frac{1}{2}}u^{\eta}\in L^{\infty}([0,T^{*}];D^{4}),\quad t ^{\frac{1}{2}}u_{t}^{\eta}\in L^{\infty}([0,T^{*}];D^{2})\cap L^{2}([0,T^{*}] ;D^{3}),\\ & t^{\frac{1}{2}}u_{tt}^{\eta}\in L^{\infty}([0,T^{*}];L^{2})\cap L ^{2}([0,T^{*}];D_{*}^{1}),\quad l^{\eta}-\bar{l}\in L^{\infty}([0,T^{*}];D_{*} ^{1}\cap D^{3}),\\ & l_{t}^{\eta}\in L^{\infty}([0,T^{*}];D_{*}^{1})\cap L^{2}([0,T^ {*}];D^{2}),\quad l_{tt}^{\eta}\in L^{2}([0,T^{*}];L^{2}),\\ & t^{\frac{1}{2}}l_{t}^{\eta}\in L^{\infty}([0,T^{*}];D^{2}), \quad t^{\frac{1}{2}}l_{tt}^{\eta}\in L^{\infty}([0,T^{*}];L^{2})\cap L^{2}([0,T^{*}];D_{*}^{1}).\end{split}\]
Therefore, this weak solution \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) is actually a strong one.
**Step 2:** Uniqueness and time continuity. Since \(h^{\eta}>\frac{1}{2c_{0}}\), the uniqueness and the time continuity of the solution obtained above follow from the same arguments as in Lemma 3.1.
Thus the proof of Lemma 3.13 is complete.
### Nonlinear approximation solutions away from vacuum
In this subsection, we will prove the local well-posedness of the classical solution to the following Cauchy problem under the assumption that \(\phi_{0}^{\eta}\geq\eta\):
\[\begin{cases}&\phi_{t}^{\eta}+u^{\eta}\cdot\nabla\phi^{\eta}+(\gamma-1)\phi^{\eta}\mathrm{div}u^{\eta}=0,\\ & u_{t}^{\eta}+u^{\eta}\cdot\nabla u^{\eta}+a_{1}\phi^{\eta}\nabla l^{\eta}+l^{\eta}\nabla\phi^{\eta}+a_{2}(l^{\eta})^{\nu}h^{\eta}Lu^{\eta}\\ =& a_{2}h^{\eta}\nabla(l^{\eta})^{\nu}\cdot Q(u^{\eta})+a_{3}(l^{\eta})^{\nu}\psi^{\eta}\cdot Q(u^{\eta}),\\ &(\phi^{\eta})^{-\iota}(l_{t}^{\eta}+u^{\eta}\cdot\nabla l^{\eta})-a_{4}(\phi^{\eta})^{\iota}(l^{\eta})^{\nu}\triangle l^{\eta}\\ =& a_{5}(l^{\eta})^{\nu}n^{\eta}(\phi^{\eta})^{3\iota}H(u^{\eta})+a_{6}(l^{\eta})^{\nu+1}(\phi^{\eta})^{-\iota}\mathrm{div}\psi^{\eta}+\Theta(\phi^{\eta},l^{\eta},\psi^{\eta}),\\ & h_{t}^{\eta}+u^{\eta}\cdot\nabla h^{\eta}+(\delta-1)(\phi^{\eta})^{2\iota}\mathrm{div}u^{\eta}=0,\\ &(\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})|_{t=0}=(\phi_{0}^{\eta},u_{0}^{\eta},l_{0}^{\eta},h_{0}^{\eta})=(\phi_{0}+\eta,u_{0},l_{0},(\phi_{0}+\eta)^{2\iota})\text{ in }\mathbb{R}^{3},\\ &(\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\rightarrow(\eta,0,\bar{l},\eta^{2\iota})\quad\text{as }\,|x|\rightarrow\infty\quad\text{for}\quad t\geq 0,\end{cases} \tag{3.144}\]
where \(\psi^{\eta}=\frac{a\delta}{\delta-1}\nabla h^{\eta}\) and \(n^{\eta}=(ah^{\eta})^{b}\). For simplicity, in the rest of this subsection, \(C\) will denote a positive generic constant independent of \(\eta\) and \(k\).
**Theorem 3.1.** _Let (1.17) hold and \(\eta>0\). Assume that the initial data \((\phi_{0},u_{0},l_{0},h_{0})\) satisfy (2.7)-(2.8), and that (3.5) holds with a constant \(c_{0}>0\) independent of \(\eta\). Then there exist a time \(T_{*}>0\), independent of \(\eta\), and a unique strong solution \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta}=(\phi^{\eta})^{2\iota})\) in \([0,T_{*}]\times\mathbb{R}^{3}\) to (3.144) satisfying (3.4). Moreover, the uniform estimates (3.137), independent of \(\eta\), hold for \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) with \(T^{*}\) replaced by \(T_{*}\)._
The proof is given by an iteration scheme described below.
Let \((\phi^{0},u^{0},l^{0},h^{0})\) be the solution to the following Cauchy problem
\[\begin{cases}U_{t}+u_{0}\cdot\nabla U=0,\ \ \text{in}\ \ (0,\infty)\times \mathbb{R}^{3},\\ Y_{t}-W\triangle Y=0,\ \ \text{in}\ \ (0,\infty)\times\mathbb{R}^{3},\\ W^{-\frac{1}{2}}Z_{t}-W^{\frac{1}{2}}\triangle Z=0,\ \ \text{in}\ \ (0,\infty)\times \mathbb{R}^{3},\\ W_{t}+u_{0}\cdot\nabla W=0,\ \ \text{in}\ \ (0,\infty)\times\mathbb{R}^{3},\\ (U,Y,Z,W)|_{t=0}=(\phi_{0}^{\eta},u_{0}^{\eta},l_{0}^{\eta},h_{0}^{\eta})=( \phi_{0}+\eta,u_{0},l_{0},(\phi_{0}+\eta)^{2\iota})\ \ \text{in}\ \ \mathbb{R}^{3},\\ (U,Y,Z,W)\rightarrow(\eta,0,\bar{l},\eta^{2\iota})\ \ \text{as}\ \ |x|\to\infty\ \ \ \text{for}\ \ \ \text{t}\geq 0.\end{cases} \tag{3.145}\]
Choose a time \(\bar{T}\in(0,T^{*}]\) small enough such that the uniform estimates (independent of \(\eta\)) (3.137) hold for \((\phi^{0},u^{0},l^{0},h^{0},\psi^{0}=\frac{a\delta}{\delta-1}\nabla h^{0})\) with \(T^{*}\) replaced by \(\bar{T}\).
Proof. **Step 1:** Existence. One starts with the initial iteration \((v,w,g)=(u^{0},l^{0},h^{0})\) and obtains a classical solution \((\phi^{1},u^{1},l^{1},h^{1})\) to (3.1) with \(\epsilon=0\). Inductively, given \((u^{k},l^{k},h^{k})\) for \(k\geq 1\), define \((\phi^{k+1},u^{k+1},l^{k+1},h^{k+1})\) by solving the following problem:
\[\begin{cases}&\phi_{t}^{k+1}+u^{k}\cdot\nabla\phi^{k+1}+(\gamma-1)\phi^{k+1} \text{div}u^{k}=0,\\ &(l^{k+1})^{-\nu}(u_{t}^{k+1}+u^{k}\cdot\nabla u^{k}+a_{1}\phi^{k+1}\nabla l^ {k+1}+l^{k+1}\nabla\phi^{k+1})\\ &+a_{2}h^{k+1}Lu^{k+1}=a_{2}(l^{k+1})^{-\nu}h^{k}\nabla(l^{k+1})^{\nu}\cdot Q (u^{k})+a_{3}\psi^{k+1}\cdot Q(u^{k}),\\ &(h^{k+1})^{-\frac{1}{2}}(l_{t}^{k+1}+u^{k}\cdot\nabla l^{k+1})-a_{4}(h^{k+1} )^{\frac{1}{2}}(l^{k})^{\nu}\triangle l^{k+1}\\ =&a_{5}(l^{k})^{\nu}n^{k+1}(h^{k})^{\frac{3}{2}}H(u^{k})+a_{6}(l^{k})^{\nu+1} (h^{k+1})^{-\frac{1}{2}}\text{div}\psi^{k+1}+\Pi^{k+1},\\ &h_{t}^{k+1}+u^{k}\cdot\nabla h^{k+1}+(\delta-1)h^{k}\text{div}u^{k}=0,\\ &(\phi^{k+1},u^{k+1},l^{k+1},h^{k+1})|_{t=0}=(\phi_{0}^{\eta},u_{0}^{\eta},l_{0 }^{\eta},h_{0}^{\eta})\\ =&(\phi_{0}+\eta,u_{0},l_{0},(\phi_{0}+\eta)^{2\iota})\ \ \text{in}\ \ \mathbb{R}^{3},\\ &(\phi^{k+1},u^{k+1},l^{k+1},h^{k+1})\longrightarrow(\eta,0,\bar{l},\eta^{2 \iota})\ \ \text{as}\ \ |x|\to\infty\ \ \ \text{for}\ \ \ \text{t}\geq 0,\end{cases} \tag{3.146}\]
where \(\psi^{k+1}=\frac{a\delta}{\delta-1}\nabla h^{k+1}\), \(n^{k+1}=(ah^{k+1})^{b}\) and
\[\begin{split}\Pi^{k+1}=& a_{7}(l^{k})^{\nu+1}(h^{k+1})^{- \frac{3}{2}}\psi^{k+1}\cdot\psi^{k+1}+a_{8}(l^{k})^{\nu}(h^{k+1})^{-\frac{1}{2 }}\nabla l^{k+1}\cdot\psi^{k+1}\\ &+a_{9}(l^{k})^{\nu-1}(h^{k})^{\frac{1}{2}}\nabla l^{k}\cdot\nabla l ^{k}.\end{split} \tag{3.147}\]
It follows from Lemma 3.13 with \((v,w,g)\) replaced by \((u^{k},l^{k},h^{k})\) and mathematical induction that one can solve (3.146) locally in time to get \((\phi^{k+1},u^{k+1},l^{k+1},h^{k+1})\) satisfying the uniform estimates (3.137). Moreover, \(\psi^{k+1}\) solves
\[\psi^{k+1}_{t}+\nabla(u^{k}\cdot\psi^{k+1})+(\delta-1)\psi^{k}\mathrm{div}u^{k}+ a\delta h^{k}\nabla\mathrm{div}u^{k}=0. \tag{3.148}\]
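Indeed, (3.148) is obtained by applying \(\frac{a\delta}{\delta-1}\nabla\) to \((3.146)_{4}\) and using \(\psi^{k+1}=\frac{a\delta}{\delta-1}\nabla h^{k+1}\) together with \(a\delta\nabla h^{k}=(\delta-1)\psi^{k}\):

\[\tfrac{a\delta}{\delta-1}\nabla\big(h^{k+1}_{t}+u^{k}\cdot\nabla h^{k+1}+(\delta-1)h^{k}\mathrm{div}u^{k}\big)=\psi^{k+1}_{t}+\nabla(u^{k}\cdot\psi^{k+1})+(\delta-1)\psi^{k}\mathrm{div}u^{k}+a\delta\,h^{k}\nabla\mathrm{div}u^{k}=0.\]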
To show the strong convergence of \((\phi^{k},u^{k},l^{k},\psi^{k})\), we set
\[\begin{split}\bar{\phi}^{k+1}&=\phi^{k+1}-\phi^{k}, \ \ \bar{u}^{k+1}=u^{k+1}-u^{k},\ \ \bar{l}^{k+1}=l^{k+1}-l^{k},\\ \bar{\psi}^{k+1}&=\psi^{k+1}-\psi^{k},\ \ \bar{h}^{k+1}=h^{k+1}-h^{k},\ \ \bar{n}^{k+1}=n^{k+1}-n^{k}.\end{split}\]
Then (3.146) and (3.148) yield
\[\begin{cases}\bar{\phi}^{k+1}_{t}+u^{k}\cdot\nabla\bar{\phi}^{k+1}+\bar{u}^{k }\cdot\nabla\phi^{k}+(\gamma-1)(\bar{\phi}^{k+1}\mathrm{div}u^{k}+\phi^{k} \mathrm{div}\bar{u}^{k})=0,\\ (l^{k+1})^{-\nu}\bar{u}^{k+1}_{t}+a_{2}h^{k+1}L\bar{u}^{k+1}+a_{2}\bar{h}^{k +1}Lu^{k}=\sum_{i=1}^{4}\bar{\mathcal{U}}^{k+1}_{i},\\ (h^{k+1})^{-\frac{1}{2}}\bar{l}^{k+1}_{t}-a_{4}\sqrt{h^{k+1}}(l^{k})^{\nu} \triangle\bar{l}^{k+1}=\sum_{i=1}^{4}\bar{\mathcal{L}}^{k+1}_{i}+\bar{\Pi}^{k+ 1},\\ \bar{\psi}^{k+1}_{t}+\nabla(u^{k}\cdot\bar{\psi}^{k+1}+\bar{u}^{k} \cdot\psi^{k})+(\delta-1)(\bar{\psi}^{k}\mathrm{div}u^{k}+\psi^{k-1}\mathrm{ div}\bar{u}^{k})\\ +a\delta(h^{k}\nabla\mathrm{div}\bar{u}^{k}+\bar{h}^{k}\nabla \mathrm{div}u^{k-1})=0,\\ (\bar{\phi}^{k+1},\bar{u}^{k+1},\bar{l}^{k+1},\bar{\psi}^{k+1})|_{t=0}=(0,0,0, 0)\ \ \text{in}\ \ \mathbb{R}^{3},\\ (\bar{\phi}^{k+1},\bar{u}^{k+1},\bar{l}^{k+1},\bar{\psi}^{k+1})\longrightarrow( 0,0,0,0)\quad\text{as}\ \ |x|\rightarrow\infty\quad\text{for}\quad\text{t}\geq 0,\end{cases} \tag{3.149}\]
where
\[\begin{split}\bar{\mathcal{U}}^{k+1}_{1}=&-(l^{k+1})^{- \nu}(u^{k}\cdot\nabla\bar{u}^{k}+\bar{u}^{k}\cdot\nabla u^{k-1})\\ &-\big{(}(l^{k+1})^{-\nu}-(l^{k})^{-\nu}\big{)}(u^{k}_{t}+u^{k-1} \cdot\nabla u^{k-1}),\\ \bar{\mathcal{U}}^{k+1}_{2}=&-(l^{k+1})^{-\nu}(a_{1}\bar{ \phi}^{k+1}\nabla l^{k+1}+a_{1}\phi^{k}\nabla\bar{l}^{k+1}+\bar{l}^{k+1}\nabla \phi^{k+1}+l^{k}\nabla\bar{\phi}^{k+1})\\ &-\big{(}(l^{k+1})^{-\nu}-(l^{k})^{-\nu}\big{)}(a_{1}\phi^{k} \nabla l^{k}+l^{k}\nabla\phi^{k}),\\ \bar{\mathcal{U}}^{k+1}_{3}=& a_{2}(l^{k+1})^{-\nu}\Big{(}h^{k}\big{(}\nabla(l^{k+1})^{\nu}-\nabla(l^{k})^{ \nu}\big{)}\cdot Q(u^{k})+h^{k}\nabla(l^{k})^{\nu}\cdot Q(\bar{u}^{k})\\ &+\bar{h}^{k}\nabla(l^{k})^{\nu}\cdot Q(u^{k-1})\Big{)}+a_{3}\bar{ \psi}^{k+1}\cdot Q(u^{k})+a_{3}\psi^{k}\cdot Q(\bar{u}^{k}),\\ \bar{\mathcal{U}}^{k+1}_{4}=& a_{2}\big{(}(l^{k+1})^{-\nu}-(l^{k})^{-\nu}\big{)}h^{k-1}\nabla(l^{k})^{\nu} \cdot Q(u^{k-1}),\\ \bar{\mathcal{L}}^{k+1}_{1}=&-(h^{k+1})^{-\frac{1}{2}}(u^{k} \cdot\nabla\bar{l}^{k+1}+\bar{u}^{k}\cdot\nabla l^{k})\\ &-((h^{k+1})^{-\frac{1}{2}}-(h^{k})^{-\frac{1}{2}})(l^{k}_{t}+u^{k-1} \cdot\nabla l^{k}),\\ \bar{\mathcal{L}}^{k+1}_{2}=& a_{4}\big{(}\sqrt{h^{k+1}}((l^{k})^{\nu}-(l^{k-1})^{\nu})+(\sqrt{h^{k+1}}- \sqrt{h^{k}})(l^{k-1})^{\nu}\big{)}\triangle l^{k},\\ \bar{\mathcal{L}}^{k+1}_{3}=& a_{5}(l^{k})^{\nu}n^{k+1}\big{(}(h^{k})^{\frac{3}{2}}(H(u^{k})-H(u^{k-1}))+((h^{k})^{ \frac{3}{2}}-(h^{k-1})^{\frac{3}{2}})H(u^{k-1})\big{)}\\ &+a_{5}(h^{k-1})^{\frac{3}{2}}H(u^{k-1})\big{(}(l^{k})^{\nu}\bar{n}^{k+1}+(( l^{k})^{\nu}-(l^{k-1})^{\nu}n^{k}),\\ \bar{\mathcal{L}}^{k+1}_{4}=& a_{6}(l^{k})^{\nu+1}\big{(}(h^{k+1})^{-\frac{1}{2}}\mathrm{ div}\bar{\psi}^{k+1}+((h^{k+1})^{-\frac{1}{2}}-(h^{k})^{-\frac{1}{2}})\mathrm{ div}\psi^{k}\big{)}\\ &+a_{6}((l^{k})^{\nu+1}-(l^{k-1})^{\nu+1})(h^{k})^{-\frac{1}{2}}\mathrm{ div}\psi^{k},\end{split}\]
\[\begin{split}\bar{\Pi}^{k+1}=& a_{7}(l^{k})^{\nu+1}((h^{k+1})^{-\frac{3}{2}}\bar{\psi}^{k+1}\cdot(\psi^{k+1}+\psi^{k})+((h^{k+1})^{-\frac{3}{2}}-(h^{k})^{-\frac{3}{2}})\psi^{k}\cdot\psi^{k})\\ &+a_{7}((l^{k})^{\nu+1}-(l^{k-1})^{\nu+1})(h^{k})^{-\frac{3}{2}}\psi^{k}\cdot\psi^{k}\\ &+a_{8}(l^{k})^{\nu}(h^{k+1})^{-\frac{1}{2}}(\nabla l^{k+1}\cdot\bar{\psi}^{k+1}+\nabla\bar{l}^{k+1}\cdot\psi^{k})\\ &+a_{8}\big{(}(l^{k})^{\nu}((h^{k+1})^{-\frac{1}{2}}-(h^{k})^{-\frac{1}{2}})+((l^{k})^{\nu}-(l^{k-1})^{\nu})(h^{k})^{-\frac{1}{2}}\big{)}\nabla l^{k}\cdot\psi^{k}\\ &+a_{9}(l^{k})^{\nu-1}\sqrt{h^{k}}\nabla\bar{l}^{k}\cdot(\nabla l^{k}+\nabla l^{k-1})\\ &+a_{9}\big{(}(l^{k})^{\nu-1}(\sqrt{h^{k}}-\sqrt{h^{k-1}})+\sqrt{h^{k-1}}((l^{k})^{\nu-1}-(l^{k-1})^{\nu-1})\big{)}|\nabla l^{k-1}|^{2}.\end{split}\]
Next, starting from (3.149), one shows that \(\{(\phi^{k},u^{k},l^{k},\psi^{k})\}_{k=1}^{\infty}\) is actually a Cauchy sequence in appropriate function spaces, which requires estimates for \(\bar{\phi}^{k+1}\) in \(H^{2}\), \(\bar{\psi}^{k+1}\) in \(H^{1}\), and \((\bar{u}^{k+1},\bar{l}^{k+1})\) in suitable weighted \(H^{2}\) spaces. For this purpose, one first needs the following lemma.
**Lemma 3.14.** \[(\bar{h}^{k+1},\ \bar{\phi}^{k+1})\in L^{\infty}([0,\bar{T}];H^{3})\quad\text{and}\quad\bar{\psi}^{k+1}\in L^{\infty}([0,\bar{T}];H^{2})\quad\text{for}\quad k=1,2,\dots.\]
The proof follows from the same argument for Lemma 3.11 of [12]. This lemma helps to deal with some singular terms of type \(\infty-\infty\) such as \(a_{2}\bar{h}^{k+1}Lu^{k}\) in \((3.149)_{2}\).

**Step 1.1:** Estimates on \(\bar{\phi}^{k+1}\) and \(\bar{\psi}^{k+1}\). Standard energy estimates for the transport equations \((3.149)_{1}\) and \((3.149)_{4}\), combined with the uniform bounds (3.137) and Lemma 3.14, give the estimates (3.150)-(3.153) for \(\bar{\phi}^{k+1}\) and an energy identity for \(\bar{\psi}^{k+1}\),
which, along with (3.154) and (3.137), yields that
\[\begin{split}\frac{d}{dt}\|\bar{\psi}^{k+1}\|_{1}^{2}\leq& C\sigma^{-1}\|\bar{\psi}^{k+1}\|_{1}^{2}+\sigma(|\sqrt{h^{k}}\nabla\bar{u}^{k}|_{ 2}^{2}+\|\bar{\psi}^{k}\|_{1}^{2}\\ &+|h^{k}\nabla^{2}\bar{u}^{k}|_{2}^{2})+C|h^{k}\nabla^{3}\bar{u} ^{k}|_{2}|\nabla\bar{\psi}^{k+1}|_{2}.\end{split} \tag{3.155}\]
**Step 1.2:** Estimates on \(\bar{l}^{k+1}\). Multiplying \((3.149)_{3}\) by \(\bar{l}_{t}^{k+1}\) and by suitable weighted multipliers, integrating over \(\mathbb{R}^{3}\), and estimating the resulting terms via (3.137), Lemma 3.14 and Hölder's inequality yield the relations (3.156)-(3.158).
It is noted that the second estimate in (3.158)\({}_{2}\) follows from Lemma 3.4. Then according to (3.156)-(3.157), one has
\[\begin{split}&\frac{a_{4}}{2}\frac{d}{dt}|(h^{k+1})^{\frac{1}{4}}(l^ {k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2}^{2}+|(h^{k+1})^{-\frac{1}{4}}\bar{ l}_{t}^{k+1}|_{2}^{2}\\ \leq& C\big{(}(1+|\sqrt{h^{k}}\nabla^{2}l_{t}^{k}|_{ 2})|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2}^{2} +\sigma^{-3}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}^{2}\\ &+\|\bar{\psi}^{k+1}\|_{1}^{2}\big{)}+\sigma(|\bar{\psi}^{k}|_{2} ^{2}+|(l^{k})^{-\frac{\nu}{2}}\bar{u}^{k}|_{2}^{2}+|\sqrt{h^{k}}\nabla\bar{u}^{ k}|_{2}^{2}+|h^{k}\nabla^{2}\bar{u}^{k}|_{2}^{2}\\ &+|(l^{k-1})^{\frac{\nu}{2}}(h^{k})^{\frac{1}{4}}\nabla\bar{l}^{ k}|_{2}^{2}+|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}_{t}^{k+1}|_{ 2}^{2}).\end{split} \tag{3.159}\]
Next, applying \(\partial_{t}\) to (3.149)\({}_{3}\) yields
\[\begin{split}&(h^{k+1})^{-\frac{1}{2}}\bar{l}_{tt}^{k+1}-a_{4}(h^ {k+1})^{\frac{1}{2}}(l^{k})^{\nu}\triangle\bar{l}_{t}^{k+1}\\ =&-(h^{k+1})_{t}^{-\frac{1}{2}}\bar{l}_{t}^{k+1}+a_{ 4}(\sqrt{h^{k+1}}(l^{k})^{\nu})_{t}\triangle\bar{l}^{k+1}+\sum_{i=1}^{4}(\bar {\mathcal{L}}_{i}^{k+1})_{t}+\bar{\Pi}_{t}^{k+1}.\end{split}\]
Then multiplying the above equation by \(\bar{l}_{t}^{k+1}\) and integrating over \(\mathbb{R}^{3}\), one has
\[\frac{1}{2}\frac{d}{dt}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}^{2}+a_ {4}|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}_{t}^{k+1}|_{2 }^{2}=\sum_{i=6}^{11}N_{i}. \tag{3.160}\]
Here \(N_{i}\), \(i=6,7,\cdots,11\), are given and estimated as follows:
\[\begin{split} N_{6}=&\int\Big{(}-\frac{1}{2}((h^{k+ 1})^{-\frac{1}{2}})_{t}(\bar{l}_{t}^{k+1})^{2}+(\bar{\mathcal{L}}_{1}^{k+1})_{ t}\Big{)}\bar{l}_{t}^{k+1}\\ \leq& C|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{ 2}^{2}+C\Big{(}|(l^{k})^{\frac{\nu}{2}}(h^{k+1})^{\frac{1}{4}}\nabla\bar{l}^{ k+1}|_{2}+|(l^{k})^{-\frac{\nu}{2}}\bar{u}^{k}|_{2}\\ &+|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}_{t }^{k+1}|_{2}+|(l^{k})^{-\frac{\nu}{2}}\bar{u}_{t}^{k}|_{2}+|\bar{u}^{k}|_{ \infty}+|\bar{\psi}_{t}^{k+1}|_{2}\\ &+|\bar{\psi}^{k+1}|_{2}\Big{)}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{ t}^{k+1}|_{2}+C\Big{(}|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla \bar{l}^{k+1}|_{2}\\ &+|\bar{\psi}^{k+1}|_{2}(1+|(h^{k})^{-\frac{1}{4}}l_{tt}^{k}|_{2} )\Big{)}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{3},\\ N_{7}=&\int\Big{(}(\bar{\mathcal{L}}_{2}^{k+1})_{t}+a _{4}\big{(}\sqrt{h^{k+1}}(l^{k})^{\nu}\big{)}_{t}\triangle\bar{l}^{k+1}\Big{)} \bar{l}_{t}^{k+1}\\ \leq& C\Big{(}(1+|\sqrt{h^{k}}\nabla^{2}l_{t}^{k}|_{ 2})\big{(}|(h^{k})^{\frac{1}{4}}(l^{k-1})^{\frac{\nu}{2}}\nabla\bar{l}^{k}|_{2} \\ &+|\sqrt{h^{k}}\nabla^{2}\bar{l}^{k}|_{2}+\|\bar{\psi}^{k+1}\|_{ 1}\big{)}+|(h^{k})^{-\frac{1}{4}}\bar{l}_{t}^{k}|_{2}+|\bar{\psi}_{t}^{k+1}|_{ 2}\\ &+|\sqrt{h^{k+1}}\nabla^{2}\bar{l}^{k+1}|_{2}+|(h^{k})^{\frac{1}{4 }}\nabla\bar{l}_{t}^{k}|_{2}\Big{)}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{ 2}\\ &+C(|\sqrt{h^{k}}\nabla^{2}l_{t}^{k}|_{2}|\nabla\bar{l}^{k}|_{2}+| \sqrt{h^{k+1}}\nabla^{2}\bar{l}^{k+1}|_{2})|\bar{l}_{t}^{k+1}|_{3},\\ N_{8}=&\int(\bar{\mathcal{L}}_{3}^{k+1})_{t}\bar{l}_{ t}^{k+1}\leq C\Big{(}|h^{k}\nabla\bar{u}^{k}|_{6}+|\nabla\bar{u}^{k}|_{2}+| \bar{\psi}_{t}^{k}|_{2}+|\bar{\psi}_{t}^{k+1}|_{2}\\ &+|(h^{k})^{\frac{1}{4}}\nabla\bar{l}_{t}^{k}|_{2}+\big{(}1+|(h^{k -1}\nabla^{2}u_{t}^{k-1},h^{k}\nabla^{2}u_{t}^{k})|_{2}\big{)}\big{(}|(h^{k})^{ \frac{3}{4}}\nabla\bar{u}^{k}|_{3}\\ &+|(h^{k})^{\frac{1}{4}}(l^{k-1})^{\frac{\nu}{2}}\nabla\bar{l}^{k}| _{2}+|\bar{\psi}^{k+1}|_{2}+|\bar{\psi}^{k}|_{2}\big{)}\Big{)}|(h^{k+1})^{- \frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}\\ &+C|\sqrt{h^{k}}\nabla\bar{u}_{t}^{k}|_{2}|\bar{l}_{t}^{k+1}|_{3}, \end{split}\]
\[N_{9}= \int(\bar{\mathcal{L}}_{4}^{k+1})_{t}\bar{l}_{t}^{k+1}\leq C\Big{(} \|\bar{\psi}^{k+1}\|_{1}+|\bar{\psi}_{t}^{k+1}|_{2}+|(h^{k})^{\frac{1}{4}}(l^{k- 1})^{\frac{\nu}{2}}\nabla\bar{l}^{k}|_{2} \tag{3.162}\] \[+|\sqrt{h^{k}}\nabla^{2}\bar{l}^{k}|_{2}+|(h^{k})^{\frac{1}{4}} \nabla\bar{l}_{t}^{k}|_{2}\Big{)}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}\] \[+C\|\bar{\psi}^{k+1}\|_{1}|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{ \nu}{2}}\nabla\bar{l}_{t}^{k+1}|_{2}+N_{*},\] \[N_{10}= \int\bar{\Pi}_{t}^{k+1}\bar{l}_{t}^{k+1}\leq C\Big{(}\|\bar{\psi }^{k+1}\|_{1}+|\bar{\psi}_{t}^{k+1}|_{2}+|\bar{\psi}_{t}^{k}|_{2}+\|\bar{\psi }^{k}\|_{1}\] \[+(1+|\sqrt{h^{k}}\nabla^{2}{l}_{t}^{k}|_{2})(|(h^{k})^{\frac{1}{4 }}(l^{k-1})^{\frac{\nu}{2}}\nabla\bar{l}^{k}|_{2}+|\sqrt{h^{k}}\nabla^{2}\bar{ l}^{k}|_{2})\] \[+|(h^{k})^{\frac{1}{4}}\nabla\bar{l}_{t}^{k}|_{2}+(1+|\sqrt{h^{k} }\nabla^{2}{l}_{t}^{k}|_{2})|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}} \nabla\bar{l}^{k+1}|_{2}\] \[+|(h^{k})^{-\frac{1}{4}}\bar{l}_{t}^{k}|_{2})|(h^{k+1})^{-\frac{1 }{4}}\bar{l}_{t}^{k+1}|_{2}+C(\|\bar{\psi}^{k+1}\|_{1}+|(h^{k+1})^{-\frac{1}{ 4}}\bar{l}_{t}^{k+1}|_{2}\] \[+|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+ 1}|_{2})|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}_{t}^{k+1 }|_{2},\] \[N_{11}= -a_{4}\int\big{(}\nabla\sqrt{h^{k+1}}(l^{k})^{\nu}\cdot\nabla\bar {l}_{t}^{k+1}+\sqrt{h^{k+1}}\nabla(l^{k})^{\nu}\nabla\bar{l}_{t}^{k+1}\big{)} \bar{l}_{t}^{k+1}\] \[\leq C|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}_{t }^{k+1}|_{2}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2},\]
where one has used (3.158). By (3.149)\({}_{4}\), the remaining term \(N_{*}\) in \(N_{9}\) can be treated as follows by the integration by parts,
\[N_{*}= a_{6}\int(l^{k})^{\nu+1}(h^{k+1})^{-\frac{1}{2}}\mathrm{div} \bar{\psi}_{t}^{k+1}\bar{l}_{t}^{k+1} \tag{3.163}\] \[= -a_{6}\int(l^{k})^{\nu+1}(h^{k+1})^{-\frac{1}{2}}\bar{l}_{t}^{k+1 }\mathrm{div}\Big{(}\nabla(u^{k}\cdot\bar{\psi}^{k+1})+\nabla(\bar{u}^{k}\cdot \psi^{k})\] \[+(\delta-1)(\bar{\psi}^{k}\mathrm{div}u^{k}+\psi^{k-1}\mathrm{div} \bar{u}^{k})+a\delta(h^{k}\nabla\mathrm{div}\bar{u}^{k}+\bar{h}^{k}\nabla \mathrm{div}u^{k-1})\Big{)}\] \[\leq C\Big{(}\|\bar{\psi}^{k+1}\|_{1}+\|\bar{\psi}^{k}\|_{1}+|\sqrt{h^ {k}}\nabla\bar{u}^{k}|_{2}+|h^{k}\nabla^{2}\bar{u}^{k}|_{2}\] \[+|h^{k}\nabla^{3}\bar{u}^{k}|_{2}\Big{)}|(h^{k+1})^{-\frac{1}{4}} \bar{l}_{t}^{k+1}|_{2}+C\|\bar{\psi}^{k+1}\|_{1}|(h^{k+1})^{\frac{1}{4}}(l^{k} )^{\frac{\nu}{2}}\nabla\bar{l}_{t}^{k+1}|_{2}.\]
Moreover, (3.149)\({}_{4}\) implies that
\[|\bar{\psi}_{t}^{k+1}|_{2}\leq C\big{(}\|\bar{\psi}^{k+1}\|_{1}+\|\bar{u}^{k}\|_{1}+|\bar{\psi}^{k}|_{ 2}+|h^{k}\nabla^{2}\bar{u}^{k}|_{2}\big{)}. \tag{3.164}\]
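Estimate (3.164) follows by solving \((3.149)_{4}\) for \(\bar{\psi}^{k+1}_{t}\) and bounding each term in \(L^{2}\) with the \(L^{\infty}\) and Sobolev bounds encoded in (3.137) and Lemma 3.14 (for instance \(|\bar{h}^{k}|_{6}\leq C|\nabla\bar{h}^{k}|_{2}\leq C|\bar{\psi}^{k}|_{2}\)); schematically,

\[|\bar{\psi}^{k+1}_{t}|_{2}\leq\big|\nabla(u^{k}\cdot\bar{\psi}^{k+1}+\bar{u}^{k}\cdot\psi^{k})\big|_{2}+(\delta-1)\big|\bar{\psi}^{k}\mathrm{div}u^{k}+\psi^{k-1}\mathrm{div}\bar{u}^{k}\big|_{2}+a\delta\big|h^{k}\nabla\mathrm{div}\bar{u}^{k}+\bar{h}^{k}\nabla\mathrm{div}u^{k-1}\big|_{2},\]

with each term dominated by the right-hand side of (3.164).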
Then collecting estimates (3.160)-(3.164) yields that
\[\frac{1}{2}\frac{d}{dt}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}| _{2}^{2}+\frac{a_{4}}{2}|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla \bar{l}_{t}^{k+1}|_{2}^{2} \tag{3.165}\] \[\leq C\sigma^{-2}(1+|\sqrt{h^{k}}\nabla^{2}{l}_{t}^{k}|_{2}^{2}+|(h^{k })^{-\frac{1}{4}}\bar{l}_{tt}^{k}|_{2}^{2}+|h^{k-1}\nabla^{2}{u}_{t}^{k-1}|_{2} ^{2}+|h^{k}\nabla^{2}{u}_{t}^{k}|_{2}^{2})\] \[(\|\bar{\psi}^{k+1}\|_{1}^{2}+|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t }^{k+1}|_{2}^{2}+|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k +1}|_{2}^{2})+\sigma(\|\bar{\psi}^{k}\|_{1}^{2}\] \[+|(l^{k})^{-\frac{\nu}{2}}\bar{u}^{k}|_{2}^{2}+|\sqrt{h^{k}}\nabla \bar{u}^{k}|_{2}^{2}+|(l^{k})^{-\frac{\nu}{2}}\bar{u}^{k}_{t}|_{2}^{2}+|h^{k} \nabla^{2}\bar{u}^{k}|_{2}^{2}+|(h^{k})^{-\frac{1}{4}}\bar{l}_{t}^{k}|_{2}^{2}\] \[+(1+|\sqrt{h^{k}}\nabla^{2}{l}_{t}^{k}|_{2})|(l^{k-1})^{\frac{\nu}{ 2}}(h^{k})^{\frac{1}{4}}\nabla\bar{l}^{k}|_{2}^{2}+|\sqrt{h^{k}}\nabla^{2}\bar{l} ^{k}|_{2}^{2}+|\sqrt{h^{k+1}}\nabla^{2}\bar{l}^{k+1}|_{2}^{2}\] \[+\|\bar{u}^{k-1}\|_{1}^{2}+|\bar{\psi}^{k-1}|_{2}^{2}+|h^{k-1} \nabla^{2}\bar{u}^{k-1}|_{2}^{2})+C\widetilde{\epsilon}^{-2}|(h^{k+1})^{- \frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}^{2}\] \[+\widetilde{\epsilon}(|(h^{k})^{\frac{1}{4}}\nabla\bar{l}_{t}^{k}|_{ 2}^{2}+|\sqrt{h^{k}}\nabla\bar{u}^{k}_{t}|_{2}^{2})+C|h^{k}\nabla^{3}\bar{u}^{k}| _{2}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2},\]
where \(\widetilde{\epsilon}\in(0,1)\) is a constant to be determined later.
**Step 1.3:** Estimates on \(\bar{u}^{k+1}\). Multiplying \((3.149)_{2}\) by \(2\bar{u}^{k+1}\) and integrating over \(\mathbb{R}^{3}\) yield that
\[\begin{split}&\frac{d}{dt}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}^{k+1}|_{2}^{2}+a_{2}\alpha|\sqrt{h^{k+1}}\nabla\bar{u}^{k+1}|_{2}^{2}\\ \leq& C\sigma^{-1}(1+|\nabla^{2}l_{t}^{k+1}|_{2})|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}^{k+1}|_{2}^{2}+\sigma(|\sqrt{h^{k}}\nabla\bar{u}^{k}|_{2}^{2}+|h^{k}\nabla^{2}\bar{u}^{k}|_{2}^{2}\\ &+|\bar{\psi}^{k}|_{2}^{2})+C(\|\bar{\phi}^{k+1}\|_{1}^{2}+|\bar{\psi}^{k+1}|_{2}^{2}+|\nabla\bar{l}^{k+1}|_{2}^{2}).\end{split} \tag{3.166}\]
Multiplying \((3.149)_{2}\) by \(2\bar{u}_{t}^{k+1}\) and integrating over \(\mathbb{R}^{3}\) give that
\[\begin{split} 2|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{2} +\frac{d}{dt}(a_{2}\alpha|\sqrt{h^{k+1}}\nabla\bar{u}^{k+1}|_{2}^{2}\\ +a_{2}(\alpha+\beta)|\sqrt{h^{k+1}}\text{div}\bar{u}^{k+1}|_{2}^{ 2})=&\sum_{i=1}^{6}O_{i},\end{split} \tag{3.167}\]
where \(O_{i}\), \(i=1,2,\cdots,6\), are defined and estimated as follows:
\[\begin{split} O_{1}=&-2a_{2}\int\bar{h}^{k+1}Lu^{k} \cdot\bar{u}_{t}^{k+1}\leq C|\bar{\psi}^{k+1}|_{2}|(l^{k+1})^{-\frac{\nu}{2}} \bar{u}_{t}^{k+1}|_{2},\\ O_{2}=& 2\int(\bar{\mathcal{U}}_{1}^{k+1}-\frac{ \delta-1}{a\delta}a_{2}\psi^{k+1}\cdot Q(\bar{u}^{k+1}))\cdot\bar{u}_{t}^{k+1} \\ \leq& C(|\sqrt{h^{k}}\nabla\bar{u}^{k}|_{2}+|(h^{k+1} )^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2}\\ &+|\sqrt{h^{k+1}}\nabla\bar{u}^{k+1}|_{2})|(l^{k+1})^{-\frac{\nu} {2}}\bar{u}_{t}^{k+1}|_{2},\\ O_{3}=& 2\int\bar{\mathcal{U}}_{2}^{k+1}\cdot\bar{u}_{t}^{k+1} \\ \leq& C(\|\bar{\phi}^{k+1}\|_{1}+|(h^{k+1})^{\frac{1} {4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2})|(l^{k+1})^{-\frac{\nu}{2} }\bar{u}_{t}^{k+1}|_{2},\\ O_{4}=& 2\int\bar{\mathcal{U}}_{3}^{k+1}\cdot\bar{u}_{t}^{k+1 }\leq C(|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2 }+|\bar{\psi}^{k}|_{2}\\ &+|\bar{\psi}^{k+1}|_{2}+|\sqrt{h^{k}}\nabla\bar{u}^{k}|_{2})|(l^ {k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2},\\ O_{5}=& 2\int\bar{\mathcal{U}}_{4}^{k+1}\cdot\bar{u}_{t}^{k+1 }\leq C|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2} |(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2},\\ O_{6}=& a_{2}\int h_{t}^{k+1}(\alpha|\nabla\bar{u}^{k+1 }|^{2}+(\alpha+\beta)|\text{div}\bar{u}^{k+1}|^{2})\leq C|\sqrt{h^{k+1}}\nabla \bar{u}^{k+1}|_{2}^{2}.\end{split} \tag{3.168}\]
It follows from (3.167)-(3.168) and Young's inequality that
\[\begin{split}&|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{ 2}+\frac{d}{dt}a_{2}\alpha|\sqrt{h^{k+1}}\nabla\bar{u}^{k+1}|_{2}^{2}\\ \leq& C(|\sqrt{h^{k+1}}\nabla\bar{u}^{k+1}|_{2}^{2}+ \|\bar{\phi}^{k+1}\|_{1}^{2}+|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}} \nabla\bar{l}^{k+1}|_{2}^{2}+|\bar{\psi}^{k+1}|_{2}^{2}\\ &+\sigma^{-1}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{2} )+\sigma(|\sqrt{h^{k}}\nabla\bar{u}^{k}|_{2}^{2}+|\bar{\psi}^{k}|_{2}^{2}). \end{split} \tag{3.169}\]
Next, applying \(\partial_{t}\) to \((3.149)_{2}\) gives
\[\begin{split}&(l^{k+1})^{-\nu}\bar{u}_{tt}^{k+1}+a_{2}h^{k+1}L \bar{u}_{t}^{k+1}\\ =&-((l^{k+1})^{-\nu})_{t}\bar{u}_{t}^{k+1}-a_{2}h_{t}^{ k+1}L\bar{u}^{k+1}-a_{2}(\bar{h}^{k+1}Lu^{k})_{t}+\sum_{i=1}^{4}(\bar{ \mathcal{U}}_{i}^{k+1})_{t}.\end{split}\]
Then multiplying the above system by \(2\bar{u}_{t}^{k+1}\) and integrating over \(\mathbb{R}^{3}\) lead to
\[\begin{split}\frac{d}{dt}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1} |_{2}^{2}+2a_{2}\alpha|\sqrt{h^{k+1}}\nabla\bar{u}_{t}^{k+1}|_{2}^{2}\\ +2a_{2}(\alpha+\beta)|\sqrt{h^{k+1}}\mathrm{div}\bar{u}_{t}^{k+1} |_{2}^{2}=\sum_{i=7}^{11}O_{i},\end{split} \tag{3.170}\]
where \(O_{i}\), \(i=7,8,\cdots,11\), are given and estimated as follows:
\[\begin{split} O_{7}=&\int\big{(}(-(l^{k+1})^{-\nu})_{t}(\bar{u}_{t}^{k+1})^{2}+(2\bar{\mathcal{U}}_{1}^{k+1})_{t}\cdot\bar{u}_{t}^{k+1}\big{)}\\ \leq& C|l_{t}^{k+1}|_{D^{2}}^{\frac{1}{2}}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{2}+C\big{(}|\sqrt{h^{k}}\nabla\bar{u}_{t}^{k}|_{2}+|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}\\ &+\|\nabla\bar{u}^{k}\|_{1}+(1+|u_{tt}^{k}|_{2})|(l^{k})^{\frac{\nu}{2}}(h^{k+1})^{\frac{1}{4}}\nabla\bar{l}^{k+1}|_{2}\big{)}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}\\ &+C(1+|u_{tt}^{k}|_{2})|(l^{k})^{\frac{\nu}{2}}(h^{k+1})^{\frac{1}{4}}\nabla\bar{l}^{k+1}|_{2}|\sqrt{h^{k+1}}\nabla\bar{u}_{t}^{k+1}|_{2}\\ &+C|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}|\sqrt{h^{k+1}}\nabla\bar{u}_{t}^{k+1}|_{2},\end{split}\]
\[\begin{split} O_{8}=&\int-2a_{2}\big{(}\nabla h^{k+1}\cdot Q(\bar{u}_{t}^{k+1})+h_{t}^{k+1}L\bar{u}^{k+1}+(\bar{h}^{k+1}Lu^{k})_{t}\big{)}\cdot\bar{u}_{t}^{k+1}\\ \leq& C\big{(}|\sqrt{h^{k+1}}\nabla\bar{u}_{t}^{k+1}|_{2}+|\nabla^{2}\bar{u}^{k+1}|_{2}+|\bar{\psi}_{t}^{k+1}|_{2}\\ &+|\nabla^{2}u_{t}^{k}|_{2}\|\bar{\psi}^{k+1}\|_{1}\big{)}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2},\end{split} \tag{3.171}\] \[\begin{split} O_{9}=&\int 2(\bar{\mathcal{U}}_{2}^{k+1})_{t}\cdot\bar{u}_{t}^{k+1}\leq C\big{(}\|\bar{\phi}^{k+1}\|_{2}+\|\bar{\phi}_{t}^{k+1}\|_{1}+|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}\\ &+(1+|l_{t}^{k}|_{D^{2}}^{\frac{1}{2}}+|l_{t}^{k+1}|_{D^{2}}^{\frac{1}{2}})|(l^{k})^{\frac{\nu}{2}}(h^{k+1})^{\frac{1}{4}}\nabla\bar{l}^{k+1}|_{2}\big{)}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}\\ &+C|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}|\sqrt{h^{k+1}}\nabla\bar{u}_{t}^{k+1}|_{2},\end{split}\]
\[\begin{split} O_{10}=&\int 2(\bar{\mathcal{U}}_{3}^{k+1})_{t}\cdot\bar{u}_{t}^{k+1}\leq C\big{(}(1+|l_{t}^{k+1}|_{D^{2}}\\ &+|h^{k}\nabla^{2}u_{t}^{k}|_{2})|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2}+|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}\\ &+(1+|l_{t}^{k}|_{D^{2}}^{\frac{1}{2}})(\|\nabla\bar{u}^{k}\|_{1}+|h^{k}\nabla^{2}\bar{u}^{k}|_{2}+\|\bar{\psi}^{k}\|_{1})\\ &+|\sqrt{h^{k}}\nabla\bar{u}_{t}^{k}|_{2}+|\bar{\psi}_{t}^{k}|_{2}+|\bar{\psi}_{t}^{k+1}|_{2}+\|\bar{\psi}^{k+1}\|_{1}\big{)}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}\\ &+C((1+|h^{k}\nabla^{2}u_{t}^{k}|_{2})|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2}\\ &+|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}+\|\bar{\psi}^{k+1}\|_{1})|\sqrt{h^{k+1}}\nabla\bar{u}_{t}^{k+1}|_{2},\end{split}\]
\[\begin{split} O_{11}=&\int 2(\bar{\mathcal{U}}_{ 4}^{k+1})_{t}\cdot\bar{u}_{t}^{k+1}\leq C|(h^{k+1})^{\frac{1}{4}}(l^{k})^{ \frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2}|\sqrt{h^{k+1}}\nabla\bar{u}_{t}^{k+1}|_{ 2}\\ &+C\big{(}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}+|(h^{k+1 })^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2}\big{)}|(l^{k+1 })^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2},\end{split}\]
where one has used integration by parts in \(O_{9}\) and \(O_{10}\) to deal with the corresponding terms related to \(\nabla\bar{l}_{t}^{k+1}\).
It follows from (3.170)-(3.171), (3.164), \((3.149)_{1}\) and Young's inequality that
\[\begin{split}&\frac{d}{dt}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{2}+a_{2}\alpha|\sqrt{h^{k+1}}\nabla\bar{u}_{t}^{k+1}|_{2}^{2}\\ \leq& C\sigma^{-1}(1+|l_{t}^{k+1}|_{D^{2}}^{2}+|l_{t}^{k}|_{D^{2}}+|h^{k}\nabla^{2}u_{t}^{k}|_{2}^{2}+|u_{tt}^{k}|_{2}^{2})(\|\bar{\psi}^{k+1}\|_{1}^{2}+\|\bar{\phi}^{k+1}\|_{2}^{2}\\ &+|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2}^{2}+|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}^{2}+|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{2})\\ &+C\sigma(|(l^{k})^{-\frac{\nu}{2}}\bar{u}^{k}|_{2}^{2}+|\sqrt{h^{k}}\nabla\bar{u}^{k}|_{2}^{2}+|(l^{k})^{-\frac{\nu}{2}}\bar{u}^{k}_{t}|_{2}^{2}+|h^{k}\nabla^{2}\bar{u}^{k}|_{2}^{2}+|(h^{k})^{-\frac{1}{4}}\bar{l}_{t}^{k}|_{2}^{2}\\ &+\|\bar{\psi}^{k}\|_{1}^{2}+|\bar{\psi}^{k-1}|_{2}^{2}+\|\bar{u}^{k-1}\|_{1}^{2}+|h^{k-1}\nabla^{2}\bar{u}^{k-1}|_{2}^{2})\\ &+C\widetilde{\epsilon}^{-1}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{2}+\widetilde{\epsilon}|\sqrt{h^{k}}\nabla\bar{u}_{t}^{k}|_{2}^{2}.\end{split} \tag{3.172}\]
**Step 1.4:** Strong convergence of the approximate solutions. By the same arguments as in the derivations of (3.54), (3.102) and (3.111), it follows directly from (3.149)\({}_{2}\)-(3.149)\({}_{3}\) and Lemma 4.3 that
\[|\sqrt{h^{k+1}}\nabla^{2}\bar{l}^{k+1}|_{2}\leq C(|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}+|\sqrt{h^{k}}\nabla \bar{u}^{k}|_{2}+|(l^{k})^{\frac{\nu}{2}}(h^{k+1})^{\frac{1}{4}}\nabla\bar{l}^ {k+1}|_{2}\] \[+|(l^{k-1})^{\frac{\nu}{2}}(h^{k})^{\frac{1}{4}}\nabla\bar{l}^{k}| _{2}+|\bar{\psi}^{k}|_{2}+\|\bar{\psi}^{k+1}\|_{1}),\] \[|h^{k+1}\nabla^{2}\bar{u}^{k+1}|_{2}\leq C(|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}+|\sqrt{h^{k+1}} \nabla\bar{u}^{k+1}|_{2}+|\sqrt{h^{k}}\nabla\bar{u}^{k}|_{2}\] \[+\|\bar{\phi}^{k+1}\|_{1}+|\bar{\psi}^{k}|_{2}+|\bar{\psi}^{k+1}| _{2}+|(l^{k})^{\frac{\nu}{2}}(h^{k+1})^{\frac{1}{4}}\nabla\bar{l}^{k+1}|_{2}),\] \[|h^{k+1}\nabla^{3}\bar{u}^{k+1}|_{2}\leq C(|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}+|\sqrt{h^{k+1}} \nabla\bar{u}_{t}^{k+1}|_{2}+|\sqrt{h^{k+1}}\nabla\bar{u}^{k+1}|_{2}\] \[+|\sqrt{h^{k}}\nabla\bar{u}^{k}|_{2}+|(h^{k+1})^{-\frac{1}{4}}\bar {l}_{t}^{k+1}|_{2}+|(l^{k-1})^{\frac{\nu}{2}}(h^{k})^{\frac{1}{4}}\nabla\bar{l }^{k}|_{2}\] \[+\|\bar{\phi}^{k+1}\|_{2}+\|\bar{\psi}^{k}\|_{1}+\|\bar{\psi}^{k+ 1}\|_{1}+|(l^{k})^{\frac{\nu}{2}}(h^{k+1})^{\frac{1}{4}}\nabla\bar{l}^{k+1}|_{2}\] \[+|(l^{k})^{-\frac{\nu}{2}}\bar{u}_{t}^{k}|_{2}+|\sqrt{h^{k-1}} \nabla\bar{u}^{k-1}|_{2}+\|\bar{\phi}^{k}\|_{1}+|\bar{\psi}^{k-1}|_{2}),\]
which, along with (3.153), (3.155), (3.159), (3.165), (3.166), (3.169) and (3.172), yields that
\[\frac{d}{dt}(\|\bar{\phi}^{k+1}\|_{2}^{2}+\|\bar{\psi}^{k+1}\|_{1 }^{2}+|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}|_{2}^{ 2}+|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}^{2}\] \[+|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}^{k+1}|_{2}^{2}+|\sqrt{h^{k+1}} \nabla\bar{u}^{k+1}|_{2}^{2}+|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^ {2})\] \[+|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}^{2}+|(h^{k+1})^{ \frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}_{t}^{k+1}|_{2}^{2}+|\sqrt{h^{k +1}}\nabla\bar{u}^{k+1}|_{2}^{2}\] \[+|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{2}+|\sqrt{h^{k +1}}\nabla\bar{u}_{t}^{k+1}|_{2}^{2}\] \[\leq \mathcal{F}^{k}(t)(\|\bar{\phi}^{k+1}\|_{2}^{2}+\|\bar{\psi}^{k+1 }\|_{1}^{2}+|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}^{k+1}| _{2}^{2}+|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}^{2} \tag{3.174}\] \[+|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}^{k+1}|_{2}^{2}+|\sqrt{h^{k+1} }\nabla\bar{u}^{k+1}|_{2}^{2}+|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2} ^{2})\] \[+C\sigma\big{(}|\sqrt{h^{k}}\nabla\bar{u}^{k}|_{2}^{2}+\|\bar{\phi} ^{k}\|_{1}^{2}+\|\bar{\psi}^{k}\|_{1}^{2}+|(l^{k})^{-\frac{\nu}{2}}\bar{u}_{t}^{ k}|_{2}^{2}+|\sqrt{h^{k-1}}\nabla\bar{u}^{k-1}|_{2}^{2}\] \[+(1+|\sqrt{h^{k}}\nabla^{2}l_{t}^{k}|_{2})|(l^{k-1})^{\frac{\nu}{2} }(h^{k})^{\frac{1}{4}}\nabla\bar{l}^{k}|_{2}^{2}+|(l^{k})^{-\frac{\nu}{2}}\bar{u} _{t}^{k}|_{2}^{2}+|(h^{k})^{-\frac{1}{4}}\bar{l}_{t}^{k}|_{2}^{2}\] \[+|\bar{\psi}^{k-1}|_{2}^{2}+\|\bar{u}^{k-1}\|_{1}^{2}+|(l^{k-2}) ^{\frac{\nu}{2}}(h^{k-1})^{\frac{1}{4}}\nabla\bar{l}^{k-1}|_{2}^{2}+|(l^{k-1}) ^{-\frac{\nu}{2}}\bar{u}_{t}^{k-1}|_{2}^{2}\] \[+|\sqrt{h^{k-2}}\nabla\bar{u}^{k-2}|_{2}^{2}+\|\bar{\phi}^{k-1} \|_{1}^{2}+|\bar{\psi}^{k-2}|_{2}^{2}\big{)}+C\widetilde{\epsilon}^{-2}(|(h^{k+1}) ^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}^{2}\] \[+|(h^{k+1})^{-\frac{1}{4}}\bar{u}_{t}^{k+1}|_{2}^{2}+\|\bar{\phi}^{ k+1}\|_{2}^{2}+\|\bar{\psi}^{k+1}\|_{1}^{2})+\widetilde{\epsilon}\mathcal{T}^{k},\]
where \(\mathcal{T}^{k}=(|(h^{k})^{\frac{1}{4}}\nabla\bar{l}_{t}^{k}|_{2}^{2}+|\sqrt{h^{k }}\nabla\bar{u}_{t}^{k}|_{2}^{2})\) and
\[\mathcal{F}^{k}(t)=C\sigma^{-3}\big(1+\cdots\big),\]
Now, define
\[\begin{split}\varGamma^{k+1}(t)=&\sup_{0\leq s\leq t} \|\bar{\phi}^{k+1}\|_{2}^{2}+\sup_{0\leq s\leq t}\|\bar{\psi}^{k+1}\|_{1}^{2}+ \sup_{0\leq s\leq t}|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l }^{k+1}|_{2}^{2}\\ &+\sup_{0\leq s\leq t}|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_ {2}^{2}+\sup_{0\leq s\leq t}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}^{k+1}|_{2}^{2}\\ &+\sup_{0\leq s\leq t}|\sqrt{h^{k+1}}\nabla\bar{u}^{k+1}|_{2}^{2}+ \sup_{0\leq s\leq t}|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{2}. \end{split}\]
Then it follows from (3.174) and Gronwall's inequality that
\[\begin{split}&\varGamma^{k+1}(t)+\int_{0}^{t}\Big(|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{s}^{k+1}|_{2}^{2}+|(h^{k+1})^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}_{s}^{k+1}|_{2}^{2}\\ &+|\sqrt{h^{k+1}}\nabla\bar{u}^{k+1}|_{2}^{2}+|(l^{k+1})^{-\frac{\nu}{2}}\bar{u}_{s}^{k+1}|_{2}^{2}+|\sqrt{h^{k+1}}\nabla\bar{u}_{s}^{k+1}|_{2}^{2}\Big)\mathrm{d}s\\ \leq& C\Big(\int_{0}^{t}\widetilde{\epsilon}(|\sqrt{h^{k}}\nabla\bar{u}_{s}^{k}|_{2}^{2}+|(h^{k})^{\frac{1}{4}}\nabla\bar{l}_{s}^{k}|_{2}^{2})\mathrm{d}s+(t+\sqrt{t})\sigma\varGamma^{k}(t)\\ &+t\sigma\varGamma^{k-1}(t)+t\sigma\varGamma^{k-2}(t)\Big)\exp\big(C\sigma^{-3}t+C\sigma^{-3}+C\widetilde{\epsilon}^{-2}t\big).\end{split} \tag{3.175}\]
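The version of Gronwall's inequality invoked here is the standard differential one (recalled for the reader's convenience; it is not specific to this system): if an absolutely continuous function \(y\geq 0\) satisfies

\[y^{\prime}(t)\leq F(t)y(t)+G(t)\quad\text{for a.e. }t\in[0,T],\]

then

\[y(t)\leq e^{\int_{0}^{t}F(s)\mathrm{d}s}\Big(y(0)+\int_{0}^{t}G(s)\mathrm{d}s\Big),\]

applied with \(y\) playing the role of the quantity differentiated in (3.174), \(F=\mathcal{F}^{k}\), and \(G\) collecting the remaining terms on the right-hand side of (3.174).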
One can choose \(\sigma\in\big{(}0,\min\{1,\frac{a_{4}}{32},\frac{a_{2}\alpha}{32}\}\big{)}\), \(T_{*}\in(0,\min\{1,\bar{T}\}]\) and \(\widetilde{\epsilon}\in(0,1)\) such that
\[\begin{split} C\widetilde{\epsilon}\exp\big{(}C\sigma^{-3}T_{*}+ C\sigma^{-3}+C\widetilde{\epsilon}^{-2}T_{*}\big{)}\leq&\frac{1}{32}, \\ C\sqrt{T_{*}}\sigma\exp\big{(}C\sigma^{-3}T_{*}+C\sigma^{-3}+C \widetilde{\epsilon}^{-2}T_{*}\big{)}\leq&\frac{1}{32}.\end{split}\]
Then one finally obtains that
\[\begin{split}&\sum_{k=1}^{\infty}\Big{(}\varGamma^{k+1}(T_{*})+ \int_{0}^{T_{*}}(|(h^{k+1})^{-\frac{1}{4}}\bar{l}_{t}^{k+1}|_{2}^{2}+|(h^{k+1} )^{\frac{1}{4}}(l^{k})^{\frac{\nu}{2}}\nabla\bar{l}_{t}^{k+1}|_{2}^{2}\\ &\qquad+|\sqrt{h^{k+1}}\nabla\bar{u}^{k+1}|_{2}^{2}+|(l^{k+1})^{ -\frac{\nu}{2}}\bar{u}_{t}^{k+1}|_{2}^{2}+|\sqrt{h^{k+1}}\nabla\bar{u}_{t}^{k +1}|_{2}^{2})\mathrm{d}t\Big{)}<\infty,\end{split}\]
which, along with the \(k\)-independent estimate (3.137), yields that
\[\begin{split}\lim_{k\to\infty}(\|\bar{\phi}^{k+1}\|_{s^{\prime}} +\|\bar{u}^{k+1}\|_{s^{\prime}}+\|\bar{l}^{k+1}\|_{L^{\infty}\cap D^{1}\cap D^ {s^{\prime}}})=& 0,\\ \lim_{k\to\infty}(|\bar{u}_{t}^{k+1}|_{2}+|\bar{l}_{t}^{k+1}|_{2} +\|\bar{\psi}^{k+1}\|_{L^{\infty}\cap L^{q}}+|\bar{h}^{k+1}|_{\infty})=& 0,\end{split} \tag{3.176}\]
for any \(s^{\prime}\in[1,3)\). Then there exists a subsequence (still denoted by \((\phi^{k},u^{k},l^{k},\psi^{k})\)) and limit functions \((\phi^{\eta},u^{\eta},l^{\eta},\psi^{\eta})\) such that
\[\begin{split}&(\phi^{k}-\eta,u^{k})\to(\phi^{\eta}-\eta,u^{\eta}) \ \ \text{in}\ \ L^{\infty}([0,T_{*}];H^{s^{\prime}}),\\ & l^{k}-\bar{l}\to l^{\eta}-\bar{l}\ \ \text{in}\ \ L^{\infty}([0,T_{*}];L^{\infty}\cap D^{1}\cap D^{s^{\prime}}),\\ &(u_{t}^{k},l_{t}^{k})\to(u_{t}^{\eta},l_{t}^{\eta})\ \ \text{in}\ \ L^{\infty}([0,T_{*}];L^{2}),\\ &\psi^{k}\to\psi^{\eta}\ \ \text{in}\ \ L^{\infty}([0,T_{*}];L^{ \infty}\cap L^{q}),\\ & h^{k}\to h^{\eta}\ \ \text{in}\ \ L^{\infty}([0,T_{*}];L^{ \infty}).\end{split} \tag{3.177}\]
Again due to (3.137), there exists a subsequence (still denoted by \((\phi^{k},u^{k},l^{k},\psi^{k})\)) converging to \((\phi^{\eta},u^{\eta},l^{\eta},\psi^{\eta})\) in the weak or weak* sense. According to the lower
semi-continuity of norms, the corresponding estimates in (3.137) still hold for \((\phi^{\eta},u^{\eta},l^{\eta},\psi^{\eta})\) except those weighted estimates on \(u^{\eta}\) and \(l^{\eta}\), which are independent of \(\eta\).
Next, it remains to show
\[\psi^{\eta}=\frac{a\delta}{\delta-1}\nabla(\phi^{\eta})^{2\iota}. \tag{3.178}\]
Set \(\psi^{*}=\psi^{\eta}-\frac{a\delta}{\delta-1}\nabla(\phi^{\eta})^{2\iota}\). Then it follows from (3.144)\({}_{1}\) and (3.144)\({}_{4}\) that
\[\begin{cases}\psi^{*}_{t}+\sum_{k=1}^{3}A_{k}(u^{\eta})\partial_{k}\psi^{*}+B^{*}(u^{\eta})\psi^{*}=0,\\ \psi^{*}|_{t=0}=0\quad\text{in}\quad\mathbb{R}^{3},\\ \psi^{*}\to 0\quad\text{as}\ \ |x|\to\infty\quad\text{for}\quad t\geq 0,\end{cases} \tag{3.179}\]
which implies that \(\psi^{*}=0\) in \([0,T_{*}]\times\mathbb{R}^{3}\). Thus (3.178) has been verified.
Note also that
\[(\sqrt{h^{k}}\nabla u^{k},(h^{k})^{\frac{1}{4}}\nabla l^{k},(h^{k})^{-\frac{1 }{4}}l^{k}_{t},h^{k}\nabla^{2}u^{k})\rightharpoonup(\sqrt{h^{\eta}}\nabla u^{ \eta},(h^{\eta})^{\frac{1}{4}}\nabla l^{\eta},(h^{\eta})^{-\frac{1}{4}}l^{\eta }_{t},h^{\eta}\nabla^{2}u^{\eta})\]
weakly* in \(L^{\infty}([0,T_{*}];L^{2})\). Indeed, since
\[\sqrt{h^{k}}-\sqrt{h^{\eta}}= \frac{h^{k}-h^{\eta}}{\sqrt{h^{k}}+\sqrt{h^{\eta}}},\] \[(h^{k})^{\frac{1}{4}}-(h^{\eta})^{\frac{1}{4}}= \frac{h^{k}-h^{\eta}}{(h^{k})^{\frac{3}{4}}+\sqrt{h^{k}}(h^{\eta} )^{\frac{1}{4}}+\sqrt{h^{\eta}}(h^{k})^{\frac{1}{4}}+(h^{\eta})^{\frac{3}{4}}},\] \[(h^{k})^{-\frac{1}{4}}-(h^{\eta})^{-\frac{1}{4}}= \frac{-(h^{k}-h^{\eta})}{(h^{k})^{\frac{1}{4}}(h^{\eta})^{\frac{1 }{4}}\big{(}(h^{k})^{\frac{3}{4}}+\sqrt{h^{k}}(h^{\eta})^{\frac{1}{4}}+\sqrt{ h^{\eta}}(h^{k})^{\frac{1}{4}}+(h^{\eta})^{\frac{3}{4}}\big{)}},\]
and \(h^{k}\) and \(h^{\eta}\) have positive lower bounds independent of \(k\), one gets
\[\|(\sqrt{h^{k}}-\sqrt{h^{\eta}},(h^{k})^{\frac{1}{4}}-(h^{\eta})^{\frac{1}{4} },(h^{k})^{-\frac{1}{4}}-(h^{\eta})^{-\frac{1}{4}})\|_{L^{\infty}([0,T_{*}];L^ {\infty})}\to 0 \tag{3.180}\]
as \(k\to\infty\). Then it follows from (3.180), the uniform estimates (3.137) for \((\phi^{k},u^{k},l^{k},\psi^{k})\), the estimates for \((\phi^{\eta},u^{\eta},l^{\eta},\psi^{\eta})\) obtained above, and (3.177) that
\[\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}(\sqrt{h^{k}}\nabla u^{k}- \sqrt{h^{\eta}}\nabla u^{\eta})\mathcal{X}\mathrm{d}x\mathrm{d}t\] \[\leq C(\|\sqrt{h^{k}}-\sqrt{h^{\eta}}\|_{L^{\infty}([0,T_{*}];L^{ \infty})}+\|\nabla u^{k}-\nabla u^{\eta}\|_{L^{\infty}([0,T_{*}];L^{2})})T_{* }\to 0\text{ as }k\to\infty,\] \[\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}((h^{k})^{\frac{1}{4}}\nabla l ^{k}-(h^{\eta})^{\frac{1}{4}}\nabla l^{\eta})\mathcal{X}\mathrm{d}x\mathrm{d}t\] \[\leq C(\|(h^{k})^{\frac{1}{4}}-(h^{\eta})^{\frac{1}{4}}\|_{L^{\infty} ([0,T_{*}];L^{\infty})}+\|\nabla l^{k}-\nabla l^{\eta}\|_{L^{\infty}([0,T_{*}] ;L^{2})})T_{*}\to 0\text{ as }k\to\infty,\] \[\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}((h^{k})^{-\frac{1}{4}}l^{k} _{t}-(h^{\eta})^{-\frac{1}{4}}l^{\eta}_{t})\mathcal{X}\mathrm{d}x\mathrm{d}t\] \[\leq C(\|(h^{k})^{-\frac{1}{4}}-(h^{\eta})^{-\frac{1}{4}}\|_{L^{ \infty}([0,T_{*}];L^{\infty})}+\|l^{k}_{t}-l^{\eta}_{t}\|_{L^{\infty}([0,T_{* }];L^{2})})T_{*}\to 0\text{ as }k\to\infty,\]
\[\begin{split}&\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}(h^{k}\nabla^{2}u^{k} -h^{\eta}\nabla^{2}u^{\eta})\mathcal{X}\mathrm{d}x\mathrm{d}t\\ \leq& C(\|h^{k}-h^{\eta}\|_{L^{\infty}([0,T_{*}];L^{ \infty})}+\|\nabla^{2}u^{k}-\nabla^{2}u^{\eta}\|_{L^{\infty}([0,T_{*}];L^{2}) })T_{*}\to 0\text{ as }k\rightarrow\infty,\end{split}\]
for any test function \(\mathcal{X}(t,x)\in C_{c}^{\infty}([0,T_{*})\times\mathbb{R}^{3})\), which, along with the lower semicontinuity of norms, yields the uniform boundedness of \(\sqrt{h^{\eta}}\nabla u^{\eta}\), \((h^{\eta})^{\frac{1}{4}}\nabla l^{\eta}\), \((h^{\eta})^{-\frac{1}{4}}l_{t}^{\eta}\) and \(h^{\eta}\nabla^{2}u^{\eta}\) in \(L^{\infty}([0,T_{*}];L^{2})\) with respect to \(\eta\). Similarly, one can also obtain the other desired weighted estimates. Hence the corresponding weighted estimates for \((u^{\eta},l^{\eta})\) in (3.137) also hold for the limit. Thus, \((\phi^{\eta},u^{\eta},l^{\eta},\psi^{\eta})\) is a weak solution in the sense of distributions to the Cauchy problem (3.144).
**Step 2:** Uniqueness. Let \((\phi_{1},u_{1},l_{1},\psi_{1})\) and \((\phi_{2},u_{2},l_{2},\psi_{2})\) be two strong solutions to the Cauchy problem (3.144) satisfying the estimates in (3.137). Set
\[\begin{split} h_{i}=&\phi_{i}^{2\iota},\quad n_{i}= \left(ah_{i}\right)^{b},\quad i=1,2;\quad\bar{h}=h_{1}-h_{2},\\ \bar{\phi}=&\phi_{1}-\phi_{2},\ \ \bar{u}=u_{1}-u_{2},\ \ \bar{l}=l_{1}-l_{2},\ \ \bar{\psi}=\psi_{1}-\psi_{2}.\end{split}\]
Then (3.144) implies that
\[\begin{split}&\left\{\begin{array}{l}\bar{\phi}_{t}+u_{1}\cdot \nabla\bar{\phi}+\bar{u}\cdot\nabla\phi_{2}+(\gamma-1)(\bar{\phi}\mathrm{div}u _{1}+\phi_{2}\mathrm{div}\bar{u})=0,\\ \bar{u}_{t}+u_{1}\cdot\nabla\bar{u}+l_{1}\nabla\bar{\phi}+a_{1}\phi_{1} \nabla\bar{l}+a_{2}l_{1}^{\nu}h_{1}L\bar{u}\\ =&-\bar{u}\cdot\nabla u_{2}-a_{1}\bar{\phi}\nabla l_{2}- \bar{l}\nabla\phi_{2}-a_{2}(l_{1}^{\nu}h_{1}-l_{2}^{\nu}h_{2})Lu_{2}\\ &+a_{2}(h_{1}\nabla l_{1}^{\nu}\cdot Q(u_{1})-h_{2}\nabla l_{2}^{ \nu}\cdot Q(u_{2}))\\ &+a_{3}(l_{1}^{\nu}\psi_{1}\cdot Q(u_{1})-l_{2}^{\nu}\psi_{2}\cdot Q (u_{2})),\\ &\phi_{1}^{-\iota}(\bar{l}_{t}+u_{1}\cdot\nabla\bar{l}+\bar{u}\cdot \nabla l_{2})-a_{4}\phi_{1}^{\iota}l_{1}^{\nu}\triangle\bar{l}\\ =&-(\phi_{1}^{-\iota}-\phi_{2}^{-\iota})((l_{2})_{t}+u_{2} \cdot\nabla l_{2})+a_{4}(\phi_{1}^{\iota}l_{1}^{\nu}\triangle l_{2}-\phi_{2}^{ \iota}l_{2}^{\nu}\triangle l_{2})\\ &+a_{5}(l_{1}^{\nu}n_{1}\phi_{1}^{3\iota}H(u_{1})-l_{2}^{\nu}n_{2} \phi_{2}^{3\iota}H(u_{2}))\\ &+a_{6}(l_{1}^{\nu+1}\phi_{1}^{-\iota}\mathrm{div}\psi_{1}-l_{2}^{\nu+ 1}\phi_{2}^{-\iota}\mathrm{div}\psi_{2})+\Theta(\phi_{1},l_{1},\psi_{1})- \Theta(\phi_{2},l_{2},\psi_{2}),\\ &\bar{h}_{t}+u_{1}\cdot\nabla\bar{h}+\bar{u}\cdot\nabla h_{2}+(\delta-1 )(\bar{h}\mathrm{div}u_{2}+h_{1}\mathrm{div}\bar{u})=0,\\ &\bar{\psi}_{t}+\sum_{k=1}^{3}A_{k}(u_{1})\partial_{k}\bar{\psi}+B(u_{1 })\bar{\psi}+a\delta(\bar{h}\nabla\mathrm{div}u_{2}+h_{1}\nabla\mathrm{div} \bar{u})\\ =&-\sum_{k=1}^{3}A_{k}(\bar{u})\partial_{k}\psi_{2}-B(\bar{u}) \psi_{2},\\ &(\bar{\phi},\bar{u},\bar{l},\bar{h},\bar{\psi})|_{t=0}=(0,0,0,0,0) \quad\text{in}\quad\mathbb{R}^{3},\\ &(\bar{\phi},\bar{u},\bar{l},\bar{h},\bar{\psi})\longrightarrow(0,0,0,0,0) \quad\text{as}\ \ |x|\rightarrow\infty\quad\text{for}\quad\mathrm{t}\geq 0.\end{split}\right.\end{split} \tag{3.181}\]
Set
\[\begin{split}\Phi(t)=&\|\bar{\phi}\|_{2}^{2}+\|\bar{ \psi}\|_{1}^{2}+|h_{1}^{\frac{1}{4}}l_{1}^{\frac{\nu}{2}}\nabla\bar{l}|_{2}^{2}+|h _{1}^{-\frac{1}{4}}\bar{l}_{t}|_{2}^{2}+|l_{1}^{-\frac{\nu}{2}}\bar{u}|_{2}^{2} \\ &+a_{2}\alpha|\sqrt{h_{1}}\nabla\bar{u}|_{2}^{2}+|l_{1}^{-\frac{ \nu}{2}}\bar{u}_{t}|_{2}^{2}.\end{split}\]
In a similar way as for (3.175), one can show that
\[\frac{d}{dt}\Phi(t)+C\big(|h_{1}^{-\frac{1}{4}}\bar{l}_{t}|_{2}^{2}+|h_{1}^{\frac{1}{4}}l_{1}^{\frac{\nu}{2}}\nabla\bar{l}_{t}|_{2}^{2}+|\nabla\bar{u}|_{2}^{2}+|l_{1}^{-\frac{\nu}{2}}\bar{u}_{t}|_{2}^{2}+|\sqrt{h_{1}}\nabla\bar{u}_{t}|_{2}^{2}\big)\leq H(t)\Phi(t),\]
with a continuous function \(H(t)\) satisfying
\[\int_{0}^{t}H(s)\ \mathrm{d}s\leq C\quad\text{for}\quad 0\leq t\leq T_{*}.\]
It follows from Gronwall's inequality that
\[\bar{\phi}=\bar{l}=0\quad\text{and}\quad\bar{\psi}=\bar{u}=0,\]
which shows the uniqueness.
**Step 3.** The time-continuity follows from the same arguments as in Lemma 3.1.
Thus the proof of Theorem 3.1 is completed.
### Limit to the flow with far field vacuum
Based on the uniform estimates (3.137), we are ready to prove Theorem 2.1.
Proof.: **Step 1:** The locally uniform positivity of \(\phi\). For any \(\eta\in(0,1)\), set
\[\phi_{0}^{\eta}=\phi_{0}+\eta,\ \ \psi_{0}^{\eta}=\frac{a\delta}{\delta-1} \nabla(\phi_{0}+\eta)^{2\iota},\ \ h_{0}^{\eta}=(\phi_{0}+\eta)^{2\iota}.\]
Then the corresponding initial compatibility conditions can be written as
\[\begin{split}&\nabla u_{0}=(\phi_{0}+\eta)^{-\iota}g_{1}^{\eta},\ \ Lu_{0}=(\phi_{0}+\eta)^{-2\iota}g_{2}^{\eta},\\ &\nabla((\phi_{0}+\eta)^{2\iota}Lu_{0})=(\phi_{0}+\eta)^{-\iota} g_{3}^{\eta},\ \ \nabla l_{0}=(\phi_{0}+\eta)^{-\frac{\iota}{2}}g_{4}^{\eta},\\ &\triangle l_{0}=(\phi_{0}+\eta)^{-\frac{3}{2}\iota}g_{5}^{\eta},\ \ \nabla((\phi_{0}+\eta)^{\iota}\triangle l_{0})=(\phi_{0}+\eta)^{-\frac{3}{2} \iota}g_{6}^{\eta},\end{split} \tag{3.182}\]
where \(g_{i}^{\eta}\ (i=1,\dots,6)\) are given as
\[\begin{cases}g_{1}^{\eta}=\frac{\phi_{0}^{-\iota}}{(\phi_{0}+\eta)^{-\iota}} g_{1},\ \ g_{2}^{\eta}=\frac{\phi_{0}^{-2\iota}}{(\phi_{0}+\eta)^{-2\iota}}g_{2},\\ g_{3}^{\eta}=\frac{\phi_{0}^{-3\iota}}{(\phi_{0}+\eta)^{-3\iota}}(g_{3}- \frac{\eta\nabla\phi_{0}^{2\iota}}{\phi_{0}+\eta}\phi_{0}^{\iota}Lu_{0}),\\ g_{4}^{\eta}=\frac{\phi_{0}^{-\frac{\iota}{2}}}{(\phi_{0}+\eta)^{-\frac{ \iota}{2}}}g_{4},\ \ g_{5}^{\eta}=\frac{\phi_{0}^{-\frac{3}{2}\iota}}{(\phi_{0}+\eta)^{-\frac{3}{ 2}\iota}}g_{5},\\ g_{6}^{\eta}=\frac{\phi_{0}^{-\frac{5}{2}\iota}}{(\phi_{0}+\eta)^{-\frac{5} {2}\iota}}(g_{6}-\frac{\eta\nabla\phi_{0}^{\iota}}{\phi_{0}+\eta}\phi_{0}^{ \frac{3}{2}\iota}\triangle l_{0}).\end{cases}\]
It follows from (2.7)-(2.8) that there exists an \(\eta_{1}>0\) such that if \(0<\eta<\eta_{1}\), then
\[\begin{split}& 2+\eta+\bar{l}+\|\phi_{0}^{\eta}-\eta\|_{D_{1}^{1}\cap D^{3}}+\|u_{0}\|_{3}+\|\nabla h_{0}^{\eta}\|_{L^{q}\cap D^{1,3}\cap D^{2}}\\ &+|(h_{0}^{\eta})^{\frac{1}{4}}\nabla^{3}h_{0}^{\eta}|_{2}+\|\nabla(h_{0}^{\eta})^{\frac{3}{4}}\|_{D_{1}^{4}}+|\nabla(h_{0}^{\eta})^{\frac{3}{8}}|_{4}+|(h_{0}^{\eta})^{-1}|_{\infty}+|g_{1}^{\eta}|_{2}\\ &+|g_{2}^{\eta}|_{2}+|g_{3}^{\eta}|_{2}+|g_{4}^{\eta}|_{2}+|g_{5}^{\eta}|_{2}+|g_{6}^{\eta}|_{2}+\|l_{0}-\bar{l}\|_{D_{1}^{1}\cap D^{3}}+|l_{0}^{-1}|_{\infty}\leq\bar{c}_{0},\end{split} \tag{3.183}\]
where \(\bar{c}_{0}\) is a positive constant independent of \(\eta\). Therefore, it follows from Theorem 3.1 that for the initial data \((\phi_{0}^{\eta},u_{0}^{\eta},l_{0}^{\eta},\psi_{0}^{\eta})\), the problem (3.144) admits a unique strong solution \((\phi^{\eta},u^{\eta},l^{\eta},\psi^{\eta})\) in \([0,T_{*}]\times\mathbb{R}^{3}\) satisfying the local estimate in (3.137) with \(c_{0}\) replaced by \(\bar{c}_{0}\), and the life span \(T_{*}\) is also independent of \(\eta\).
Moreover, \(\phi^{\eta}\) is positive locally (independent of \(\eta\)) as shown below.
**Lemma 3.15**.: _For any \(R_{0}>0\) and \(\eta\in(0,1]\), there exists a constant \(a_{R_{0}}\) independent of \(\eta\) such that_
\[\phi^{\eta}(t,x)\geq a_{R_{0}}>0,\quad\forall(t,x)\in[0,T_{*}]\times B_{R_{0}}. \tag{3.184}\]
The proof follows from the same argument as for Lemma 3.9 in [54].
**Step 2:** Taking the limit \(\eta\to 0\). It follows from the uniform estimates in (3.137), Lemma 3.15 and Lemma 4.2 that for any \(R>0\), there exists a subsequence of solutions (still denoted by) \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta})\) such that as \(\eta\to 0\), the convergences in (3.139)-(3.140) hold with \((\phi^{\epsilon,\eta},u^{\epsilon,\eta},l^{\epsilon,\eta},h^{\epsilon,\eta},\psi^{\epsilon,\eta})\) replaced by \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta},\psi^{\eta})\), and \((\phi^{\eta},u^{\eta},l^{\eta},h^{\eta},\psi^{\eta})\) replaced by \((\phi,u,l,h,\psi)\). Then by the lower semi-continuity of weak convergences, \((\phi,u,l,\psi)\) satisfies the estimates in (3.137) except the weighted ones on \((u,l)\).
Moreover, one can verify that:
\[h=\phi^{2\iota},\quad\psi=\frac{a\delta}{\delta-1}\nabla h=\frac{a\delta}{ \delta-1}\nabla\phi^{2\iota}, \tag{3.185}\]
by the same argument as the proof of (3.178).
Furthermore, one has
\[\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}(h^{\eta}\nabla^{2}u^{\eta}-h\nabla^{2}u)X\mathrm{d}x\mathrm{d}t=\int_{0}^{T_{*}}\int_{\mathbb{R}^{3}}\big((h^{\eta}-h)\nabla^{2}u^{\eta}+h(\nabla^{2}u^{\eta}-\nabla^{2}u)\big)X\mathrm{d}x\mathrm{d}t\]
for any \(X(t,x)\in C^{\infty}_{c}([0,T_{*}]\times\mathbb{R}^{3})\), which, along with Lemma 3.15, yields that
\[h^{\eta}\nabla^{2}u^{\eta}\rightharpoonup h\nabla^{2}u\ \ \text{ weakly}^{*}\ \ \text{in}\ \ L^{\infty}([0,T_{*}];L^{2}). \tag{3.186}\]
Then by the lower semi-continuity of norms, one has the boundedness of \(h\nabla^{2}u\) in \(L^{\infty}([0,T_{*}];L^{2})\). Similarly, one can also obtain the other desired weighted estimates in (3.137) on \((u,l)\). Furthermore, \((\phi,u,l,\psi)\) is a weak solution to the Cauchy problem (2.2)-(2.6) in the sense of distributions.
**Step 3.** The uniqueness follows from the same argument as for Theorem 3.1.
**Step 4:** Time continuity. First, the time continuity of \((\phi,\psi)\) can be obtained by a similar argument as for Lemma 3.1.
Next, note that (3.137) and the Sobolev embedding theorem imply that
\[u\in C([0,T_{*}];H^{2})\cap C([0,T_{*}];\text{weak-}H^{3})\quad\text{and} \quad\phi^{\iota}\nabla u\in C([0,T_{*}];L^{2}). \tag{3.187}\]
It then follows from (2.2)\({}_{2}\) that
\[\phi^{-2\iota}u_{t}\in L^{2}([0,T_{*}];H^{2}),\quad(\phi^{-2\iota}u_{t})_{t} \in L^{2}([0,T_{*}];L^{2}),\]
which implies that \(\phi^{-2\iota}u_{t}\in C([0,T_{*}];H^{1})\). This and the regularity estimates for
\[a_{2}Lu=-l^{-\nu}\phi^{-2\iota}(u_{t}+u\cdot\nabla u+a_{1}\phi\nabla l+l\nabla \phi-a_{2}\phi^{2\iota}\nabla l^{\nu}\cdot Q(u)-a_{3}l^{\nu}\psi\cdot Q(u))\]
show that \(u\in C([0,T_{*}];H^{3})\) immediately.
Moreover, since
\[\phi^{2\iota}\nabla^{2}u\in L^{\infty}([0,T_{*}];H^{1})\cap L^{2}([0,T_{*}];D ^{2})\quad\text{and}\quad(\phi^{2\iota}\nabla^{2}u)_{t}\in L^{2}([0,T_{*}];L^{ 2}),\]
the classical Sobolev embedding theorem implies that
\[\phi^{2\iota}\nabla^{2}u\in C([0,T_{*}];H^{1}).\]
Then the time continuity of \(u_{t}\) follows easily.
Similarly, (3.137) and the Sobolev embedding theorem imply that
\[\begin{split}&\nabla l\in C([0,T_{*}];H^{1})\cap C([0,T_{*}]; \text{weak-}H^{2}),\quad\phi^{\frac{1}{2}\iota}\nabla l\in C([0,T_{*}];L^{2}),\\ & l^{-\nu}\phi^{-2\iota}l_{t}\in L^{2}([0,T_{*}];H^{2}),\quad(l^{ -\nu}\phi^{-2\iota}l_{t})_{t}\in L^{2}([0,T_{*}];L^{2}),\end{split} \tag{3.188}\]
which implies \(l^{-\nu}\phi^{-2\iota}l_{t}\in C([0,T_{*}];H^{1})\). This and the regularity estimates for
\[-a_{4}\triangle l=\phi^{-\iota}l^{-\nu}\big{(}-\phi^{-\iota}(l_{t}+u\cdot \nabla l)+a_{5}l^{\nu}n\phi^{3\iota}H(u)+a_{6}l^{\nu+1}\phi^{-\iota}\text{div} \psi+\Theta(\phi,l,\psi)\big{)}\]
show that \(l-\bar{l}\in C([0,T_{*}];D_{*}^{1}\cap D^{3})\) immediately. Then the time continuity of \(l_{t}\) follows easily. Thus (2.9) holds.
In summary, \((\phi,u,l,\psi)\) is the unique strong solution in \([0,T_{*}]\times\mathbb{R}^{3}\) to the Cauchy problem (2.2)-(2.6). Hence the proof of Theorem 2.1 is complete.
### The proof of Theorem 1.1
Now we are ready to establish the local-in-time well-posedness of regular solutions stated in Theorem 1.1 to the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11).
Proof.: **Step 1.** It follows from the initial assumptions (1.18)-(1.19) and Theorem 2.1 that there exists a time \(T_{*}>0\) such that the problem (2.2)-(2.6) has a unique strong solution \((\phi,u,l,\psi)\) satisfying the regularity (2.9), which implies that
\[\phi\in C^{1}([0,T_{*}]\times\mathbb{R}^{3}),\ \ (u,\nabla u)\in C([0,T_{*}] \times\mathbb{R}^{3}),\ \ (l,\nabla l)\in C([0,T_{*}]\times\mathbb{R}^{3}).\]
Set \(\rho=(\frac{\gamma-1}{A\gamma}\phi)^{\frac{1}{\gamma-1}}\) with \(\rho(0,x)=\rho_{0}\). It follows from the relations between \((\varphi,\psi)\) and \(\phi\) that
\[\varphi=a\rho^{1-\delta},\ \ \psi=\frac{\delta}{\delta-1}\nabla\rho^{\delta-1}.\]
Then multiplying (2.2)\({}_{1}\) by \(\frac{\partial\rho}{\partial\phi}\), (2.2)\({}_{2}\) by \(\rho\), and (2.2)\({}_{3}\) by \(Ac_{v}\Big{(}\frac{A\gamma}{\gamma-1}\Big{)}^{\iota}\rho^{\gamma-\frac{1-\delta}{2}}\) respectively shows that the equations in (1.8) are satisfied.
Hence, we have shown that the triple \((\rho,u,S)\) satisfies the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11) in the sense of distributions and enjoys the regularities in Definition 1.1. Moreover, it follows from the continuity equation that \(\rho(t,x)>0\) for \((t,x)\in[0,T_{*}]\times\mathbb{R}^{3}\). In summary, the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11) has a unique regular solution \((\rho,u,S)\).
**Step 2.** Now we show that the regular solution obtained above is also a classical one to the problem (1.1)-(1.3) with (1.6) and (1.10)-(1.11) within its life span.
First, according to the regularities of \((\rho,u,S)\) and the fact that
\[\rho(t,x)>0\quad\text{for}\quad(t,x)\in[0,T_{*}]\times\mathbb{R}^{3},\]
one can obtain
\[(\rho,\nabla\rho,\rho_{t},u,\nabla u,S,\nabla S)\in C([0,T_{*}]\times\mathbb{ R}^{3}).\]
Second, by the Sobolev embedding theorem:
\[L^{2}([0,T_{*}];H^{1})\cap W^{1,2}([0,T_{*}];H^{-1})\hookrightarrow C([0,T_{*}];L ^{2}), \tag{3.189}\]
and the regularity (1.20), one gets that
\[tu_{t}\in C([0,T_{*}];H^{2}),\ \ \text{and}\ \ u_{t}\in C([\tau,T_{*}]\times\mathbb{R}^{3})\ \ \text{for any}\ \ \tau\in(0,T_{*}].\]
Next, note that the following elliptic system holds
\[a_{2}Lu= -l^{-\nu}\phi^{-2\iota}(u_{t}+u\cdot\nabla u+a_{1}\phi\nabla l+l \nabla\phi-a_{2}\phi^{2\iota}\nabla l^{\nu}\cdot Q(u)\] \[-a_{3}l^{\nu}\psi\cdot Q(u))\equiv l^{-\nu}\phi^{-2\iota}\mathbb{ M}.\]
It follows from the definition of regular solutions and (1.20) directly that
\[tl^{-\nu}\phi^{-2\iota}\mathbb{M}\in L^{\infty}([0,T_{*}];H^{2}),\]
and
\[(tl^{-\nu}\phi^{-2\iota}\mathbb{M})_{t}= l^{-\nu}\phi^{-2\iota}\mathbb{M}+t(l^{-\nu})_{t}\phi^{-2\iota} \mathbb{M}+tl^{-\nu}(\phi^{-2\iota})_{t}\mathbb{M}\] \[+tl^{-\nu}\phi^{-2\iota}\mathbb{M}_{t}\in L^{2}([0,T_{*}];L^{2}),\]
which, along with the Sobolev embedding theorem:
\[L^{\infty}([0,T_{*}];H^{1})\cap W^{1,2}([0,T_{*}];H^{-1})\hookrightarrow C([0,T _{*}];L^{r}), \tag{3.190}\]
for any \(r\in[2,6)\), yields that
\[tl^{-\nu}\phi^{-2\iota}\mathbb{M}\in C([0,T_{*}];W^{1,4}),\ \ t\nabla^{2}u\in C([0,T_{*}];W^{1,4}).\]
These and the standard elliptic regularity theory yield that \(\nabla^{2}u\in C((0,T_{*}]\times\mathbb{R}^{3})\).
Moreover, it follows from the regularities of \(S_{t}\) and (3.190) that
\[tS_{t}\in C([0,T_{*}];W^{1,4}_{loc}),\]
which, along with \(\theta=AR^{-1}\rho^{\gamma-1}e^{\frac{S}{cv}}\), yields that
\[S_{t}\in C((0,T_{*}]\times\mathbb{R}^{3})\ \ \ \text{and}\ \ \ \theta_{t}\in C((0,T_{*}]\times \mathbb{R}^{3}).\]
Finally, it remains to show that \(\nabla^{2}\theta\in C((0,T_{*}]\times\mathbb{R}^{3})\). It follows from the (1.1)\({}_{3}\), (1.2) and (1.6) that
\[c_{v}\rho(\theta_{t}+u\cdot\nabla\theta)+P\text{div}u=\nabla u:\mathbb{T}+\frac{\bar{\kappa}}{\nu+1}\triangle\theta^{\nu+1},\]
which implies that
\[\frac{\bar{\kappa}}{\nu+1}\triangle\theta^{\nu+1}=c_{v}\rho(\theta_{t}+u\cdot\nabla\theta)+P\text{div}u-\nabla u:\mathbb{T}=:\Lambda. \tag{3.191}\]
It follows from \(\theta=\frac{\gamma-1}{R\gamma}\phi l\) and direct calculations that
\[t\Lambda\in L^{\infty}([0,T_{*}];H^{2}),\ \ \ (t\Lambda)_{t}\in L^{2}([0,T_{*}];H^{1}). \tag{3.192}\]
Then it follows from (3.190) that
\[t\Lambda\in C([0,T_{*}];W^{1,4}),\]
which, together with Lemma 4.3 and (3.191), shows that \(\nabla^{2}\theta\in C((0,T_{*}]\times\mathbb{R}^{3})\).
**Step 3.** We finally show that if \(m(0)<\infty\), then \((\rho,u,S)\) conserves \((m(t),\mathbb{P}(t),E(t))\). First, we show that \((m(t),\mathbb{P}(t),E(t))\) are all finite.
**Lemma 3.16**.: _Under the additional assumption \(0<m(0)<\infty\), it holds that_
\[m(t)+|\mathbb{P}(t)|+E(t)<\infty\ \ \ \text{for}\ \ \ t\in[0,T_{*}].\]
This lemma can be proved by the same argument used in Lemma 3.13 of [12].
Now we prove the conservation of total mass, momentum and total energy.
**Lemma 3.17**.: _Under the additional assumption \(0<m(0)<\infty\), it holds that_
\[m(t)=m(0),\quad\mathbb{P}(t)=\mathbb{P}(0),\quad E(t)=E(0)\quad\text{for}\quad t \in[0,T_{*}].\]
Proof.: First, (1.1)\({}_{2}\) and the regularity of the solution imply that
\[\mathbb{P}_{t}=-\int\mathrm{div}(\rho u\otimes u)-\int\nabla P+\int\mathrm{ div}\mathbb{T}=0, \tag{3.193}\]
where one has used the fact that
\[\rho u^{(i)}u^{(j)},\quad\rho^{\gamma}e^{\frac{S}{c_{v}}}\quad\text{and}\quad \rho^{\delta}e^{\frac{S}{c_{v}}\nu}\nabla u\in W^{1,1}(\mathbb{R}^{3})\quad \text{for}\quad i,\ j=1,\ 2,\ 3.\]
Second, the energy equation (1.1)\({}_{3}\) implies that
\[E_{t}= -\int\mathrm{div}(\rho\mathcal{E}u+Pu-u\mathbb{T}-\kappa(\theta) \nabla\theta)=0, \tag{3.194}\]
where the following facts have been used:
\[\frac{1}{2}\rho|u|^{2}u,\quad\rho^{\gamma}e^{\frac{S}{c_{v}}}u,\quad\rho^{ \delta}e^{\frac{S}{c_{v}}\nu}u\nabla u\quad\text{and}\quad\rho^{\delta}e^{ \frac{S}{c_{v}}\nu}\nabla(\rho^{\gamma-1}e^{\frac{S}{c_{v}}})\in W^{1,1}( \mathbb{R}^{3}).\]
Similarly, one can show the conservation of the total mass.
Hence the proof of Theorem 1.1 is complete.
## 4. Remarks on the asymptotic behavior of \(u\)
First, one considers the non-existence of global-in-time solutions stated in Theorem 1.2.
Let \(T>0\) be any constant, and \((\rho,u,\theta)\in D(T)\). It follows from the definitions of \(m(t)\), \(\mathbb{P}(t)\) and \(E_{k}(t)\) that
\[|\mathbb{P}(t)|\leq\int\rho(t,x)|u(t,x)|\leq\sqrt{2m(t)E_{k}(t)},\]
which, together with the definition of the solution class \(D(T)\), implies that
\[0<\frac{|\mathbb{P}(0)|^{2}}{2m(0)}\leq E_{k}(t)\leq\frac{1}{2}m(0)|u(t)|_{ \infty}^{2}\quad\text{for}\quad t\in[0,T].\]
Then one obtains that there exists a positive constant \(C_{u}=\frac{|\mathbb{P}(0)|}{m(0)}\) such that
\[|u(t)|_{\infty}\geq C_{u}\quad\text{for}\quad t\in[0,T].\]
Thus one obtains the desired conclusion in Theorem 1.2.
Consequently, one can prove Corollary 1.1 as follows.
Let \((\rho,u,S)\) be the regular solution to the Cauchy problem (1.8) with (1.2) and (1.9)-(1.11) in \([0,T]\times\mathbb{R}^{3}\) obtained in Theorem 1.1. It follows from Theorem 1.1 that \((\rho,u,\theta=AR^{-1}\rho^{\gamma-1}e^{S/c_{v}})\) is a classical solution to the Cauchy problem (1.1)-(1.3) with (1.6) and (1.10)-(1.11) in \([0,T]\times\mathbb{R}^{3}\), and also preserves the conservation of \((m(t),\mathbb{P}(t),E(t))\). Then one has \((\rho,u,\theta)\in D(T)\), which, along with Theorem 1.2, yields Corollary 1.1.
## Appendix: some basic lemmas
For the convenience of readers, we list some basic facts which have been used frequently in this paper.
The first one is the well-known Gagliardo-Nirenberg inequality.
**Lemma 4.1**.: _[_28_]_ _Assume that \(f\in L^{q_{1}}\cap D^{i,r}(\mathbb{R}^{d})\) for \(1\leq q_{1},r\leq\infty\). Suppose also that real numbers \(\Xi\) and \(q_{2}\), and natural numbers \(i\) and \(j\) satisfy_
\[\frac{1}{q_{2}}=\frac{j}{d}+\left(\frac{1}{r}-\frac{i}{d}\right)\Xi+\frac{1- \Xi}{q_{1}}\quad\text{and}\quad\frac{j}{i}\leq\Xi\leq 1.\]
_Then \(f\in D^{j,q_{2}}(\mathbb{R}^{d})\), and there exists a constant \(C\) depending only on \(i\), \(d\), \(j\), \(q_{1}\), \(r\) and \(\Xi\) such that_
\[\|\nabla^{j}f\|_{L^{q_{2}}}\leq C\|\nabla^{i}f\|_{L^{r}}^{\Xi}\|f\|_{L^{q_{1}} }^{1-\Xi}. \tag{4.1}\]
_Moreover, if \(j=0\), \(ir<d\) and \(q_{1}=\infty\), then it is necessary to make the additional assumption that either \(f\) tends to zero at infinity or that \(f\) lies in \(L^{s}(\mathbb{R}^{d})\) for some finite \(s>0\); if \(1<r<\infty\) and \(i-j-d/r\) is a non-negative integer, then it is necessary to assume also that \(\Xi\neq 1\)._
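As a concrete illustration of (4.1) (a worked example of our own, not part of the cited lemma): taking \(d=3\), \(j=0\), \(i=1\), \(r=q_{1}=2\) and \(\Xi=\frac{3}{4}\) gives

\[\frac{1}{q_{2}}=\Big(\frac{1}{2}-\frac{1}{3}\Big)\cdot\frac{3}{4}+\frac{1}{4}\cdot\frac{1}{2}=\frac{1}{4},\]

so that \(q_{2}=4\) and (4.1) reduces to the familiar three-dimensional interpolation inequality

\[|f|_{4}\leq C|\nabla f|_{2}^{\frac{3}{4}}|f|_{2}^{\frac{1}{4}},\]

of the type used in the a priori estimates above.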
The second lemma is on compactness theory obtained via the Aubin-Lions Lemma.
**Lemma 4.2**.: _[_46_]_ _Let \(X_{0}\subset X\subset X_{1}\) be three Banach spaces. Suppose that \(X_{0}\) is compactly embedded in \(X\) and \(X\) is continuously embedded in \(X_{1}\). Then the following statements hold._
* _If_ \(J\) _is bounded in_ \(L^{r}([0,T];X_{0})\) _for_ \(1\leq r<+\infty\)_, and_ \(\frac{\partial J}{\partial t}\) _is bounded in_ \(L^{1}([0,T];X_{1})\)_, then_ \(J\) _is relatively compact in_ \(L^{r}([0,T];X)\)_;_
* _If_ \(J\) _is bounded in_ \(L^{\infty}([0,T];X_{0})\) _and_ \(\frac{\partial J}{\partial t}\) _is bounded in_ \(L^{r}([0,T];X_{1})\) _for_ \(r>1\)_, then_ \(J\) _is relatively compact in_ \(C([0,T];X)\)_._
Finally, one needs the following regularity theory for
\[-\alpha\triangle u-(\alpha+\beta)\nabla\mathrm{div}u=Lu=F,\quad u\to 0 \quad\text{as}\quad|x|\to\infty. \tag{4.2}\]
**Lemma 4.3**.: _[_47_]_ _If \(u\in D_{*}^{1,r}(\mathbb{R}^{3})\) with \(1<r<+\infty\) is a weak solution to (4.2), then_
\[|u|_{D^{k+2,r}}\leq C|F|_{D^{k,r}},\]
_where \(C\) depends only on \(\alpha\), \(\beta\) and \(r\)._
The proof can be found in [47].
**Acknowledgement:** This research is partially supported by National Key R&D Program of China (No. 2022YFA1007300), the Fundamental Research Funds for the Central Universities, Zheng Ge Ru Foundation, Hong Kong RGC Earmarked Research Grants CUHK-14301421, CUHK-14300917, CUHK-14302819, and CUHK-14300819. Duan's research is also supported in part by National Natural Science Foundation of China under Grant 12271369. Xin's research is also supported in part by the key project of National Natural Science Foundation of China (No.12131010) and Guangdong Province Basic and Applied Basic Research Foundation 2020B1515310002. Zhu's research is also supported in part by National Natural Science Foundation of China under Grants 12101395 and 12161141004, The Royal Society-Newton International Fellowships Alumni AL/201021 and AL/211005.
**Conflict of Interest:** The authors declare that they have no conflict of interest. The authors also declare that this manuscript has not been previously published, and will not be submitted elsewhere before your decision.
**Data availability:** Data sharing is not applicable to this article as no data sets were generated or analysed during the current study.
|
2304.07047 | Near Field iToF LIDAR Depth Improvement from Limited Number of Shots | Indirect Time of Flight LiDARs can indirectly calculate the scene's depth
from the phase shift angle between transmitted and received laser signals with
amplitudes modulated at a predefined frequency. Unfortunately, this method
generates ambiguity in calculated depth when the phase shift angle value
exceeds $2\pi$. Current state-of-the-art methods use raw samples generated
using two distinct modulation frequencies to overcome this ambiguity problem.
However, this comes at the cost of increasing laser components' stress and
raising their temperature, which reduces their lifetime and increases power
consumption. In our work, we study two different methods to recover the entire
depth range of the LiDAR using fewer raw data sample shots from a single
modulation frequency with the support of the sensor's gray scale output to reduce
the laser components' stress and power consumption. | Mena Nagiub, Thorsten Beuth, Ganesh Sistu, Heinrich Gotzig, Ciarán Eising | 2023-04-14T10:44:59Z | http://arxiv.org/abs/2304.07047v2 |

# Near Field iToF LIDAR Depth Improvement from Limited Number of Shots
###### Abstract
Indirect Time of Flight LiDARs can indirectly calculate the scene's depth from the phase shift angle between transmitted and received laser signals with amplitudes modulated at a predefined frequency. Unfortunately, this method generates ambiguity in the calculated depth when the phase shift angle value exceeds \(2\pi\). Current state-of-the-art methods use raw samples generated using two distinct modulation frequencies to overcome this ambiguity problem. However, this comes at the cost of increasing laser components' stress and raising their temperature, which reduces their lifetime and increases power consumption. In our work, we study two different methods to recover the entire depth range of the LiDAR using fewer raw data sample shots from a single modulation frequency with the support of the sensor's gray scale output, to reduce the laser components' stress and power consumption.
near field, LIDAR, iTOF, depth correction, estimation, ambiguity.
## I Introduction
Almost all major automotive manufacturers and relatively new players in the space, such as Google and Apple, are dedicating significant resources to the development of vehicle automation. For high levels of automation, the vehicles require a detailed 3D understanding of their environment. Conventional radar and ultrasonic sensors have limited resolution (and range, in the case of ultrasound [1]). LiDAR uses LED light pulses coupled with accurate measurements of the reflection reception. Using this "time of flight" (ToF) distancing, the vehicle can extract the geometry of its surroundings.
Near Field LiDAR (NFL) sensors are currently used in many applications, including the automotive domain. In this domain, NFL is required to provide point clouds with a depth range of up to 100 m and precision reaching up to millimeters. One category of these LiDARs is the indirect Time of Flight LiDARs (iToF). There are several classes of iToF LiDARs, one being Amplitude Modulated Continuous Wave (AMCW), where the amplitude of the laser signal is modulated using a carrier modulation frequency (usually between 1 MHz and 24 MHz). Usually, the light source amplitude is modulated into a square waveform with modulation frequency \(f\) and modulation period \(T\). Then, the receiving sensor demodulates the received signal. The sensing elements are typically light-sensitive CMOS pixels, so building enough charge from the reflected light signal requires some time, called the exposure time or integration time.
AMCW iToF LiDARs indirectly calculate the per-pixel distance from the phase shift angle between the transmitted and received signals. However, because the laser signal amplitude is modulated with periodic waves, the phase shift angle wraps when it exceeds \(2\pi\). This phase wrapping leads to a problem called range ambiguity, as illustrated in Figure 1.
The phase difference is calculated using cross-correlation between the received and transmitted signals, then transformed into the distance traveled. This method applies a feature extraction function to the received signal to search for the phase shift. The cross-correlation between the two signals generates a pattern function. Sampling the amplitude of the pattern function at equal steps over one period (\(2\pi\)), for example, \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), and \(270^{\circ}\), can give enough support points to calculate the phase shift difference angle (\(\varphi\)).
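As a concrete illustration of this sampling step, the following sketch (our own; the exact pairing of the four samples with the sine/cosine quadrants is an assumption, since conventions differ between sensors) computes the phase from four equally spaced samples and converts it to depth:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def phase_from_4_samples(s0, s90, s180, s270):
    # Four-bucket demodulation: correlation samples taken at 0, 90, 180
    # and 270 degrees; arctan2 recovers the angle over the full [0, 2*pi).
    return np.arctan2(s270 - s90, s0 - s180) % (2 * np.pi)

def phase_to_depth(phi, f_mod):
    # Light travels the distance twice, so d = c * phi / (4 * pi * f_mod);
    # at f_mod = 24 MHz the unambiguous range c / (2 * f_mod) is 6.25 m.
    return C * phi / (4 * np.pi * f_mod)
```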
Previous work has been done in this area to solve the phase wrapping problem using computer vision and deep learning methods. Su et al. [2] focused on solving the phase wrapping problem using deep convolutional layers applied directly to the raw samples. However, they used four raw data samples to predict the corresponding depth map. They proposed an AutoEncoder architecture based on ResNet18, trained in a GANs configuration to solve the multi-path interference and ambiguity problems together. Their solution does not consider reducing the number of shots required for each frame, and it unwraps the phase only up to two cycles. Spoorthi et al. [3] introduced a method similar to that of Su et al. to achieve phase unwrapping through deep learning, but for Radar signals. Chen et al. [4] focused on solving the power consumption problem by reducing the exposure time. In their work, Agresti et al. [5] focused on using deep learning to solve the multi-path interference problem; they did not consider phase unwrapping but provided an interesting lightweight architecture that can be used to solve the phase wrapping problem as well. To our knowledge, previous research works focused on either solving the noise problems or unwrapping the phase shift only up to a limited number of periods.

Fig. 1: Range ambiguity problem.
The contribution of this paper is based on the fact that iToF sensors can generate a gray scale image of the scene based on the ambient light. It is possible to predict a coarse depth map for the scene using convolutional neural networks from that gray scale image. In this case, we can unwrap the ambiguous range generated by the sensor using fewer raw data samples. This contribution provides several advantages:
* It reduces the number of raw samples to reduce the laser components' activation time and increases its lifetime.
* It reduces the consumed power required by the sensor to generate raw data samples.
* It enables the sensor to see beyond the expected range.
According to our previous study [6], enriching the LiDAR point cloud with support from an additional sensor, such as a camera, is a well-established approach. However, our significant contribution versus the state of the art is that we use this method to reduce the number of LiDAR raw data samples required to build the point cloud.
## II Proposed Method
This research aims to create a method that utilizes computer vision to reduce the laser shots required to generate the final depth map. To achieve this goal, we replace part of these shots with a gray scale image generated by the iToF sensor imager from the ambient light. This feature is available in many iToF sensors. For example, the Texas Instruments sensor OPT8241 [7], which is used by Su et al. [2], can generate a 4-bit ambient image of the scene. In our study, we used our Valeo Near Field LiDAR sensor [8]. This sensor can generate a full-scale gray scale image of the ambient light environment, as illustrated in Figure 2 (first row). In addition, a deep learning-based computer vision model is used to predict additional depth information, which complements the missing details when the number of laser samples is reduced.
### _Principle of Operation_
Figure 3 shows how the sensor creates an accurate depth map from the raw data samples. The sensor captures four raw data samples (Differential Correlation Sample (DCS)) using modulation frequency \(f_{1}\), typically 24 MHz. These four samples are combined to generate the first depth map \(M_{1}\), with an ambiguity range \(d_{1}\). The sensor then captures four additional raw data samples using modulation frequency \(f_{2}\), usually lower than \(f_{1}\), typically 10 MHz, and creates the second depth map \(M_{2}\) with ambiguous depth range \(d_{2}\). Since \(f_{1}\) is higher than \(f_{2}\), then \(d_{2}\) is farther than \(d_{1}\). Therefore, an ambiguous depth map \(M_{1}\) is generated using high modulation frequency \(f_{1}\), resulting in a high-accuracy depth map with a shorter unambiguous range. Figure 2 (second row) shows examples of ambiguous depth maps. On the other hand, an ambiguous depth map \(M_{2}\) is generated using low modulation frequency \(f_{2}\), which results in a low-accuracy depth map but with a very long unambiguous range. Finally, the consistency check algorithm compares the two depth maps. Then, it generates the corrected depth map \(M\) where the range ambiguity is removed up to 100 m according to the method described in Bulczak et al. [9].
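A minimal per-pixel sketch of such a consistency check (a simplified stand-in of our own, not the exact algorithm of Bulczak et al. [9]) searches for the wrap counts that make the two ambiguous readings agree:

```python
def dual_freq_unwrap(d1, d2, u1=6.25, u2=15.0, max_wraps=16):
    # d1, d2: ambiguous per-pixel depths measured at the high (24 MHz) and
    # low (10 MHz) modulation frequencies; u1, u2: their unambiguous ranges.
    # Brute-force search for the wrap counts that make both readings agree.
    best, best_err = d1, float("inf")
    for n1 in range(max_wraps):
        for n2 in range(max_wraps):
            c1, c2 = d1 + n1 * u1, d2 + n2 * u2
            if abs(c1 - c2) < best_err:
                best, best_err = (c1 + c2) / 2.0, abs(c1 - c2)
    return best
```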
To reduce the required number of laser shots, we propose a different method based on the gray scale image generated by the sensor from the ambient light, as illustrated in Figure 4. The implementation of the point merge algorithm depends on the depth correction method, i.e., whether the regression method or the segmentation method is used. In this case, we can replace the second group of 4 shots with the gray scale image, which does not require any laser activity. This replacement reduces the required raw samples from 8 shots to 4. We have also studied the possibility of reducing the first group of shots from 4 to only 2, to reduce the required shots even further. In this case, the phase shift angle is calculated as \(\varphi=\tan^{-1}\left(\frac{S_{2}}{S_{3}}\right)\).
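The corresponding two-shot computation can be sketched as follows (again our own illustration; how much quadrant information survives with only two samples depends on the sensor's differential sampling scheme, which we treat as an assumption):

```python
import numpy as np

def phase_from_2_samples(s2, s3):
    # Two-DCS demodulation, phi = arctan(S2 / S3) as in the text; arctan2
    # keeps whatever quadrant information the signs of the samples carry.
    return np.arctan2(s2, s3) % (2 * np.pi)
```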
Fig. 3: Depth map creation principle of operation.
Fig. 2: Examples of images captured by the Valeo NFL sensor (first row), and equivalent ambiguous depth maps captured at 24 MHz (second row). The maximum unambiguous range is 6.25 m. However, the sensor incorrectly detects objects beyond that range at distances between 0 and 6.25 m. Depth saturation is visible (white) in the center image.
### _Method Architecture_
In our previous paper [6], we illustrated different architectures for depth prediction from monocular cameras or from sensor constellations composed of a synchronized camera and laser sensor. Most of the proposed methods are based on an AutoEncoder architecture following the UNet design. In addition, some of these architectures use LiDAR input to guide the final depth prediction. In this paper, we study two of these commonly proposed architectures.
#### TofRegNet depth regression model
The first architecture is a supervised depth prediction network based on the AutoEncoder architecture. The depth prediction problem is solved as a regression problem, where the input to the model is the gray scale image generated by the sensor, and the final output is the predicted depth map. The ground truth for this model is the corrected depth image calculated by the sensor using the dual frequency method. The encoder is based on a feature extraction block followed by ResNet18 blocks, and the decoder is composed of 4 transposed convolution layers and 4 convolution layers in sequence. The final layer predicts the full depth map. Figure 5 explains the architecture of TofRegNet. The Pointcloud Merge Algorithm (Figure 4) is then used to correct the ambiguous depth map \(M_{1}\) using (1).
\[d_{f}^{i,j}=\left\{\begin{array}{ll}\left(\left\lfloor d_{p}^{i,j}/d_{u}\right\rfloor\times d_{u}\right)+d_{M1}^{i,j},&d_{M1}^{i,j}<d_{sat}\\ d_{p}^{i,j},&d_{M1}^{i,j}\geq d_{sat}\end{array}\right. \tag{1}\]
where \(d_{f}^{i,j}\in D_{f}^{w\times h}\) is the final corrected depth pixel value at coordinates \((i,j)\), \(d_{p}\in D_{p}^{w\times h}\) is the predicted depth pixel value at coordinates \((i,j)\), \(d_{M1}\in D_{M1}^{w\times h}\) is the equivalent ambiguous depth pixel value in the ambiguous depth map \(M_{1}\), \(d_{u}\) is an unambiguous range of the depth map \(M_{1}\) using modulation frequency \(f_{1}\), \(d_{sat}\) is the saturation range of the depth map \(M_{1}\) using modulation frequency \(f_{1}\) (normally defined by the sensor provider), \(D_{f}^{w\times h}\) is the final corrected depth map of size \(w\times h\) pixels, \(D_{p}^{w\times h}\) is the predicted depth map by the model, and \(D_{M1}^{w\times h}\) is ambiguous depth map created using modulation frequency \(f_{1}\).
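A compact sketch of this merge step (our own rendering of Eq. (1), reading the bracket as a floor operation that extracts the wrap count from the predicted depth; the numeric defaults are illustrative):

```python
import numpy as np

def merge_regression(d_pred, d_amb, d_u=6.25, d_sat=6.0):
    # d_pred: coarse depth predicted from the gray scale image (long range)
    # d_amb:  ambiguous single-frequency depth map M1 (precise, wrapped)
    # d_u, d_sat: unambiguous range and saturation range (values illustrative)
    wraps = np.floor(d_pred / d_u)       # wrap count implied by the prediction
    unwrapped = wraps * d_u + d_amb      # precise depth inside the chosen cycle
    return np.where(d_amb < d_sat, unwrapped, d_pred)
```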
The loss function is designed to push the network to learn: 1) the depth prediction of the scene such that it is similar to the ground truth, 2) the elimination of the pixels where depth saturation happens, and 3) the maximum depth range that the sensor can achieve. The first two points can be achieved by penalizing the model when the predicted image differs from the ground truth. For this reason, we have used two loss components. The first component is the scale-invariant loss, and the second is the structural similarity index measurement loss [10]. The scale-invariant loss is used to avoid the problem of depth saturation. Depth saturation can happen due to infinite-depth regions, such as pixels representing the sky or open road, or retro-reflective surfaces reflecting a vast amount of light energy, like metallic traffic signs. The scale-invariant loss addresses this point by penalizing the model if it becomes dependent on the absolute depth of the points, pushing it to predict the pixel depth from the geometric characteristics of the scene in relation to surrounding points. For the third point, we have used a depth-guided loss function, in which we penalize the model more when there is a larger difference between the predicted depth and the ground truth depth. However, rather than doing this for all pixels in the image, we perform it only for the \(N\) pixels with the top depth values in the ground truth image. Using this depth-guided loss function, it is possible to force the model to learn the maximum possible range of the ground truth depth map without affecting the other lower-depth points. (2) describes the details of the loss function.
\[L_{Reg}=\alpha L_{SI}+\beta L_{SSIM}+\gamma L_{DG} \tag{2}\]
\(L_{SI}\) is the scale invariant loss, given as
\[L_{SI}=\frac{1}{n}\sum_{i}^{n}d_{i}^{2}-\frac{1}{n^{2}}\left(\sum_{i}^{n}d_{i }\right)^{2}\]
where \(d\in D_{diff}^{w\times h}\), and \(D_{diff}^{w\times h}=D_{GT}^{w\times h}-D_{P}^{w\times h}\); \(D_{GT}^{w\times h}\) is the ground truth depth map, and \(D_{P}^{w\times h}\) is the predicted depth map. \(L_{DG}\) is the guided depth loss, given as
\[L_{DG}=L_{2}\left(argmax\left(D_{GT}^{w\times h},N\right)-D_{P}^{w\times h} \left(argmax\left(D_{GT}^{w\times h},N\right)\right)\right)\]
where \(N\), the number of pixels to be tested for depth guidance, is defined by a kernel of size \((w/10,h/10)\), and \(w\) and \(h\) are the width and height of the frame, respectively. The size of the kernel is selected empirically. \(L_{SSIM}\) is the well-known structural similarity index measurement loss. \(\alpha\), \(\beta\), and \(\gamma\) are weights selected empirically to be 0.5, 0.4, and 0.1, respectively.

Fig. 4: Our proposed method overview.

Fig. 5: TofRegNet network architecture.
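As a minimal PyTorch sketch of the scale-invariant component defined above (our own code; the SSIM and depth-guided terms would be combined with it using the stated weights):

```python
import torch

def scale_invariant_loss(d_pred: torch.Tensor, d_gt: torch.Tensor) -> torch.Tensor:
    # L_SI = (1/n) * sum(d_i^2) - (1/n^2) * (sum(d_i))^2 with d = d_gt - d_pred;
    # subtracting the squared-mean term makes the loss blind to a global offset.
    d = d_gt - d_pred
    n = d.numel()
    return (d ** 2).sum() / n - d.sum() ** 2 / (n ** 2)
```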
#### TofSegNet depth segmentation model
The second architecture solves the problem as a segmentation problem rather than a regression. This architecture divides the total depth range into equally sized segments, called depth bins. Each depth bin is assigned to a segmentation class, so the size of each depth bin is equivalent to the unambiguous range. The ground truth depth maps are segmented in the same way. Then, rather than training the network to predict the final depth value, it is trained to predict the segmentation class, which is equivalent to the depth bin. The predicted class value is multiplied by the unambiguous range to convert the depth bin back to a final depth value. Then the conversion algorithm is used to match the ambiguous depth map value to the depth class using (3).
\[d_{f}^{i,j}=\left\{\begin{array}{ll}\left(d_{c}^{i,j}\times d_{u}\right)+d_{ M1}^{i,j},&d_{M1}^{i,j}<d_{sat}\\ \left(d_{c}^{i,j}\times d_{u}\right),&d_{M1}^{i,j}\geq d_{sat}\end{array}\right. \tag{3}\]
given that \(d_{f}^{i,j}\in D_{f}^{w\times h}\) as above, and \(d_{M1}^{i,j}\in D_{M1}^{w\times h}\), and where \(d_{c}^{i,j}\in D_{c}^{w\times h}\) is the predicted depth bin at coordinates \((i,j)\), \(d_{u}\) is the unambiguous range according to frequency \(f_{1}\), and \(D_{c}^{w\times h}\) is the map of depth bins predicted by the model, of size \(w\times h\) pixels.
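The analogous sketch for the segmentation variant (our own rendering of Eq. (3); numeric defaults illustrative):

```python
import numpy as np

def merge_segmentation(bin_pred, d_amb, d_u=6.25, d_sat=6.0):
    # bin_pred: integer depth-bin (class) map predicted by the network
    # d_amb:    ambiguous depth map M1, used to refine depth inside the bin
    base = bin_pred * d_u
    return np.where(d_amb < d_sat, base + d_amb, base)
```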
The semantic segmentation model, based on an AutoEncoder architecture, is trained in a supervised fashion. The input to the model is the gray scale image generated by the sensor, and the final output is the predicted depth bin for each pixel. The ground truth for this model is the corrected depth image calculated by the sensor using the dual frequency method, after being segmented into the corresponding depth bins. The encoder is based on ResNet18, followed by a decoder composed of 4 transposed convolution layers and 4 convolution layers in sequence. The final layer predicts the depth bin map; this model is called TofSegNet, and its architecture is described in Figure 6.
The loss function is designed to push the network to learn the correct depth bins; the other two components defined in the previous section are not required, since the segmentation process implicitly eliminates the saturation problem. Thus, the loss function used here is the cross-entropy loss, \(L_{Seg}=L_{CE}\left(D_{GT}^{w\times h},D_{P}^{w\times h}\right)\).
## III Dataset
The dataset used in this paper is generated using Valeo Near Field LiDAR sensors [8]. The dataset comprises 2048 frames created using different objects of different sizes that could be found in typical driving scenarios. Figure 7 shows samples of the ground truth point cloud captured by the sensor using the dual frequency method.
The dataset comprises 2048 samples, each consisting of a gray scale frame, a ground truth depth map generated using the dual frequency method, a depth map generated at the 24 MHz frequency using 4 DCS, and a depth map generated at the 24 MHz frequency using 2 DCS. The dataset is split into three parts, train, validation, and test, using the standard 80%/10%/10% ratios. The split is done using random selection. The dataset is augmented by flipping each frame around the X-axis and Y-axis. In addition, the gray scale images are altered in contrast and brightness.
## IV Experiments and Results
The experiments in this section aim to decide how to reduce the number of shots and which model is most suitable for shot reduction. In addition, we compare the number of parameters required by each model to find which model requires the fewest operations, which translates into lower power consumption. All models are trained on the same dataset, with frames of size 320 x 240. The implementation of the models is done using PyTorch running on an Nvidia RTX 3080. The input gray scale image is 12-bit, while the ground truth depth map is 16-bit. Both are normalized into the value range [0, 1].
The TofRegNet model has been trained using an adaptive learning algorithm with the Adam optimizer, where the learning rate starts at 0.0001 and decays linearly down to 0.00009. Training is over 150 epochs with a batch size of 32. On the other hand, the TofSegNet model has been trained using a constant learning rate at 0.0001 with the Adam optimizer. The model has been trained over 140 epochs and batch size 64.
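The stated TofRegNet optimizer setup can be sketched as follows (our own code; the placeholder module stands in for the actual network, and the linear schedule reproduces the 1e-4 to 9e-5 decay):

```python
import torch

model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)  # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Linear decay of the learning rate from 1e-4 to 9e-5 across the 150 epochs,
# stepping the scheduler once per epoch.
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=0.9, total_iters=150)
```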
In our previous paper [6], we have explained that LiDAR-based methods use specific metrics, following the top papers based on well-known automotive datasets like KITTI and Cityscapes. So, we have measured our models based on these metrics in our development system. The metrics measure the final result of the corrected depth map, so we can consider that all methods at the final stage are regression problems. Table I summarizes the results of the experiments executed on the test dataset for the different models using four raw samples versus two raw samples.

Fig. 6: TofSegNet network architecture.

Fig. 7: Sample of the dataset ground truth depth frames for the samples shown in Figure 2.
A detailed analysis of the results is provided in this section to give visibility into the experiments performed. We compare the results of each model when used to correct the ambiguity problem for ambiguous depth maps created using four raw samples versus maps created using only two raw samples. All the results are illustrated on test frames 0, 40, and 90 to simplify visual comparison. Figure 8 shows the experiments executed using the TofRegNet model versus the TofSegNet model. The experiments are executed to correct ambiguous depth maps created using four raw samples versus maps created using only two raw samples. The figure is split into three sections. The first section has two rows; the first row shows the gray scale images of the three frames. The gray scale images are the same irrespective of the number of raw samples. The second row shows the ground truth depth generated by the sensor. The second section shows the prediction results of the TofRegNet model versus the TofSegNet model. The TofRegNet model predicts the depth as a regressed value, while the TofSegNet model predicts the depth as a segmentation map. Then comes the most interesting part, the third section, which shows how the predicted depth maps are used to correct the ambiguous depth maps. Its first row shows the ambiguous depth maps to be corrected, generated using 4 DCS versus 2 DCS. These depth maps are created using the laser signal modulated at 24 MHz. A closer look at these frames clearly shows the ambiguity problem, where the depth of objects is wrapped every 6.25 m. It also shows that ambiguous depth maps created using 2 DCS suffer from poorer quality and much more noise than maps created using 4 DCS. The last two rows are the most interesting, as they show the correction of the ambiguity problem using depth regression versus depth segmentation. The depth of the objects is unwrapped to cover the whole depth range. As illustrated in the figure, depth maps corrected using regression are more homogeneous and continuous than those corrected using segmentation. On the other hand, the accuracy of individual corrected vertices is higher with segmented depth maps than with regression depth maps. Also, comparing the correction results of the 2 DCS maps versus the 4 DCS maps, the 4 DCS results look less noisy than those using 2 DCS.
## V Conclusion
Based on the results of the two methods, it has been shown that the concept of point cloud correction can achieve promising performance. Comparing the two methods, it is clear that the TofRegNet model gives the best overall accuracy and precision and provides homogeneous depth maps. On the other hand, the TofSegNet model provides accurate and precise individual vertices. In terms of the power reduction functionality, we can safely interpret the results as follows: it is possible to use only two raw data samples to build an accurate and precise depth map with support from the gray scale image. However, the depth map corrected from two raw samples suffers from noise and inaccuracies due to the limited depth information compared to the four raw samples method. This noisy depth stems from the actual noise in the source ambiguous depth map generated using the 24 MHz modulation frequency. This noise explains why the correction algorithm's performance on the two raw samples' depth maps is worse than on the four raw samples' depth maps. Compared to the method developed by Su et al. [2], our methods can predict corrected depth up to the second and third cycles of ambiguity, covering up to 17.5 m. In addition, we can correct the depth using only two raw samples.
|
2304.08374 | Fundamental Sensitivity Limits for non-Hermitian Quantum Sensors | Considering non-Hermitian systems implemented by utilizing enlarged quantum
systems, we determine the fundamental limits for the sensitivity of
non-Hermitian sensors from the perspective of quantum information. We prove
that non-Hermitian sensors do not outperform their Hermitian counterparts
(which directly couple to the parameter) in sensitivity performance, due to
the invariance of the quantum information about the parameter. By scrutinizing
two concrete non-Hermitian sensing proposals, which are implemented using full
quantum systems, we demonstrate that the sensitivity of these sensors is in
agreement with our predictions. Our theory offers a comprehensive and
model-independent framework for understanding the fundamental limits of
non-Hermitian quantum sensors and builds the bridge over the gap between
non-Hermitian physics and quantum metrology. | Wenkui Ding, Xiaoguang Wang, Shu Chen | 2023-04-17T15:38:29Z | http://arxiv.org/abs/2304.08374v3 | # Fundamental Sensitivity Limits for non-Hermitian Quantum Sensors
###### Abstract
Considering non-Hermitian systems implemented by utilizing enlarged quantum systems, we determine the fundamental limits for the sensitivity of non-Hermitian sensors from the perspective of quantum information. We prove that non-Hermitian sensors do not outperform their Hermitian counterparts (directly couple to the parameter) in the performance of sensitivity, due to the invariance of the quantum information about the parameter. By scrutinizing two concrete non-Hermitian sensing proposals, which are implemented using full quantum systems, we demonstrate that the sensitivity of these sensors is in agreement with our predictions. Our theory offers a comprehensive and model-independent framework for understanding the fundamental limits of non-Hermitian quantum sensors and builds the bridge over the gap between non-Hermitian physics and quantum metrology.
_Introduction.-_ Parallel with the rapid development in quantum technology, quantum metrology [1; 2; 3; 4] and quantum sensing [5; 6] have become central focuses in quantum science. Quantum sensors exploit quantum coherence or quantum correlations to detect weak or nanoscale signals and exhibit great advantages in accuracy, repeatability and precision. Recently, a number of sensing proposals utilizing novel properties of non-Hermitian physics [7; 8; 9] have been proposed and experimentally demonstrated. For example, non-Hermitian lattice systems with skin effect [10; 11] or non-reciprocity [12] have been suggested to realize enhanced sensing. Specifically, the divergence of the susceptibility near the exceptional point (EP) is exploited to realize enhanced sensing with arbitrary precision [13; 14; 15; 16] and it has been demonstrated using various classical (quasi-classical) physical systems [17; 18; 19; 20; 21] or quantum systems [22; 23]. While these early experiments claimed enhancements compared to conventional Hermitian sensors, subsequent theoretical work has cast doubt on these results [24; 25; 26; 27; 28], suggesting that the reported enhancements may not have fully taken into account the effects of noise. Actually, the sensitivity or precision is defined in terms of signal-to-noise ratio. After taking into account the noise, some theoretical works show that the enhancement in sensitivity provided by non-Hermitian sensors may disappear [24; 27]. However, other theoretical works have shown that the enhancement can persist even in the presence of noise [25; 26]. While some recent experiments have demonstrated enhanced sensitivity despite the presence of noise [21; 22], others have shown no such enhancement [23]. Currently, the fundamental limitations imposed by noise on non-Hermitian sensors are still a topic of debate [29], and a definitive conclusion on whether non-Hermitian physics is superior for sensing is still elusive.
In sensing schemes that rely on quantum systems, quantum noise always arises during the projective measurement of the parameter-dependent quantum state [30]. This noise originates from quantum mechanics and cannot be eliminated, leading to the fundamental sensitivity limit. Quantum metrology focuses on how to beat the standard quantum limit by employing quantum correlations, like entanglement or squeezing [2]. While non-Hermitian systems can serve as an effective description of open system dynamics in certain situations [8; 31], the decoherence and dissipation in open systems are detrimental to the useful quantum features required for metrology [32; 33; 34; 35; 36]. Therefore, the sensitivity enhancement from non-Hermitian sensors, which can be embedded in open systems, is quite counter-intuitive. Various theoretical works have been devoted to analyze the effect from the noise [24; 25; 26; 27; 28], however, these investigations usually require modeling the effect of noise and calculating the dynamics using tools such as the quantum Langevin equation, for specific sensing schemes and probe states. Here, we provide a general conclusion on the fundamental sensitivity limit from the perspective of quantum information [37], without the requirement to solve intricate non-unitary quantum dynamics and independent of specific noise forms, probe states, and measurement regimes. We unambiguously prove that the non-Hermitian quantum sensors do not surpass the ultimate sensitivity of their Hermitian counterparts and cannot achieve arbitrary precision in realistic experimental settings with finite quantum resources.
_Sensitivity bound for unitary parameter encoding.-_ Quantum metrology or quantum parameter estimation is to estimate the parameter \(\lambda\) from the parameter-dependent quantum state \(\rho_{\lambda}\). One crucial step is to make measurements on the quantum state. The measurement can be described by a Hermitian operator \(\Pi\), and the probability of obtaining the measurement outcome \(\xi\), conditioned on the parameter \(\lambda\), is \(P(\xi|\lambda)=\mathrm{Tr}(\Pi\rho_{\lambda})\). We can evaluate the classical Fisher information \(\mathcal{I}_{\lambda}\) corresponding to this specific measurement as
\(\sum_{\xi}P(\xi|\lambda)\left(\frac{\partial\ln P(\xi|\lambda)}{\partial\lambda}\right)^{2}\), which reflects the amount of information about the parameter contained in the distribution of measurement outcomes. Meanwhile, the estimation uncertainty is given by \(\delta^{2}\lambda=\left\langle\left(\frac{\lambda_{\rm est}}{d\langle\lambda_{\rm est}\rangle/d\lambda}-\lambda\right)^{2}\right\rangle\), where \(\lambda_{\rm est}\) is the estimated value (with finite probes \(N\) and trials \(\nu\)) and \(\lambda\) is the true value of the parameter. For the unbiased estimator, we have \(d\langle\lambda_{\rm est}\rangle/d\lambda=1\). In fact, the classical Fisher information bounds the estimation uncertainty achievable in this specific measurement, which fulfills the so-called Cramer-Rao bound: \(\delta\lambda\geq 1/\sqrt{\nu\mathcal{I}_{\lambda}}\), where \(\nu\) is the number of repetitions or trials. This bound can be attained asymptotically as \(\nu\rightarrow\infty\). When it is optimized over all possible measurements, we can find the maximal value of the classical Fisher information, known as the quantum Fisher information (QFI) [38], \(\mathcal{I}_{\lambda}\leq F_{\lambda}\). Accordingly, the ultimate precision of parameter estimation for a specific parameter-dependent quantum state can be determined using the quantum Cramer-Rao bound [39], \(\delta\lambda\geq 1/\sqrt{\nu F_{\lambda}}\). The QFI [40] can be determined as \(F_{\lambda}={\rm Tr}[\rho_{\lambda}\mathcal{L}^{2}]\), where \(\mathcal{L}\) is the symmetric logarithmic derivative defined by \(\partial\rho_{\lambda}/\partial\lambda=(\mathcal{L}\rho_{\lambda}+\rho_{\lambda}\mathcal{L})/2\).
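As an illustration of these definitions, the classical Fisher information and the resulting Cramer-Rao bound can be evaluated numerically. The following minimal Python sketch uses a toy single-qubit encoding (the Hamiltonian, evolution time, and parameter values are illustrative assumptions, not taken from any of the cited works):

```python
import numpy as np

def classical_fisher(p_of_lambda, lam, eps=1e-6):
    """Classical Fisher information: sum over outcomes xi of
    P(xi|lam) * (d ln P(xi|lam) / d lam)^2, via central finite differences."""
    p = p_of_lambda(lam)
    dp = (p_of_lambda(lam + eps) - p_of_lambda(lam - eps)) / (2 * eps)
    return np.sum(dp**2 / p)

# Toy encoding: a qubit evolving under H = lam * sigma_x for time t and
# measured in the computational basis, so P(0|lam) = cos^2(lam t).
t = 1.0
def outcome_probs(lam):
    return np.array([np.cos(lam * t)**2, np.sin(lam * t)**2])

lam0, nu = 0.3, 1000
I_lam = classical_fisher(outcome_probs, lam0)
print("classical Fisher information:", I_lam)        # analytically 4 t^2
print("Cramer-Rao bound:", 1 / np.sqrt(nu * I_lam))  # delta lambda >= this
```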
Usually, the parameter-dependent quantum state \(\rho_{\lambda}\) is obtained through time evolution governed by the parameter-dependent Hamiltonian \(\hat{H}_{\lambda}(t)\). To be more specific, with the parameter-independent initial state (probe state) \(\rho_{0}\), the parameter encoding process can be described as \(\rho_{\lambda}(t)=U_{\lambda}(0\to t)\rho_{0}U_{\lambda}^{\dagger}(0 \to t)\), where the unitary time evolution operator \(U_{\lambda}(0\to t)=\mathcal{T}e^{-i\int_{0}^{t}\hat{H}_{\lambda}(s) ds}\), with \(\mathcal{T}\) being the time-ordering operator. In the case where the initial state is a pure state, \(\rho(0)=|\Psi_{0}\rangle\langle\Psi_{0}|\), the QFI can be calculated as \(F_{\lambda}(t)=4(\langle\Psi_{0}|h_{\lambda}^{2}(t)|\Psi_{0}\rangle-|\langle \Psi_{0}|h_{\lambda}(t)|\Psi_{0}\rangle|^{2})\equiv 4{\rm Var}[h_{\lambda}(t)]|_{| \Psi_{0}\rangle}\), where the Hermitian operator \(h_{\lambda}(t)\equiv iU_{\lambda}^{\dagger}(0\to t)\frac{\partial}{ \partial\lambda}U_{\lambda}(0\to t)\) is called the transformed local generator [41; 42]. We have defined \({\rm Var}[\hat{A}]|_{|\Psi\rangle}\) as the variance of the Hermitian operator \(\hat{A}\) with respect to \(|\Psi\rangle\). It satisfies \({\rm Var}[\hat{A}]|_{|\Psi\rangle}\leq||\hat{A}||^{2}/4\) for arbitrary \(|\Psi\rangle\)[43], where the seminorm is defined as \(||\hat{A}||\equiv M_{A}-m_{A}\), with \(M_{A}\) (\(m_{A}\)) being the maximum (minimum) eigenvalue of \(\hat{A}\). Then it follows \(F_{\lambda}(t)\leq||h_{\lambda}(t)||^{2}\equiv F_{\lambda}^{(c)}(t)\), where \(F_{\lambda}^{(c)}(t)\) is defined as the channel QFI, corresponding to the maximum QFI achievable by optimizing over all possible probe states.
For the seminorm, we can also prove the triangle inequality [43] that \(||\hat{A}+\hat{B}||\leq||\hat{A}||+||\hat{B}||\). Using the definition of \(h_{\lambda}\) and the Schrodinger equation \(i\partial U_{\lambda}/\partial t=H_{\lambda}U_{\lambda}\), we can obtain \(\frac{\partial h_{\lambda}}{\partial t}=U_{\lambda}^{\dagger}(0\to t)\frac{\partial H_{\lambda}(t)}{\partial\lambda}U_{\lambda}(0\to t)\). Thus, the transformed local generator can be explicitly represented as \(h_{\lambda}(t)=\int_{0}^{t}U_{\lambda}^{\dagger}(0\to s)\frac{\partial H_{\lambda}(s)}{\partial\lambda}U_{\lambda}(0\to s)ds\). By applying the triangle inequality, we obtain \(||h_{\lambda}(t)||\leq\int_{0}^{t}\left|\left|U_{\lambda}^{\dagger}(0\to s)\frac{\partial H_{\lambda}(s)}{\partial\lambda}U_{\lambda}(0\to s)\right|\right|ds=\int_{0}^{t}\left|\left|\frac{\partial H_{\lambda}(s)}{\partial\lambda}\right|\right|ds\), where we have used the fact that unitary transformations do not change the spectrum of an operator. Therefore, the upper bound of the channel QFI can be obtained as follows [44]:
\[F_{\lambda}^{(c)}(t)\leq\left[\int_{0}^{t}\left|\left|\frac{\partial H_{\lambda}(s)}{\partial\lambda}\right|\right|ds\right]^{2}. \tag{1}\]
Due to the convexity of QFI, the optimal probe state is always a pure state [45]. Therefore, this bound is naturally applicable for mixed probe states. Similar relations [45; 46] have been obtained using different methods and have been employed to discuss other problems. Furthermore, by utilizing the quantum Cramer-Rao bound, we obtain the lower bound for the estimation uncertainty as follows [47]:
\[\delta\lambda\geq\frac{1}{\sqrt{\nu}\int_{0}^{t}\left|\left|\frac{\partial H_{ \lambda}(s)}{\partial\lambda}\right|\right|ds}. \tag{2}\]
This relation is rather universal and can be used to determine the lower sensitivity bound for various types of quantum sensors, regardless of whether they are based on unitary or non-unitary parameter encoding processes.
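For a time-independent \(\partial H_{\lambda}/\partial\lambda\), the integral in Eq. (2) collapses to \(t\,||\partial H_{\lambda}/\partial\lambda||\). A minimal sketch of evaluating the bound, with the seminorm computed as the spread of the eigenvalues (the example generator \(\sigma_{x}\) is an illustrative assumption):

```python
import numpy as np

def seminorm(A):
    """Operator seminorm ||A|| = (max eigenvalue) - (min eigenvalue), A Hermitian."""
    ev = np.linalg.eigvalsh(A)
    return ev[-1] - ev[0]

# Example: H_lambda = lambda * sigma_x, so dH/dlambda = sigma_x and ||.|| = 2,
# giving delta_lambda >= 1 / (2 sqrt(nu) t) from Eq. (2).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
t, nu = 1.0, 1000
print("sensitivity bound:", 1 / (np.sqrt(nu) * t * seminorm(sigma_x)))
```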
In this Letter, we proceed further to investigate the bound on the change rate of the QFI. By the definition of QFI, we obtain that \(\frac{\partial F_{\lambda}}{\partial t}=8\left.{\rm Cov}[\frac{\partial h_{ \lambda}}{\partial t},h_{\lambda}]\right|_{|\Psi_{0}\rangle}\), where the covariance is defined as \({\rm Cov}[\hat{A},\hat{B}]_{|\Psi\rangle}\equiv\frac{1}{2}\langle\Psi|\hat{A}\hat{ B}+\hat{B}\hat{A}|\Psi\rangle-\langle\Psi|\hat{A}|\Psi\rangle\langle\Psi|\hat{B}|\Psi\rangle\). Using the Cauchy-Schwarz inequality, we can prove that \(\left|{\rm Cov}[\hat{A},\hat{B}]\right|\leq\sqrt{{\rm Var}(\hat{A}){\rm Var}( \hat{B})}\). Applying this inequality, we find:
\[\left|{\rm Cov}[\frac{\partial h_{\lambda}}{\partial t},h_{\lambda} (t)]|_{|\Psi_{0}\rangle}\right| \leq\sqrt{{\rm Var}[U_{\lambda}^{\dagger}\frac{\partial H_{ \lambda}}{\partial\lambda}U_{\lambda}]|_{|\Psi_{0}\rangle}}\frac{F_{\lambda}^{1 /2}(t)}{2} \tag{3}\] \[\leq\frac{||\frac{\partial H_{\lambda}}{\partial\lambda}||}{2}\frac{F_ {\lambda}^{1/2}(t)}{2}.\]
After some algebra [48], we prove the following inequality:
\[\left|\frac{\partial F_{\lambda}^{1/2}(t)}{\partial t}\right|\leq\left|\left|\frac{ \partial H_{\lambda}(t)}{\partial\lambda}\right|\right|. \tag{4}\]
Namely, the change rate of the square root of QFI is only bounded by the spectral width of the derivative of the Hamiltonian with respect to the parameter. \(|\partial F_{\lambda}^{1/2}(t)/\partial t|\) measures how fast the quantum information about the parameter flows into or out of the quantum state. It indicates that the quantum parameter encoding process cannot be accelerated by adding auxiliary parameter-independent Hamiltonian extensions.
_Open system and non-Hermitian quantum sensing._-In many situations, such as dynamics in open quantum systems or systems governed by non-Hermitian Hamiltonians, the dynamical process used to encode the parameter may be non-unitary. However, it is often possible
to map these non-unitary processes to equivalent unitary dynamics in an enlarged Hilbert space, by introducing extra degrees of freedom that correspond to the environment. We now make this statement more rigorous for non-unitary sensing schemes. Prior to applying the perturbation that incorporates the parameter to be estimated, the dynamical process in the open quantum system or non-Hermitian system, \(R_{S}:\rho_{S}(0)\rightarrow\rho_{S}(t)\), can be mapped from a unitary evolution in an enlarged system, \(\mathcal{M}(U_{S,E})\to R_{S}\). This unitary time evolution operator for the combined system corresponds to a Hermitian Hamiltonian, \(U_{S,E}\rightarrow\tilde{H}_{\rm tot}\). This Hamiltonian, \(\tilde{H}_{\rm tot}=H_{S}(t)+H_{E}(t)+H_{SE}(t)\), generally contains terms that describe the system \(H_{S}(t)\), the environment \(H_{E}(t)\) and the system-environment interaction \(H_{SE}(t)\). Subsequently, we introduce the perturbation that incorporates the parameter dependence. In most scenarios, including the examples discussed in this work and various non-Hermitian sensing protocols, the parameter of interest directly couples to the degrees of freedom of the system and the perturbation can be represented by a Hermitian Hamiltonian \(H_{1}(\lambda,t)\). As a result, the overall parameter encoding process, corresponding to the dynamical evolution in the open system or non-Hermitian system, can be mapped to a unitary dynamics governed by a Hermitian Hamiltonian \(H_{\lambda}(t)=\tilde{H}_{\rm tot}+H_{1}(\lambda,t)\). By mapping the dynamics to an enlarged system, we circumvent the analysis of intricate non-unitary parameter encoding processes. By resorting to the corresponding unitary evolution in the enlarged system, we can straightforwardly apply the ultimate sensitivity bound in Eq. (2) and the QFI rate bound in Eq. (4).
Since the estimation parameter only associates with the degree of freedom of the system, we have \(\partial H_{\lambda}/\partial\lambda=\partial H_{1}/\partial\lambda\). Thus, the bounds in Eq. (2) and (4) reveal an intriguing insight: the ultimate sensitivity cannot be improved by coupling the system to the environment or by introducing auxiliary Hamiltonians. This is because these additional factors do not increase the amount of information about the parameter or the rate of information encoding. Correspondingly, the non-Hermitian sensor will not outperform its Hermitian counterpart in terms of ultimate sensitivity. We now substantiate this conclusion by analyzing some concrete examples.
_Example I: single-qubit pseudo-Hermitian sensor.-_ A single-qubit pseudo-Hermitian [49] Hamiltonian, described by
\[\hat{H}_{s}=\mathcal{E}_{\lambda}\left(\begin{array}{cc}0&\delta_{\lambda}^{ -1}\\ \delta_{\lambda}&0\end{array}\right), \tag{5}\]
is employed to realize enhanced quantum sensing in Ref. [50], where \(\mathcal{E}_{\lambda}\) and \(\delta_{\lambda}\) depend on the parameter \(\lambda\) that is being estimated. According to the Naimark dilation theory [51; 52], a dilated two-qubit system with a properly prepared initial state can be used to simulate the dynamics of this pseudo-Hermitian Hamiltonian, conditioned on the post-selection measurement of the ancilla qubit [53]. The Hermitian Hamiltonian of this dilated two-qubit system is
\[\hat{H}_{\rm tot}=b\tilde{I}^{(a)}\otimes\hat{\sigma}_{x}^{(s)}-c\hat{\sigma}_{y}^{(a)}\otimes\hat{\sigma}_{y}^{(s)}+\lambda\tilde{I}^{(a)}\otimes\hat{\sigma}_{x}^{(s)}, \tag{6}\]
where \(\hat{\sigma}_{\alpha=x,y,z}^{(s)}\) (\(\hat{\sigma}_{\alpha=x,y,z}^{(a)}\)) represents the Pauli operators of the system qubit (ancilla qubit). The coefficients \(b=4\omega\varepsilon(1+\varepsilon)/(1+2\varepsilon)\) and \(c=2\omega\sqrt{\varepsilon(1+\varepsilon)}/(1+2\varepsilon)\), where \(\varepsilon\) and \(\omega\) describe the qubit. This specific dilated Hamiltonian can be mapped to \(\hat{H}_{s}\), with \(\mathcal{E}_{\lambda}=\sqrt{(b+\lambda)^{2}+c^{2}}\) and \(\delta_{\lambda}=(\lambda+2\varepsilon\omega)/\mathcal{E}_{\lambda}\). The time evolution of the quantum state governed by \(\hat{H}_{s}\) is \(|\psi\rangle_{s}=e^{-i\hat{H}_{s}t}|0\rangle_{s}=\cos(\mathcal{E}_{\lambda}t)| 0\rangle_{s}-i\delta_{\lambda}\sin(\mathcal{E}_{\lambda}t)|1\rangle_{s}\). Thus the normalized population in \(|0\rangle_{s}\) is \(S(\lambda,t)=1/[1+\delta_{\lambda}^{2}\tan^{2}(\mathcal{E}_{\lambda}t)]\). In Fig. 1(a), we plot the susceptibility \(\chi_{s}(\lambda)\equiv\partial S/\partial\lambda\) as a function of \(\lambda\) for a fixed evolution time \(t=\tau\equiv\pi/[4\omega\sqrt{\varepsilon(1+\varepsilon)}]\). The result indicates that the maximal value of the susceptibility diverges as \(\varepsilon\to 0\), which corresponds to the eigenstate coalescence. Based on this feature, the authors in Ref. [50] proposed the pseudo-Hermitian enhanced quantum sensing scheme.
On the other hand, for the dilated two-qubit system, the probe state should be prepared as \(|\Psi_{0}\rangle=\left(\sqrt{\frac{1+\varepsilon}{1+2\varepsilon}}|0\rangle_ {a}+\sqrt{\frac{\varepsilon}{1+2\varepsilon}}|1\rangle_{a}\right)\otimes|0 \rangle_{s}\) in order to correctly simulate the non-Hermitian dynamics. The normalized population \(S(\lambda,t)\) actually corresponds to the probability that the system qubit is in state \(|0\rangle_{s}\), conditioned on the ancilla qubit being in state \(|0\rangle_{a}\). Equivalently, by calculating the dynamics of the total system \(|\Psi(\tau)\rangle=e^{-i\hat{H}_{\rm tot}\tau}|\Psi_{0}\rangle\), we can directly evaluate the probability in state \(|0\rangle_{a}\otimes|0\rangle_{s}\) as
\[P_{1}=\frac{1+\varepsilon}{1+2\varepsilon}\cos^{2}\left[t\sqrt{\lambda^{2}+ \frac{8\varepsilon(1+\varepsilon)\lambda\omega}{1+2\varepsilon}+4\varepsilon(1 +\varepsilon)\omega^{2}}\right]. \tag{7}\]
Due to the quantum projection noise, there is uncertainty in the determination of \(P_{1}\). This uncertainty originates from the quantum projective measurement and follows a binomial distribution. The variance of the estimated probability is \(\mathrm{Var}[\hat{P}_{1}]=P_{1}(1-P_{1})/\nu\), where \(\nu\) is the number of trials (repetitions) [48]. Using the error propagation formula, we can evaluate the estimation uncertainty for this specific sensing scheme as \(\delta\lambda=\sqrt{\mathrm{Var}[\hat{P}_{1}]}/\left|\frac{\partial P_{1}}{\partial\lambda}\right|\). We plot the sensitivity in Fig. 1(b), which shows no divergence at the corresponding divergent positions of \(\chi_{s}(\lambda)\) in Fig. 1(a). This absence of divergence in the sensitivity is attributed to the fact that the divergence in \(\chi_{s}(\lambda)\) when \(\varepsilon\to 0\) is accompanied by a vanishing success probability in the post-selection measurement. As a comparison, the counterpart Hermitian sensor simply employs \(\hat{V}=\lambda\tilde{I}^{(a)}\otimes\hat{\sigma}_{x}^{(s)}\) as the parameter encoding generator. The sensitivity bound in Eq. (2) indicates \(\delta\lambda\geq\frac{1}{\sqrt{\nu}\,\tau\,||\hat{\sigma}_{x}||}=\frac{1}{2\sqrt{\nu}\,\tau}\). We plot this ultimate
sensitivity bound in Fig. 1(b) as the blue lines, indicating that the non-Hermitian sensor does not outperform its Hermitian counterpart. Furthermore, the rate of dynamic QFI can be calculated exactly [48] as follows:
\[\frac{\partial F_{\lambda}^{1/2}(t)}{\partial t}=2\frac{\cos^{2}\theta+\sin^{2} \theta\frac{\sin{(2\Omega t)}}{2\Omega t}}{\sqrt{\cos^{2}\theta+\sin^{2}\theta \frac{\sin^{2}(\Omega t)}{(\Omega t)^{2}}}}, \tag{8}\]
where we define \((b+\lambda)/\Omega=\cos\theta\) and \(c/\Omega=\sin\theta\). It follows that \(-2\leq\frac{\partial F_{\lambda}^{1/2}(t)}{\partial t}\leq 2\), which verifies our theory in Eq. (4).
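As a numerical cross-check of Example I, the dilated two-qubit dynamics and the projection-noise-limited sensitivity can be reproduced directly. The sketch below builds \(\hat{H}_{\rm tot}\) of Eq. (6), evolves the required probe state for \(t=\tau\), and propagates the binomial noise through \(P_{1}\); the parameter values are illustrative, not those used in Ref. [50]:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

eps, omega, lam, nu = 0.1, 1.0, 0.2, 1000        # illustrative values
b = 4 * omega * eps * (1 + eps) / (1 + 2 * eps)
c = 2 * omega * np.sqrt(eps * (1 + eps)) / (1 + 2 * eps)
tau = np.pi / (4 * omega * np.sqrt(eps * (1 + eps)))

def P1(lam):
    """Population in |0>_a |0>_s after evolving under Eq. (6) for t = tau."""
    H = (b + lam) * np.kron(I2, sx) - c * np.kron(sy, sy)
    psi0 = np.kron([np.sqrt((1 + eps) / (1 + 2 * eps)),
                    np.sqrt(eps / (1 + 2 * eps))], [1.0, 0.0])
    psi = expm(-1j * H * tau) @ psi0
    return np.abs(psi[0])**2                     # matches Eq. (7)

h = 1e-6                                         # finite-difference step
dP1 = (P1(lam + h) - P1(lam - h)) / (2 * h)
var = P1(lam) * (1 - P1(lam)) / nu               # binomial projection noise
print("non-Hermitian sensor:", np.sqrt(var) / abs(dP1))
print("Hermitian bound:", 1 / (2 * np.sqrt(nu) * tau))
```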
_Example II: EP based sensor using a single trapped ion._-We now consider the sensor based on exceptional point realized in a dissipative single-qubit open system in Ref. [54]. The sensing mechanism relies on an effective periodically driven [55]\(\mathcal{PT}\)-symmetric non-Hermitian Hamiltonian given by
\[\hat{H}_{\mathcal{PT}}=J[1+\cos(\omega t)]\hat{\sigma}_{x}+i\Gamma\hat{ \sigma}_{z}, \tag{9}\]
where \(\hat{\sigma}_{x,z}\) are the Pauli operators, \(J\) is the coupling strength, \(\omega\) is the modulation frequency of the coupling strength, and \(\Gamma\) is the dissipation rate. Actually, the practically implemented Hamiltonian in the experiment is \(\hat{H}^{\prime}_{\mathcal{PT}}=\hat{H}_{\mathcal{PT}}-i\Gamma\hat{I}\), which is a passive \(\mathcal{PT}\)-symmetric system with \(\hat{I}\) being the identity operator. The perturbation applied to the system is \(\hat{H}_{\delta}=\frac{\delta}{2}\cos(\omega_{\delta}t)(\hat{I}-\hat{\sigma}_{z})\), where \(\delta\) and \(\omega_{\delta}\) are the amplitude and frequency of the perturbation field, respectively, while \(\omega_{\delta}\) is the parameter to be estimated. After the system evolves from specific initial states for a duration of \(T=2\pi/\omega\), we can determine the response energy \(\mathcal{E}_{\text{res}}\) via \(P_{J}(T)-P_{\Gamma}(T)=\sin^{2}(\mathcal{E}_{\text{res}}T)\). Here, the measurable quantities are defined as \(P_{J}(T)=|\langle\uparrow|\,U(T)\,|\downarrow\rangle|^{2}\) and \(P_{\Gamma}(T)=\left|\frac{\langle\uparrow|-\langle\downarrow|}{\sqrt{2}}U(T)\frac{|\uparrow\rangle+|\downarrow\rangle}{\sqrt{2}}\right|^{2}\), with \(U(T)=\mathcal{T}e^{-i\int_{0}^{T}\left[\hat{H}_{\mathcal{PT}}(t)+\hat{H}_{\delta}(t)\right]dt}\). The absolute value of the response energy \(\mathcal{E}_{\text{res}}\) as a function of \(\omega_{\delta}\) is plotted in Fig. 2(a) [56]. As it is shown, the response energy exhibits sharp dips near the EP [57]. This characteristic feature has motivated the authors in Ref. [54] to suggest the sensing application, since a minor change in \(\omega_{\delta}\) will result in a significant variation in the response energy. Indeed, in Fig. 2(c), we present the susceptibility \(|\partial\mathcal{E}_{\text{res}}/\partial\omega_{\delta}|\) as a function of \(\omega_{\delta}\) and it exhibits a divergence near the EP.
Here, since \(P_{J}\) and \(P_{\Gamma}\) actually correspond to projective measurements on the spin state, the quantum projection noise will result in uncertainties in their determination. The variance of the estimated \(\hat{P}_{J}\) and \(\hat{P}_{\Gamma}\) can be expressed as \(\text{Var}[\hat{P}_{i}]=P_{i}(C_{0}-P_{i})/\nu\), with \(i=J,\Gamma\), where \(\nu\) is the number of trials and \(C_{0}\equiv e^{2\Gamma T}\)[58]. To avoid the complication of dealing with complex response energies, we focus on the region near the EP where \(P_{J}-P_{\Gamma}>0\). Applying the theory of uncertainty propagation, we obtain the uncertainty in the estimation of the response energy as \(\text{Var}[\hat{\mathcal{E}}_{\text{res}}]=\frac{1}{4\nu T^{2}}\frac{C_{0}(P_{J}+P_{\Gamma})-(P_{J}^{2}+P_{\Gamma}^{2})}{(P_{J}-P_{\Gamma})(1-P_{J}+P_{\Gamma})}\), where we have used the fact that measurements on \(P_{J}\) and \(P_{\Gamma}\) are independent. We plot the variance of the measured response energy in Fig. 2(b) as a function of \(\omega_{\delta}\), and it shows that the uncertainty in the determination of \(\mathcal{E}_{\text{res}}\) also diverges when \(\omega_{\delta}\) approaches the EP. The overall sensitivity can be evaluated as \(\delta\omega_{\delta}=\sqrt{\text{Var}[\hat{\mathcal{E}}_{\text{res}}]}/|\frac{\partial\mathcal{E}_{\text{res}}}{\partial\omega_{\delta}}|\), and we plot it in Fig. 2(d). It shows that the divergence of the susceptibility is completely compensated by the divergence of the uncertainty, resulting in an overall sensitivity without divergence when approaching the EP. On the other hand, the Hermitian counterpart simply uses \(\hat{H}_{\delta}\) as the parameter encoding generator. According to Eq. (2), the ultimate sensitivity bound is given by \(\delta\omega_{\delta}\geq\frac{\omega_{\delta}^{2}}{\sqrt{\nu\delta^{2}[\sin(\omega_{\delta}T)-\omega_{\delta}T\cos(\omega_{\delta}T)]^{2}}}\). The dashed line in Fig. 2(d) corresponds to this ultimate sensitivity bound. It also demonstrates that the ultimate precision of the Hermitian sensor always exceeds that of the corresponding non-Hermitian sensor.
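For Example II, the step from the measured populations to the response energy and its variance is a direct application of the error-propagation formulas above. A small sketch, assuming the regime \(P_{J}-P_{\Gamma}>0\) and using purely illustrative inputs rather than the experimental values of Ref. [54]:

```python
import numpy as np

def response_energy_and_var(PJ, PG, T, Gamma, nu):
    """Extract E_res from P_J(T) - P_Gamma(T) = sin^2(E_res T) and propagate
    the projection-noise variances Var[P_i] = P_i (C0 - P_i) / nu."""
    C0 = np.exp(2 * Gamma * T)
    D = PJ - PG                                  # assumed > 0 near the EP
    E_res = np.arcsin(np.sqrt(D)) / T
    var_E = (C0 * (PJ + PG) - (PJ**2 + PG**2)) / (4 * nu * T**2 * D * (1 - D))
    return E_res, var_E

E, vE = response_energy_and_var(PJ=0.6, PG=0.4, T=2 * np.pi, Gamma=0.1, nu=1000)
print("E_res =", E, " Var[E_res] =", vE)
```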
_Summary and discussion._-In summary, we have unveiled the fundamental sensitivity limit for non-Hermitian sensors in the context of open quantum systems. Our results indicate clearly that non-Hermitian sensors do not outperform their Hermitian counterparts. In fact, when comparing the performance of quantum sensors, it is essential to fix the quantum resources consumed by these sensors. Actually, when resources are unlimited, even ideal Hermitian sensors can theoretically achieve arbitrary precision. However, in practical sensing scenarios, resources are always limited. The number of probes, sensing time, and the number of trials are examples of limited resources. As a result, achieving arbitrary precision is not possible in practical sensing scenarios. The aforementioned instances are characterized by a single probe. Notably, although these cases exhibit divergence in certain measurable quantities, it does not
Figure 1: (a) The susceptibility of the normalized population with respect to \(\lambda\) for different values of \(\varepsilon\). It indicates that the maximal susceptibility diverges as \(\varepsilon\) approaches zero. (b) The sensitivity corresponding to the measurement of the population in the state \(|0\rangle_{a}\otimes|0\rangle_{s}\). It indicates that the sensitivity at the optimal measurement point (corresponding to the maximal susceptibility) does not diverge when \(\varepsilon\) approaches zero. The blue lines represent the sensitivity bound of the Hermitian counterpart.
imply that the sensitivity diverges, leading to 'arbitrary precision', since the sensitivity of their Hermitian counterparts does not diverge (even without Heisenberg scaling for only \(N=1\)).
In Ref. [22], a sensing scheme utilizing an experimentally realized \(\mathcal{PT}\)-symmetric system was reported to enhance the sensitivity by a factor of 8.856 over a conventional Hermitian sensor. However, this enhancement is probably attributable to the non-optimal sensing scheme used for the Hermitian sensor. Furthermore, non-Hermitian lattice systems utilizing the skin effect [10; 11] or non-reciprocity [12] have claimed exponential scaling of sensitivity with the lattice size. However, our theory shows that the ultimate sensitivity should not depend on the lattice size, as it is solely determined by the subsystem dimension that directly couples to the parameter. Nevertheless, for non-optimal probe states or measurements, the sensitivity may still depend on the lattice size.
Although our work demonstrates that coupling to the environment cannot improve the ultimate sensitivity, when the probe state or the measurement protocol is restricted, adding appropriate auxiliary Hamiltonian may be helpful for approaching the ultimate sensitivity bound [59; 60; 61]. In addition, while our study focuses on non-Hermitian sensors implemented by full quantum systems [62; 63; 64], scrutinizing non-Hermitian sensors based on classical or quasiclassical systems [29] through the perspective of conservation of information is a compelling avenue for future research.
The work is supported by National Key Research and Development Program of China (Grant No. 2021YFA1402104), the NSFC under Grants No.12174436 and No.T2121001 and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB33000000.
|
2307.11375 | LatentAugment: Data Augmentation via Guided Manipulation of GAN's Latent
Space | Data Augmentation (DA) is a technique to increase the quantity and diversity
of the training data, and by that alleviate overfitting and improve
generalisation. However, standard DA produces synthetic data for augmentation
with limited diversity. Generative Adversarial Networks (GANs) may unlock
additional information in a dataset by generating synthetic samples having the
appearance of real images. However, these models struggle to simultaneously
address three key requirements: fidelity and high-quality samples; diversity
and mode coverage; and fast sampling. Indeed, GANs generate high-quality
samples rapidly, but have poor mode coverage, limiting their adoption in DA
applications. We propose LatentAugment, a DA strategy that overcomes the low
diversity of GANs, opening up for use in DA applications. Without external
supervision, LatentAugment modifies latent vectors and moves them into latent
space regions to maximise the synthetic images' diversity and fidelity. It is
also agnostic to the dataset and the downstream task. A wide set of experiments
shows that LatentAugment improves the generalisation of a deep model
translating from MRI-to-CT, beating both standard DA as well as GAN-based sampling.
Moreover, still in comparison with GAN-based sampling, LatentAugment synthetic
samples show superior mode coverage and diversity. Code is available at:
https://github.com/ltronchin/LatentAugment. | Lorenzo Tronchin, Minh H. Vu, Paolo Soda, Tommy Löfstedt | 2023-07-21T06:17:09Z | http://arxiv.org/abs/2307.11375v1 | # LatentAugment: Data Augmentation via Guided Manipulation of GAN's Latent Space
###### Abstract
Data Augmentation (DA) is a technique to increase the quantity and diversity of the training data, and by that alleviate overfitting and improve generalisation. However, standard DA produces synthetic data for augmentation with limited diversity. Generative Adversarial Networks (GANs) may unlock additional information in a dataset by generating synthetic samples having the appearance of real images. However, these models struggle to simultaneously address three key requirements: fidelity and high-quality samples; diversity and mode coverage; and fast sampling. Indeed, GANs generate high-quality samples rapidly, but have poor mode coverage, limiting their adoption in DA applications. We propose LatentAugment, a DA strategy that overcomes the low diversity of GANs, opening up for use in DA applications. Without external supervision, LatentAugment modifies latent vectors and moves them into latent space regions to maximise the synthetic images' diversity and fidelity. It is also agnostic to the dataset and the downstream task. A wide set of experiments shows that LatentAugment improves the generalisation of a deep model translating from MRI-to-CT, beating both standard DA as well as GAN-based sampling. Moreover, still in comparison with GAN-based sampling, LatentAugment synthetic samples show superior mode coverage and diversity. Code is available at: [https://github.com/ltronchin/LatentAugment](https://github.com/ltronchin/LatentAugment).
Computer vision, image synthesis, medical imaging, Generative Adversarial Networks, mode coverage, generalisation
## I Introduction
Deep learning has recently had many successes in decision tasks, especially when large amounts of data are available [1]. Several explicit or implicit regularisation techniques have been developed to overcome overfitting in cases with less data, such as dropout [2], batch normalisation [3], or transfer learning [4]. However, these methods cannot exploit known input invariances that form constraints for parameter learning, especially not in the regimes with small amounts of data [5].
To cope with this issue, Data Augmentation (DA) has been widely utilised to improve generalisation and robustness when training deep neural networks [6]. Common DA methods in image recognition tasks transform the images via geometric rigid and non-rigid transformations, using image processing primitives such as translation, rotation, cropping, _etc._[7]. However, in most cases, designing such transformations has relied on human experts with prior knowledge of the dataset. Indeed, even if useful augmentations have been found for a given dataset, they may not transfer to other datasets. For example, horizontal flipping of images during training is an effective data augmentation method for CIFAR-10 (natural images) but not for MNIST (hand-written digits) due to the different symmetries present in these datasets [8].
Recently, some efforts have been directed towards designing an automated process to search for augmentation policies directly from a target dataset [8, 9, 10, 11]. Unfortunately, they still provide restricted variations in the data, limiting the invariances that a target model can learn [5]. Indeed, they rely on pre-specified image processing functions as augmentation operations. Defining the basic operations requires domain knowledge, which may impede their application in more tasks. Moreover, these approaches often use reinforcement learning and their application requires thousands of GPU hours.
Generative Adversarial Networks (GANs) offer a valuable addition to the available set of augmentation techniques. A GAN learns to generate samples from the distribution of training samples, thereby considering all sources of variation within the data. For example, given sufficient training examples of patients with different ventricle sizes, a GAN will learn to generate samples along the continuum of all ventricle sizes. Thus, manipulating the samples generated by GANs allows very complex image transformations. Vahdat _et al._[12] stated three key requirements for generative frameworks to be adopted for real-world problems, including: (i) fidelity, meaning the quality of the generated samples, particularly their realism; (ii) diversity and mode coverage, meaning the variation and variety of the samples that can be generated; and (iii) how fast samples can be generated. The authors identify the challenge posed by these requirements as the "the generative learning trilemma" and concluded that generative models compromise between them. GANs generate high-quality samples rapidly, but they suffer from poor mode coverage. Our hypothesis is that the lack of diversity of GAN-generated images can hinder their applicability for DA purposes. Indeed, existing generative methods generate purely random images without control over what images are generated.
On these grounds, here we address the generative learning trilemma for GANs, and improve their effectiveness for DA purposes by proposing LatentAugment, a new GAN-based augmentation policy that maximises the diversity, _i.e._, the variability in the generated data with respect to the training data distribution; and that maximises the fidelity, _i.e._, how similar the generated images are to real images (high-quality generation). The proposed policy works in the GAN latent
space, which reduces the computational cost compared to working in the image space, and exploits the semantic information that the generator has learned. The rest of the manuscript is organised as follows: section II introduces the state-of-the-art of DA methods and the motivation of LatentAugment. Then, section III presents our novel DA method. In section IV we describe the dataset used to validate the method, the pre-processing phase on the data, the GAN architecture we use, other DA approaches tested for comparative analysis, and the validation strategy adopted. Section V presents and discusses the obtained results, whilst section VI provides concluding remarks.
## II Background and motivations
It is challenging to obtain reliable generalisation in practical applications with small datasets, for example in medical imaging where it remains expensive to acquire informative and noise-free annotations [13]. DA increases the size of the training set by artificially creating new samples and reduces the risk of overfitting when training DL models on datasets of limited sizes [14, 15]. In the rest of this section we adopt the taxonomy on image augmentation techniques proposed by Xu _et al._[16]. They divided DA methods into three main branches: _model-free_, _optimising policy-based_, and _model-based_.
### _Model-free_
Model-free techniques leverage image processing methods, such as geometric transformations and pixel-level manipulation, and are further divided into single-image augmentation and multiple-image augmentation.
Well-known single-image augmentation approaches on natural images include horizontal flips, random cropping, rotation, and translations. These techniques have, for instance, been used in classification and detection tasks [6, 17]. Such approaches simulate intra-class variation, augmenting the data while keeping them close to the training set, _i.e._, they explicitly teach the model to be invariant to the particular transformations used. Other single-image approaches vary the data more and increase generalisation further. Within this category, intensity transformations change the image at pixel or patch level: for instance, the former could add independent random noise to be robust to artefacts in the image generation [18], whereas the latter could achieve invariance to occlusions [19, 20, 21, 22].
Multiple-image augmentation methods are executed on more than one image and aim to merge multiple inputs [23, 24, 25]. Examples include SamplePairing [23] and Mixup [24]. In SamplePairing, the images are averaged, and a label is selected among the source images. In Mixup, models are trained on a convex combination of the images and their labels.
Unfortunately, both single- and multiple-image augmentation methods require expertise and manual work to design policies tailored to the domain at hand. This, in turn, requires hyper-parameter optimisation of the transformation settings, such as the probability or the magnitude of the augmentation that is applied, making it difficult to apply a DA policy from one domain to another one.
### _Optimising policy-based_
Learning policies for data augmentation have emerged as an approach to automate augmentation strategies to overcome the weaknesses of model-free methods. These approaches aim to select a well-suited set of augmentation functions, _e.g._, rotation, shift, _etc._, for the dataset at hand and use reinforcement or adversarial learning [26, 27, 28, 8, 9, 10].
For instance, AutoAugment [8] finds the best set of transformations for a proxy task. Extensions and improvements have been proposed [27, 10, 26].
Adversarial training improves the robustness of downstream models by augmenting with difficult samples [28, 9], _i.e._, samples that cause a high training loss. The assumption is that they are useful to improve the generalisation of deep models.
Optimising policy-based strategies learn the augmentation method, improving a downstream task. Nevertheless, they are limited to a set of known, pre-defined transformations, constraining the invariances that are introduced to the dataset.
### _Model-based_
Model-based augmentation simultaneously modifies the style and content of the images, aiming to extend the possible created variance while maintaining fidelity. Such methods use synthetic images produced by a generative model to enlarge the original dataset [16]. GANs have attracted increased attention due to their remarkable image-generation performance and have been used for segmentation and classification [29, 30, 31, 32]. GANs provide a way to augment sources of variance in the data that would be challenging to define otherwise. For instance, a GAN trained on a set of images of an organ can synthesise images where the organ varies its size with continuity [33]. Thus, they are able to introduce a type of variance not straightforward to capture with common augmentation methods [16]. Skandarani _et al._[34] argued that GANs could not reproduce the full richness of medical datasets, motivated by experimental results in segmentation where no deep network trained on large numbers of real and synthetic samples outperformed networks trained on only real data.
State-of-the-art GAN augmentation techniques randomly generate synthetic images [34, 35, 36], but have no control over the diversity in the generated images. This is in contrast to the original idea of DA that assumes that effective data transforms should produce samples from an "overlapping but different" distribution [37, 38]. Thus, the lack of control over the GAN-generated images _de facto_ limits their efficacy for DA and is identified as one of the main reasons for the poor performance observed when using synthetic data [36].
### _Motivations_
DA has been affirmed as a technique to artificially increase training set sample variability by transforming data points in a way that, in supervised learning, preserves class labels, and has become an effective tool for tackling data scarcity problems [39]. However, the choice of DA strategy is known to cause large variations in downstream performance and can be difficult to select [8]. While recent works based on
optimising policies have attempted to automate DA [16], they only consider restricted sets of simple transformations, thus limiting the invariances a downstream model can learn. GANs have the potential to take many decisions away from the user, in much the same way as deep learning removed the need for hand-crafted features [40]. However, the main drawback of GAN-based augmentation remains the lack of control over the generated images. While GANs generate high-quality samples rapidly, and thus fulfilling the first two criteria of the generative learning trilemma, they suffer from poor mode coverage, limiting their effectiveness for DA application.
To address this limitation, we propose a novel GAN-based augmentation method that controls the generation of synthetic images in the latent space to improve diversity and fidelity. It is worth noting that the idea of using the latent space has its roots in image editing techniques, which have tackled the issue of lack of direct control over the GAN generation. Indeed, they aim to learn how to navigate the latent space in directions that allow changing the semantics of generated images [41], such as facial attributes [42], memorability of images [43], or camera movements and colour changes [44]. As far as we can tell, this work is not only the first attempt to use latent space manipulation in the context of data augmentation but also the first applied to medical imaging.
## III Methods
We propose an augmentation policy, referred to as _LatentAugment_, that creates a new sample by navigating the latent space of a trained GAN to maximise the diversity of the augmented images while ensuring their fidelity. In this section, we first formulate the DA problem and then detail how LatentAugment works.
### _Problem Formulation_
Let \(\mathcal{X}=[x_{1},x_{2},...,x_{N}]\) denote a training set that consists of \(N\) images. The primary objective of a DA procedure, \(\mathcal{A}\), is to train a downstream model \(\mathcal{M}\) on augmented versions of the images, such that the downstream model generalises better to an independent test set, or to other new data. During training, a common approach is to take a data point, \(x_{i}\), and, before presenting it to \(\mathcal{M}\), compute an augmented image \(\tilde{x}_{i}\), as
\[\tilde{x}_{i}=\begin{cases}\mathcal{A}(x_{i})&\text{if }r\geq p_{aug},\\ x_{i}&\text{otherwise},\end{cases} \tag{1}\]
where \(r\) is a random uniform real number in \([0,1)\) and \(p_{aug}\in[0,1]\) is a threshold that determines whether to apply the DA procedure at all (activating or deactivating the DA procedure for that image). We denote the augmented training set as \(\tilde{\mathcal{X}}\). When \(p_{aug}=0\), the downstream model \(\mathcal{M}\) is fed only augmented images, and with \(p_{aug}=1\), the augmentation procedure is disabled. This approach thus does not increase the cardinality of the training dataset, but instead adds a layer of stochasticity to the learning process of \(\mathcal{M}\), and (ideally, but depending on \(\mathcal{A}\)) increases the diversity of the training data.
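A minimal sketch of the gating rule in Equation 1 (function and variable names are illustrative):

```python
import random

def augment_batch(batch, A, p_aug):
    """Stochastic augmentation gate of Equation 1: each image x is replaced
    by A(x) when the uniform draw r in [0, 1) satisfies r >= p_aug, so
    p_aug = 0 always augments and p_aug -> 1 disables augmentation."""
    return [A(x) if random.random() >= p_aug else x for x in batch]
```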
### _Overview of the framework_
Here we introduce the two main ingredients of the proposed method: GANs and GAN-inversion.
A GAN consists of two networks, a generator \(G\) and a discriminator \(D\). Inspired by game theory, those two networks are trained in an adversarial process where \(G\) generates fake images attempting to fool the discriminator to believe that they are real, while \(D\) attempts to discriminate between the real and fake images [45]. The training process can be described as a min-max game,
\[\min_{G}\max_{D}V(G,D)=\mathbb{E}_{x\sim\mathcal{X}}\big{[}\log D (x)\big{]}\\ +\mathbb{E}_{z\sim\mathcal{Z}}\Big{[}\log\Big{(}1-D\big{(}G(z) \big{)}\Big{)}\Big{]}, \tag{2}\]
where the optimisation is over the parameters of \(G\) and \(D\), \(\mathcal{X}\) is the data used to train the GAN, \(z\) is the noise vector sampled from the latent space, \(\mathcal{Z}\). While generating new images, the generator takes a latent vector, \(z\), and maps it to an image. The min-max loss in Equation 2 and the training process guarantee that the estimated image manifold is aligned with the training image manifold.
In this work, we used the StyleGAN2 (SG2) architecture as the GAN backbone since it is the state-of-the-art GAN model for high-resolution image synthesis [46]. Unlike a traditional generator, the SG2 model introduces a multi-layer perceptron, \(\mathcal{F}\), that maps \(z\) to an intermediate latent space, \(\mathcal{F}(z)=w\in\mathcal{W}\). The generator, \(G\), then synthesises images based on these intermediate latent vectors, \(w\). The latent space mapping allows the generator to learn an intermediate latent space, \(\mathcal{W}\), that is less entangled by design [46, 47], _i.e._, each dimension of \(w\) controls only a single (or a few) features of the generated image. A disentangled feature space is a key desiderata for any GAN-based image editing technique and, hence, also for the LatentAugment method proposed here.
GANs lack the ability to find the latent representation of an input image, which is a necessary step to manipulate images in the latent space for DA purposes. Thus, we exploit GAN-inversion to reverse (invert) the mapping of \(G\), to find a latent vector that recovers a given input image [41]. With the SG2, this is an intermediate latent vector, \(w^{*}\), but when using a GAN without a mapping network such as \(\mathcal{F}\)[48], the inversion instead seeks a latent vector \(z^{*}\in\mathcal{Z}\).
Existing inversion approaches are either learning- or optimisation-based. The former involves training an encoding network to map an input image into the latent space, such that the found latent vector reproduces the input image (directly learning the inverse mapping). The latter directly optimises in the latent space, searching for a latent vector that would regenerate the real input image (not explicitly learning an inverse mapping). The first approach provides a fast solution for image embedding by performing a forward pass through the encoder, but does not generalize beyond the training dataset since it needs to be trained for each inversion task [49]. We, therefore, adopted the second approach, using the optimisation-based method proposed by Karras _et al._[46], which is well-suited for inverting real images in the SG2 latent space, \(\mathcal{W}\).
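A minimal PyTorch-style sketch of such an optimisation-based inversion follows; for brevity it uses a plain pixel-wise loss, whereas the procedure of Karras _et al._ [46] additionally employs a perceptual loss and noise regularisation, so this is a simplified stand-in rather than their exact method:

```python
import torch

def invert(G, x, w_avg, steps=500, lr=0.05):
    """Search for a latent w* such that G(w*) reproduces the real image x.
    G maps an intermediate latent vector to an image; w_avg is the mean
    latent vector, used here as the starting point."""
    w = w_avg.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((G(w) - x) ** 2)   # pixel-wise reconstruction loss
        loss.backward()
        opt.step()
    return w.detach()
```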
### _LatentAugment Policy_
#### III-C1 Intuition
GANs are commonly used to synthesise new images or other data, but current methods do not allow control over the generation process, especially not so for DA. To augment the training set of a downstream model, SG2-based policies compute \(\tilde{x}=G(w)\), where \(w=\mathcal{F}(z)\) with \(z\) randomly sampled from \(\mathcal{Z}\). This approach, referred to as _Standard SG2 DA_ in the following, is illustrated in panels a) and b) of Figure 1. In this figure, the blue stars illustrate the latent positions of the real samples (retrieved through the inversion procedure), and the white triangles denote randomly sampled points in the latent space that will generate the synthetic images for a Standard SG2 DA procedure. The Standard SG2 DA does not guarantee that such generated images are useful for the downstream task since they may lie outside of the manifold of the real data, illustrated by the shaded area in Figure 1, and may correspond to cases where generated images contain artefacts or are of low quality--such synthetic images would thus have low fidelity. If the generator, \(G\), overfits the training data, the synthetic images would look much like the training images themselves, _i.e._, the white triangles would overlap the blue stars in Figure 1. Such generated images have low diversity but high fidelity [50].
To formulate a GAN-based DA policy, \(\mathcal{A}\), we may benefit if we guide the generation process to consider the trade-off between fidelity and diversity in the generated images. The augmented dataset, \(\widetilde{\mathcal{X}}\), should contain points _"close but not too close"_ to the training data, as illustrated in panel (c) of Figure 1 by the green circles. To guarantee high-quality images and to avoid artefacts, _i.e._, high fidelity, the synthetic images should be _close_ to the real training images. To ensure diversity, the generated images should not lie _too close_ to the original images, but at some distance from them. With a DA procedure that increases both fidelity and diversity, our hypothesis is that the downstream model, \(\mathcal{M}\), should generalise better.
#### III-C2 Loss function
We propose a loss function, \(\mathcal{L}(w)\), that takes into account both fidelity and diversity. The loss function is the weighted sum of four terms, one controlling the fidelity, \(\mathcal{L}_{f}(w)\), and three controlling the diversity, \(\mathcal{L}_{d}(w)\), as
\[\mathcal{L}(w)=\alpha_{f}\mathcal{L}_{f}(w)-\mathcal{L}_{d}(w)=\alpha_{f}\mathcal{L}_{f}(w)-\underbrace{\big{(}\alpha_{pix}\mathcal{L}_{pix}(w)+\alpha_{perc}\mathcal{L}_{perc}(w)+\alpha_{lat}\mathcal{L}_{lat}(w)\big{)}}_{\mathcal{L}_{d}(w)}, \tag{3}\]
where \(\alpha_{f}\), \(\alpha_{pix}\), \(\alpha_{perc}\), and \(\alpha_{lat}\) are positive real weights for the following terms,
\[\mathcal{L}_{f}(w)=\log\left(1+e^{-D\big{(}G(w)\big{)}}\right), \tag{4}\] \[\mathcal{L}_{pix}(w)=\frac{1}{N}\sum_{j=1}^{N}d_{pix}\big{(}G(w),x_{j}\big{)}, \tag{5}\] \[\mathcal{L}_{perc}(w)=\frac{1}{N}\sum_{j=1}^{N}d_{perc}\big{(}G(w),x_{j}\big{)}, \tag{6}\] \[\mathcal{L}_{lat}(w)=\frac{1}{N}\sum_{j=1}^{N}d_{lat}(w,w_{j}^{*}), \tag{7}\]
where \(x_{j}\in\mathcal{X}\) and \(w_{j}^{*}\in\mathcal{W}\) denote the \(j\)th training image and its corresponding latent vector (given by the inverse mapping), respectively. While the next paragraphs detail the four loss terms presented in Equations 4-7, \(\mathcal{L}_{d}(w)\) consists of three losses measuring different aspects of diversity of \(\tilde{x}\) from the real images in \(\mathcal{X}\) at different levels of abstraction, including the image space \(\mathcal{L}_{pix}(w)\), a perceptual space \(\mathcal{L}_{perc}(w)\), as well as a semantic space \(\mathcal{L}_{lat}(w)\).
_Fidelity Loss:_ The fidelity loss, \(\mathcal{L}_{f}(w)\), estimates the fidelity of \(\tilde{x}\) as the realness score given by the discriminator, \(D\), of the SG2 model (Equation 4). A low value suggests that the generated images look unrealistic or that they contain features not present in the real data. This realness score is based on observing that the discriminator, \(D\), distinguishes between real and generated images using low-level and high-level image features automatically learnt for this purpose. Blau and Michaeli [51] showed that the generator's ability to fool
Fig. 1: Intuition. Considering \(\mathcal{X}\) and \(\mathcal{W}\) as the data and latent space distributions, respectively. In panel (a), we denote with blue stars the encoded real images. Panel (b). The standard SG2 DA procedure does not provide any guide to the generation process. Thus random latent samples (white triangles) may overlap the manifold of the latent representations of the real images but may also fall arbitrarily far outside the real image distribution, \(\mathcal{X}\). Panel (c). The LatentAugment method adds control to the GAN generation process allowing the synthetic latent vectors to be close to the latent vectors of the real images but not too close (green circles).
the discriminator, which corresponds to a high realness score, correlates with human opinion scores over synthetic images.
_Pixel Loss:_ The _pixel loss_, \(\mathcal{L}_{pix}(w)\), measures how far a generated image, \(\tilde{x}\), is from the training images. Intuitively, large values of \(\mathcal{L}_{pix}(w)\) imply larger diversity in the generated images. The distance metric, \(d_{pix}\), in Equation 5 was the mean squared error between \(\tilde{x}\) and a real image, \(x_{j}\), _i.e._,
\[d_{pix}\left(G(w),x_{j}\right)=\frac{1}{r_{x}c_{x}h_{x}}\|G(w)-x_{j}\|_{2}^{2}, \tag{8}\]
where \(r_{x}\) and \(c_{x}\) denote the number of rows and columns of the images, respectively, and \(h_{x}\) denotes the number of channels.
_Perceptual Loss:_ The pixel loss does not capture perceptual differences between the generated images and the real images. For example, consider two images where one of them is a copy of the first but with a spatial offset of one pixel by column. Then, despite their high perceptual similarity, \(\mathcal{L}_{pix}(w)\) may be large. To address this limitation, we measure how far two images are in a feature space by incorporating a perceptual loss, \(\mathcal{L}_{perc}(w)\). To this end, we use the high-level image feature representations extracted from multiple convolutional layers of a VGG network [52], that was pretrained on the ImageNet dataset [53]. This is the _de facto_ standard feature extractor for perceptual losses [54, 55]. Let \(\phi^{l}(x_{j})\) denote the activation of the \(l\)th convolution layer of the VGG network when processing an image, \(x_{j}\). The \(\phi^{l}(x_{j})\) consists of \(h_{l}\) feature (activation) maps of size \(r_{l}\times c_{l}\). We then define the perceptual distance as,
\[d_{perc}(G(w),x_{j})=\sum_{l=1}^{L}\frac{1}{r_{l}c_{l}h_{l}}\big{\|}\phi^{l} \big{(}G(w)\big{)}-\phi^{l}(x_{j})\big{\|}_{2}^{2}, \tag{9}\]
where \(L\) is the total number of layers used. The perceptual distance is averaged over the training images in Equation 6.
We computed the perceptual distance \(d_{perc}\) considering \(64\times 64\) patches randomly extracted from the real and synthetic images, to save memory, as is common in the literature [56].
_Latent Loss:_ The perceptual loss captures the semantic content and overall spatial structure of an image, but uses a feature representation that is general, and may therefore not be optimal for specific applications. Therefore, to capture specific features in these images, we also incorporated a semantic loss in the latent space of the SG2 model, which has been shown to encode rich semantics [44].
When two latent vectors, \(w_{1},w_{2}\in\mathcal{W}\), are "close" in the latent space, the corresponding images, \(x_{1},x_{2}\in\mathcal{X}\), are semantically similar [47]. The purpose of the _latent loss_, \(\mathcal{L}_{lat}(w)\), is to exploit this property and encourage the augmented images to be some distance away from the latent representations of the latent vectors corresponding to images from the training set. The latent distance, \(d_{lat}\), is defined as the mean squared error between \(w\) and \(w_{j}^{*}\), as
\[d_{lat}(w,w_{j}^{*})=\frac{1}{d_{w}}\|w-w_{j}^{*}\|_{2}^{2}. \tag{10}\]
where \(d_{w}\) is the dimensionality of the latent space.
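A compact PyTorch-style sketch of the four loss terms in Equations 4-7 is given below; `D` is the SG2 discriminator, `phi` is assumed to return the list of VGG feature maps of Equation 9, and, for brevity, the averages over the \(N\) training images are taken over a representative batch `X_real` (with corresponding inverted latents `W_star`):

```python
import torch
import torch.nn.functional as F

def fidelity_loss(D, x_fake):                # Eq. (4): log(1 + exp(-D(G(w))))
    return F.softplus(-D(x_fake)).mean()

def pixel_loss(x_fake, X_real):              # Eqs. (5) and (8): image-space MSE
    return torch.mean((x_fake - X_real) ** 2)

def perceptual_loss(phi, x_fake, X_real):    # Eqs. (6) and (9): VGG feature MSE
    return sum(torch.mean((f - r) ** 2)
               for f, r in zip(phi(x_fake), phi(X_real)))

def latent_loss(w, W_star):                  # Eqs. (7) and (10): latent-space MSE
    return torch.mean((w - W_star) ** 2)
```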
#### III-C3 Navigating the latent space
LatentAugment navigates the latent space, \(\mathcal{W}\), of the SG2 model by minimising the loss function \(\mathcal{L}(w)\),
\[\tilde{w}=\arg\min_{w}\mathcal{L}(w). \tag{11}\]
This procedure is illustrated in Figure 2, where the dashed rectangle specifies the LatentAugment policy. To keep a relation between the original and augmented image (\(x\) and \(\tilde{x}\)), the starting point of LatentAugment is \(w^{*}\), the latent code retrieved from \(x\).
In each step, \(k\), we have a latent vector, \(w^{k}\), and use the generator, \(G\), to reconstruct a corresponding synthetic image, \(\tilde{x}^{k}\). The overall loss \(\mathcal{L}(w^{k})\) is a weighted sum of the four terms, where each weight becomes a hyper-parameter of the policy that determines the relative importance of the different terms, also handling differences in relative scales of the terms. Different weight magnitudes allow following different directions when navigating the latent space, \(\mathcal{W}\), _e.g._, setting \(\alpha_{f}\) to zero causes the policy to navigate the latent space only with respect to diversity. The final latent vector, \(\tilde{w}\), is input to the generator, \(G\), resulting in a corresponding synthetic image, \(\tilde{x}\), that is fed to the downstream model, \(\mathcal{M}\), as per Equation 1.
### _Hyper-parameter search_
LatentAugment navigates the latent space via gradient-based optimisation to find \(\tilde{w}\). We used the Adam algorithm [57] for \(K\) iterations with learning rate \(\eta\). At each step, \(k\), the current latent vector, \(w^{k}\), is updated as,
\[w^{k+1}=w^{k}-\eta\nabla\mathcal{L}(w^{k}). \tag{12}\]
The Adam momentum parameters, \(\beta_{1}\) and \(\beta_{2}\), were set to \(0.9\) and \(0.999\), respectively.
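Putting Equations 11 and 12 together, the walk through \(\mathcal{W}\) can be sketched as below. The helpers `d_fid` and `d_pix` stand in for the fidelity and pixel terms defined earlier, and the signs on the diversity terms are our assumption: since minimising \(\mathcal{L}(w)\) should push the augmented image away from the training images, those distances plausibly enter negated.

```python
import torch

def latent_augment(G, w_star, x_real, a_f, a_pix, a_perc, a_lat, K=9, eta=0.01):
    """Gradient-based walk from w* to w-tilde (Equations 11-12)."""
    w = w_star.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=eta, betas=(0.9, 0.999))
    for _ in range(K):
        x_gen = G(w)
        # Weighted sum of fidelity and (negated) diversity terms; the sign
        # convention is assumed, see the lead-in above. d_fid and d_pix are
        # assumed helpers for the fidelity and pixel losses (not shown).
        loss = (a_f * d_fid(x_gen)
                - a_pix * d_pix(x_gen, x_real)
                - a_perc * perceptual_distance(x_gen, x_real)
                - a_lat * latent_distance(w, w_star))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(w.detach())  # the augmented image, x-tilde
```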
The hyper-parameters of the proposed approach, namely \(p_{aug}\), \(\alpha_{f}\), \(\alpha_{pix}\), \(\alpha_{perc}\), \(\alpha_{lat}\), \(K\), and \(\eta\), control the GAN generation process. By fixing \(p_{aug}\) and tuning \(\alpha_{f}\), \(\alpha_{lat}\), \(\alpha_{pix}\), and \(\alpha_{perc}\), the method specifies the different directions to move on the manifold. By tuning the number of optimisation steps, \(K\), and the learning rate, \(\eta\), the method regulates the intensity of the augmentation on the manifold, and by that how far away the augmented images should be from the real images. Hence, LatentAugment navigates the latent space without the need for any external supervision that requires human labels or pre-trained models.
We propose two approaches to fine-tune the hyper-parameters, both using the tree-structured Parzen estimator [58] for \(50\) iterations. The first minimises the Mean Absolute Error (MAE) in the downstream task on the validation set and thus depends on the particular downstream task. The second maximises the F1 score between real images in the validation set and 50,000 synthetic images generated by LatentAugment. In the experiments here, the validation set contained \(10\)k images. Using the definitions of precision and recall introduced by Kynkaanniemi _et al._[59], which were shown to be well suited to assess both the visual quality and mode coverage of images synthesised by generative models, the F1 score is defined as
\[\text{F1 score}=2\cdot\frac{\text{precision}\cdot\text{recall}}{\text{precision}+ \text{recall}}. \tag{13}\]
In other words, by maximising the F1 score of the SG2, we define a task-agnostic approach that searches for the parameters that make LatentAugment generate both high-quality and diverse synthetic samples. It is worth noting that such an agnostic approach does not set the value of \(p_{aug}\), which, in turn, depends on the specific downstream task. Hence, after maximising the F1 score and setting the hyper-parameter values, we also train the downstream model using values of \(p_{aug}\) in the range \([0.0,1.0)\), divided into \(10\) steps. This allows us to set \(p_{aug}\) also for the F1 score fine-tuning, ensuring a fair comparison between the two hyper-parameter search approaches.
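A hedged sketch of the task-agnostic search using Optuna (one implementation of the tree-structured Parzen estimator) is given below. The search ranges and the helper `evaluate_f1`, assumed to synthesise 50,000 images with the candidate parameters and return Equation 13 against the validation images, are ours.

```python
import optuna

def objective(trial):
    params = {
        "alpha_pix": trial.suggest_float("alpha_pix", 1e-2, 10.0, log=True),
        "alpha_perc": trial.suggest_float("alpha_perc", 1e-2, 10.0, log=True),
        "alpha_lat": trial.suggest_float("alpha_lat", 1e-4, 1.0, log=True),
        "alpha_f": trial.suggest_float("alpha_f", 1e-3, 1.0, log=True),
        "K": trial.suggest_int("K", 1, 20),
        "eta": trial.suggest_float("eta", 1e-3, 1e-1, log=True),
    }
    return evaluate_f1(params)  # hypothetical helper, see lead-in

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=50)
```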
## IV Experiments
Here we detail the dataset and the downstream task on which the LatentAugment policy was tested. We then introduce the implementation details for StyleGAN2. Finally, we describe the experimental comparisons of the DA methods.
### _Dataset and Pre-Processing_
The utility of the described data augmentation methods was evaluated on the downstream application of generating synthetic CT (sCT) images from corresponding Magnetic Resonance (MRI) images. This is an important step in the ambition towards MRI-only radiotherapy [60]. The data for this example were collected between January 2020 and October 2021 at the University Hospital of Umeå, Umeå, Sweden, from 375 patients (330 male and 45 female) with prostate (243 patients), post-surgery prostate (43 patients), gynaecological (21 patients), rectal/anal (34 male and 22 female patients), and bladder (10 male and 2 female patients) cancer. The data contained \(T_{2}\)-weighted MRI images, captured using a GE Signa 3T PET/MRI scanner (GE Healthcare, Chicago, Illinois, United States; the echo time was approximately 90 ms and the repetition times around 14,000 ms), and corresponding CT scans captured using a Philips Brilliance Big Bore (Philips Medical Systems, Cleveland, OH, USA). The images had a resolution of \(512\times 512\) in \(131\) slices per patient, and the slices were subsampled to \(256\times 256\) using linear interpolation. The MRI images were clipped to the range \([0,1900]\) and the CT images to the range \([-1000,2000]\), and both were then normalised to the range \([0,255]\). The dataset was split according to a hold-out validation scheme where 70% of the images were used for training the SG2 and \(\mathcal{M}\) with each \(\mathcal{A}\) procedure, 20% were used for validation, and 10% were used as a final test set.
### _Downstream Task_
The downstream task that we employed to evaluate the DA procedure was MRI-to-CT translation, _i.e._, generating synthetic CT images from the corresponding MRI images. For this, we employed the Pix2Pix model [61]. The Pix2Pix model is a straightforward and computationally efficient image-to-image translation model [62] that has demonstrated the ability to generate high-quality images across a variety of tasks [61], including MRI-to-CT [63]. The primary objective was to explore whether, and to what extent, LatentAugment could improve the final model performance when compared with other DA strategies for a realistic and important medical imaging task. The aim was thus not to pinpoint the best MRI-to-CT translation model, but to use the MRI-to-CT task to evaluate the proposed DA procedure.
To ensure a fair comparison, we used the same training strategy and hyper-parameters for Pix2Pix across all the examined DA policies. We used the default training configuration of the Pix2Pix model (see Isola _et al._[61] for details) except for the number of epochs and the batch size. We set the number of
Fig. 2: High-level schematic representation of the LatentAugment policy (dashed rectangle). It first retrieves the latent code \(w^{*}\) of the image \(x\) we aim to augment. Then, for each iteration \(k\) of the procedure, we manipulate \(w^{k}\) by minimising the loss \(\mathcal{L}(w^{k})\), which is given by a weighted sum of terms computing the fidelity and the diversity of \(\tilde{x}^{k}\), where the latter is measured at multiple levels (spatial, perceptual and semantic). The minimisation process consists of learning the walk from \(w^{*}\) to \(\tilde{w}\) in an agnostic manner, thus producing the final augmented image \(\tilde{x}\). Note that we used the projection of the real image as the starting point of the policy, _i.e._, \(w^{0}=w^{*}\).
epochs to 40 and scheduled the learning rate to decay linearly over the last \(20\) epochs. The batch size was fixed at \(16\).
### _SG2 training_
In the implemented SG2 model, the generator, \(G\), synthesised paired CT-MRI images, each with resolution \(256\times 256\). We adhered to the recommended SG2 settings [46], including a batch size of \(16\), a mapping network, \(\mathcal{F}\), with a depth of two, generator and discriminator learning rates set to \(0.0025\), and a regularisation weight of \(R_{1}=0.8192\). The mapping network, \(\mathcal{F}\), learned to map \(z\), from the \(512\)-dimensional latent space \(\mathcal{Z}\), to the intermediate \(512\)-dimensional latent space, \(\mathcal{W}\). The SG2 model was trained for 10,000 iterations, processing 1,000 training images in each iteration.
To avoid overfitting the discriminator, which is common in medical applications with limited data availability, we used the adaptive discriminator augmentation scheme proposed by Karras _et al._[64]. It adjusts, by a fixed amount and according to an overfitting/underfitting heuristic, the probability of applying image-based transformations, such as translations and shifts,1 during SG2 learning. The transformations included pixel blitting operations: horizontal flips (xflip) and integer translations (int); and geometric transformations: isotropic scaling (scale), rotation (rotate), anisotropic scaling (aniso), and fractional translation (frac). The magnitude of each operation was adjusted to avoid generating implausible images, while the probability of performing each transformation was adaptively tuned during the SG2 training [64]. The quality of the generated _paired_ CT-MRI images was assessed by comparison to CT and MRI images generated by two SG2 models trained on each modality separately. To isolate the performance differences due only to the multimodal image generation, we turned off the adaptive discriminator augmentation here. Then, to understand which image transformation set was best suited for the multimodal training of SG2, we performed a grid search over the transformation space.
Footnote 1: Refer to Karras _et al._[64] for the list of available transformations.
We designed a total of six experiments, and repeated each experiment three times, training a total of \(18\) SG2 models. To evaluate the performance, we used the Fréchet Inception Distance (FID) [65]. The FID measures the dissimilarity of the densities of two (assumed) Gaussian distributions in the feature space of an Inception-V3 model [66] pre-trained on the ImageNet dataset [65]. The FID score was computed using the whole training data set and 50,000 synthetic images. In the evaluation, each modality was considered separately, _i.e._, FID scores were computed for the generated CT and MRI image modalities separately. Then, we selected the SG2 model with the lowest FID score during training. In the unimodal training, the metric was based on the single modality considered. Finally, we used the Friedman test [67] to assess whether there were any significant differences between the SG2 configurations in terms of their FID scores.
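For reference, the FID between two sets of Inception-V3 feature statistics (means \(\mu\) and covariances \(\Sigma\)) can be computed as in the standard formulation [65]; this is a sketch of the metric, not the authors' code.

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians fitted to Inception features."""
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # discard tiny imaginary numerical noise
        covmean = covmean.real
    return np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```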
### _Comparative analysis_
To evaluate the performance of LatentAugment, we compared it to a baseline model, a Pix2Pix model without data augmentation (denoted _Baseline_). We also compared to the common approach of applying a composition of image transformations (denoted _Standard DA_) [16], and to generating images directly from randomly sampled intermediate latent vectors (denoted _Standard SG2 DA_), an SG2 augmentation policy common in the literature [16]. The last two procedures are explained in more detail in the following subsections.
#### III-D1 Standard DA
Let an image transformation be \(T:\mathcal{X}\rightarrow\mathcal{X}\), defined on the input image space \(\mathcal{X}\). Each transformation \(T\in\mathbb{T}\) takes a magnitude parameter, \(\mu\), that determines the intensity of the operation, _e.g._, the number of degrees to rotate an image by. Note that some operations (_e.g._, horizontal or vertical flips) do not use a magnitude parameter. Let \(\tau\) be a sequence of \(N_{\tau}\) image transformations and magnitude parameters, \(((T_{1},\mu_{1}),(T_{2},\mu_{2}),\ldots,(T_{N_{\tau}},\mu_{N_{\tau}}))\). Each operation is applied in sequence, with probability \(p_{aug}\). Hence, the output of \(\tau\) is the result of a composition of image transformations, \(\tau(x)=T_{N_{\tau}}(\cdots T_{2}(T_{1}(x;\mu_{1});\mu_{2})\cdots;\mu_{N_{ \tau}})\) and yields an augmented image, \(\tilde{x}=\tau(x)\). Note that when using this type of augmentation, it is necessary to fine-tune the transformation pipeline. Indeed, specifying the type, order, and magnitude of the operations is essential to preserve image labels.
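A minimal sketch of \(\tau\) follows. Whether \(p_{aug}\) gates each operation individually (as the sentence above suggests) or the pipeline as a whole is an implementation detail; here each operation is skipped with probability \(p_{aug}\), consistent with the convention that \(p_{aug}=1\) leaves only real images.

```python
import random

def standard_da(x, ops, p_aug):
    """Compose (T, mu) pairs in sequence; mu is None for flips and the like."""
    for T, mu in ops:
        if random.random() >= p_aug:  # apply with probability 1 - p_aug
            x = T(x, mu) if mu is not None else T(x)
    return x
```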
We used horizontal flips (xflip) and affine/non-affine transformations, which are known to be well-suited in the medical domain [68]. Within the affine transformations, we considered rotations (rotate) and fractional translations (frac). The non-affine transformation considered was elastic deformations (deform) [69]. We set the magnitude range to rotate images (rotate) to \([-3,3]\) degrees and for translations (frac) to \([-5\%,5\%]\), meaning the percentage of pixels to shift the image by. The elastic deformations were implemented with a receptive field of \(63\), and the standard deviation of a Gaussian filter was \(32\). The elastic deformations performed a smooth displacement of the pixels in the images exploiting a randomly generated displacement field that was convolved with a Gaussian filter.
#### III-D2 Standard SG2 DA
Given a latent vector \(z\) from \(\mathcal{Z}\), sampled from an isotropic standard normal prior distribution, the latent vector was mapped to the intermediate latent space, \(\mathcal{W}\), by the mapping network, \(w=\mathcal{F}(z)\). Next, the SG2 generator takes the intermediate latent vector, \(w\), and generates the corresponding image, \(\tilde{x}=G(w)\).
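In code, the whole policy reduces to two lines (a sketch; `G` and `F` denote the trained SG2 generator and mapping network, and `batch_size` is assumed defined):

```python
import torch

z = torch.randn(batch_size, 512)  # isotropic standard normal prior on Z
x_tilde = G(F(z))                 # map to W, then generate
```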
#### III-D3 Hyper-parameter search
The DA approaches were applied according to the augmentation rule in Equation 1, where \(p_{aug}\) controlled the number of synthetic samples used to train \(\mathcal{M}\) in each epoch. As stated in section III, when \(p_{aug}=0\) we fed \(\mathcal{M}\) only augmented images, while when \(p_{aug}=1\) we would only use real images. For each procedure, we directly augmented the paired CT-MRI images. For the Standard DA, we carried out an exhaustive search over the image transformation space that consisted of xflip, affine (rotate and frac) and non-affine (deform) transformations. With a transformation space with \(3\) options, there are a total of \(7\) combinations, where, for each combination, we allow \(p_{aug}\) to assume values from \([0.0,1.0)\) in ten steps for a total of \(70\) experiments. For the Standard SG2 DA, we conducted a total of \(10\) experiments using the Pix2Pix model, for a grid of
\(p_{aug}\) values in the interval \([0.0,1.0)\). For each experiment, we evaluated the downstream model's MAE on the validation set (within the body), searching for the parameter configuration that minimised the MAE.
### _Validation approach_
The metrics used to evaluate the final performance were: the MAE, Structural Similarity (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Learned Perceptual Image Patch Similarity (LPIPS), computed on the test patients. The MAE was evaluated within the body, excluding air with Hounsfield Unit (HU) values below \(-500\). To assess whether the Pix2Pix models trained using the three DA policies performed similarly, we used the Friedman test [67], and, in case statistically significant differences were detected (\(p<0.05\)), we used the Nemenyi post-hoc test [70] to detect pairwise differences between the augmentation policies. We also evaluated the computational overhead of each DA policy compared to the baseline by computing the throughput, _i.e._, the time (in seconds) needed to augment a batch of \(16\) images with a resolution of \(256\times 256\). To obtain a mean estimate of the throughput, we report the mean value registered during training. The PyTorch implementation of LatentAugment, and all models, are available at [https://github.com/ltronchin/LatentAugment](https://github.com/ltronchin/LatentAugment). The experiments were performed using one NVIDIA RTX A5000 GPU.
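As a small concrete example, the body-masked MAE described above can be computed as follows (a sketch; array shapes and HU calibration are assumed):

```python
import numpy as np

def body_mae(ct, sct):
    """MAE within the body: voxels with HU below -500 (air) are excluded."""
    mask = ct > -500
    return float(np.mean(np.abs(ct[mask] - sct[mask])))
```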
## V Results and discussion
In this section, we first discuss the generation quality of SG2. Then we present the evaluation of LatentAugment in three experiments: (1) a quantitative and qualitative comparison of the proposed LatentAugment policy to existing DA methods (subsection V-B), (2) an assessment of LatentAugment's sensitivity to hyper-parameter settings (subsections V-C and V-D), and (3) an exploration of LatentAugment's ability to tackle the generative learning trilemma (subsection V-E).
### _SG2 image generation assessment_
In Table I we show the results from the different SG2 settings detailed in subsection IV-C. The table is organised in two sections. In the first, the SG2 was trained without DA: the first and second rows show the FID scores in the case of unimodal training, whilst the third row corresponds to multimodal training. These results show that multimodal training leads to reduced performance compared to unimodal models: this could be expected, since the multimodal SG2 has to learn a more complex data manifold comprising two modalities. The second section of Table I shows FID scores when the SG2 was trained with three sets of geometric transformations. When comparing these three lines against the third row of the previous section, we notice that DA decreases the FID values, which corresponds to better performance. In particular, when augmenting the data by scaling, rotation, anisotropic scaling, and fractional translation (fifth row in Table I) we get the lowest FID scores for the multimodal training, with a mean of \(8.11\) (between the FIDs for CT and MRI). Hence, in all the following experiments using DA we employ the SG2 generator trained with these four transformations; an example of its generations is shown in Figure 3. Moreover, the Friedman test reveals that no significant differences exist between any of the experiments reported in Table I (\(p=0.45\)).
### _Analysis of the downstream task performance_
The LatentAugment hyper-parameter search presented in subsection III-D returned two sets of values. The first minimised the MAE on the validation set, and consists of \(p_{aug}=0.7\), \(\alpha_{pix}=3\), \(\alpha_{perc}=1\), \(\alpha_{lat}=0.1\), \(\alpha_{f}=0.01\), \(K=9\), and \(\eta=0.01\). The second sought to maximise the F1 score between synthetic and real images, and set \(\alpha_{pix}=0.1\), \(\alpha_{perc}=10\), \(\alpha_{lat}=0.001\), \(\alpha_{f}=0.01\), \(K=9\), \(\eta=0.01\). Let us recall that the hyper-parameter search based on the F1 score does not include \(p_{aug}\) because it does not depend on a downstream task, as described in subsection III-D: the additional grid search on \(p_{aug}\) for the F1 score fine-tuning set \(p_{aug}=0.8\). It is worth noting that, despite targeting different metrics, both methods set the same values for the hyper-parameters controlling the intensity of the
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{6}{c}{Adaptive discriminator augmentation} & \multicolumn{2}{c}{FID \(\downarrow\)} \\ \hline xflip & int & scale & rotate & aniso & frac & CT & MRI \\ \hline ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & \(\mathbf{6.79\pm 0.47}\) & — \\ ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & — & \(8.81\pm 0.32\) \\ ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & \(14.97\pm 0.78\) & \(10.44\pm 0.68\) \\ \hline ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & \(11.58\pm 0.83\) & \(8.04\pm 0.52\) \\ ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & \(9.10\pm 0.71\) & \(\mathbf{7.12\pm 0.24}\) \\ ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & \(10.62\pm 1.07\) & \(11.35\pm 0.64\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: The first three rows report the average and the standard error of the FID values achieved in the unimodal (rows one and two) and multimodal (row three) settings of SG2 trained without augmentation. Lower FID scores indicate higher-quality images. The last three rows show FID scores for three sets of transformations used during SG2 training (acronyms are defined in subsection IV-C). The best results are reported in bold.
Fig. 3: Left: Random synthetic samples generated by SG2. Right: Real images are randomly drawn from the training set.
augmentation on the manifold (\(K\) and \(\eta\)). This underscores a potential correlation between the inherent diversity of synthetic images and their efficacy in DA applications: the more diverse the synthetic images, the more effective the DA.
Let us now turn our attention to hyper-parameter searches for Standard DA and Standard SG2 DA, which are the other approaches used for comparison (subsection IV-D). For Standard DA we find that the best set of hyper-parameters includes \(p_{aug}=0.9\), xflip, affine (rotate and frac), and non-affine (deform). For Standard SG2 DA, \(p_{aug}=0.7\) performed the best.
It is interesting to notice that \(p_{aug}\), controlling the probability of using augmented images, falls between \(0.7\) and \(1.0\) in all four hyper-parameter searches: two for LatentAugment, one for Standard DA, and one for Standard SG2 DA. This suggests that training the downstream model with at least \(70\%\) real images ensures increased performance.
Table II presents the downstream task results by summarising the MRI-to-CT translation performance on the test set for all the DA methods. For each DA method, it shows the objective optimised by the hyper-parameter search on the validation set, the performance metrics (MAE, SSIM, PSNR, and LPIPS), and the time required to augment a batch of \(16\) samples with resolution \(256\times 256\) (the throughput). It is worth noting that the proposed LatentAugment method outperforms the Baseline by a large margin with both objectives. With the F1 score objective, LatentAugment achieves a \(13.8\%\) decrease in MAE, \(0.9\%\) and \(2.0\%\) increases in SSIM and PSNR, respectively, and a \(7.1\%\) decrease in LPIPS relative to the Baseline. The MAE objective resulted in similar performance gains, except for a smaller decrease in LPIPS (\(3.7\%\) compared to the Baseline). The difference in LPIPS between the two LatentAugment settings is likely due to the search procedure rewarding pixel-based differences (MAE objective) rather than perceptual differences (F1 score objective). This can also be explained by looking at the values of \(\alpha_{perc}\). Indeed, with the MAE objective, \(\alpha_{perc}\) is an order of magnitude smaller than the one retrieved by the F1 score objective, _i.e._, 1 vs 10. When comparing the results of Standard DA and Standard SG2 DA (lines 2 and 3 in Table II) against LatentAugment, we argue that the latter is able to generate synthetic images that allow \(\mathcal{M}\) to generalise better. We also notice that both Standard DA and Standard SG2 DA perform better than the Baseline. However, the performance gains are smaller than the ones achieved by LatentAugment. Indeed, on the one hand, Standard DA can produce only a limited set of possible sources of variation, relying on the image primitives included in the transformation set, while, on the other hand, Standard SG2 DA lacks control over the generation process, suffering from poor mode coverage.
The last column of Table II reveals that the sampling process of LatentAugment has a larger throughput when creating a batch of augmented images than the Standard DA and the Standard SG2 DA, as will be discussed in subsection V-E.
The Friedman test performed for each metric detects significant differences among the methods for all metrics (p-value \(<\) 0.05). We therefore applied the Nemenyi post-hoc test, whose results comparing all pairs of DA methods are
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & Objective* & MAE \(\downarrow\) & SSIM \(\uparrow\) & PSNR \(\uparrow\) & LPIPS \(\downarrow\) & Throughput** [sec] \(\downarrow\) \\ \hline Baseline & — & \(45.64\) & \(9.29\cdot 10^{-1}\) & \(33.62\) & \(6.57\cdot 10^{-2}\) & — \\ Standard DA & MAE & \(44.73\) & \(9.31\cdot 10^{-1}\) & \(33.82\) & \(6.32\cdot 10^{-2}\) & \(3.48\cdot 10^{-2}\) \\ Standard SG2 DA & MAE & \(42.26\) & \(9.34\cdot 10^{-1}\) & \(34.19\) & \(6.45\cdot 10^{-2}\) & \(\mathbf{3.09\cdot 10^{-2}}\) \\ \hline LatentAugment & F1 score & \(\mathbf{39.32}\) & \(\mathbf{9.37\cdot 10^{-1}}\) & \(34.29\) & \(\mathbf{6.10\cdot 10^{-2}}\) & \(2.55\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Downstream task performance on the test set. Each metric is marked with a downward or upward arrow to indicate whether it is to be minimised or maximised. The best results for each metric are reported in bold. *: the objective defines the metric used to perform the hyper-parameter optimisation of each DA policy on the validation set. **: throughput is computed on a single NVIDIA RTX A5000, with a batch size of 16 and an image resolution of \(256\times 256\times 2\).
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & & Baseline & Standard DA & Standard SG2 DA & LatentAugment* & LatentAugment** & Score \\ \hline MAE & Baseline & & 0 & \(-\) & \(-\) & \(-\) & \(-3\) \\ & Standard DA & 0 & & \(-\) & \(-\) & \(-\) & \(-3\) \\ & Standard SG2 DA & \(+\) & \(+\) & & \(-\) & \(-\) & 0 \\ & LatentAugment* & \(+\) & \(+\) & \(+\) & & 0 & \(+3\) \\ & LatentAugment** & \(+\) & \(+\) & \(+\) & 0 & & \(+3\) \\ \hline SSIM & Baseline & & 0 & \(-\) & \(-\) & \(-\) & \(-3\) \\ & Standard DA & 0 & & \(-\) & \(-\) & \(-\) & \(-3\) \\ & Standard SG2 DA & \(+\) & \(+\) & & \(-\) & \(-\) & 0 \\ & LatentAugment* & \(+\) & \(+\) & \(+\) & & 0 & \(+3\) \\ & LatentAugment** & \(+\) & \(+\) & \(+\) & 0 & & \(+3\) \\ \hline PSNR & Baseline & & 0 & \(-\) & \(-\) & \(-\) & \(-3\) \\ & Standard DA & 0 & & \(-\) & \(-\) & \(-\) & \(-3\) \\ & Standard SG2 DA & \(+\) & \(+\) & & \(-\) & 0 & \(+1\) \\ & LatentAugment* & \(+\) & \(+\) & \(+\) & & 0 & \(+3\) \\ & LatentAugment** & \(+\) & \(+\) & 0 & 0 & & \(+2\) \\ \hline LPIPS & Baseline & & \(-\) & 0 & \(-\) & \(-\) & \(-3\) \\ & Standard DA & \(+\) & & 0 & 0 & \(-\) & 0 \\ & Standard SG2 DA & 0 & 0 & & 0 & \(-\) & \(-1\) \\ & LatentAugment* & \(+\) & 0 & 0 & & \(-\) & 0 \\ & LatentAugment** & \(+\) & \(+\) & \(+\) & \(+\) & & \(+4\) \\ \hline \hline \end{tabular}
\end{table} TABLE III: Results of the Nemenyi post-hoc test for the MAE, SSIM, PSNR, and LPIPS values in Table II when comparing all the DA methods. A zero (0) means a non-significant difference, and a minus (\(-\)) or plus (\(+\)) means ranked statistically significantly lower or higher, respectively, when comparing a method in a row to a method in a column. *: LatentAugment using a set of parameters optimised on MAE. **: LatentAugment using a set of parameters optimised on F1 score.
shown in Table III. When comparing pairs of methods, the symbol minus (\(-\)) indicates that a method in a row has a performance ranking that is statistically significantly lower than a method in a column. Straightforwardly, the symbols zero (\(0\)), and plus (\(+\)) correspond to no significant difference and statistically significantly higher ranking, respectively. The table also includes a last column reporting a score computed as the sum of Nemenyi directions, where \(+\) and \(-\) mean adding and subtracting one unit, respectively: hence, the larger the values, the more times each DA method wins over the others. The score ranges from \(-4\) to \(+4\): a negative value of \(-4\) indicates that the DA approach in the row always loses against all the methods in the columns, whilst \(+4\) denotes the opposite situation. The first two lines of each metric in Table III reveal no significant difference between Standard DA and the Baseline. This suggests that applying standard image transformations like rotation and shifts does not significantly improve the performance on the MRI-to-CT translation task. Moreover, when comparing the third line against the first and last two lines for each metric, we find Standard SG2 DA to be significantly better than both the Baseline and the Standard DA, but significantly worse than LatentAugment. Notably, LatentAugment outperforms the other DA methods. There is no significant difference in the LatentAugment scores for the two objectives (last two lines for each metric), except for the LPIPS metric, where the F1 score objective performs significantly better than the MAE objective.
The qualitative assessment of the different DA methods is illustrated in Table IV and Figure 4. Table IV presents the sCT images generated from an example MRI image when using each DA method. The reference CT and MRI images are in the first column, and the differences between the ground truth CT and the generated sCT images (\(\Delta=\text{CT}-\text{sCT}\)) are in the following columns. The heatmaps were normalised to a range from 0 (no error) to 1 (maximum error), and illustrate the errors in the reconstructed sCT images for each method. From this visual analysis, the Pix2Pix trained without DA (Baseline) demonstrates a higher bone reconstruction error, while Standard DA and Standard SG2 DA yield more accurate reconstruction in the same regions. All tested DA methods, except LatentAugment, fail to accurately reconstruct the urinary bladder from the MRI image.
In Figure 4, we illustrate the mean MAE on the test set at different HU values. HU is a quantitative scale that describes radiodensity in medical CT and accurately characterises tissue density. Air is represented by a value of \(-1000\), and bone by values between \(500\) (cancellous bone) and \(2000\) (dense bone). Values centred around \(0\), within an interval of \(\pm 350\), denote soft tissues. We observe that LatentAugment has a smaller MAE for both soft tissue and bone, while having worse performance for HU values smaller than \(-500\): however, such values indicate areas outside the body and are therefore not of interest in the task of MRI-to-CT translation. Thus, these values are not included in the computation of the MAE when it is used as a metric.
### _On the importance of the hyper-parameters_
Here we aim to determine the sensitivity of the downstream model performance to LatentAugment hyper-parameters. This analysis not only clarifies the parameters that influence the success of the augmentation, _i.e._, better downstream model performance but also provides a hint on how to set them. To this goal, we trained a random forest regressor to predict the value of a performance metric for each of the \(50\) combinations of the hyper-parameter search having MAE as the objective. Indeed, we excluded the experiments minimising the F1 score as they do not directly consider the downstream model performance. Each combination is a sample for the regressor, and it is represented in \(\mathbb{R}^{7}\) because there are \(7\) LatentAugment hyper-parameters (_i.e._, \(p_{aug}\), \(\eta\), \(K\), \(\alpha_{lat}\), \(\alpha_{pix}\), \(\alpha_{perc}\), and \(\alpha_{f}\)). The ground truth is the MAE obtained by evaluating the downstream model on the validation set. This procedure was repeated for the other three metrics, _i.e._ SSIM, PSNR, and LPIPS. We used a random forest as a regression algorithm for its capability to handle complex data, enhance prediction accuracy, and provide interpretability through feature importance [71]. We trained the random forest using \(1,000\) bootstrap rounds and, then, it provides a measure of importance for each feature. This score is computed as the mean impurity reduction that it brought, and it is usually named Gini importance [72]. We also computed the Normalised Root Mean Squared Errors (NRMSE) on the test set. Such values are reported in round parenthesis in the legend of Figure 5. The same figure shows the results of the features' importance study: the bars are the mean of the Gini importance of each hyper-parameter for each metric, and the error bars correspond to bootstrapped standard errors. The taller the bar, the more important the parameter is for predicting the metric. We observe that \(p_{aug}\) is the most important hyper-parameter: we speculate that this happens
Fig. 4: An illustration of the downstream model performance in terms of MAE as a function of HU. Values between \(-350\) HU and \(350\) HU denote soft tissues. Values below \(-500\) and above about \(500\) correspond to air and bone, respectively. *: LatentAugment with the MAE objective. **: LatentAugment with the F1 score objective.
since it primarily controls each DA policy and needs to be carefully tuned.
To deepen the analysis of \(p_{aug}\) and answer the question "How does the amount of augmented data affect the DA's improvement?", we show in Figure 6 the validation MAE obtained across the \(50\) hyper-parameter search experiments as a function of \(p_{aug}\). The minimum MAE score in each \(p_{aug}\) range is represented by a red star and is the one that a hyper-parameter search procedure would have selected. A grey \(\times\) marks the MAE values from all the other experiments. The minimum MAE score was obtained in the interval \([0.6,0.8)\), indicating that some 60% to 80% of the training images should be real for a good result. Indeed, lower values of \(p_{aug}\) (stronger augmentation) involve higher amounts of augmented data in the training of \(\mathcal{M}\), with the risk of harming the stability of the training, _i.e._, if the Pix2Pix discriminator never sees what the training images really look like, it is not clear whether it can properly guide the generator in producing non-leaked synthetic images. In practical terms, the downstream model might learn the noise in the augmented images as part of the real data distribution, which could result in the noise from the MRI being mistakenly translated to the sCT as well. On the other hand, using too many real images, _i.e._, higher values of \(p_{aug}\), may result in \(\mathcal{M}\) not seeing enough synthetic samples to generalise well. This also
Fig. 5: Impact of each LatentAugment hyper-parameter (on the x-axis) in terms of Gini importance. Each bar for each metric is the mean hyper-parameter Gini importance, whilst the error bars are bootstrapped standard errors. Values in round parentheses in the legend are the mean NRMSE and the related bootstrapped standard error.
Fig. 6: Effect of \(p_{aug}\) on the downstream model \(\mathcal{M}\). We denote with a red star the minimum MAE value computed in each interval and with a grey \(\times\) all the experiments that have a \(p_{aug}\) in the considered interval. We divide the possible range of \(p_{aug}\) values into intervals of \(0.2\).
\begin{table}
(Image grid: the sCT and the corresponding error heatmap \(\Delta\) for each DA method: Baseline, Standard DA, Standard SG2 DA, and LatentAugment (ours)*.)
\end{table} TABLE IV: Qualitative comparisons between DA methods. The first column reports the reference CT and MRI test images, respectively. Each subsequent column denotes a different DA method used to train \(\mathcal{M}\). For each DA method, we report the sCT along with the heatmap \(\Delta=\text{CT}-\text{sCT}\). *: the left column reports LatentAugment results using a set of parameters optimised on MAE; the right column reports LatentAugment results using a set of hyper-parameters optimised on F1 score.
reinforces the observation that the dataset should contain at least \(70\%\) real images when training \(\mathcal{M}\) for all the DA methods (subsection V-B).
Turning our attention to the importance of the regularisation weights, Figure 5 does not highlight strong differences in the importance of each parameter. The diversity terms \(\alpha_{lat}\), \(\alpha_{pix}\), and \(\alpha_{perc}\) appear to be more important than the fidelity term, \(\alpha_{f}\), highlighting that diversity plays a crucial role in making the augmentation effective for the downstream task. Moreover, the regularisation weights appear to be more important than the intensity of the transformations, adjusted by \(\eta\) and \(K\). Thus, we argue that defining useful augmentation directions in the latent space matters more than tuning the intensity of the transformations.
### _Ablation study_
We now discuss deactivating the diversity-fidelity terms in LatentAugment. To this end, we start from the best parameter configuration found running the MAE-based hyper-parameter search on the validation set, _i.e._, \(K=9\), \(\eta=0.01\), \(\alpha_{lat}=0.1\), \(\alpha_{pix}=3\), \(\alpha_{perc}=1\), and \(\alpha_{f}=0.01\) as it directly correlates with the downstream model performance. From this starting point, we performed five additional experiments training \(\mathcal{M}\) using a perturbed parameter set for LatentAugment. In the first four experiments, we focused on the diversity weights \(\alpha_{pix},\alpha_{perc},\alpha_{lat}\), while in the last experiment, we turned off the fidelity weight, \(\alpha_{f}\).
The results are in Table V: the first four columns report the diversity and fidelity weights, while the last two show the MAE achieved on the test set and the throughput, respectively. Round parentheses in each cell denote the percentage variation in MAE and throughput when ablating the diversity-fidelity terms, relative to the reference hyper-parameter set (first line in the table). We observed the largest increase in MAE when we deactivated all the diversity loss terms by setting \(\alpha_{lat}\), \(\alpha_{pix}\), and \(\alpha_{perc}\) to 0 (a \(6.6\%\) MAE increase). Without any regularisation ensuring that the augmented images are diverse, we only search the latent space for the direction that ensures fidelity, a desideratum already satisfied by the SG2's generated images without any editing policy. In other words, if we do not search the latent space for diversity, we do not need to set a condition to maintain the synthetic images on the manifold of the real data. Indeed, the difference is very small between the MAE obtained with this configuration (\(41.99\) in the second line of Table V) and the Standard SG2 DA (\(42.26\) in the third line of Table II). The sampling time decreases substantially (by \(25.1\%\)) without the diversity terms. To determine which diversity terms cause the MAE drop, we performed three additional experiments, turning off one diversity term at a time. Observing lines three, four, and five of the table, we notice that the latent loss is the most important term. Indeed, removing the latent loss (\(\alpha_{lat}=0\)) produces an MAE increase of \(4.8\%\), compared to increases of \(2.7\%\) and \(1.3\%\) when ablating the perceptual loss (\(\alpha_{perc}=0\)) and the pixel loss (\(\alpha_{pix}=0\)), respectively. This result supports our hypothesis that the highly structured semantic hierarchy in deep generative representations can be exploited to develop a DA method that manipulates the latent space of generative models. Moreover, removing the latent loss does not cause a substantial reduction in the throughput, confirming the suitability of working in the latent space from a computational point of view. In contrast, the perceptual loss is the most time-expensive term, requiring a forward pass through the VGG network.
### _Tackling the generative learning trilemma_
For the Standard SG2 DA, we used the truncation trick [47] to investigate the diversity-fidelity trade-off: a truncation of \(1.0\) means searching for the maximum diversity, while a truncation of \(0.0\) means searching for the maximum fidelity of the SG2 synthetic images, according to the formula given below. The results are shown in Figure 7.
Fig. 7: First row: precision-recall comparison of LatentAugment (green circle) and Standard SG2 DA (white triangles), respectively. Second and third rows: qualitative examples of real and augmented samples for configurations A, H, and E.
\(w^{\prime}=\overline{w}+\psi(w-\overline{w})\), where \(\psi\in[0,1]\) denotes the truncation parameter and \(\overline{w}\) represents the average learned representation in the training data [47]. We sampled four values of \(\psi\), one for each letter reported in the plot, _i.e._, E: 0.0, F: 0.3, G: 0.7, H: 1.0 (white triangles in the uppermost plots in Figure 7). For LatentAugment, we randomly sampled four parameter configurations (A, B, C, and D, in green in the same plots). Observing these plots, we notice that LatentAugment, even with randomly sampled hyper-parameters, always beats Standard SG2 DA in terms of diversity, since all the green circles lie further to the right on the diversity x-axis than the white triangles, a finding that holds for both modalities. Moreover, turning our attention to the fidelity y-axis while considering the Standard SG2 DA configuration with the best diversity (H), LatentAugment shows comparable or higher fidelity. This is not a limitation of LatentAugment because it is known that SG2 already generates high-quality samples [12]. Thus, we conclude that LatentAugment can synthesise more diverse images than Standard SG2 DA, with at least equal visual quality.
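For completeness, the truncation trick used for the Standard SG2 DA sweep is a single interpolation (a direct transcription of the formula above):

```python
def truncate(w, w_bar, psi):
    """psi = 1.0 favours diversity; psi = 0.0 collapses to the mean latent."""
    return w_bar + psi * (w - w_bar)
```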
The second and third rows of Figure 7 offer a visual comparison of the images generated by both approaches and the real ones. From left to right, we report six real CT (MRI) images, the augmentation results using configuration A for LatentAugment (best diversity and fidelity), and the augmentation results using configurations H and E for Standard SG2 DA, which correspond to maximum diversity and maximum fidelity, respectively. When observing the real and LatentAugment images in Figures 7(a), 7(e) and Figures 7(b), 7(f), respectively, we notice that our method introduces new sources of variation into the real images compared to traditional augmentation approaches. Indeed, the LatentAugment images provide a smooth variation of the real ones while retaining the main content (main body structure) and style (texture, colour, etc.). This allows LatentAugment to create realistic but diverse images with respect to those in the training set, avoiding out-of-distribution samples. Furthermore, the comparison of Figures 7(c), 7(g), 7(d), 7(h) against the real images shows that such a relation is not satisfied by Standard SG2 DA, which provides synthetic samples drawn at random.
With reference to the third issue of the trilemma, related to the throughput of the generation, Table II shows that LatentAugment has a larger throughput than Standard SG2 DA. Nevertheless, this value is still reasonable compared to the inference time of recently emerged diffusion models [73, 74, 75]. Indeed, for the sake of comparison, we ran a pre-trained latent diffusion model [75], proposed to reduce the computational requirements of pixel-based diffusion models. In the sampling procedure, we set the number of steps to create an image to 50. It took \(16.8\) seconds to generate a single \(256\times 256\) image using a single NVIDIA A100 GPU, which is 2.4 times faster than the NVIDIA RTX A5000 GPU used in our experiments; we were forced to change GPUs due to memory issues. Thus, we deem that LatentAugment does not break the sampling-speed requirement of the trilemma. Moreover, the increased computational cost only applies when training the downstream model, and not during inference. Hence, once deployed, the downstream model will benefit from better performance thanks to LatentAugment, without any added computational cost.
## VI Conclusion
In this work we propose LatentAugment, a new method that navigates the latent space to improve the diversity and mode coverage of GAN synthetic images, enabling their adoption for DA purposes. LatentAugment starts from the real image's latent representation and steers the latent space to maximise the spatial, perceptual, and semantic diversity of the generated images. Moreover, it controls fidelity by maximising the realness score of the augmented images.
When compared to Standard DA and Standard SG2 DA, we demonstrated the utility of LatentAugment: it increased the generalisation performance of a deep model in the downstream application of MRI-to-CT translation. Moreover, LatentAugment consistently improves GAN-generated images in terms of precision and recall, implying improved mode coverage while maintaining high-quality outputs, thus fulfilling the missing criteria of diversity and mode coverage in the generative learning trilemma. A reflection on this work highlights three main avenues for future work. The first concerns the increased computational overhead required for our policy to manipulate the real images in the GAN latent space. In this respect, it would be worthwhile to investigate more computationally tractable approaches to steering latent vectors in the GAN latent space for DA purposes. Second, as recently emerged diffusion models have shown promising results in image quality and mode coverage, a future investigation will compare our approach to such models. Third, LatentAugment is currently independent of the downstream task, which alleviates the need for domain expertise and makes the method applicable to all tasks. Nevertheless, we plan to develop a variation of LatentAugment that incorporates information from the downstream task, _e.g._, performance or overfitting issues, to investigate how beneficial this could be.
## Acknowledgements
This research was partially supported by Lion's Cancer Research Foundation in Northern Sweden (Grant No. LP 18-2182 and No. LP 22-2319). We also acknowledge financial support from PNRR MUR project PE0000013-FAIR (Italy). Resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at Alvis @ C3SE partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973.
## Author contribution
L.T.: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization. M.H.V.: Software, Visualization. P.S.: Conceptualization, Methodology, Validation, Formal analysis, Writing - Review & Editing, Supervision. T.L.: Conceptualization, Methodology, Validation, Formal analysis, Writing - Review & Editing, Supervision. |
2306.09175 | A Novel Approach to Encode Two-Way Epistatic Interactions Between Single
Nucleotide Polymorphisms | Modelling gene-gene epistatic interactions when computing genetic risk scores
is not a well-explored subfield of genetics and could have potential to improve
risk stratification in practice. Though applications of machine learning (ML)
show promise as an avenue of improvement for current genetic risk assessments,
they frequently suffer from the problem of too many features and too little
data. We propose a method that, when combined with ML, allows information from
individual genetic contributors to be preserved while incorporating information
on their interactions in a single feature. This allows second-order analysis,
while simultaneously increasing the number of input features to ML models as
little as possible. We presented three methods that can be utilized to account
for genetic interactions. We found that interaction methods that preserved
information from the constituent SNPs performed significantly better than the
simplest interaction method. Since the currently available ML methods are able
to account for complex interactions, utilizing raw SNP genotypes alone is
sufficient because the simplest model outperforms all the interaction methods
Given that understanding and accounting for epistatic interactions is one of
the most promising avenues for increasing explained variability in heritable
disease, this work represents a first step toward an algorithmic interaction
method that preserves the information in each component. This is relevant not
only because of potential improvements in model quality, but also because
explicit interaction terms allow a human readable interpretation of potential
interaction pathways within the disease. | Nathaniel Gunter, Prashanthi Vemuri, Vijay Ramanan, Robel K Gebre | 2023-06-15T14:55:08Z | http://arxiv.org/abs/2306.09175v1 | # A Novel Approach to Encode Two-Way Epistatic Interactions Between Single Nucleotide Polymorphisms
###### Abstract
Modelling gene-gene epistatic interactions when computing genetic risk scores is not a well-explored subfield of genetics and could have potential to improve risk stratification in practice. Though applications of machine learning (ML) show promise as an avenue of improvement for current genetic risk assessments, they frequently suffer from the problem of too many features and too little data. We propose a method that, when combined with ML, allows information from individual genetic contributors to be preserved while incorporating information on their interactions in a single feature. This allows second-order analysis, while simultaneously increasing the number of input features to ML models as little as possible. **Methods**
Data were retrieved from the Alzheimer's Disease Neuroimaging Initiative, consisting of genotype, age, sex, genetic principal components, and amyloid centiloid metrics. The data were then put through one of three genetic interaction methods, and the resultant interaction features were used as inputs for a series of machine learning models. The \(r^{2}\) association between actual and predicted amyloid centiloid was collected for each of a hundred runs and used to construct distributions for each set of generated features.
**Results**
We presented three methods that can be utilized to account for genetic interactions. We found that interaction methods that preserved information from the constituent SNPs performed significantly better than the simplest interaction method. Since the currently available ML methods are able to account for complex interactions, utilizing raw SNP genotypes alone is sufficient because the simplest model outperforms all the interaction methods
**Discussion**
Given that understanding and accounting for epistatic interactions is one of the most promising avenues for increasing explained variability in heritable disease, this work represents a first step toward an algorithmic interaction method that preserves the information in each component. This is relevant not only because of potential improvements in model quality, but also because explicit interaction terms allow a human readable interpretation of potential interaction pathways within the disease.
## 1 Introduction
Many diseases are polygenic in nature. Additionally, many biological disease pathways are known to rely on more than one single nucleotide polymorphism (SNP). Methods exist to aggregate genetic influence on a disease across large parts of the genome, primarily polygenic risk scores (PRS), which are a linear combination of the dosage of individual SNPs. Despite this, influence resulting from interactions between is difficult to account for using a PRS due to the need for a weighting value for each SNP, which does not exist for interactions.
Using Alzheimer's Disease (AD) as a model disease (specifically data from the Alzheimer's Disease Neuroimaging Initiative) we show that explicit encoding of two gene interaction terms benefits from more nuance used to include the information from both component SNPs as well as interaction information. This second-order encoding is a first step towards understanding combinations of genes that may have an outsized impact when compared to the constituent genes considered individually, potentially due to biological pathways that depend on multiple risk alleles being present.
## 2 Methods
### Participant Selection and Description
The Alzheimer's Disease Neuroimaging Initiative (ADNI) is a longitudinal multicenter study to facilitate the development of clinical, imaging, genetic, and biochemical markers for the early detection and tracking of AD [14, 15]. Individuals were recruited from over 50 sites across the United States and Canada. Further information about ADNI can be found at [http://adni.loni.usc.edu/](http://adni.loni.usc.edu/). Primary inclusion criteria included the presence of genome-wide SNP genotype data and cross-sectional amyloid PET data.
#### 2.1.1 Genomic Data
Array data were acquired from a large genome-wide association study (GWAS) and filtered for standard quality control metrics, as described previously [10, 12]. Processed genotype files for participants from the various ADNI study phases were downloaded from the LONI web data sharing platform. Overly related samples were removed as described previously [11]. Genome-wide imputation using the TOPMed Imputation Server [5, 13] and the TOPMed GRCh38/hg38 build reference panel was performed separately within each batch (by GWAS array) and then merged. Monomorphic variants and SNPs with low imputation quality (\(r^{2}<0.8\)) were removed. This resulted in 16,502,548 variants (8,054,769 with MAF \(\geq 1\%\)) for 1661 individuals within the ADNI dataset.
#### 2.1.2 Neuroimaging Data
Amyloid PET imaging was performed with \({}^{18}\)F-florbetapir (AV-45) using acquisition and processing protocols as described at [http://www.adni-info.org](http://www.adni-info.org), and with summary measures of global cortical amyloid load downloaded from the ADNI database [6]. Specifically, the centiloid (CL) scale [7] metric was used as the outcome of interest for global amyloid PET burden.
### Dataset Feature Selection
To select a minimum viable feature set for proof of concept in this work, we focused first on the top 20 independent SNPs defined by largest effect size for association with clinical AD according to a recent large case/control genome wide association study [1]. _APOEe4_ and _APOEe2_ were also included due to well-known associations with AD [3, 4, 9]. From this starting point, SNPs with minor allele frequency < 5% were excluded to remove sparse features. Age, sex, and the first five genetic principal component eigenvectors were included in all models as covariates. A dataset was created from each of the following encoding methods, using interactions with _APOEe4_.
### Encoding Methods for Explicit SNP-SNP Interaction Terms
The simplest possible encoding scheme for whether two SNPs are interacting is to apply an AND operator to the SNP dosages. Because this results in either a one (if both SNPs are present) or a zero (if at least one SNP is missing), we call this binary encoding. Explicitly, this is:
\[A\times B=\begin{cases}0&\text{if }A=0\text{ or }B=0\\ 1&\text{if }A\neq 0\text{ and }B\neq 0\end{cases}. \tag{1}\]
This is perhaps the most common method of consideration when attempting to define SNP-SNP interactions. However, it also fails to distinguish, which SNP is missing. Even in cases where both SNPs are present, there may be subtle differences between having one copy of each allele and two that this method fails to capture.
The next option that one may approach would likely be a linear encoding method as is here:
\[A\times B=\begin{cases}A-B&\text{if }A\neq B\\ A+B&\text{if }A=B\end{cases}. \tag{2}\]
This improves the variability preserved, allowing for distinction between \((A,B)=(2,2)\) and \((A,B)=(1,1)\) for example. However, this introduces a new problem, specifically that we now have \((1,0)=(2,1)\), a notable defect.
Having discovered issues with both potential simpler methods, the next step was to add and subtract in quadrature. This allowed a preservation of all variance allowed by dosage values of zero, one or two by using the following schema:
\[A\times B=\begin{cases}\sqrt{A^{2}-B^{2}}&\text{if }A>B\\ \sqrt{A^{2}+B^{2}}&\text{if }A=B\\ -\sqrt{|A^{2}-B^{2}|}&\text{if }A<B\end{cases}. \tag{3}\]
As with the linear encoding method, we add instead of subtract values on the diagonal to prevent them from simply going to zero. By using this method, we condense two individual SNPs into a single term that includes both the information in each individual feature and the interaction information between the two SNPs. Heatmaps illustrating the differences between the encodings can be seen in Figure 1.
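A direct transcription of Equations 1-3 for dosage values in {0, 1, 2} is given below (function names are ours):

```python
import numpy as np

def binary_encode(a, b):
    """Equation 1: 1 iff both risk alleles are present."""
    return int(a != 0 and b != 0)

def linear_encode(a, b):
    """Equation 2: signed difference off the diagonal, sum on it."""
    return a + b if a == b else a - b

def quadratic_encode(a, b):
    """Equation 3: add/subtract in quadrature, preserving all 9 dosage pairs."""
    if a == b:
        return float(np.sqrt(a**2 + b**2))
    sign = 1.0 if a > b else -1.0
    return sign * float(np.sqrt(abs(a**2 - b**2)))
```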
### Machine Learning Pipeline
A series of machine learning (ML) models was applied to each dataset, and the best performing model in each case was selected for comparison to the other datasets. Models were chosen for robustness to the multicollinearity introduced by our encoding method (all interactions with _APOEe4_ should have some collinearity, for example). Several linear regressions with varying penalization methods were used as a baseline, before moving to more advanced methods. Tree-based ensemble methods were applied as the more robust models, including XGBoost and a Random Forest. Finally, a stacked meta-regression was constructed from the three best models in the sequence. All models except XGBoost were sourced from the scikit-learn library [8]. XGBoost was sourced from the xgboost package [2]. This collection of models was run a hundred times on each dataset, and the \(r^{2}\) correlation between actual and predicted amyloid centiloid values was collected as the primary outcome for each run. The hundred runs were then used to construct a sample distribution of \(r^{2}\) values, boxplots of which are shown in Figure 2.
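A hedged sketch of the model stack follows; the exact base learners and meta-learner of the stacked meta-regression are not specified in the text, so ridge regression is assumed as the final estimator, and `X_train`/`y_train` denote the encoded features plus covariates and the amyloid centiloid outcomes.

```python
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV
from xgboost import XGBRegressor

stack = StackingRegressor(
    estimators=[
        ("ridge", RidgeCV()),
        ("rf", RandomForestRegressor(n_estimators=500)),
        ("xgb", XGBRegressor(n_estimators=500, learning_rate=0.05)),
    ],
    final_estimator=RidgeCV(),
)
stack.fit(X_train, y_train)
r2 = stack.score(X_test, y_test)  # r^2 between actual and predicted CL
```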
## 3 Results
### Linear and Quadratic Encoding Improve Results Over Binary Encoding
Both linear and quadratic encoding schemes significantly (p<0.0001) improve the correlation between predicted and actual amyloid CL values (Fig 2). Though the difference between linear and quadratic encoding was not significant, quadratic encoding performs slightly better, likely as a result of preserving the entire variance in the interacting dosages.
Figure 1: Heatmaps of encoded values for all values of SNP A and SNP B. Note that as encoding grows more complex, more variance within the pairing of SNPs is observed.
## 4 Discussion
We found that, across various explicit encoding methods for two-way SNP interactions, those that take relative dosage into account performed significantly better than those that do not. Every interaction encoding we applied did worse than the raw genotype dataset, though because we applied only interactions with _APOEe4_, this may be due to unaccounted-for interactions among the rest of the SNPs in our set. This study represents a potential first step towards explicitly encoding gene-gene interactions such that feature importance can provide biological insights in an intuitive manner.
Figure 2: Boxplot comparison of best model for each interaction encoding method applied to _APOEe4_ interactions, as well as the raw SNP genotype. |
2305.10174 | Utilising high-dimensional data in randomised clinical trials: a review
of methods and practice | Introduction: Even in effectively conducted randomised trials, the
probability of a successful study remains relatively low. With recent advances
in the next-generation sequencing technologies, there is a rapidly growing
number of high-dimensional data, including genetic, molecular and phenotypic
information, that have improved our understanding of driver genes, drug
targets, and drug mechanisms of action. The leveraging of high-dimensional data
holds promise for increased success of clinical trials. Methods: We provide an
overview of methods for utilising high-dimensional data in clinical trials. We
also investigate the use of these methods in practice through a review of
recently published randomised clinical trials that utilise high-dimensional
genetic data. The review includes articles that were published between 2019 and
2021, identified through the PubMed database. Results: Out of 174 screened
articles, 100 (57.5%) were randomised clinical trials that collected
high-dimensional data. The most common clinical area was oncology (30%),
followed by chronic diseases (28%), nutrition and ageing (18%) and
cardiovascular diseases (7%). The most common types of data analysed were gene
expression data (70%), followed by DNA data (21%). The most common method of
analysis (36.3%) was univariable analysis. Articles that described
multivariable analyses used standard statistical methods. Most of the clinical
trials had two arms. Discussion: New methodological approaches are required for
more efficient analysis of the increasing amount of high-dimensional data
collected in randomised clinical trials. We highlight the limitations and
barriers to the current use of high-dimensional data in trials, and suggest
potential avenues for improvement and future work. | Svetlana Cherlin, Theophile Bigirumurame, Michael J Grayling, Jérémie Nsengimana, Luke Ouma, Aida Santaolalla, Fang Wan, S Faye Williamson, James M S Wason | 2023-05-17T12:59:05Z | http://arxiv.org/abs/2305.10174v2 | # Utilising high-dimensional data in randomised clinical trials: a review of methods and practice
###### Abstract
**Introduction:** Even in effectively conducted randomised trials, the probability of a successful study remains relatively low. With recent advances in the next-generation sequencing technologies, there is a rapidly growing number of high-dimensional data, including genetic, molecular and phenotypic information, that have improved our understanding of driver genes, drug targets, and drug mechanisms of action. The leveraging of high-dimensional data holds promise for increased success of clinical trials.
**Methods:** We provide an overview of methods for utilising high-dimensional data in clinical trials. We also investigate the use of these methods in practice through a review of recently published randomised clinical trials that utilise high-dimensional genetic data. The review includes articles that were published between 2019 and 2021, identified through the _PubMed_ database.
**Results:** Out of 174 screened articles, 100 (57.5%) were randomised clinical trials that collected high-dimensional data. The most common clinical area was oncology (30%), followed by chronic diseases (28%), nutrition and ageing (18%) and cardiovascular diseases (7%). The most common types of data analysed were gene expression data (70%), followed by DNA data (21%). The most common method of analysis (36.3%) was univariable analysis. Articles that described multivariable analyses used standard statistical methods. Most of the clinical trials had two arms.
**Discussion:** New methodological approaches are required for more efficient analysis of the increasing amount of high-dimensional data collected in randomised clinical trials. We highlight the limitations and barriers to the current use of high-dimensional data in trials, and suggest potential avenues for improvement and future work.
Genetic data, High-dimensional information, Precision medicine, Randomised clinical trials, Statistical analysis
## 1 Introduction
Randomised controlled trials (RCTs) are the gold standard for assessing the safety and efficacy of an experimental treatment. However, despite the growing cost and time associated with developing and evaluating drugs, the probability of success of RCTs is relatively low.[1] One of the reasons is that there is rarely a "one size fits all" approach in most clinical areas because treatment typically has a heterogeneous effect on patients with different pathogenic mechanisms. For example, in a study that investigated predictors of response to Methotrexate in early rheumatoid arthritis, [2]
75% of the patients experienced a good response, according to the EULAR response criteria.[3] This study found that several demographic and clinical characteristics (including age, sex, smoking status and symptom duration) are associated with response to Methotrexate. Subsequently, a double-blind phase IV clinical trial in patients with rheumatoid arthritis identified genetic markers that could partly explain the heterogeneity of response to Methotrexate. [4]
With recent advances in the next-generation sequencing technologies, there is a rapidly growing number of human molecular biomarkers that could inform drug mechanisms and increase the success of clinical trials. [5] Molecular biomarkers are measurable molecular characteristics (small molecules) that could identify relatively homogeneous disease subsets in terms of clinical features, diagnosis, prognosis, or response to treatment. With the advent of personalised medicine, molecular biomarkers are gaining importance in clinical research.[6] The most common types of molecular biomarkers are genomic biomarkers such as deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). Single nucleotide polymorphisms (SNPs), which are the most abundant type of genetic variation, represent a difference in a single nucleotide. SNPs are often measured (genotyped) across the genome, and the associations between genome-wide SNPs and different human traits, i.e. genome-wide association studies (GWAS), are extensively used in genetics.[7] GWAS to date have analysed hundreds of thousands of genetic variants generated by next-generation sequencing technologies.
Proteomics and metabolomics also play an important role in many medical applications and are being increasingly used in drug research and development.[8] Proteomics is the large-scale study of proteins that allows characterisation of protein structure and function. Protein biomarkers are also increasingly used in clinical trials in patient stratification, disease diagnosis, and prognosis.[9] Another commonly used genetic biomarker is gene expression, which is a process that regulates the amount of protein or other molecules expressed by the cell, and thus is measured by the amount of the molecules or protein. The advantage of microarray technology is to allow for gene expression profiling, which consists of measuring levels of thousands of genes. Changes in gene expression can reflect a change in a cell's environment, such as disease state,[10] response to treatment [11] or treatment side effect.[12] Metabolites, small molecules produced by the body when it breaks down food or drugs, are useful for biomarker discovery because they can be utilised to examine the underlying biochemical activity of cells. Modern technologies, such as mass spectrometry, allow for a large number of metabolites to be measured, thus creating a metabolomic profile. [13] Metabolic changes are informative of the response to treatment and therefore have the potential to be useful in clinical trials.[14] For example, a randomised placebo-controlled clinical trial that examined the effect of sertraline on major depressive disorder patients found that baseline metabolic signatures could be predictive of response or non-response to sertraline.[15]
In clinical trials, biomarkers serve multiple purposes, such as prognosis of the likely progression of a disease, and prediction of the likely clinical outcome.[16] Prognostic biomarkers are those that are associated with disease prognosis in the absence of treatment or in the presence of a standard of care treatment. Predictive biomarkers are those that are associated with the effectiveness of a specific treatment. Predictive biomarkers could be used to identify subsets of patients who are likely to respond to treatment. For example, a pooled analysis of randomised trials found that women whose breast tumours have overexpressed the human epidermal growth factor receptor 2 (\(HER2\)) protein or amplified \(HER2\) gene (\(HER2\)-positive) benefited from adjuvant treatment with anthracyclines, while women with \(HER2\)-negative breast tumours derived no added benefits from adjuvant chemotherapy with anthracyclines.[17] Thus, the \(HER2\) status of a breast tumour is a predictive biomarker for response to adjuvant treatment with anthracyclines. Prognostic and predictive biomarkers are usually measured once, before the start of treatment. Biomarkers that are measured repeatedly during the trial could be used as a surrogate endpoint, i.e. as a proxy for a clinical endpoint. Biomarkers based on a continuous single gene measurement can be used as classifiers by considering a threshold, or a series of thresholds, to specify a biomarker-positive and biomarker-negative group [18, 19]. However, identifying single-gene biomarkers requires knowledge and biological interpretation of the disease pathway, which may not always be available.
Recent advances in whole genome biotechnology allow for measuring multiple genetic variants during clinical trials.[20, 21, 22, 23] This allows biomarkers across multiple genes to be developed, i.e. biomarkers based on high-dimensional data. A variety of predictive and prognostic biomarkers based on high-dimensional molecular profiling have been proposed in oncology.[24, 25, 26] These biomarkers are especially relevant for finding potential responders to a treatment in settings where an assay for identifying biomarker-positive patients is not yet available.[27] While prognostic biomarkers based on high-dimensional data are becoming increasingly available, predictive biomarkers based on high-dimensional data are rare due to the challenge of understanding a treatment's mechanism of action.[28] Additional challenges of using high-dimensional data are identifying which biomarkers to include in the model, and how to effectively/appropriately combine the individual biomarkers.[29]
In this paper, we provide an overview of several statistical methods for utilising high-dimensional data in the analysis of RCTs. We also present a review of recently published clinical trials that utilised high-dimensional data to investigate how often various methods have been used in practice.
## 2 Overview of methods for utilising high-dimensional data in clinical trials
In this section, we describe statistical methods used for analysing high-dimensional data in RCTs; many of which have been implemented in standard statistical software such as R. [30] A summary of these methods is provided in Table 1. When considering suitability of the methods, it is important to distinguish between testing for association and prediction. Association tests, such as the Chi-Square test, can shed light on the biological processes by providing better understanding of the phenomenon in question. Association tests are useful for testing hypotheses about the differences between the groups of observations, such as the difference between the treatment arms, or for finding biomarkers that are associated with response to treatment.
In prediction analysis, statistical models such as regression are applied to data in order to build predictors that could be applied to future studies. The quality of prediction should be assessed on an independent dataset using some measure of the discrepancy between the observed and predicted outcomes.
Some of the methods we review in this manuscript focus on either testing for association or on prediction, while others focus on both. However, it is important to note that models that have high power to detect associations do not necessarily have high predictive power.[31]
### Notation
In this section we describe a two-arm RCT where participant \(i\) (\(i=1,\ldots,n\)) is randomised to either an intervention arm (\(t_{i}=1\)) or control arm (\(t_{i}=0\)). For each participant \(i\), a set of \(j=1,\ldots,m\) biomarkers, \(x_{ij}\), are collected, and an outcome \(y_{i}\) is measured. Regression modelling is often used to model the outcome \(y_{i}\) as a function of the covariates \(x_{ij}\), which are measurable quantities related to the outcome. For different types of outcome, different types of regression are used. The most common types are linear regression (for continuous outcomes), logistic regression (for binary outcomes) and Cox regression (for time-to-event outcomes). Linear regression models the mean of the continuous outcome, assuming that the outcome is normally distributed. Logistic regression models the log odds, \(\mathrm{logit}(p_{i})=\log\left(\frac{P(Y_{i}=1)}{1-P(Y_{i}=1)}\right)\), where \(P(Y_{i}=1)\) denotes the probability of a successful outcome. Cox regression models the hazard ratio of an event at time \(t\), \(\log\left(\frac{h_{i}(t)}{h_{0i}(t)}\right)\), where \(h_{0i}(t)\) is the baseline hazard at time \(t\). In these regression models, the link function of the response variable connects the covariates with the expected value of the outcome variable in a linear way, while the covariates are being weighted by their coefficients. The null hypothesis of a specific coefficient being zero represents testing for an effect of the corresponding covariate.
### Univariable approach
A univariable approach consists of testing a single biomarker's relationship to a response variable. In linear regression, the outcome \(y_{i}\) for patient \(i\) takes the form
\[y_{i}=\beta_{j0}+\beta_{j1}t_{i}+\beta_{j2}x_{ij}+\beta_{j3}t_{i}x_{ij}+ \epsilon_{i},\]
where \(\epsilon_{i}\sim N(0,\sigma^{2})\) is the error term. In logistic regression, the probability of the outcome \(y_{i}\) for patient \(i\) takes the form:
\[\mathrm{logit}(p_{i})=\beta_{j0}+\beta_{j1}t_{i}+\beta_{j2}x_{ij}+\beta_{j3}t_ {i}x_{ij}.\]
In Cox regression, the hazard ratio of an event at time \(t\) for patient \(i\) takes the form:
\[\log\left(\frac{h_{i}(t)}{h_{0i}(t)}\right)=\beta_{j1}t_{i}+\beta_{j2}x_{ij}+ \beta_{j3}t_{i}x_{ij}.\]
The null hypothesis \(H_{j2}:\beta_{j2}=0\) represents testing for a prognostic effect of biomarker \(j\), while the null hypothesis \(H_{j3}:\beta_{j3}=0\) represents testing for a predictive effect of biomarker \(j\). These hypotheses could then be tested using a Wald test, for example.
Applying statistical tests to one biomarker at a time could result in an inflated number of false positives, due to multiple independent comparisons.[32] To prevent this, the Bonferroni correction [33] is often applied, which adjusts the significance level of individual tests to level \(\alpha/m\), where \(m\) is the number of tests and \(\alpha\) is the desired family-wise
error rate. To reduce multiple testing burden, a two-step procedure has been proposed [34] that accounts for correlation between the biomarkers via penalised regression. In the first stage of the procedure, a screening test selects a subset of biomarkers, and in the second stage, only the selected biomarkers are tested for interaction.
An additional challenge in detecting interactions is due to the large sample size required to obtain high power.[35, 36] In the case of a binary biomarker, in which the trial population can be divided into biomarker-positive and biomarker-negative subgroups, the sample size for testing a null hypothesis of no interaction is at least four times higher than the sample size needed to test the main effect (see Appendix).
Univariable analysis models are straightforward to fit and produce intuitive results. However, in the real world there is often more than one biomarker involved. Analysing one biomarker at a time ignores the correlation between the biomarkers, which could lead to incorrectly concluding that some biomarkers are predictive.
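As a sketch, the per-biomarker logistic interaction test with a Bonferroni threshold can be written as follows (statsmodels for the fit; `X` is the \(n\times m\) biomarker matrix, `t` the treatment indicator and `y` the binary outcome, all assumed to be available):

```python
import numpy as np
import statsmodels.api as sm

def univariable_interaction_tests(X, t, y, alpha=0.05):
    """Wald p-values for H_j3: beta_j3 = 0, one logistic model per biomarker."""
    n, m = X.shape
    pvals = np.empty(m)
    for j in range(m):
        design = sm.add_constant(np.column_stack([t, X[:, j], t * X[:, j]]))
        fit = sm.Logit(y, design).fit(disp=0)
        pvals[j] = fit.pvalues[-1]       # last column is the t * x_j interaction
    return pvals, pvals < alpha / m      # Bonferroni-adjusted significance flags
```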
### Multivariable approach
A multivariable regression takes into account two or more biomarkers. Similarly to the univariable regression, there are three commonly used regression types: linear (for continuous outcomes), logistic (for binary outcomes) and Cox regression (for time-to-event outcomes), which take the following form when \(m\) biomarkers are simultaneously adjusted for:
\[y_{i}=\beta_{0}+\beta_{1}t_{i}+\sum_{j=1}^{m}\beta_{j2}x_{ij}+\sum_{j=1}^{m}\beta_{j3}t_{i}x_{ij}+\epsilon_{i},\]
\[\mathrm{logit}(p_{i})=\beta_{0}+\beta_{1}t_{i}+\sum_{j=1}^{m}\beta_{j2}x_{ij}+\sum_{j=1}^{m}\beta_{j3}t_{i}x_{ij},\;\mathrm{and}\]
\[\log\left(\frac{h_{i}(t)}{h_{0i}(t)}\right)=\beta_{1}t_{i}+\sum_{j=1}^{m} \beta_{j2}x_{ij}+\sum_{j=1}^{m}\beta_{j3}t_{i}x_{ij},\]
respectively. Multivariable analysis estimates the contribution of each biomarker \(x_{ij}\) while adjusting for the effect of other biomarkers or covariates. Therefore, unlike univariable analyses, it takes into account correlation between biomarkers.
The main drawback of the multivariable approach is the large number of parameters that may be included. With high-dimensional data, this approach can lead to a model with more parameters than observations (i.e. the "curse of dimensionality"). In this case, multivariable linear regression cannot be used because the unique ordinary least squares estimators of the regression coefficients are not defined. To reduce the complexity of the model, several variable
\begin{table}
\begin{tabular}{l|l|l|l} \hline
Method & Summary & Advantages & Disadvantages \\ \hline
Univariable approach & Testing one biomarker at a time & Simplicity & Multiple testing issue \\
Multivariable approach & Testing a number of biomarkers simultaneously & Fitting a single model for several biomarkers & Overfitting \\
Penalised approach & Penalises regression coefficients, causing them to shrink, maybe to zero & Prevention of overfitting & Tuning of parameters \\
Random forests & Collection of regression or classification trees & Allows modelling non-linear relationships & Lack of intuitive interpretation \\
Support vector machines & Building a classifier by fitting a hyperplane between different groups of observations & Allows modelling non-linear relationships & Computational complexity \\
Cluster analysis & Grouping data based on a measure of similarity & Allows modelling non-linear relationships & Sensitivity to outliers \\
Gene sets and networks & Undirected graphs representing associations between the genes & Dimensionality reduction & Computational complexity \\
Principal component analysis & Transforming high-dimensional data into low-dimensional variables that account for most of the original data’s variation & Allows modelling non-correlated variables & Lack of intuitive interpretation of the principal components \\
Adaptive signature design and risk scores & Constructing a low-dimensional signature from high-dimensional baseline data & Finding group of patients benefiting from treatment & Multiple testing issue \\ \hline
\end{tabular}
\end{table}
Table 1: Summary, advantages and disadvantages of methods utilising high-dimensional data.
selection approaches have been proposed, including machine learning approaches (discussed below). However, a large number of parameters in the model could still lead to overfitting, which is the phenomenon of modelling the observed data too precisely so that it captures the noise in the data. In this case, the model shows an inferior performance when applied to a new dataset. To reduce the potential effects of overfitting, a rule-of-thumb is that at least ten events are required per variable in logistic and Cox regression models, though this rule is often debated.[37] For linear regression estimated using ordinary least squares, the number of covariates that can be included in the model is generally higher; it has been shown that two subjects per variable would be sufficient for adequate estimation of regression coefficients.[38]
### Regularised (penalised) regression
Regularised, or penalised, approaches penalise models by shrinking the estimates of the regression coefficients. Suppose a regression model with an \((m+1)\)-dimensional vector of regression coefficients \(\mathbf{\beta}=(\beta_{0},\beta_{1},\ldots,\beta_{m})^{T}\) is fitted by maximising the log-likelihood function \(\ell(\mathbf{\beta})\). In penalised regression, \(\ell(\mathbf{\beta})\) is maximised subject to a penalty function \(P(\mathbf{\beta})\) and a regularisation parameter \(\lambda\), that is, \(\hat{\mathbf{\beta}}=\operatorname*{argmax}[\ell(\mathbf{\beta})-\lambda P(\mathbf{\beta})]\). As a result, the regression coefficient estimate \(\hat{\mathbf{\beta}}\) is shrunk towards zero in comparison to the maximum likelihood estimate, with \(\lambda\) controlling the amount of shrinkage.
The method induces different degrees of sparsity, depending on the type of penalty used. For example, the Least Absolute Shrinkage and Selection Operator (LASSO) regression [39] allows shrinkage of the coefficients to zero by penalising the model with \(P(\mathbf{\beta})=||\mathbf{\beta}||_{\ell_{1}}=\sum_{j=1}^{m}|\beta_{j}|\) and is therefore a sparse method which allows for variable selection. Another type of penalised regression is ridge regression [40] in which the penalty function has the form \(P(\mathbf{\beta})=||\mathbf{\beta}||_{\ell_{2}}=\sum_{j=1}^{m}\beta_{j}^{2}\). Ridge regression shrinks the coefficients _towards_ zero, however it does not shrink them to zero. Elastic net [41] is a type of penalised regression in which both penalties are used, i.e.
\[\hat{\mathbf{\beta}}=\operatorname*{argmax}\left[\ell(\mathbf{\beta})-\lambda\left( \eta\sum_{j=1}^{m}|\beta_{j}|+\frac{1-\eta}{2}\sum_{j=1}^{m}\beta_{j}^{2} \right)\right].\]
The combination of the penalties is controlled by a penalty weight parameter \(\eta\). When \(\eta=1\), the elastic net is identical to LASSO, whereas when \(\eta=0\) it is identical to ridge. Elastic net combines setting of the coefficients to zero using LASSO and shrinking of the coefficients using ridge, to improve the model's performance.
A penalised logistic regression model, which included ten genes, was used to predict the overall complete pathologic response rate in a phase II genomic study of ixabepilone as neoadjuvant treatment for breast cancer.[42] A pharmacogenetic study used ridge regression to predict a response to treatment.[11] It has been found that using LASSO regression improved the accuracy of the treatment effect estimator in a RCT.[43] A review of neoadjuvant clinical trials in breast cancer that analysed gene expression data [44] found that penalised methods outperform competing methods when applied to estrogen receptor-positive (ER+) early breast cancer patients treated with the neoadjuvant aromatase inhibitor letrozole. However, an application of a penalised high-dimensional Cox model to an early breast cancer RCT of chemotherapy with or without adjuvant trastuzumab resulted in highly variable expected survival probabilities with very large confidence intervals.[45]
Group-lasso [46] is a special case of LASSO that performs selection of important groups of variables. For example, the groups could represent specific biological pathways of the biomarkers, or variables that reflect a specific aspect of a treatment. Extending the group-lasso by considering interactions,[47] however, can result in many false positive interactions for high-dimensional problems.
Penalised regression requires optimisation of the penalty parameter, which could be done using cross-validation. In the cross-validation procedure, a model is fitted to a subset of the data and its accuracy is assessed on a different subset of the data. The process is repeated multiple times with different partitions of the data for fitting (training subset) and assessing (testing subset). Parameters that lead to the best accuracy are chosen. However, when cross-validation is used to examine model performance, tuning of the parameters requires nested cross-validation, in which the inner cross-validation (for tuning of parameters) is encapsulated inside the outer cross-validation (for assessing model performance). This procedure requires large sample sizes. It is also necessary to ensure homogeneous partitioning of the data with respect to important features, in order to achieve a valid cross-validation procedure [48].
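The sketch below shows elastic-net logistic regression with nested cross-validation, assuming a biomarker matrix `X` and binary outcome `y`. Note that scikit-learn parameterises the penalty through `C` (the inverse of \(\lambda\)) and `l1_ratio` (playing the role of \(\eta\)); the grids are placeholders.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

enet = LogisticRegression(penalty="elasticnet", solver="saga", max_iter=5000)
inner = GridSearchCV(enet, {"C": [0.01, 0.1, 1.0],          # inverse of lambda
                            "l1_ratio": [0.0, 0.5, 1.0]},   # eta
                     cv=5)
# Outer loop assesses performance; inner loop tunes the penalty parameters.
outer_scores = cross_val_score(inner, X, y, cv=5, scoring="roc_auc")
```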
### Machine learning approaches
Machine learning is a class of algorithms that analyse data based on existing (training) data.[49] Machine learning algorithms can either be supervised or unsupervised, with the difference being the labelling of the input data. In supervised machine learning algorithms such as classification, the training data is labelled, while in unsupervised methods such as clustering, the training data is not labelled. Supervised approaches are used for predictive modelling
when the classification of the training data is known in advance, and the trained algorithm is used to predict or classify new data with unknown classification, such as response or non-response to treatment in clinical trials. Unsupervised methods are used for feature selection problems, such as identifying a predictive biomarker in the context of biomarker analysis, and dimensionality reduction [50].
#### 2.5.1 Random forests
Random forests are a type of high-dimensional nonparametric model aimed at prediction,[51] and therefore belong to the class of supervised machine learning algorithms. They are represented as a collection of regression trees (for a continuous outcome) or classification trees (for a binary outcome). Each tree is a decision model that consists of a recursive partitioning of a dataset into subsets that are determined by a randomly selected group of input variables. The subsets are homogeneous with respect to the group of variables. At each node of a tree, different groups of variables might be used. Random forests are formed by trees constructed from training datasets sampled with replacement from the original dataset. The remaining samples form the testing datasets and are used for assessing prediction accuracy. For example, the probability of misclassifying an observation could be used as a measure of prediction accuracy. Random forests are flexible in that regression and classification trees can incorporate non-linear interactions between the variables.[52]
Traditional random forests are designed for one treatment group and are therefore suitable for prognostic, rather than predictive, purposes. A few adaptations of the method for more than one treatment group have been developed that facilitate identification of a subset of patients who benefit from the treatment. For example, the "Virtual Twins" method [53] is a random forest-based method of identifying a subgroup of enhanced treatment effect by incorporating treatment-covariate interactions.
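A minimal sketch in the spirit of the Virtual Twins idea (a simplification, not the published procedure): fit one forest on treatment plus biomarkers, then contrast each patient's predicted outcome probability under the two counterfactual treatment assignments.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def virtual_twins_scores(X, t, y):
    """X: biomarkers, t: 0/1 treatment indicator, y: binary outcome (assumed)."""
    rf = RandomForestClassifier(n_estimators=500)
    rf.fit(np.column_stack([t, X]), y)
    p1 = rf.predict_proba(np.column_stack([np.ones_like(t), X]))[:, 1]
    p0 = rf.predict_proba(np.column_stack([np.zeros_like(t), X]))[:, 1]
    return p1 - p0   # large values suggest an enhanced treatment effect subgroup
```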
A variation of the random forest has been developed, that uses a measure based on a difference in survival times as an alternative to the accuracy prediction, for deciding on a best possible split.[54] When applied to a phase III RCT with high-dimensional SNP data, this approach has been shown to outperform a univariable analysis. The challenges of this method include specifying model parameters, such as the number of trees in the forest.
#### 2.5.2 Support vector machines
Support vector machines (SVM) are a supervised machine learning method for building a classifier that can be used to account for non-linear relationships between variables.[55] SVM assign an observation to a specific category, or class, by fitting a hyperplane between the samples from different classes so that the distance between the hyperplane to the nearest sample is maximised. This distance is maximised using support vectors, i.e. data points that are closer to the hyperplane. SVM involve transforming the data using a kernel function to allow linear separation of the data. An advantage of SVM is that it can effectively incorporate high-dimensional data that can be noisy and/or correlated. It has been widely applied to classification problems using high-dimensional biomarkers.[56, 57, 58, 59, 60] SVM could be used in RCTs if treatment-covariate interaction effects are introduced into the feature space of SVM. Using SVM constructed from the combination of brain imaging and demographic and clinical biomarkers, a group of Mild Cognitive Impairment patients who were most likely to cognitively decline has been identified.[61] Limitations of SVM include their computational complexity, especially the need to optimise their parameters.
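As a sketch of the suggestion above, treatment-covariate interaction terms can be appended to the SVM feature space before fitting; the feature construction here is our illustration, with `X`, `t` and `y` assumed as before.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_interaction_svm(X, t, y):
    # Feature space: treatment, biomarkers, and all t * x_j interaction terms.
    features = np.column_stack([t, X, t[:, None] * X])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return clf.fit(features, y)
```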
### Cluster analysis
Clustering methods are unsupervised methods of grouping data based on some measure of similarity, so that the observations in each group are similar (but dissimilar to those in other groups). The most common measure of similarity between the observations is correlation. Traditional clustering methods include hierarchical clustering and partitioning.[62] In hierarchical clustering, the data is organised into a tree-shaped structure (a dendrogram) constructed from a hierarchical series of nested clusters, while partitioning does not assume hierarchical relationships between clusters. An example of partitioning is \(k\)-means clustering, which partitions the data into a pre-specified number \(k\) of mutually exclusive groups so that the sum of the squared distances between the members of the group and the means of the clusters is minimised.[63] Another example is Partitioning Around Medoids clustering, which is similar to \(k\)-means but is more robust to outliers.[64]
Hierarchical clustering employs agglomerative and divisive strategies. Hierarchical agglomerative clustering starts by treating each sample as a separate cluster and then merges the most similar clusters together. This process is repeated iteratively until all samples are clustered. Hierarchical divisive clustering starts by treating all the observations as one cluster, and then recursively splits the cluster into two, until the desired number of clusters is obtained. Hierarchical clustering could be used to analyse genes that are differentially expressed between different experimental conditions, such as the different treatment groups in clinical trials. To estimate the number of clusters in the dataset, consensus
clustering could be used which utilises bootstrapping to classify each observation multiple times. Finally, observations are assigned to the cluster with the highest consensus score and the number of clusters is derived from objective metrics.[65] Other methods, called model-based clustering, exploit the same idea of making clustering robust to model misspecification and estimation of the number of clusters. They assume that observations follow a mixture of distributions rather than belonging to discrete classes.
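For illustration, both partitioning and hierarchical clustering can be run on an expression matrix `expr` (samples in rows, genes in columns; assumed available), using correlation as the similarity measure for the hierarchical variant:

```python
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans

# Partitioning: k-means with a pre-specified number of clusters.
kmeans_labels = KMeans(n_clusters=3, n_init=10).fit_predict(expr)

# Hierarchical agglomerative clustering on correlation distance (1 - Pearson r).
dist = pdist(expr, metric="correlation")
tree = linkage(dist, method="average")
hier_labels = fcluster(tree, t=3, criterion="maxclust")
```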
Clustering is often used in gene expression analysis because it simplifies visualisation and allows one to trace specific biological pathways.[66, 67, 68, 69] Moreover, it can be used to identify specific disease subtypes. For example, hierarchical clustering was able to identify pre- and post-vaccine samples in a study of the effect of an influenza vaccination on gene expression. [70]
### Gene sets and networks
Gene networks belong to the class of the unsupervised machine learning algorithms. They are undirected graphs with nodes representing genes and edges representing gene-gene associations. Genes with similar co-expression patterns are then grouped into modules using clustering techniques. Different types of co-expression networks are discussed elsewhere.[71, 72]
Weighted gene co-expression network analysis (WGCNA) [73] is a common co-expression network method that is used for finding clusters of highly correlated genes. It summarises the clusters using a representative gene (the eigengene), thus performing dimensionality reduction. The eigengene is a vector that represents the expression of all the genes in the module. WGCNA has been used to analyse metabolites in an ancillary study of vitamin D supplementation for the prevention of asthma.[74] The eigengene values for the modules of metabolites were used to find associations with asthma.
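In WGCNA the module eigengene is computed as the first principal component of the module's standardised expression matrix; a simplified stand-in for that single step (not the full WGCNA workflow) is:

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

def module_eigengene(expr_module):
    """expr_module: samples x genes matrix for one co-expression module.
    Returns one value per sample; the sign is arbitrary, as for any PC."""
    return PCA(n_components=1).fit_transform(scale(expr_module)).ravel()
```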
Gene set enrichment (GSE) is another subtype of gene networks that clusters genes into pre-defined sets that share common biological functions, and summarises the gene expressions into a single score for each set. Scores represent the extent of the differences in gene expression between the phenotypic classes of interest, for example tumours that are responsive or non-responsive to treatment. Testing the statistical significance of the scores allows detection of an enrichment signal.[75] GSE analysis has been used to compare advanced colorectal cancer subtypes in a RCT of first-line treatment of metastatic colorectal cancer.[76] Gene networks are useful for dimensionality reduction of a large number of correlated genes. To our knowledge, this method has not been used for comparing treatment arms or finding predictive biomarkers in clinical trials.
### Principal component analysis
Principal component analysis (PCA) is a statistical technique that provides information on directions of variability in data. PCA consists of transforming high-dimensional data into a lower-dimensional set of variables (principal components) such that the first principal component (PC) is associated with the largest source of variation, the second PC with the largest remaining source of variation and so on. The procedure of computing the PCs involves computing the eigenvalues and eigenvectors for the covariance matrix of standardised data. PCs are formed by transforming the original data using a matrix constructed from the eigenvectors.[77]
Each PC is constructed as a linear combination of the original high-dimensional data in such a way that the PCs are mutually uncorrelated. Thus, PCs could prevent multicollinearity issues in regression models and be very useful for correlated biomarkers. Once computed, the PCs can be used as covariates in linear regression models, as well as a dimensionality reduction technique for clustering. PCA also makes high-dimensional data more suitable for visualisation. For example, PCs are widely used to identify genetic variation associated with geographic region, [78] with most geographic variation explained by the first two PCs. However, it has been shown that in the analysis of gene expression data, many more PCs might be needed to detect relevant variability, depending on the sample sizes and effect sizes. [79]
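A short sketch of this workflow, assuming standardised biomarkers in `X` and a continuous outcome `y`: compute the leading PCs, then enter them as covariates in a linear regression.

```python
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

pcs = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))
fit = sm.OLS(y, sm.add_constant(pcs)).fit()   # mutually uncorrelated covariates
print(fit.summary())
```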
A challenge of PCA is the interpretation of the PCs, as well as identifying the most informative PCs. In the field of clinical trials, the use of PCA is limited to finding prognostic rather than predictive biomarkers.
### Adaptive signature design and risk scores
Adaptive signature design methods utilise high-dimensional data to construct a low-dimensional (or scalar) signature. They combine information from multiple genetic markers to create a signature that could be used for diagnostic, prognostic or predictive purposes. Adaptive signatures are motivated by the fact that genetics play an important role in the heterogeneity of disease progression and response to treatment, and could therefore be used to facilitate personalised medicine. The original adaptive signature design constructed a low-dimensional signature based on the interaction
between the treatment and the high-dimensional baseline biomarker data. [27, 80] It was developed for situations with no pre-defined predictive biomarker and utilised a threshold on the number of biomarkers included in the signature. Initially, two non-overlapping groups of trial participants have been used to develop and validate the signature,[27] while later a cross-validation has been implemented, which uses patient information more efficiently. [80]
A few studies [81, 82, 83, 84] construct a signature as a sum of the effects of the interactions between the treatment and each of the covariates separately. In these methods, the adaptive signature is represented by a single score for each patient. Specifically, for a binary outcome, a single covariate logistic model is fitted for each biomarker \(j=1,\ldots,m\) as follows:
\[\mathrm{logit}(p_{i})=\beta_{0}+\beta_{1}t_{i}+\beta_{j2}x_{ij}+\beta_{j3}t_{ i}x_{ij},\]
where \(p_{i}\) is the probability of the outcome of interest, \(\beta_{j2}\) represents a prognostic effect of biomarker \(j\), and \(\beta_{j3}\) represents a predictive effect of biomarker \(j\).
A risk score for patient \(i\) (\(RS_{i}\)) is computed as the sum of the maximum likelihood estimate of the treatment-covariate interaction coefficients \(\hat{\beta}_{j3}\) weighted by the value of the biomarker \(x_{ij}\), i.e.
\[RS_{i}=\sum_{j=1}^{m}\hat{\beta}_{j3}x_{ij}.\]
The collection of risk scores \(RS_{i}\) for all \(i\) could be subdivided in different ways to represent different strata of patients in terms of the predicted treatment benefit. [83, 84] At the end of the trial, a test is performed for the overall comparison between the arms, as well as for the comparison between the arms in the subgroup, using an \(\alpha\)-splitting approach to control the type I error rate. Alternatively, they could be used as covariates to test for an association with the outcome.[82]
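The risk-score construction above translates directly into code: fit the single-covariate logistic model once per biomarker and accumulate \(\hat{\beta}_{j3}x_{ij}\). The sketch is illustrative only (no shrinkage, cross-validation or \(\alpha\)-splitting is applied):

```python
import numpy as np
import statsmodels.api as sm

def risk_scores(X, t, y):
    """RS_i = sum_j beta3_j * x_ij, with beta3_j from per-biomarker logistic fits."""
    beta3 = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        design = sm.add_constant(np.column_stack([t, X[:, j], t * X[:, j]]))
        beta3[j] = sm.Logit(y, design).fit(disp=0).params[-1]
    return X @ beta3
```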
Adaptive signature designs often utilise a combination of the previously described approaches. For example, an adaptive signature which is predictive of response to MAGE-A3 immunotherapeutic in patients with metastatic melanoma has been developed and validated in a randomised phase II trial [85] using a variation of PCA and hierarchical clustering. Another phase II trial [86] used a combination of a scoring system and the penalised approach. For each patient \(i\), the following risk score was constructed that represented the hazard ratio under the two treatments on the logarithmic scale:
\[RS_{i}=\mathrm{log}[h_{0}(t|\mathbf{x}_{i})]-\mathrm{log}[h_{1}(t|\mathbf{x}_{i})],\]
where \(h_{j}(t|\mathbf{x}_{i})\) is the hazard rate for treatment \(j=\{0,1\}\) for patient \(i\), and \(\mathbf{x}_{i}\) is the vector of gene expressions for patient \(i\). The hazard functions were estimated with penalised Cox regression.
The adaptive signature designs are applied in a post-hoc manner, i.e. they identify the subgroup of patients at the end of the trial and therefore do not fit the classical definition of an adaptive design. Rather, they are adaptive in the sense that they allow adaptive selection of patient subgroups. For example, an adaptive signature design has been proposed that finds the optimal subgroup in terms of maximising the power for identifying treatment benefit. [87] To address the issue of adaptive changes in trials, the risk scores-based adaptive signature has been utilised in the adaptive enrichment framework, where the trial population is adaptively enriched with patients who are predicted to benefit from the treatment.[88]
In summary, adaptive signature designs have the advantage of improving the efficiency of clinical trials by identifying enhanced benefit subgroups. More reliably identifying patient subgroups who benefit from the treatment would prevent the situation in which a potentially effective treatment is disregarded because the treatment effect in the overall population is overlooked. Moreover, adaptive signature designs have the potential to avoid patients who receive no benefit from receiving the treatment, thus preventing unnecessary exposure to possible side effects. However, adaptive signature designs come with a statistical challenge of a multiple comparisons issue. Additionally, there may be a need for dimensionality reduction in situations with a large number of baseline biomarkers. [89]
## 3 Current use of methods for utilising high-dimensional data in RCTs
### Review methods
We performed the following literature search of RCTs using the _PubMed_ database:
("gene expression" OR "nucleotide" OR "*omic" OR "genetic signature" OR "SNPs") AND (trial[Title/Abstract]) AND ((ffrrt[Filter]) AND
(randomizedcontrolledtrial[Filter]) AND (2019/5/1:2021/5/1[pdat])) This search, performed on June 2021, covers publication of RCTs between May 1st 2019 and May 1st 2021, with at least one of the terms: "gene expression", "nucleotide", "omics", "genetic signature" or "SNPs", appearing in the title or abstract. We included full-text articles published in English. The search identified 174 papers which were screened for eligibility.
After preliminary screening of titles and abstracts, eight reviewers (SC, TB, MJG, JN, LO, FW, SFW, JMSW) independently assessed the full text of relevant publications for final inclusion.
Papers were deemed eligible if they described RCTs that collected high-dimensional data. Here, data variables refer to biological variables collected at randomisation that could be used for comparing between the treatment arms or stratifying patients. For the purpose of this review, we adopted a flexible definition of high-dimensionality with respect to the number of variables. Specifically, we included studies containing at least 10 variables as they could benefit from methods suitable for high-dimensional data. An additional study that analysed 7 SNPs was included in this review as it used a multivariable approach.
We analysed the type of high-dimensional data (e.g. DNA, gene expression, etc.), the number of covariates used, purpose of collecting high-dimensional data, method of analysis of high-dimensional data, clinical area, and number of treatment arms. See the Supplementary Materials for the full summary of extracted data.
### Results
Out of the 174 papers returned, 100 (57.5%) met the inclusion criteria. A summary of the data extracted from included articles is given in Table 2.
Most of the articles were for clinical trials in oncology (30%) and various chronic diseases (28%), including liver, kidney, rheumatic and respiratory diseases. Other clinical areas included nutrition and ageing (18%) and cardiovascular diseases (7%).
The majority of articles (70%) analysed gene expression data. The second most common type of data analysed was DNA data (21%), including genome-wide SNP data. Five percent of the articles analysed metabolomic data, protein data and data from questionnaires. Four percent of the articles analysed multiple types of data.
We divided the number of analysed covariates into four categories: "\(<\)10", "10-100", "101-1000", and "\(>\)1000". A large proportion of the analysed articles (41%) had 10-100 covariates available for analysis. A similar proportion of articles (38%) used \(>\)1000 covariates in their analysis. Fewer studies (20%) had 101-1000 covariates for the analysis, and one study had seven covariates, thus falling into the "\(<\)10" category.
The methods used in the analyses and their advantages are summarised in Table 1. 42% of the studies used multiple methods of analysis. The most common analysis technique was a univariable analysis (36.3%), followed by a multivariable analysis (17.5%), cluster analysis (11.25%) and PCA (10.6%). Other methods that were reported included: gene networks, multiple correspondence analysis, [90] penalised approaches and risk scores, Shannon entropy and Simpson index, [91, 92] significance analysis of microarrays, [93] single sample predictor classifier,[94] SVM.
Most trials had two arms (79%), followed by three-arm (17%) and four-arm trials (4%). In this review, we only analysed RCTs and therefore single-arm studies have been excluded.
The purpose of collecting high-dimensional data varied substantially between trials and was often not reported clearly. For those trials where it was reported, categorisation of the reasoning proved challenging. Some trials used high-dimensional data as the (primary or secondary) outcome by analysing the effect of the intervention on gene expression, for example. In some cases, high-dimensional data was used to explore predictive biomarkers or to compare treatment arms. In other cases, the prognostic properties of the high-dimensional data were investigated, i.e. they did not compare the treatment arms but analysed the data as if it were observational.
Figures 1-3 show the distribution of different types of data, methods of analysis and clinical areas, respectively, stratified by the number of covariates. With regards to data types, most studies used gene expression or DNA and had 10-100 covariates, 100-1000 covariates or \(>\)1000 (Figure 1). Regarding analysis methods, most studies using univariable and multivariable approaches utilised 101-1000 covariates, while gene sets and networks, clustering, and PCA most commonly used 10-100 covariates (Figure 2). In oncology, chronic diseases, and nutrition and ageing, the most common number of variables was 10-100; a substantial number of studies across all clinical areas analysed a larger number of covariates (101-1000 and \(>\)1000, Figure 3).
Figure 1: Number of covariates per type of high-dimensional data.
Figure 2: Number of covariates per method of analysis.
## 4 Discussion
In this paper, we provided an overview of methods for analysing high-dimensional data collected in clinical trials. We also reviewed 100 recently published articles reporting RCTs that utilised high-dimensional data to identify which methods are typically used in practice. Although we focused on high-dimensional genetic data, the methods described could be applied to other types of high-dimensional data, such as questionnaires, imaging data or data from wearable technologies.
In our search, gene expression and DNA data were the most common data analysed, covering a combined total of 91% of the high-dimensional data types included. A majority of the articles collected a large number of genetic data (\(>\)1000 variables), which reflects the progress in high-throughput technologies and highlights the need for increased uptake of more sophisticated methods to utilise the high-dimensional data efficiently.
Although most of the trials we reviewed had two arms, over 20% had three or four arms. This reflects the additional complexity and challenges of utilising high-dimensional data in conjunction with multi-arm trials. Most clinical trials in this review were in the areas of oncology (30%) and several chronic diseases (28%). One of the challenges of trials for chronic diseases is learning how best to treat patients in the long term. In particular, different treatments might be used for patients at different disease stages. Therefore, efficient methods that utilise changes in high-dimensional data over time are needed, for example methods that utilise longitudinal modelling.
\begin{table}
\begin{tabular}{l l l} \hline
**Question** & **Answer** & \(\boldsymbol{n}\) **(\%)** \\ \hline
Type of high-dimensional data & **Gene expression** & **70 (70\%)** \\
 & DNA & 21 (21\%) \\
 & Metabolomic data & 1 (1\%) \\
 & Multiple data types\({}^{1}\) & 4 (4\%) \\
 & Proteomic data & 3 (3\%) \\
 & Questionnaire & 1 (1\%) \\ \hline
Number of covariates used & \(<\)10 & 1 (1\%) \\
 & **10–100** & **41 (41\%)** \\
 & 101–1000 & 20 (20\%) \\
 & \(>\)1000 & 38 (38\%) \\ \hline
Method of analysis\({}^{2}\) & **Univariable approach** & **58 (36.3\%)** \\
 & Multivariable approach & 28 (17.5\%) \\
 & Gene sets and networks & 12 (7.5\%) \\
 & Cluster analysis & 18 (11.25\%) \\
 & Principal component analysis & 17 (10.6\%) \\
 & Penalised regression & 5 (3.1\%) \\
 & Risk scores & 4 (2.5\%) \\
 & Not stated & 2 (1.25\%) \\
 & Other\({}^{3}\) & 16 (10\%) \\ \hline
Clinical area & **Oncology** & **30 (30\%)** \\
 & Chronic diseases & 28 (28\%) \\
 & Nutrition and ageing & 18 (18\%) \\
 & Cardiovascular diseases & 7 (7\%) \\
 & Other\({}^{4}\) & 17 (17\%) \\ \hline
Number of treatment arms & **2** & **79 (79\%)** \\
 & 3 & 17 (17\%) \\
 & 4 & 4 (4\%) \\ \hline
\end{tabular}
\({}^{1}\) Questionnaires, omics data, biochemical characteristics and laboratory parameters.
\({}^{2}\) The denominator used to compute these percentages is 160, because 42 (42\%) studies used multiple methods of analysis.
\({}^{3}\) Functional analysis, Shannon entropy and Simpson index, significance analysis of microarrays, single sample predictor classifier, SVM.
\({}^{4}\) HIV, malaria, mental health, neuropathy, ophthalmology.
\end{table}
Table 2: Summary of extracted data. The denominator used to compute the percentages is 100 (number of eligible papers) unless specified. The most common answers appear in bold.
Although we found some examples of more sophisticated methods being used to analyse high-dimensional data, the majority implemented straightforward approaches to examine interactions, such as univariable analysis. Methods such as machine learning, penalised approaches and risk scores appeared rarely in the analysis. For example, LASSO was seldom used despite being widely studied and having advantages. Adaptive signature design was not used. In some studies, high-dimensional data were measured, but only a small proportion of it was analysed. Therefore, there is strong potential for much more efficient use of high-dimensional data.
We investigated the distribution of the number of covariates across data types, methods of analysis and clinical areas. The number of covariates varied widely in each of these settings, highlighting the need for developing methods that would be applicable to data of different orders of magnitude.
High-dimensional data was collected for a variety of reasons, from being the primary outcome to identifying prognostic biomarkers in exploratory analysis to investigating biological pathways. However, few studies used high-dimensional data to compare treatments or to identify predictive biomarkers, which highlights a gap and presents an opportunity to use the data more effectively.
The limited use of sophisticated methods could be explained by perceived complexities and limitations of using high-dimensional data in clinical trials. Firstly, high-dimensionality of the data still requires _a priori_ knowledge of the disease mechanism, in the form of existing disease classification, to efficiently reduce the dimensionality of the data.[24] Secondly, there may be a discrepancy between the signature constructed from genetic data and its biological meaning, which obscures the intuitive interpretation of high-dimensional data. For example, it has been found that a large number of breast cancer signatures constructed from a variety of gene sets do not explain the biological mechanism of the disease.[95] In oncology, the most common field that collected high-dimensional data according to this review, this leads to genetic signatures being rarely used in clinical trials. It has been suggested that incorporating different types of omics data and using standardised methodology has the potential to make more effective use of high-dimensional data in clinical trials in order to improve patient outcomes. [96] In this review, we have only described the methods that were used in the analysed studies. Alternative methods, such as Bayesian classifiers, [97] also have the potential to analyse high-dimensional data in clinical trials.
In conclusion, although we only used a single database and limited timelines, we show that an increasing number of clinical trials are collecting high-dimensional data. Many of them could benefit from implementing more sophisticated analysis methods, such as those outlined in this manuscript. Further research is needed to make full use of the high-dimensional data collected in RCTs.
Figure 3: Number of covariates per clinical area.
## Appendix A
Consider a hypothetical randomised placebo-controlled clinical trial of \(n\) participants with a normally distributed outcome \(N(\mu_{0},\sigma^{2})\) for the control arm, and \(N(\mu_{1},\sigma^{2})\) for the experimental arm. The number of participants in each group is the same (\(n/2\)). We would like to test \(H_{0}:\delta=0\) where \(\delta=\mu_{1}-\mu_{0}\). A Wald statistic to test \(H_{0}\) would be
\[W=\frac{\hat{\delta}}{\sqrt{\frac{4\sigma^{2}}{n}}}.\]
For a two-sided \(\alpha\) significance level, the sample size \(n\) required for power \(1-\beta\) is
\[n=\frac{4(Z_{1-\alpha/2}+Z_{1-\beta})^{2}\sigma^{2}}{\delta^{2}}.\]
Now suppose we have a binary biomarker that divides the population into biomarker-positive and biomarker-negative patients, with \(r\) being the proportion of biomarker-positive patients. We assume that the treatment effect is \(\delta_{+}=\mu_{1+}-\mu_{0+}\) in biomarker-positive patients, and \(\delta_{-}=\mu_{1-}-\mu_{0-}\) in biomarker-negative patients. The treatment-biomarker interaction effect, \(\delta_{+}-\delta_{-}\), could be estimated by
\[\hat{\delta}_{+}-\hat{\delta}_{-}=(\hat{\mu}_{1+}-\hat{\mu}_{0+})-(\hat{\mu}_{1-}-\hat{\mu}_{0-})\sim N\left(\delta_{+}-\delta_{-},\frac{2\sigma^{2}}{rn}+\frac{2\sigma^{2}}{rn}+\frac{2\sigma^{2}}{(1-r)n}+\frac{2\sigma^{2}}{(1-r)n}\right),\] since each biomarker subgroup (of size \(rn\) or \((1-r)n\)) is split equally between the two arms, so that each subgroup-by-arm mean is estimated from \(rn/2\) or \((1-r)n/2\) participants.
A Wald statistic to test \(H_{0}:\delta_{+}-\delta_{-}=0\) would be
\[W_{int}=\frac{\hat{\delta}_{+}-\hat{\delta}_{-}}{\sqrt{\frac{2\sigma^{2}}{rn}+\frac{2\sigma^{2}}{rn}+\frac{2\sigma^{2}}{(1-r)n}+\frac{2\sigma^{2}}{(1-r)n}}}=\frac{\hat{\delta}_{+}-\hat{\delta}_{-}}{\sqrt{\frac{4\sigma^{2}}{nr(1-r)}}}.\]
For a two-sided \(\alpha\) significance level, the sample size \(n_{int}\) required for power \(1-\beta\) is
\[n_{int}=\frac{4(Z_{1-\alpha/2}+Z_{1-\beta})^{2}\sigma^{2}}{(\delta_{+}-\delta _{-})^{2}r(1-r)}.\]
Thus, \(n_{int}=\frac{n}{r(1-r)}\) when the interaction effect \(\delta_{+}-\delta_{-}\) equals the main effect \(\delta\), i.e. the sample size required to detect the treatment-biomarker interaction increases by a factor of \(\frac{1}{r(1-r)}\), with \(\min_{r}\left\{\frac{1}{r(1-r)}\right\}=4\) attained at \(r=0.5\). Therefore, the sample size for detecting the treatment-biomarker interaction is at least four times higher than the sample size needed to detect the main treatment effect.
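A quick numerical check of these formulas (illustrative values only, taking the interaction effect \(\delta_{+}-\delta_{-}\) equal to the main effect \(\delta\) so that the inflation factor \(1/(r(1-r))\) is visible directly):

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
sigma, delta, r = 1.0, 0.5, 0.5

n_main = 4 * z ** 2 * sigma ** 2 / delta ** 2   # main treatment effect
n_int = n_main / (r * (1 - r))                  # treatment-biomarker interaction
print(round(n_main), round(n_int))              # prints: 126 502
```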
## Appendix B Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
## Appendix C Funding
This research was supported by the Medical Research Council (MR/S014357/1). JMSW, LO and SC are funded by the National Institute for Health and Care Research (NIHR301614).
|
2304.07721 | A Novel end-to-end Framework for Occluded Pixel Reconstruction with
Spatio-temporal Features for Improved Person Re-identification | Person re-identification is vital for monitoring and tracking crowd movement
to enhance public security. However, re-identification in the presence of
occlusion substantially reduces the performance of existing systems and is a
challenging area. In this work, we propose a plausible solution to this problem
by developing effective occlusion detection and reconstruction framework for
RGB images/videos consisting of Deep Neural Networks. Specifically, a CNN-based
occlusion detection model classifies individual input frames, followed by a
Conv-LSTM and Autoencoder to reconstruct the occluded pixels corresponding to
the occluded frames for sequential (video) and non-sequential (image) data,
respectively. The quality of the reconstructed RGB frames is further refined
and fine-tuned using a Conditional Generative Adversarial Network (cGAN). Our
method is evaluated on four well-known public data sets of the domain, and the
qualitative reconstruction results are indeed appealing. Quantitative
evaluation in terms of re-identification accuracy of the Siamese network showed
an exceptional Rank-1 accuracy after occluded pixel reconstruction on various
datasets. A comparative analysis with state-of-the-art approaches also
demonstrates the robustness of our work for use in real-life surveillance
systems. | Prathistith Raj Medi, Ghanta Sai Krishna, Praneeth Nemani, Satyanarayana Vollala, Santosh Kumar | 2023-04-16T08:14:29Z | http://arxiv.org/abs/2304.07721v1 | A Novel end-to-end Framework for Occluded Pixel Reconstruction with Spatio-temporal Features for Improved Person Re-identification
###### Abstract
Person re-identification is vital for monitoring and tracking crowd movement to enhance public security. However, re-identification in the presence of occlusion substantially reduces the performance of existing systems and is a challenging area. In this work, we propose a plausible solution to this problem by developing effective occlusion detection and reconstruction framework for RGB images/videos consisting of Deep Neural Networks. Specifically, a CNN-based occlusion detection model classifies individual input frames, followed by a Conv-LSTM and Autoencoder to reconstruct the occluded pixels corresponding to the occluded frames for sequential (video) and non-sequential (image) data, respectively. The quality of the reconstructed RGB frames is further refined and fine-tuned using a Conditional Generative Adversarial Network (cGAN). Our method is evaluated on four well-known public data sets of the domain, and the qualitative reconstruction results are indeed appealing. Quantitative evaluation in terms of re-identification accuracy of the Siamese network showed an exceptional Rank-1 accuracy after occluded pixel reconstruction on various datasets. A comparative analysis with state-of-the-art approaches also demonstrates the robustness of our work for use in real-life surveillance systems.
Person Re-identification, Generative Adversarial Network (_GAN_), Convolutional Long Short-Term Memory (_Conv-LSTM_), Autoencoder, Occluded-pixel Reconstruction, Convolutional Neural Network (_CNN_), Siamese Network
## I Introduction
Person re-identification refers to finding one-to-one correspondence between person images captured by a pair of cameras with non-overlapping fields of view [1]. A network of surveillance cameras is installed for monitoring in most public spaces, such as railway stations, airports, shopping malls, hospitals, and office buildings. Manual analysis of this large amount of video data to perform person re-identification or other video surveillance tasks is laborious and time-intensive. In this work, we propose an automated computer vision-based approach to re-identification for both image and video data in the presence of occlusion. Several re-identification approaches have been developed that operate on unoccluded frames [2]. However, in most real-life situations, occlusion is an inevitable occurrence. It emerges when static or dynamic objects appear between the camera field-of-view and the target subject, causing certain regions of the target subject to get obstructed. The presence of occlusion degrades the performance of traditional image-based re-identification techniques, and the problem is likely to amplify if multiple consecutive frames in a video are occluded. Previous approaches to handling occlusion in re-identification attempt to leverage pose estimation and visible areas, i.e., the spatial information of unoccluded pixels in the frame, while ignoring the temporal relation between adjacent frames [3, 4]. To date, not much focus has been given to occluded pixel reconstruction in video-based person re-identification with temporal features. In this work, we propose a novel multi-model architecture for occlusion reconstruction from both image and video data as a plausible solution to this problem. For video or sequential data, we propose a _Conv-LSTM_ model to reconstruct the occluded pixels in a frame by exploiting the spatio-temporal information from previous frames, followed by fine-tuning with _cGAN_[5]. For image data, we propose an _Autoencoder_ model for occlusion reconstruction followed by the same _cGAN_-based fine-tuning to enhance the reconstruction further. Employing the _cGAN_-based fine-tuning stage helps reduce noise and artifacts, thereby better preserving translation invariance in the reconstructed frames or images generated by the previous networks. Finally, a siamese network-based re-identification framework has been used to perform the identity matching. The overall multi-model occluded pixel reconstruction re-identification strategy is detailed in Section 3.
To summarize the main contributions of the paper are as follows:
1. Occlusion detection & reconstruction for efficient re-identification using advanced deep learning techniques is a significant contribution of this work. To the best of our
knowledge, re-identification from video data consisting of sequential frames with occlusion reconstruction using spatio-temporal features has not been emphasized much in the past, which we aspired to solve here.
2. Practical approaches to occlusion reconstruction using a multi-modal framework have been proposed for both sequential (video) and non-sequential (image) data. A _Conv-LSTM_ and _Autoencoder_ for video and image data, respectively, followed by _cGAN_ to generate a visually appealing reconstruction in occluded regions, is proposed.
3. An extensive experimental evaluation and comparative analysis of our method with other state-of-the-art approaches using four publicly available data sets is accomplished.
## II Background and Related Work
This section presents an overview of the existing approaches to person re-identification broadly: still images, sequential frames, i.e., videos, and occlusion.
The methods addressing re-identification from still images rely entirely on visual descriptors and do not incorporate any external contextual information for establishing correspondence between images [2]. More recently, several deep learning-based person re-identification techniques have been developed by researchers worldwide [6, 7]. The approach proposed in [6] presents a two-stream network, namely _OSNet_, to address discriminative person-specific learning with a generalizable property for cross-dataset discrepancies by employing instance normalization layers in _OSNet_. The work proposed in [7] tackles the problems of incorrect labels and limited training data: a weighted label correction based on cross-entropy is introduced to fix wrong labeling, and a weighted triplet loss is used to correct the two types of label errors.
Previous works on temporal modeling methods for video-based person re-identification use sequential models like recurrent neural networks (RNN). In [8], McLaughlin et al. first introduced the concept of modeling temporal information from frames using an RNN, in which the average of RNN cell outputs is used as clip-level representations. Further, an unsupervised approach for improved label estimation in person re-identification is presented based on a dynamic graph matching (DGM) framework, in which intermediate labels are used to iteratively refine the framework for better labeling [9]. A modality-aware collaborative ensemble learning method is employed in [10] to handle the modality discrepancy at both the feature level and the classifier level during re-identification. Attention-based models have also been used in person re-identification to extract spatio-temporal features from video sequences. A multi-attention convolutional neural network (MA-CNN) for part localization and fine-grained feature learning is presented by Zheng et al. in [11]. While _Co-attention Siamese Network_ (COSNet) segments and encodes useful image features, the work in [12] addresses the challenging problem of saliency shift by employing a saliency-shift-aware _Conv-LSTM_ layer which can efficiently capture video saliency dynamics through learning human attention-shift behavior. However, none of these approaches are specifically tuned to perform re-identification in the presence of occlusion.
Gao et al. [13] proposed a pose-guided Visible Part Matching (PVPM) model to learn attentions with discriminative part features. In a related work, a cross-graph embedded-alignment (CGEA) layer is used to embed topology information into local features and predict similarity scores for matching. A solution to tackle the degradation of recognition performance caused by occlusion has also been proposed, in which the authors suggest a two-step approach: (i) extract non-occluded human body features through pose estimation, and (ii) locate person body parts by utilizing the detected human keypoints in different occlusion situations.

Fig. 1: Overview of the proposed Occluded pixel Reconstruction and Person Re-identification Framework for both Image and Video data with Spatial and Spatio-temporal modelling respectively
From the extensive literature survey, we observe that person re-identification by exploiting the spatio-temporal information from sequential video data for occlusion reconstruction and re-identification has not been addressed in the literature. Also, the quality of frames reconstructed by the existing models using only spatial features needs to be improved significantly. In this work, we propose novel and effective approaches to occlusion detection and reconstruction for both sequential and non-sequential image frames with a focus on enhancing the quality of the reconstruction, thereby achieving a better re-identification performance.
## III Proposed Work
The proposed work is presented in two modules detailing the occlusion detection and the proposed multi-modal framework for occlusion reconstruction and re-identification with a Siamese network. These modules are explained in the following sub-sections.
### _Multi-modal Framework for Occlusion Reconstruction_
Initially, for occluded frame detection, we employ a CNN with the ResNet-34 architecture to classify whether a given frame is occluded or not. We essentially use it as a binary classifier whose last layer consists of a sigmoid activation function, with the outcome '1' or '0' representing the 'occluded' and 'un-occluded' classes respectively, while the input to the classifier is an RGB frame. The ResNet-34 is trained using synthetically occluded images and un-occluded images with the corresponding labels. The Adam optimizer is used to train the model using binary cross-entropy loss (see Eq. 1) with a learning rate of _0.001_.
\[L_{bce}=-\frac{1}{N}\sum_{i=1}^{N}\left[Y_{i}\log(p(Y_{i}))+(1-Y_{i})\log(1-p(Y_{i}))\right] \tag{1}\]
In (1), \(Y_{i}\) and \((1-Y_{i})\) represent the target probabilities of the two possible classes, and \(N\) stands for the number of training samples.
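As an illustration, a minimal PyTorch sketch of such a binary occlusion classifier is given below; the backbone and loss follow the description above, but the data pipeline and exact head configuration are assumptions rather than the authors' production setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-34 backbone with a single-logit head; the sigmoid of Eq. (1)
# is folded into BCEWithLogitsLoss for numerical stability.
model = models.resnet34(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

def train_step(frames, labels):
    # frames: (B, 3, H, W) RGB batch; labels: (B,) with 1 = occluded.
    optimizer.zero_grad()
    logits = model(frames).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```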
Further, occlusion reconstruction is carried out only for those frames characterized as 'occluded' by the occlusion detection model. To reconstruct an occluded frame in a video sequence, we follow a two-step process: (i) first a _Conv-LSTM_ is employed to reconstruct the occluded frame or region using the spatio-temporal features from previous frames, and (ii) next the predicted frame is fine-tuned using a Conditional Generative Adversarial Network (_cGAN_) [5]. The use of _Conv-LSTM_ in the first step yields a fair reconstruction of the occluded pixels from the spatio-temporal information. However, the image generated by _Conv-LSTM_ contains minor irregularities and noise. To refine it further and produce visually appealing reconstructed frames, a _cGAN_ has been employed on top of the _Conv-LSTM_, as it is known to be robust in handling image translation tasks. In case non-sequential frames are available instead of a video sequence, _Conv-LSTM_ is not used for occlusion reconstruction since the spatio-temporal information cannot be exploited in such cases. Instead, occlusion reconstruction in each occluded frame is accomplished using an Autoencoder network. The reconstructed frame is then used to perform re-identification using a Siamese network. We next describe each of the networks in detail.
#### III-A1 Coarse Occlusion Reconstruction Using Conv-LSTM / Autoencoder
_Conv-LSTM_ is a specially modified version of _LSTM_ (Long Short-Term Memory) [14] in which the convolution operation replaces the matrix multiplication inside the _LSTM_ cell at every gate. This network is capable of capturing spatial features and learn long-term dependencies over time from sequential multi-dimensional data [15]. The following equations show the various operations involved in the _Conv-LSTM_ layer. Here, \(i_{t}\), \(f_{t}\) and \(o_{t}\) represent input, forget, and output gates respectively, and \(X_{1}\dots X_{t}\) represent inputs to the layer while \(C_{1}\dots C_{t}\) and \(H_{1}\dots H_{t}\) represent cell outputs and hidden states, respectively, and \(W\)s are the weight matrices. The symbols \(*\) and \(\circ\) represent the convolution operator and Hadamard Product, respectively.
\[i_{t}=\sigma(W_{xi}*X_{t}+W_{hi}*H_{t-1}+W_{ci}\circ C_{t-1}+b_{i}) \tag{2}\]
\[f_{t}=\sigma(W_{xf}*X_{t}+W_{hf}*H_{t-1}+W_{cf}\circ C_{t-1}+b_{f}) \tag{3}\]
\[C_{t}=f_{t}\circ C_{t-1}+i_{t}\circ tanh(W_{xc}*X_{t}+W_{hc}*H_{t-1}+b_{c}) \tag{4}\]
\[o_{t}=\sigma(W_{xo}*X_{t}+W_{ho}*H_{t-1}+W_{co}\circ C_{t}+b_{o}) \tag{5}\]
\[H_{t}=o_{t}\circ tanh(C_{t}) \tag{6}\]
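To make Eqs. (2)-(6) concrete, the following is a minimal PyTorch sketch of a single _Conv-LSTM_ cell; the fused gate convolution and the broadcast peephole weights are simplifying assumptions, not the exact layer configuration of Table I:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution produces all four gate pre-activations at once.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        # Peephole weights W_ci, W_cf, W_co applied via Hadamard product.
        self.w_ci = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))
        self.w_cf = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))
        self.w_co = nn.Parameter(torch.zeros(1, hid_ch, 1, 1))

    def forward(self, x, h, c):
        # h and c are initialized to zeros for the first frame in a clip.
        gi, gf, gc, go = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i = torch.sigmoid(gi + self.w_ci * c)      # Eq. (2)
        f = torch.sigmoid(gf + self.w_cf * c)      # Eq. (3)
        c = f * c + i * torch.tanh(gc)             # Eq. (4)
        o = torch.sigmoid(go + self.w_co * c)      # Eq. (5)
        return o * torch.tanh(c), c                # Eq. (6)
```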
An overview of the _Conv-LSTM_ architecture is given in Fig 1, and Table I presents the layer-wise architecture of the model. Also, an occluded frame (say, _Frame n_) is predicted by fusing the spatio-temporal information given by the respective occluded frame (i.e., _Frame n_) along with the previous frames (i.e., _Frame n-1_ and _Frame n-2_).
Each layer of the _Conv-LSTM_ model except the last layer returns the sequences to the next layer and _ReLU_ activation has been used in all the layers. The model is trained for 800 epochs with a batch size of 64 by employing Adam optimizer and binary cross-entropy loss.
For non-sequential data, the reconstruction is accomplished using only the spatial information present in the occluded frame, as there is no temporal information. Therefore, an Autoencoder model is trained for reconstructing the occluded region. The Autoencoder model is trained with binary cross-entropy loss and the Adam optimizer for 1000 epochs, around which the model is seen to achieve convergence.
#### III-A2 Fine-tuning the reconstructed frames with cGAN
The frames reconstructed by _Conv-LSTM_ have minor irregularities in the regions where the actual occlusion was present. A _cGAN_ is used to enhance them further by fine-tuning the output of the _Conv-LSTM_ model. It takes as input a single frame generated by the _Conv-LSTM / Autoencoder_ and translates it to resemble the ground truth, thereby generating a fine-tuned or enhanced frame. In this work, the output of the _Conv-LSTM_ is the condition on which the generator, i.e., the default 2D UNet, along with the patch-GAN discriminator, learns to translate closer to the ground truth, which is the original un-occluded frame. We train our _cGAN_ for 3000 epochs in 3 sessions by saving checkpoints, with a batch size of 1. The binary cross-entropy loss and the L1 loss weighted by \(\lambda\) are used to train this model, where \(\lambda=100\), as shown in (7).
\[\min_{G}\max_{D}\;\mathbb{E}_{x\sim p_{\text{data}}(x)}[\log D(x)]+\mathbb{E}_{z\sim p_{z}(z)}[\log(1-D(G(z)))]+\lambda\,\mathcal{L}_{L1}(G) \tag{7}\]
The combination of _ConvLSTM_ and _cGAN_ produces robust and effective occlusion-free frames.
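A sketch of the generator-side objective implied by Eq. (7), in the usual pix2pix style, is shown below; the discriminator update is omitted and the variable names are ours:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
LAMBDA = 100  # weight of the L1 term, as in Eq. (7)

def generator_loss(disc_logits_on_fake, fake, real):
    # Adversarial term: the generator wants the discriminator to score its
    # output as real; the L1 term pulls it toward the un-occluded frame.
    adv = bce(disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))
    return adv + LAMBDA * l1(fake, real)
```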
### _Re-identification using Siamese Network_
Siamese networks are particularly effective in predicting whether a given pair of input images are similar or not. As in any Siamese network, a pair of input images are passed through two identical channels of convolutional layers, and the difference in the feature embeddings is used as a measure of the dissimilarity between the given images. As shown in the figure, in this work, the parallel channels use ResNet-101 as the baseline architecture. A feature difference block between the two channels is used to obtain the difference or distance between the embeddings. Finally, the similarity score between the pair of feature vectors that represent the input images is computed, and hence we estimate whether the two images are similar or not.
The siamese network is trained with positive and negative pairs of images, where positive pairs are those of same identities, while negative pairs are those of different identities. Adam optimizer with contrastive loss is used to optimize the network parameters. In (8), \(Y\) is the predicted label (either \(0\) or _1_). Here, _Y=1_ indicates the image pairs belong to the same class, whereas _Y=0_ indicates that the image pairs are from different classes, \(D_{w}\) represents the output score obtained from the softmax layer.
\[L_{Contr}=(1-Y)\frac{1}{2}(D_{w})^{2}+(Y)\frac{1}{2}\left\{\max(0,m-D_{w})\right\}^{2}. \tag{8}\]
The network outputs a score between \(0\) and \(1\) depicting whether the given pair of input images belong to the same or different identities. We observe that at convergence, the network loss is very low, i.e., \(10^{-4}\).
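For reference, a minimal PyTorch sketch of Eq. (8), treating \(D_{w}\) as the network's pairwise score in \([0,1]\) and using the convention Y=1 for same-identity pairs (batching details are ours):

```python
import torch

def contrastive_loss(d_w, y, m=1.0):
    # d_w: pairwise scores from the network; y: 1 = same identity,
    # 0 = different identities; m: margin, as in Eq. (8).
    diff = (1 - y) * 0.5 * d_w.pow(2)
    same = y * 0.5 * torch.clamp(m - d_w, min=0).pow(2)
    return (diff + same).mean()
```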
## IV Results and Discussion
Initially, we start with the dataset description followed by the evaluation settings, experimental results and comparison with existing state-of-the-art approaches.
Four publicly available data sets have been used to test the performance of our proposed approach. The relevant details of each data set, i.e., the data set name, the number of subjects, whether the corresponding dataset is occluded, and whether it is synthetically occluded, are given in Table II. Among the datasets, it is evident that the Occluded ReID and Partial iLIDS are non-sequential (images) while iLIDS-VID and MARS are sequential (videos). For each of the non-sequential data sets mentioned in Table II, the synthetically occluded frames corresponding to all the subjects are used for training all the models, whereas the images tagged as 'occluded' in the dataset are used for testing. All the neural network models used in the study have been implemented using the Tensorflow framework on a system having 90 GB RAM and one NVIDIA TITAN Xp GPU with two GeForce GTX GPUs having 34 GB of GPU memory in total. The evaluation metrics that we use are the mean Average Precision (mAP) and Cumulative Matching Characteristics (CMC).
In order to evaluate the effectiveness of the individual models in our proposed multi-modal framework, we obtain the output frames at each stage and train the Siamese network, thereby obtaining and comparing the Cumulative Matching Characteristics (CMC) Rank-1 accuracy as shown in Table III.
Comparing with the base models in Table III, it is observed that both the Cumulative Matching Characteristics (CMC) rank-1 accuracy and the mean Average Precision (mAP) are significantly influenced by the reconstruction followed by fine-tuning. Our framework improves the CMC rank-1 accuracy on the Occluded ReID, Partial iLIDS, iLIDS-VID and MARS datasets from 79.9% to 80.7%, 71.3% to 73.1%, 86.4% to 87.8% and 85.0% to 87.5% respectively. The mAP of Occluded ReID and MARS improved from 68.2 to 70.1 and from 78.7 to 80.3 respectively, demonstrating the robustness and effectiveness of occlusion reconstruction.
A few sample results from our proposed multi-modal reconstruction framework are shown in Fig 2. (a) represents the results of the sequential data while (b) represents the
results of non-sequential data. Additionally, it also includes the frames before and after employing the _cGAN_-based fine-tuning.
Further, we demonstrate a comparative analysis of the proposed re-identification approach with other popular state-of-the-art approaches, namely, [16, 17, 18, 19, 20, 21, 22, 23, 1], in terms of Cumulative Matching Characteristics (CMC) rank-1 re-identification accuracy and mean Average Precision (mAP), represented in brackets '( )', on four public datasets in Table IV and Table V. Although a few of these approaches have been developed specifically to work in unoccluded scenarios, our proposed approach nevertheless shows superior performance.
## V Conclusions
In this work, we propose a novel approach for both image-based (non-sequential) and video-based (sequential) re-identification in the presence of occlusion. A novel occlusion reconstruction framework is proposed, which is a combination of _Conv-LSTM_ and _cGAN_ for video data that uses spatio-temporal information of sequential frames, whereas for image data it is a combination of an _Autoencoder_ and _cGAN_. After the reconstruction of occluded pixels, a Siamese network is used to obtain n-dimensional feature vectors that are used to compute a similarity score based on which re-identification is executed. Qualitative results show that the approaches proposed for occlusion reconstruction for image and video data are quite effective. Quantitative results by means of re-identification accuracy show that on average our work outperforms the state-of-the-art re-identification approaches, and it performs consistently well for most datasets. This emphasizes the robustness and applicability of our approach in real-time surveillance systems. In future, the generalizability of the proposed framework may be tested on more extensive open-world re-identification databases.
|
2306.06246 | Record Deduplication for Entity Distribution Modeling in ASR Transcripts | Voice digital assistants must keep up with trending search queries. We rely
on a speech recognition model using contextual biasing with a rapidly updated
set of entities, instead of frequent model retraining, to keep up with trends.
There are several challenges with this approach: (1) the entity set must be
frequently reconstructed, (2) the entity set is of limited size due to latency
and accuracy trade-offs, and (3) finding the true entity distribution for
biasing is complicated by ASR misrecognition. We address these challenges and
define an entity set by modeling customers' true requested entity distribution
from ASR output in production using record deduplication, a technique from the
field of entity resolution. Record deduplication resolves or deduplicates
coreferences, including misrecognitions, of the same latent entity. Our method
successfully retrieves 95% of misrecognized entities and when used for
contextual biasing shows an estimated 5% relative word error rate reduction. | Tianyu Huang, Chung Hoon Hong, Carl Wivagg, Kanna Shimizu | 2023-06-09T20:42:11Z | http://arxiv.org/abs/2306.06246v1 | # Record Deduplication for Entity Distribution Modeling in ASR Transcripts
###### Abstract
Voice digital assistants must keep up with trending search queries. We rely on a speech recognition model using contextual biasing with a rapidly updated set of entities, instead of frequent model retraining, to keep up with trends. There are several challenges with this approach: (1) the entity set must be frequently reconstructed, (2) the entity set is of limited size due to latency and accuracy trade-offs, and (3) finding the true entity distribution for biasing is complicated by ASR misrecognition. We address these challenges and define an entity set by modeling customers' true requested entity distribution from ASR output in production using record deduplication, a technique from the field of entity resolution. Record deduplication resolves or deduplicates coreferences, including misrecognitions, of the same latent entity. Our method successfully retrieves 95% of misrecognized entities and when used for contextual biasing shows an estimated 5% relative word error rate reduction.
Tianyu Huang, Chung Hoon Hong, Carl Wivagg, Kanna Shimizu Alexa AI, Amazon.com Inc, Boston, MA, USA {htianyu, honchung, cawivagg, kannashi}@amazon.com
**Index Terms**: Automatic Speech Recognition, Entity Resolution, Record Deduplication, Contextual Biasing
## 1 Introduction
Voice digital assistants perform automatic speech recognition (ASR), natural language understanding (NLU), and entity resolution (ER) over a variety of domains, such as smart home, Q&A, and entertainment. ASR serves a key role as it is upstream to all other functions, and its performance can determine how successfully the voice assistant meets customer needs. In modern systems, the ASR component is an "end-to-end" model based on architectures such as recurrent neural network transducers (RNN-T) [1] or listen-attend-spell (LAS) [2]. End-to-end ASR models map speech directly from voice data to graphemes without an intermediate phoneme step. As a consequence, relative to component-based models, end-to-end models struggle particularly with proper nouns and rare words [3].
In this work, we focus on optimizing an end-to-end ASR model in the entertainment domain, where customers search for artists, songs, TV shows, and movies to play back. In entertainment, proper nouns and rare words are common with requests such as "play Metro Bonomi" (a popular singer) or "search for Bridgerton" (a popular Netflix show). Therefore, one key challenge is spoken entity recognition, where we require speech recognition to perform at scale for these proper nouns, many of which can be easily confused with more common words. Furthermore, our model must rapidly adapt to a non-stationary distribution of requests where new hit songs, movies, and shows come out every week and may rocket to viral popularity levels with little warning. In this situation, we need to reconcile the time, cost, and engineering challenges of frequent model retraining and production release against the need to match the fast shifting customer request distribution.
We address these challenges with a twofold strategy. First, we leverage contextual biasing, where we alter the output probability of recognized entities without the latency or computational expense of full model retraining. This is accomplished by shallow fusion, which uses on-the-fly rescoring to adjust output probabilities during runtime inference with an externalized list of scored entities [4]. Updating biasing in this way does not require potentially costly model retraining and redeployment. However, contextual biasing is not without limitations: lengthy lists of entity names may degrade model accuracy or increase inference latency, which is critical for a fast response to a customer request. Consequently, biasing lists are limited to an entity budget of a few thousand entities. A few thousand does not even cover the number of new songs released every week, much less the entirety of other entertainment media content such as new movies, TV shows, and podcast episodes.
The second part of our strategy is to optimize the utility of the entity budget and to best capture the non-stationary aspects of the request distribution. We accomplish this by starting with the observed entity distribution in the ASR output stream. The stream is a distorted version of the actual customer request distribution because of misrecognitions. For example, _Archive 81_, a TV show, can be misrecognized as "arcade eighty one", "r. kelly one", or several other results depending on the speaker conditions. We deduplicate these multiple references to the same entity by leveraging clustering techniques developed for record deduplication in ER research. After we correct for the distortions and reconstruct the entity distribution, we select entities and optimize weights to bias towards frequently misrecognized entities.
## 2 Related Work
We are aware of relatively little previous work to optimize the use of a shallow fusion entity budget. In early shallow fusion research, a weighted FST representing a language model is derived from training data or some other external reference [4, 5]; however, the assumption that the entity distribution - or entity language model - can be derived from training data or other external data sources does not hold for practical applications. Thus, more recently, one group used a forecasting model to predict entity popularity and preemptively populate a biasing list [6]. However, this approach limited itself to forecasting trends based on exact matches of the entity name to the reference in the user request. Thus, there is a particular risk of missing the entities most in need of contextual biasing: those for which the ASR output frequently differs from the entity name. In our work, we explicitly model the entity distribution, allowing us in principle to identify the most popular entities regardless of how they are
referred to in user requests.
A subsequent line of research [7] used comparison of entity references to entities in knowledge graphs to enhance ASR performance, but this differs in many respects from our approach of co-comparison of entity references to other references, and ultimately was not used to model the entity distribution for contextual biasing.
## 3 Model Design
Our model uses a record deduplication framework. We aim to identify different references to video title entities, like _Bridgerton_ or _Archive 81_, in ASR outputs. Different ASR outputs may refer to the same entity because of ASR errors. For instance, "arcade eighty one" is likely a reference to _Archive 81_ just as "archive eighty one" is. So these request patterns are duplicates from the perspective of the entity being requested. In ER terms, the two request patterns are coreferent, a term we will use repeatedly going forward. We aim to deduplicate the total set of references to video title entities by clustering the references to each entity.
Formally, for a set \(R\) of unique entity references drawn from a non-unique set of user requests \(U\) each containing one reference, we seek to create subsets of \(R\) corresponding to coreferences to some latent set of entities \(E\). If the subset of references \(r_{1},r_{2},...,r_{n}\) are coreferent to entity \(e_{x}\in E\) and \(p_{r_{i}}\) is the probability of encountering \(r_{i}\) in \(U\), then \(p_{e_{x}}=\sum_{i=1}^{n}p_{r_{i}}\) and is the probability of a randomly selected user request referring to the entity \(e_{x}\). The set of all \(p_{e}\) is the latent entity distribution.
### Acquisition of References from the ASR Model
The record deduplication model takes as inputs the ASR outputs from our in-house RNN-T model, which consists of an encoder LSTM and prediction network layer with an embedding layer. The model also has a shallow fusion-based language model that is used to bias to global and personalized catalogs [8]. The model was trained on over 200k hours of interactions with a voice assistant [9]. We use this model to get the \(n\)-best ASR recognitions from user requests or synthetic voice samples. In practice, ASR outputs are usually further processed by an NLU model to segment entity references and assign them to entity classes, but for this work, we confined our attention to a single entity class (video content titles) and to requests consisting only of references with no surrounding text.
### Record Deduplication Model
The record deduplication model transforms the ASR outputs, i.e., the \(n\)-best transcriptions of voice requests, into clusters of entity references, each of which is expected to contain references to exactly one entity. Record deduplication consists of three discrete steps: blocking, comparison, and clustering (Figure 1). Blocking places references that are likely to be coreferences into a "block", or group of potential matches. The comparison task, which is analogous to other comparison, matching, and linking tasks in ER, consists of an all-against-all comparison of the set of potential coreferences in a block. The comparison model, which produces a \([0,1]\)-bounded estimate of whether two references are linked, can be either a rule-based system or a machine learning model. Finally, clustering thresholds the similarity output, producing a final decision on which references are coreferent.
#### 3.2.1 Blocking
We group our video title requests as a single block for input to comparison. However, in principle and without any methodological changes, our model could be extended to handle user requests for different types of content entities by repeating the procedure with a new block for each type identified in the request set, which could be done by an NLU model that performs entity recognition and classification.
#### 3.2.2 Comparison Model Features
For some block \(B\) containing \(n\) references \(r_{1},r_{2},...,r_{n}\), we next perform comparison: we construct a similarity matrix \(S\) where element \(s_{ij}\) is the similarity of \(r_{i}\) and \(r_{j}\), for some \(r_{i},r_{j}\in B\). In this work, we investigate several methods of computing \(s_{ij}\). In all cases \(s_{ij}\) is on the interval \([0,1]\).
All of our model configurations use ASR \(n\)-best cooccurrence. A list of ASR \(n\)-best candidate recognitions is automatically produced as part of the output of most ASR systems. The ASR \(n\)-best cooccurrence rate is a language-agnostic proxy for phonetic similarity. For a given \(r_{i}\) and \(r_{j}\), tabulate \(c_{ij}\), the mean rate of cooccurrence per occurrence, using \(c_{ij}=\big{(}p(r_{i}|r_{j})+p(r_{j}|r_{i})\big{)}/2\).
ASR \(n\)-bests have been used in a variety of contexts to improve model output; in modern ASR systems, they are understood to contain potentially relevant information for accurate recognition [10, 11]. Importantly, our system looks across aggregate cooccurrences and so is resilient to noise.
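A minimal Python sketch of this aggregation (the data layout, one n-best list per utterance, is an assumption):

```python
from collections import Counter
from itertools import combinations

def nbest_cooccurrence(nbest_lists):
    # c_ij = (p(r_i | r_j) + p(r_j | r_i)) / 2, aggregated over utterances,
    # where p(r_i | r_j) = cooccurrences(i, j) / occurrences(j).
    occ, cooc = Counter(), Counter()
    for nbest in nbest_lists:
        refs = sorted(set(nbest))
        occ.update(refs)
        cooc.update(combinations(refs, 2))
    return {(a, b): (n / occ[b] + n / occ[a]) / 2
            for (a, b), n in cooc.items()}
```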
We also computed similarities between references from the perspective of cooccurrence in user histories (the item-item matrix in a collaborative filtering framework [12]). For any pair of references \(r_{i}\), \(r_{j}\), their item similarity \(u_{ij}\) is defined as \(u_{ij}=(U_{i}\cdot U_{j})/(||U_{i}||\cdot||U_{j}||)\), where \(U_{i}\) is the frequency vector of requested references aggregated from all users who requested content by reference \(r_{i}\) over a predefined time length, i.e., \(U_{i}=(n_{1,i},n_{2,i},...)\) where \(n_{j,i}=\sum_{k}v_{k}^{j}\) and \(v_{k}^{j}\) is the number of times when the \(k\)-th user who requested \(r_{i}\), also requested \(r_{j}\). Intuitively, the score is measuring the similarity between the users who made the individual references, and the underlying assumption is that misrecognitions come from a relatively similar group of users to those with correct recognitions.
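The corresponding cosine similarity is straightforward; here is a sketch assuming the frequency vectors \(U_{i}\) have already been aggregated:

```python
import numpy as np

def item_similarity(u_i, u_j):
    # u_i, u_j: frequency vectors over references, aggregated from the
    # request histories of users who requested r_i (resp. r_j).
    u_i, u_j = np.asarray(u_i, dtype=float), np.asarray(u_j, dtype=float)
    return float(u_i @ u_j / (np.linalg.norm(u_i) * np.linalg.norm(u_j)))
```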
#### 3.2.3 Comparison Model Training
Our initial proof of concept model uses ASR \(n\)-best cooccurrence alone to measure similarity. For more advanced models, to weigh ASR \(n\)-best cooccurrence and item similarity, we formulated the comparison task as a binary classification problem and trained a machine learning model to provide similarity scores on the \([0,1]\) continuum for candidate pairs of entity references.
The training dataset consists of ASR and item similarities derived from pairs of references labeled \(1\) if they are coreferent and \(0\) otherwise. We mine the ground truth labels from user interactions with our digital assistant. When a user selects an item from a list of displayed results after a query, we observe whether the most-clicked item has a clickthrough rate (average number of clicks per impression) greater than 50%. If so, we consider the query unambiguous in the sense that it is intended to target a specific video title (e.g., "star wars a new hope") as opposed to a broad search (e.g., "star wars movies"), and label the query and entity name as a reference pair. The positive training data are made up of (1) query pairs with the same intended video and (2) the same query as both members of the pair. The negative training data are created by sampling from query pairs with different intended videos. We use an equal amount of positive and negative samples for model training.

Figure 1: Record Deduplication Model Design. a) Blocking b) Entity Comparison c) Clustering.
We used several classification algorithms to train the comparison models, including logistic regression, decision tree, and support vector classification [13]1. All models are trained with an 80%/20% train-test data split. Note that the goal of the comparison model is to give binary decisions connecting similar query pairs, analogous to the link prediction task [15] in network theory for cluster detection on the generated graph [16].
Footnote 1: The models are trained using the _scikit-learn_ package [14] and the training parameters are kept to module defaults.
#### 3.2.4 Clustering Algorithm
For the proof-of-concept model using similarity from ASR \(n\)-bests, we convert the similarity matrix \(S\) to an adjacency matrix \(A\) with \(a_{ij}\in\{0,1\}\) via thresholding, with the threshold empirically tuned based on the training data. For the classifier models, \(A\) is constructed directly from the classifier output labels, which are also in \(\{0,1\}\). The sets of adjacent elements form clusters \(c_{1},c_{2},...,c_{m}\) with \(m\leq|R|\). For some cluster \(c_{i}\), if \(r_{x}\in c_{i}\) and \(r_{y}\in c_{i}\), we conclude that \(r_{x}\) and \(r_{y}\) are coreferent.
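Concretely, the thresholding-plus-clustering step amounts to taking connected components of the thresholded similarity graph; a sketch using scipy (the dense matrix layout is an assumption):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def dedup_clusters(S, threshold):
    # Threshold the similarity matrix S into an adjacency matrix A and
    # treat each connected component as one coreference cluster.
    A = csr_matrix((S >= threshold).astype(int))
    n, labels = connected_components(A, directed=False)
    return [np.flatnonzero(labels == k) for k in range(n)]
```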
### Character Edit Baseline
To contrast record deduplication with a simpler approach, we performed similarity comparison between ASR inputs and outputs from the public dataset using character edit distance. If the distance between an ASR output and its respective input was less than or equal to the next closest edit distance to other ASR inputs, we considered these entities matched (true positive). Otherwise, we considered them mismatched and counted both a false positive for the entity the ASR output was matched to and a false negative for the entity it was not associated with.
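A sketch of this baseline matching criterion (the Levenshtein implementation is standard; function names are ours):

```python
def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_matched(asr_output, candidates, true_input):
    # True positive iff the true input is among the nearest candidates.
    d = {c: edit_distance(asr_output, c) for c in candidates}
    return d[true_input] <= min(d.values())
```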
### Application to Shallow Fusion
Simply boosting the top \(k\) values in the set of all \(p_{e}\) would provide an improvement over an entity distribution derived from static training data. In practice, we use user feedback signals to determine which references to each latent entity are likely mis-recognitions and perform contextual biasing on the entity names most likely to be both requested and misrecognized.
## 4 Data and Evaluation
We use two input datasets: a publicly reproducible one using Amazon Polly and an in-house one derived from anonymized/de-identified real user interactions with our digital assistant in English. The in-house data was required because the item-item matrix will not have meaningful information if computed from synthetic interactions. We thus use the public dataset for an initial proof-of-concept model using only the ASR \(n\)-best cooccurrence feature described in Section 3.2.2 and then move to the anonymized/de-identified in-house dataset for more complicated models involving item-item cooccurrence.
Since the publicly reproducible dataset is generated from known text, the ground truth to compute model accuracy is readily available. For the anonymized/de-identified in-house dataset, we rely on sets of coreferent pairs identified by user behavior, as described in Sections 4.2 and 4.3.
### Public Dataset
We generated 900 synthetic voice samples [17] using Amazon Polly [18] on randomly selected movie titles from the MovieLens 25M Dataset [19]. Before inputting the data for audio synthesis, we preprocessed the movie titles with tokenization operations to change official titles into spoken forms (e.g., "Tiny Times III" to "tiny times three") to reflect the speech patterns of voice digital assistant consumers. We used nine different voice profiles from the English (US) language variants available on Amazon Polly to generate nine samples for each entity. For the audio synthesis, the neural text-to-speech engine was chosen to create high-quality audio streams. Amazon Polly's synthesize-speech command was used to generate the audio and convert it to a wav file.
### Recall Computation
Since we do not have ground truth for our anonymized/de-identified in-house dataset, we instead rely on feedback extracted from user interaction sessions. To identify related ASR output variants that should be clustered together (or deduplicated) in record deduplication, we consider cases where a user request for some reference \(r_{a}\) to entity \(e_{x}\) does not result in a satisfactory response, and as a consequence, the user repeats the request, perhaps with clearer enunciation or in a louder voice, resulting in ASR recognition variant reference \(r_{b}\). If many users repeat a given pair \((r_{a},r_{b})\), we can conclude that the two variants are coreferent to \(e_{x}\). Counting the number of such pairings recalled as edges in the record deduplication cluster output gives us an estimate of the model's sensitivity: these pairings are true positives, while known edges that could have been output but were not are false negatives.
The recall metrics do not equate to word or sentence error rates for ASR models, since they describe the _relative_ improvement in ASR sentence error rate.
### Precision Computation
We identify false positive reference pairs output by the record deduplication model by treating reference-item/entity pairs in Section 3.2.3 as ground truth: to calculate precision, if \(r_{a}\) and \(r_{b}\) are clustered together in record deduplication outputs, but we have \(r_{a}\) resolved to \(e_{x}\) with high user satisfaction and \(r_{b}\) resolved to \(e_{y}\), \(y\neq x\), then \((r_{a},r_{b})\) is counted as a false positive. The precision is the total number of edges output minus such false positives divided by the total number of edges output for which both entities have ER results with positive user feedback.
Although it is possible that a reference could resolve to two entities, this is rare in practice (\(<0.1\%\)).
## 5 Results and Discussion
We performed two sets of experiments: an initial proof-of-concept using an open dataset and similarity as measured by ASR \(n\)-best cooccurrence, and then a more extensive set of comparisons of different possible comparison models containing both the ASR \(n\)-best feature and the item similarity feature described in Section 3.2.2.
### Initial Proof of Concept
ASR \(n\)-best cooccurrence by itself achieved a recall of \(0.997\) on the synthetic dataset of ASR errors created using Amazon Polly (Table 1), with no loss of precision. In other words, it successfully grouped misrecognized references with correctly recognized coreferents \(99.7\%\) of the time, without incorrectly grouping any references to different entities. By comparison, simply using a nearest neighbors approach to grouping misrecognized ASR variants with correctly recognized coreferences recalled only \(50.0\%\) of errors, with a significant cost to precision. However, these results translated poorly to the anonymized/deidentified in-house dataset based on live traffic for our digital assistant. For this dataset, the method achieved a recall of only \(0.918\), with a precision of \(0.913\).
The loss in performance likely arises from several factors. First, in the more limited public dataset, we were able to use an ASR \(n\)-best with \(n=5\), while the cost and latency requirements on the large anonymized/deidentified in-house dataset, which was generated at runtime, required \(n\leq 2\).
Additional performance loss likely results from the substantially wider distribution of errors encountered in live traffic. Live recordings may also contain misdirected traffic from the NLU model performing the blocking.
### Results Including Item-Item Cooccurrence
To improve accuracy, we introduced the item-item similarity feature. The hypothesis behind this feature is that users making malformed or misrecognized entity references should be similar to users requesting the same entity through a correctly recognized reference. In other words, we hypothesized that distinct groups of users will request particular entities, and that request misrecognition is to some extent random within each group for a particular entity.
We produced comparison scores using a linear weighting of the two features as well as tree- and SVM-based models (Table 2). The linear and tree-based models recovered 36% and 49% respectively of the \(F_{1}\) loss in moving to live data, but the SVM only recovered 20% of performance, probably because of its higher dependence on hyperparameter tuning.
### Results on Live Data
The cluster output provides a model of entity requests; combined with system logs, we can quantify the number of entity requests in each cluster that led to the user being served the correct entity (because the ASR output was sufficiently similar to the canonical form that downstream ER could perform correctly). We utilize the record deduplication's model of traffic to perform contextual biasing on the runtime ASR model (Table 3), boosting the effective LM probability of outputting canonical entities as in [8] over incorrectly resolved variants. For evaluation, we use machine generated transcripts from record deduplication as reference transcripts. Performing contextual boosting on entities from record deduplication shows relative word error rate (WER) reduction of 0.67% over randomly selected, anonymized, and annotated user utterances. By comparison, using a selection of the top \(k\) most-mentioned entities actually _increased_ the error rate by 2.78%. This potentially surprising finding is consistent with the increase in WER seen for the "popular last week" heuristic in [7]. The modest size of the record deduplication result likely arises from the high diversity of entities in our total dataset. In a smaller selection ("modeled only" in Table 3) of utterances that each contain a coreference used in record deduplication, the relative improvement was 13.01%. This figure is comparable to the WER reductions reported in [8] using a different approach, but the data distributions are very different. Extrapolating from the 0.67% relative WER by a factor of \(1/0.1301\), the entities selected for boosting account for roughly 5% of the misrecognitions in our live distribution.
## 6 Conclusion
Our work demonstrates that using a comparison model within an entity deduplication framework is an effective way of building a model of entity requests. Using this entity distribution and the information it contains about frequently misrecognized entities, we can utilize a shallow fusion entity budget more effectively than a naive baseline.
Two promising ways to develop the record deduplication model are (1) the use of a deep phonetic similarity model, similar to [20] to improve performance in the comparison task and (2) using community detection approaches like spectral clustering [21] for the clustering task.
| Dataset | Model | Recall | Precision | \(F_{1}\) |
| --- | --- | --- | --- | --- |
| Public | RNN-T only | 0 | 1 | 0.000 |
| Public | Edit similarity | 0.500 | 0.455 | 0.476 |
| Public | Record dedup. | 0.997 | 1.000 | 0.998 |
| Anon. Real | Record dedup. | 0.922 | 0.913 | 0.917 |

Table 1: ASR \(n\)-Best Model Record Deduplication Results
| Model | Recall | Precision | \(F_{1}\) |
| --- | --- | --- | --- |
| \(n\)-best-only | 0.922 | 0.913 | 0.917 |
| Linear | 0.934 | 0.958 | 0.946 |
| Tree | 0.954 | 0.959 | 0.957 |
| SVM | 0.970 | 0.899 | 0.933 |

Table 2: Combined \(n\)-Best/Item-Item Cooccurrence Model Record Deduplication Results on Anonymized In-House Dataset
| Model | Refs in Dataset | rel. WER (%) |
| --- | --- | --- |
| base | full | 0 |
| base + TopK entities | full | 2.78 |
| base + Record dedup. | full | -0.67 |
| base | modeled only | 0 |
| base + Record dedup. | modeled only | -13.01 |

Table 3: Relative WER Results on Anonymized In-House Dataset

|
2308.01085 | Spatial Intelligence of a Self-driving Car and Rule-Based Decision
Making | In this paper we show how rule-based decision making can be combined with
traditional motion planning techniques to achieve human-like behavior of a
self-driving vehicle in complex traffic situations. We give and discuss
examples of decision rules in autonomous driving. We draw on these examples to
illustrate that developing techniques for spatial awareness of robots is an
exciting activity which deserves more attention from spatial reasoning
community than it has received so far. | Stanislav Kikot | 2023-08-02T11:27:41Z | http://arxiv.org/abs/2308.01085v1 | # Spatial Intelligence of a Self-driving Car and Rule-Based Decision Making
###### Abstract
In this paper we show how rule-based decision making can be combined with traditional motion planning techniques to achieve human-like behavior of a self-driving (SD) vehicle in complex traffic situations. We give and discuss examples of decision rules in autonomous driving. We draw on these examples to illustrate that developing techniques for spatial awareness of robots is an exciting activity which deserves more attention from the spatial reasoning community than it has received so far.
Keywords: rule-based decision making, motion planning, self-driving
## 1 Introduction
In this paper we report on our experience at Sber Automotive Technologies in developing a motion planner that would be equally fit for urban streets, closed areas and intercity highways. Founded in 2020, the company possesses a fleet of about 300 autonomous vehicles (AVs) ranging from golf-cars to semitrailer trucks. Out of the many aspects of motion planning, here we focus on interacting with other agents in a safe, predictable, ethical and confident manner. We argue that rule-based decision making can play an important role in supplying robots with this 'spatial intelligence' ability that is natural to humans and animals.
In Section 2 we give a brief description of our SD-system and give a more detailed account of its motion planning unit. In Section 3 we highlight a few challenges in designing interaction with other agents and discuss their rule-based solutions. In Section 4 we explain how we test our motion planner and rule sets to ensure their quality and avoid regression. The paper ends with a brief survey of sources we relied on while creating the planner and a rule-praising conclusion.
## 2 System overview
### High-level view of self-driving software
The high-level dataflow of our SD-software is presented on the left of Figure 1. _Perception_ integrates information from multiple sensors (such as cameras, lidars and radars), recognises positions and velocities of other agents as well as static obstacles and traffic light signals and creates a 'digital representation' of the traffic situation. This representation is then passed to _Prediction_ which supplies
each agent with a bunch of possible trajectories and tries to predict their intentions. All this information is fed to _Planning_. This block also has access to the current position and velocity of the AV coming from _Localization_ and the information about road surface marking and local traffic rules coming from high-definition maps of the area. _Planning_ guides the AV towards the goal point by repeatedly calculating a trajectory endowed with a velocity profile taking into account the road code and the dynamic environment. The _Control_ module receives an updated trajectory a few times per second and drives the AV along it by interacting with steering and velocity actuators through the CAN bus relying on feedback from _Localization_.
### Main parts of the motion planner
Our motion planner has a three-level architecture with decision making at the end. It consists of the following parts (see Figure 1, on the right). First, a _strategic planner_ provides a bunch of strategic plans for reaching the goal point. A _strategic plan_ is a sequence of actions such as 'move forward along lane A' or 'change from lane B into lane C'. Strategic plans are generated using a combination of path-search techniques in the lane graph of the road network and its sliced version. Then the _maneuver constructor_ turns each of these plans into a _local planning task_ (LPT). Examples of LPTs are given in Figure 2. Each task consists of the initial position of the AV (green rectangle with a bullet inside), a target zone (shown in orange), road borders that the AV should not cross (blue), obstacle areas that must be avoided (red) and penalty zones that increase the path cost. The _shape planner_ solves LPTs and produces an optimal curve on the road surface leading from the initial position of the AV to the target zone avoiding obstacle areas. The _velocity planner_ takes into account dynamic objects and marks the points of the trajectory with timestamps when they should be passed (or equivalently, supplies it with a velocity profile). Sometimes the velocity planner can make a decision to stop at a certain point of the trajectory (e.g. to give way to a pedestrian). Finally, the _decision maker_ or _path selector_ selects one of the calculated trajectories relying on their comfort and safety metrics.

Figure 1: High-level design of our SD-software and the main parts of its planning unit.
The reason we put the decision module at the end is that in some cases such as lane changing in dynamic environment the right decision can be made only after the trajectories for two possible courses of action (change lane versus push forward) have been fully calculated.
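For illustration, an LPT can be thought of as a plain container of these ingredients; the sketch below is a hypothetical Python rendering, not our production data structure:

```python
from dataclasses import dataclass, field

@dataclass
class LocalPlanningTask:
    initial_pose: tuple          # (x, y, heading) of the AV
    target_zone: object          # region the resulting path must reach
    road_borders: list = field(default_factory=list)    # must not be crossed
    obstacle_areas: list = field(default_factory=list)  # must be avoided
    penalty_zones: list = field(default_factory=list)   # increase path cost
```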
### Interaction with other agents
Based on the shape of a candidate path, others' intentions and other aspects of the traffic situation we single out four ways of interacting with other agents:
(Ignore) Ignore the agent
(Drive Around) React by changing the shape of trajectory
(Give Way) Stop in front of the agent's route
(Follow) Move into the agent's place while keeping distance
For example, we ignore another car if we have right-of-way and the other car moves sufficiently slowly for it to react and stop. We give way to a pedestrian if they cross the road, but if a pedestrian walks along the edge of a road, we try to pass them. And if a pedestrian walks away from us in the middle of the road, and we cannot pass them, then we follow them. Similarly, we follow the vehicles which are in front of us and move away from us, but we give way to vehicles that move at an angle close to 90 degrees to our trajectory.
Now a few words about how these reactions are implemented. In case of (Ignore) we drive as if the agent did not exist. For (Drive Around) we generate an obstacle area around the agent's personal space and put it into the corresponding LPT. In (Give Way) and (Follow) we react to an agent by changing not the shape of the trajectory but the velocity profile. Thus in these cases no obstacle areas are added to LPTs and so no steering reaction is produced. Instead we reduce the velocity. In (Give Way) we stop the AV in front of the other agent's personal space. In (Follow) we calculate the distance to the agent we must keep for safety and plan our velocity accordingly.

Figure 2: Examples of LPTs. In (a) the AV has to pass between two parked cars. Maneuver (b) is obtained from a 'change-left, then change-right' combination in a strategic plan. This combination serves to drive around obstacles that block the current lane. The \(\sqcap\)-shaped cut in (b) results from a solid line on the road surface between the lanes that the AV should not cross. In (c) there is a bridge support to the right of the AV, which results in the right road border turning right at a right angle. This makes this turn hard to pass.
The reason we distinguish between (Give Way) and (Follow) is that in (Give Way) unlike (Follow) one cannot talk about desirable safety distance as a function of velocities of the AV and the other agent.
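The examples above can be condensed into a small rule table; the following Python sketch is purely illustrative, with all attribute names and thresholds being made-up placeholders rather than our production logic:

```python
import math

def select_reaction(agent, ego):
    if agent.kind == "pedestrian":
        if agent.crossing_ego_path:
            return "GIVE_WAY"
        if agent.on_road_edge:
            return "DRIVE_AROUND"
        return "FOLLOW"              # ahead of us in the middle of the road
    if agent.kind == "vehicle":
        angle = abs(math.degrees(agent.heading - ego.heading)) % 180.0
        if angle > 60.0:             # roughly crossing traffic
            return "GIVE_WAY"
        if agent.ahead_of_ego and agent.speed >= ego.speed:
            return "FOLLOW"          # moving away from us
        if ego.has_right_of_way and agent.speed < 1.0:
            return "IGNORE"          # slow enough for it to react and stop
        return "DRIVE_AROUND"
    return "IGNORE"
```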
## 3 Instances of rule-based decision making
### When to drive around another car?
A naive rule "drive around everything that does not move" may pay off in closed areas with low traffic. In multilane urban roads with dense traffic it leads to unneeded attempts of lane change in situations when the car in front of us becomes stationary, but vehicles in adjacent lanes continue to move. An alternative approach is to put on the map all parking slots and drive around only those stationary cars that overlap with these areas. It may work in locations when parking rules are strictly enforced together with a teleoperation service, which allows a teleoperator to mark particular stationary vehicles as requiring driving around. We also develop a machine learning solution to this decision problem.
### Interacting with cars that overtake ego
By the time we started testing our SD-software on high-speed highways, we had tested it for a year in an urban environment with traffic often changing from dense to stationary, with an emphasis on safety. As a consequence, that version of the software produced undesirable braking when other vehicles overtook ego on public highways, because it tried to stabilize the distance faster than human drivers do.
It took a few months to change this behavior and thoroughly test a new speed controller for safety. During this period our fleet was supplied with the rule "ignore cars in front of ego if they have non-negative acceleration and move sufficiently faster than ego", which allowed us to test other components on the highway in parallel with the work on the velocity planner.
### Pulling away from oncoming cars
On private roads without median marking it is important to react to oncoming cars by pulling away from them towards the roadway edge. It took a while to figure out the right shape of the corresponding obstacle area. A naive idea to use the same 'personal space area' of other vehicles as in other cases of interaction, which is based on the bunch of their possible trajectories, failed for many reasons. First, oncoming cars usually, but not always, pull away from us to their edge of the road, and it is hard to predict this behavior exactly. Second, there is a
fundamental difference between interacting with an oncoming car and a static obstacle. If it is not possible to drive around a static obstacle, the AV should produce no steering reaction, but this is not so for an oncoming car, where shifting to the edge of the roadway is always needed. Thirdly, the border of the obstacle area should be parallel to the road edge, as otherwise the AV does not move close to the edge for sufficiently long.
These difficulties were overcome by implementing a rule that, in the presence of an oncoming car, generates an obstacle area shaped as a narrow strip along the left border of the driving area. The width of the strip is selected based on the road geometry and the position of the other car in a way that ensures that the resulting LPT has a solution (see Figure 3).
### Ignoring pedestrians that do not intend to get in front of AV
It is important to distinguish between pedestrians who intend to cross the road in front of the AV and require a braking reaction, and those who do not have such an intention (such as C in Figure 4). To achieve this we introduced the rule "ignore all pedestrians who collide with the area under the AV". At first this area was taken as in Figure 4(a); then, after studying a case where pedestrian A was mistakenly ignored, it was reduced to (b); and later it was extended on the sides to (c) after pedestrian B caused unneeded braking with area (b).
### Rules versus Finite State Machines and Behavior Trees
Consider the rule "extend by an additional 25 cm all obstacles that are further than 12 metres from the AV." This rule may seem counterintuitive, but at some point we adopted it as a compromise between giving a hard extension to all objects (which is safer, but may lead to the AV getting stuck in places where a human driver can pass) and driving without extra extension of obstacles. This rule causes the AV to reduce its velocity (up to a stop, if needed) before an obstacle that may require the AV to drive around it, and to gain time to recognise its position, status, and shape more accurately and calculate the trajectory around it. This two-stage behavior can be coded using such techniques as finite state
Figure 3: Incorrect (a) and correct (b) obstacle areas for oncoming cars.
machines or behaviour trees. However, we prefer the rule-based approach due to its succinctness and the fact that one does not need to take special care of state changes when there are many cars parked next to each other.
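As a minimal sketch of the rule above (the 25 cm and 12 m constants are taken directly from the text; the function name is ours):

```python
def obstacle_extension_m(distance_to_obstacle_m: float) -> float:
    """Extra margin added around an obstacle, per the rule in the text."""
    EXTRA_M, FAR_THRESHOLD_M = 0.25, 12.0
    return EXTRA_M if distance_to_obstacle_m > FAR_THRESHOLD_M else 0.0
```

As the AV approaches and the distance drops below 12 m, the extension disappears and a previously blocked passage may open up, which yields the slow-down-then-pass behavior without any explicit state variable.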
### Yet another controversial rule
AVs sometimes create dangerous situations by being insufficiently confident during the 'who goes first' negotiations. The rule "ignore the other vehicle if we arrive at the collision point at least 2 seconds earlier" can make their driving style more assertive and closer to that of an average human driver. Given that indiscriminate use of this rule may result in dangerous driving, we believe that in certain situations this or a similar rule can be adopted. Our experiments in the testing ground described in the next section show that it improves the performance of the AV in 30 out of 260 cases in a scenario where it has to turn left through dense traffic in the opposite lane.
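A sketch of this rule, assuming the prediction module supplies estimated times of arrival at the common collision point (the helper names and the ETA inputs are hypothetical):

```python
def may_ignore_vehicle(ego_eta_s: float, other_eta_s: float,
                       margin_s: float = 2.0) -> bool:
    """Ignore the other vehicle if ego reaches the collision point
    at least `margin_s` seconds earlier than the other vehicle."""
    return other_eta_s - ego_eta_s >= margin_s
```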
### Give Way or Follow?
An elegant geometric solution for distinguishing between "give way" and "follow" reactions to other cars was discovered when we faced the junction depicted on the left of Figure 5. If the other car is on our path, we compare the angle between our and the agent's trajectories with 55 degrees. Otherwise we do the same and, in addition, compare the angle between our and the agent's orientations with 45 degrees. It is readily checked that under this rule we always 'follow' car B, but 'give way' to car C until it is on our path. This rule also works fine for the basic cases on the right of Figure 5.
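A sketch of this geometric rule; the direction of the threshold comparisons is our assumption, chosen to be consistent with the examples in the text (follow roughly co-directed cars, give way to crossing ones):

```python
def give_way_or_follow(on_our_path: bool,
                       trajectory_angle_deg: float,
                       orientation_angle_deg: float) -> str:
    """Choose between 'follow' and 'give_way' for another car.
    Angles are measured between ego's and the agent's trajectories
    and orientations, respectively."""
    if on_our_path:
        return "follow" if trajectory_angle_deg < 55.0 else "give_way"
    if trajectory_angle_deg < 55.0 and orientation_angle_deg < 45.0:
        return "follow"
    return "give_way"
```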
Figure 4: Which pedestrians should be ignored? (B and C, but not A)
## 4 Testing and evaluating the motion planner
Every change in the code of the motion planner undergoes regression testing in a continuous integration platform, which in addition to unit tests runs simulation of the \(Prediction+Planning+Control\) part of the SD-pipeline and checks that no collisions occur in 10 basic scenarios of interaction with other agents. In addition, changes in the shape planner are evaluated on a constantly expanding task set of around 1000 LPTs, where attention is paid to the distribution of the time required for task solving, special quality features of trajectories (such as distance to obstacles and road borders, smoothness, accelerations and jerks), and the rate of tasks that are successfully solved. Finally, the change goes to the quality assurance (QA) team, which gives it an hour-long drive in diverse urban environments and logs any noticed defect or change in the behavior of the AV. There is also a collection of a few thousand 'night test' simulation scenarios which help to track down and correct rarely occurring regressions.
For A/B testing of different solutions we repeatedly reproduce a traffic situation in the testing ground and count the number of cases with acceptable and unacceptable behavior of the AV for each solution. An example of such a simulation is shown in Figure 6.
## 5 Sources of the motion planner
The strategic planner works on the _network of lanelets_ introduced in [1]. The maneuver constructor, based on Boost Geometry, seems to be original. The shape planner uses an optimised version of the heuristic search from [2] followed by quadratic and Newtonian spline optimization procedures. The velocity planner runs the OSQP solver [6] on the _s-t-graph_ of the traffic situation [3] based on predicted trajectories of other vehicles. Path-velocity decomposition dates back to [4]. Our reaction system is inspired by the Mobileye paper on responsibility-sensitive safety [5]. However, most of the rules were created by analysing cases of incorrect interaction with other agents in urban environments reported by our QA team.
Figure 5: Car A is the AV. Car B overtakes us, and so requires (Follow) reaction. On the other hand car C needs a (Give Way) due to local traffic rules.
## 6 Conclusion
The main purpose of this paper is to attract the attention of the RR community to practical problems of autonomous driving and to demonstrate that designing logical rules that provide robots with spatial intelligence, and analysing them both in integration and game-theoretically, is an exciting activity. We show that rule-based decision making can play an important role in SD-software by providing use-cases where logical rules helped to improve the performance of the motion planner. Rules also provide a nice language for talking about the behavior logic of software and for analysing bugs: given a case of undesirable behavior, one may say that this or that rule did or did not work as expected, and explain why. Finally, some of the rules can serve as slots and baselines for plugging in machine learning solutions.
|
2301.12294 | Machine-learning-informed parameter estimation improves the reliability
of spinal cord diffusion MRI | Purpose: We address the challenge of inaccurate parameter estimation in
diffusion MRI when the signal-to-noise ratio (SNR) is very low, as in the
spinal cord. The accuracy of conventional maximum-likelihood estimation (MLE)
depends highly on initialisation. Unfavourable choices could result in
suboptimal parameter estimates. Current methods to address this issue, such as
grid search (GS) can increase computation time substantially. Methods: We
propose a machine learning (ML) informed MLE approach that combines
conventional MLE with ML approaches synergistically. ML-based methods have been
developed recently to improve the speed and precision of parameter estimation.
However, they can generate high systematic bias in estimated parameters when
SNR is low. In the proposed ML-MLE approach, an artificial neural network model
is trained to provide sensible initialisation for MLE efficiently, with the
final solution determined by MLE, avoiding biases typically affecting pure ML
estimations. Results: Using parameter estimation of neurite orientation
dispersion and density imaging as an example, simulation and in vivo
experiments suggest that the ML-MLE method can reduce outlier estimates from
conventional MLE in white matter voxels affected by CSF contamination. It also
accelerates computation compared to GS-MLE. Conclusion: The ML-MLE method can
improve the reliability of parameter estimation with reduced computation time
compared to GS-MLE, making it a practical tool for diffusion dataset with low
SNR. | Ting Gong, Francesco Grussu, Claudia A. M. Gandini Wheeler-Kingshott, Daniel C Alexander, Hui Zhang | 2023-01-28T20:52:08Z | http://arxiv.org/abs/2301.12294v1 | Machine-learning-informed parameter estimation improves the reliability of spinal cord diffusion MRI
###### Abstract
**Purpose**: We address the challenge of inaccurate parameter estimation in diffusion MRI when the signal-to-noise ratio (SNR) is very low, as in the spinal cord. The accuracy of conventional maximum-likelihood estimation (MLE) depends highly on initialisation. Unfavourable choices could result in suboptimal parameter estimates. Current methods to address this issue, such as grid search (GS) can increase computation time substantially.
**Methods**: We propose a machine learning (ML) informed MLE approach that combines conventional MLE with ML approaches synergistically. ML-based methods have been developed recently to improve the speed and precision of parameter estimation. However, they can generate high systematic bias in estimated parameters when SNR is low. In the proposed ML-MLE approach, an artificial neural network model is trained to provide sensible initialisation for MLE efficiently, with the final solution determined by MLE, avoiding biases typically affecting pure ML estimations.
**Results**: Using parameter estimation of neurite orientation dispersion and density imaging as an example, simulation and _in vivo_ experiments suggest that the ML-MLE method can reduce outlier estimates from conventional MLE in white matter voxels affected by CSF contamination. It also accelerates computation compared to GS-MLE.
**Conclusion**: The ML-MLE method can improve the reliability of parameter estimation with reduced computation time compared to GS-MLE, making it a practical tool for diffusion dataset with low SNR.
**Keywords: spinal cord; diffusion MRI; machine learning**
## 1 Introduction
Being a non-invasive tool to characterise the microstructure of neural tissue in the central nervous system, diffusion MRI has been widely used to study the brain and is increasingly used to examine the spinal cord. In the spinal cord, most diffusion MRI studies have focused on using apparent diffusion coefficient (1) or diffusion tensor imaging (2) to evaluate the axonal integrity following pathological changes, such as in spinal cord injury (3-5), multiple sclerosis (6-8) and amyotrophic lateral sclerosis (9-11). In recent years,
more studies have explored and applied advanced diffusion methods to investigate neuronal morphology-related microstructural properties in the spinal cord [12, 13]. These methods provide novel biomarkers for characterising microstructural alterations in spinal cord pathology, such as Neurite orientation dispersion and density imaging (NODDI) [14] applied to multiple sclerosis [15, 16, 17]. Nevertheless, compared to their applications in the brain, the exploration of such methods in the spinal cord is still limited due to the difficult imaging environment of the spine [18, 19] and its limited size. Though advanced spinal cord MRI has experienced tremendous advances both in terms of image acquisition [20, 21] and analysis [22, 23], the challenge of an intrinsically lower signal-to-noise ratio (SNR) than typically seen in the brain remains.
The low SNR in spinal cord DWIs can lead to inaccurate parameter estimation, challenging the application of diffusion methods - especially advanced ones - in spinal cord studies. When fitting a microstructure model to measurements of low SNR, a conventional method such as the maximum likelihood estimation (MLE) is often used with numerical optimisation [24]. The MLE method finds the best estimate that gives the highest likelihood of the measurements under an appropriate noise model. This procedure of finding such a best estimate generally involves an iterative algorithm starting from some initial guess in the parameter space. Inaccurate estimation can happen when the optimisation is stopped at a suboptimal location. In this case, choosing a suitable starting point for nonlinear optimisation can be crucial to ensure convergence to the correct solution.
Several approaches have been developed to improve conventional fitting, which however are usually performed voxel by voxel and therefore could increase computation time significantly. One such method is to conduct a grid search (GS) in the parameter space before optimisation to find the initial guess that is more likely to give the best likelihood [25, 26]. Other more time-consuming approaches include the multi-start method that repeats the optimisation multiple times with different starting points and stochastic methods such as simulated annealing [27] and full Markov-Chain Monte Carlo [28]. Besides non-linear optimisation methods, linearisation is used in some methods to mitigate the starting point problem at the cost of possible approximation errors [29].
Recently, machine learning (ML)-based techniques have been developed as an alternative to the conventional estimation approach in diffusion MRI [30, 31, 32, 33, 34, 35, 36]. These methods are known for their speed and precision. A trained ML model can estimate microstructure parameters in large datasets almost instantly. However, a recent study suggests ML estimation can generate systematic bias in estimated parameters especially when SNR is low [37]. This bias could hinder the applicability and interpretability of ML-based methods in clinical settings.
To address the challenge of parameter estimation under low SNR, we propose an ML-informed MLE (ML-MLE) approach that combines the conventional MLE and the ML approaches synergistically. The approach initialises the MLE efficiently and optimally by training an artificial neural network (ANN) model. Suitable initialisations can be identified instantly for large datasets through a network inference step, which saves
computation time compared to GS-MLE. At the same time, the final solutions are determined by MLE, avoiding biases typically affecting pure ML estimation.
## 2 Theory
This section describes the conventional MLE method and the GS procedure for finding the starting point for MLE optimisation.
### Mle
The MLE method finds the parameter estimate that maximises the likelihood of the data we measure under a statistical model of the noise. This is mathematically described as:
\[\widetilde{\mathbf{\theta}}=\ arg\max_{\mathbf{\theta}}p(\mathbf{y}|\mathbf{\theta}) \tag{1}\]
where \(\mathbf{y}=[y_{1},...,y_{N}]\) is the vector of measured signals and \(\mathbf{\theta}\) the vector of parameters of interest, and \(p(\mathbf{y}|\mathbf{\theta})\) is the probability density of observing \(\mathbf{y}\) given \(\mathbf{\theta}\). Under the typical assumption that the noise on each measurement is independent and identically distributed,
\[\widetilde{\mathbf{\theta}}=\ arg\max_{\mathbf{\theta}}\sum_{n=1}^{N}\log p(y_{n}|\mathbf{ \theta}) \tag{2}\]
Given that the magnitude of MR signals is independently Rician distributed (38), the probability density of observing \(y_{n}\) given \(\mathbf{\theta}\) can then be expressed as (24):
\[p(y_{n}|\mathbf{\theta})=\ \frac{y_{n}}{\sigma^{2}}\ e^{\left(-\frac{y_{n}^{2}+S(b_{n},\mathbf{g_{n}};\mathbf{\theta})^{2}}{2\sigma^{2}}\right)}I_{0}\left(\frac{y_{n}S(b_{n},\mathbf{g_{n}};\mathbf{\theta})}{\sigma^{2}}\right) \tag{3}\]
where \(S(b_{n},\mathbf{g_{n}};\mathbf{\theta})\) is the noise-free signal predicted by the forward model with the diffusion sensitising factor \(b_{n}\), gradient direction \(\mathbf{g_{n}}\) and tissue parameter \(\mathbf{\theta}\); \(\sigma\) is the standard deviation of noise level and could be approximated by the standard deviation of S(b=0) measurements. \(I_{0}\) is the modified Bessel function of the first kind with order zero.
To solve this problem, choosing a good starting guess for \(\mathbf{\theta}\) can be crucial. \(\mathbf{\theta}\) is then optimised iteratively using non-linear optimisation until the maximum log-likelihood is found under some stopping criterion of choice.
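As a minimal sketch of Eqs. (2)–(3) in Python, assuming a user-supplied `forward_model(theta)` that returns the noise-free signals \(S(b_{n},\mathbf{g_{n}};\mathbf{\theta})\); the exponentially scaled Bessel function is used to keep the likelihood numerically stable:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import ive  # exponentially scaled I_0

def rician_neg_loglik(theta, y, forward_model, sigma):
    """Negative Rician log-likelihood of measurements y given parameters theta."""
    s = forward_model(theta)
    z = y * s / sigma**2
    # log I_0(z) = log(ive(0, z)) + z for z >= 0, avoiding overflow
    loglik = (np.log(y / sigma**2)
              - (y**2 + s**2) / (2 * sigma**2)
              + np.log(ive(0, z)) + z)
    return -np.sum(loglik)

def mle_fit(y, forward_model, sigma, theta0, bounds):
    """Maximise the likelihood by non-linear optimisation from a starting point."""
    res = minimize(rician_neg_loglik, theta0, args=(y, forward_model, sigma),
                   method="L-BFGS-B", bounds=bounds)
    return res.x
```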
### Grid search for choosing the starting point
In a grid search method, the best starting point is chosen by firstly computing the log-likelihood of a set of locations in the parameter space, and then comparing the likelihoods and setting the starting location as the parameter combination that gives the maximum likelihood. The set of locations is chosen to reside on a regular grid in the parameter space. Each parameter is allowed to take a set of evenly spaced values within its plausible range (26).
The GS process of finding the starting point can be very time-consuming, and the time increases as the number of voxels increases. Depending on the dimension of the parameter space, i.e., the number of parameters to estimate in the model (\(n\)), and the number of values sampled for each parameter for searching (\(N_{k}\)), the number of evaluations required for each voxel is \(N=\prod_{k=1}^{n}N_{k}\), which grows quickly as the dimension of parameter space increases.
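A sketch of the grid-search initialisation, reusing `rician_neg_loglik` from the previous sketch; for the 3 NODDI parameters sampled at 5 values each, this evaluates \(5^{3}=125\) candidate points per voxel, matching the implementation described in Section 3.3:

```python
import itertools
import numpy as np

def grid_search_init(y, forward_model, sigma, grids):
    """Return the grid point with the highest log-likelihood as the MLE start."""
    best_theta, best_ll = None, -np.inf
    for theta in itertools.product(*grids):
        ll = -rician_neg_loglik(np.asarray(theta), y, forward_model, sigma)
        if ll > best_ll:
            best_theta, best_ll = np.asarray(theta), ll
    return best_theta

grids = [np.linspace(0.0, 1.0, 5)] * 3  # f_in, ODI, f_iso on a 0.25-spaced grid
```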
## 3 Methods
This section describes the proposed ML-informed method for finding the starting point for MLE, followed by implementation details and experiments for assessment of the ML-MLE method.
### ML-Informed method
With the ML-informed method, the starting locations for a large dataset can be generated directly and efficiently with an ANN model. The ANN model is designed to map the diffusion signals to the diffusion model-derived parameters. The training of such a model is performed on a simulated dataset with known ground truth, and the model is then applied to the target dataset to get starting points close to optimal. The simulated training dataset is generated using a forward diffusion model with uniformly sampled tissue parameters (37) and the same imaging protocol as the target dataset. Once the model is trained on the simulated dataset, it can be applied instantly to any dataset with the same diffusion acquisition protocol to get the starting points.
### Implementation details
The NODDI model (14) is investigated as a demonstration of estimating advanced diffusion parameters in the spinal cord. The parameters of interest include the intra-neurite fraction \(f_{in}\), orientation dispersion index \(ODI\), and free water fraction \(f_{iso}\). As the fibre bundle in the healthy spinal cord is highly aligned with the superior-inferior direction of the body, the fibre orientation in the model is set to the principal direction estimated from the diffusion tensor (2) before estimating the 3 parameters of interest.
For the ML estimation of the ML-MLE method, the ANN follows a standard architecture (33, 39). It contains an input layer with the number of channels equal to the number of DWI volumes including the b=0 signal, 3 hidden layers with 150 units each, and an output layer for the 3 NODDI parameters. The loss function is defined as the mean square error of the target parameters. The rectified linear unit is used as the activation function for all the hidden layers and a sigmoid is used for the output layer to guarantee that the output parameters lie between 0 and 1.
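A sketch of this architecture in PyTorch; the framework and the 96-channel input (30 + 60 diffusion directions plus 6 b=0 volumes of the protocol in Section 3.3.1) are our assumptions, as the paper does not specify the implementation:

```python
import torch
import torch.nn as nn

class InitNet(nn.Module):
    """MLP mapping DWI signals to (f_in, ODI, f_iso) for MLE initialisation."""
    def __init__(self, n_volumes: int = 96, n_hidden: int = 150, n_params: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_volumes, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_params), nn.Sigmoid(),  # outputs constrained to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = InitNet()
loss_fn = nn.MSELoss()                     # mean square error of the target parameters
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```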
The training dataset is generated synthetically with the NODDI model from known tissue parameter values of \(f_{in},ODI\) and \(f_{iso}\), all ranging from 0 to 1, with \(f_{in}\) and \(ODI\) uniformly distributed to achieve lower estimation bias (37); the distribution for \(f_{iso}\) contains more samples below 0.4, as including high \(f_{iso}\) samples for training would bias the other model parameters, and \(f_{iso}\) values from the in vivo dataset are generally below 0.3 in the cord GM and WM (supplementary materials M1). A total of one million samples are generated for training. The same diffusion sampling scheme as the target datasets is used to synthesise the signal \(S\). Rician noise is added to the simulated signal by \(S_{rician}=\sqrt{(S+N_{1})^{2}+N_{2}^{2}}\), where \(N_{1}\) and \(N_{2}\) are independent zero-mean Gaussian variables with standard deviation \(\sigma\). The SNR levels for in vivo acquisition vary among different segments of the cord and are typically below 10 when estimated using b=0 signals (40). For the training datasets, the SNR can be chosen to be the value estimated from the target in vivo datasets. This is the strategy adopted here, resulting in an SNR level of 10 to demonstrate the method. Because b=0 signals naturally vary between voxels in in vivo datasets, the evaluation with in vivo data also tests how well the trained model generalises across different SNRs.
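A sketch of this noise model, assuming signals are normalised so that the mean b=0 signal equals 1 (hence \(\sigma=1/\mathrm{SNR}\)):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_rician_noise(signal: np.ndarray, snr: float = 10.0) -> np.ndarray:
    """S_rician = sqrt((S + N1)^2 + N2^2), with N1, N2 ~ Gaussian(0, sigma)."""
    sigma = 1.0 / snr
    n1 = rng.normal(0.0, sigma, signal.shape)
    n2 = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt((signal + n1) ** 2 + n2 ** 2)
```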
### Experiments
The ML-MLE method is evaluated in terms of computation speed and accuracy, and compared to GS-MLE and direct ML estimation. Specifically, the computation time is compared between ML-MLE and GS-MLE with simulation and _in vivo_ datasets. The accuracy and precision of estimation are compared for all the methods with simulation datasets. Finally, the findings from the simulation are further demonstrated on _in vivo_ datasets.
For the comparison to the conventional method, GS-MLE is implemented based on the NODDI MATLAB toolbox on the _in vivo_ and simulated datasets. The grids for searching are uniformly sampled from 0 to 1 with a separation of 0.25 for \(f_{in}\), \(ODI\), and \(f_{iso}\), resulting in 125 locations in the parameter space. The location that gives the highest log-likelihood is used as the starting point for MLE.
#### 3.3.1 _In vivo_ data
_In vivo_ spinal cord data from a previous study (12) were retrospectively analysed. These consisted of scans acquired with a 3T Philips Achieva scanner from 5 healthy subjects. A multi-shell diffusion protocol optimised for NODDI parameter estimation was used for data acquisition (14): 30 and 60 diffusion gradient directions were applied respectively for the first shell at b=711 s/mm\({}^{2}\) and the second shell at b=2855 s/mm\({}^{2}\); 6 repetitions of b = 0 images were interleaved through the whole session. Scans were performed axial-oblique by carefully aligning with the slice-selection direction (z) on a sagittal localiser. All images underwent pre-processing steps for motion correction and tissue segmentation of white matter (WM) and grey matter (GM). Details about acquisition parameters and data pre-processing can be found in the original study (12). The SNR levels of the datasets are measured and reported in supplementary materials M2.
#### 3.3.2 Simulation
Diffusion measurements are simulated for quantitative comparison of all methods. Noise-free signals \(S(b_{n},\mathbf{g_{n}};\mathbf{\theta})\) are synthesised using the NODDI model with the same diffusion imaging protocol \((b_{n},\mathbf{g_{n}})\) as
the _in vivo_ data acquisition, and with typical sets of tissue parameters (\(\mathbf{\theta}\)) suggested in the previous study (12). Specifically, tissue parameters of \(f_{in}=[0.35,0.45,0.55,0.65,0.75]\), \(ODI=[0.02,0.12,0.22]\) are simulated. Different levels of CSF contamination are explored with \(f_{iso}=[0,0.1,0.2,0.3]\). For each combination of tissue parameters, noisy measurements are generated 100 times by adding random Rician noise to the corresponding noise-free signals as described in the training dataset. An SNR level of 10 for the b=0 signal is assessed.
## 4 Results
### Computation time
The computation time for the ML-MLE (about 175 s/ 1000 voxels and 215 s/ 1000 voxels for simulated and _in vivo_ dataset) is about 1.75 times faster than the GS-MLE (about 300 s/ 1000 voxels and 400 s/ 1000 voxels for simulated and _in vivo_ dataset). The training time for the ANN model is about 7 mins on a single CPU. Once the model is trained, it is applied to all new datasets including simulated and _in vivo_ data to get the initialisation for MLE. The application of the model to new datasets is completed almost instantly.
### Simulation: accuracy and precision of estimation
Figure 1 demonstrates the joint distribution of \(f_{in}\) and \(f_{iso}\) from noisy simulations for each method in the WM (ODI = 0.02). While the MLE-based methods can give accurate estimates of \(f_{in}\) and \(f_{iso}\) in most cases, the \(f_{in}\) estimation from GS-MLE was stuck at an outlier value of 1 for certain noise realisations in all combinations of tissue parameters (Figure 1.A). These outliers, however, are eliminated in the ML-informed MLE method (Figure 1.B), therefore improving the accuracy and precision of estimation, especially when there is high CSF contamination. The estimates directly from the ML method improve the precision significantly but induce systematic biases in the parameters (Figure 1.C). Specifically, \(f_{iso}\) is overestimated in simulations with low CSF contamination and underestimated in simulations with high CSF contamination; \(f_{in}\) is underestimated in simulations with high CSF contamination. Distributions for a wider range of \(f_{in}\) can be found in supplementary materials M3.
Figure 2 demonstrates the joint distribution of \(f_{in}\) and ODI from noisy simulations for each method without (\(f_{iso}=0\)) and with high CSF contamination (\(f_{iso}=0.3\)). Compared to \(f_{in}\) and \(f_{iso}\) estimation, ODI estimation is less affected by the noise or the method used. When ODI is low, as in WM (ODI = 0.02), its estimation precision is higher than at high ODI, especially for direct ML estimation.
### Robust _in vivo_ estimation
The _in vivo_ results agree well with the simulation findings. Figure 3 shows parameter maps from example slices of a single subject. The ML-MLE gives an overall similar estimation to GS-MLE in the GM but eliminates most of the outlier estimates of \(f_{in}\) in the WM, likely affected by CSF contamination as indicated by high \(f_{iso}\). The ML-estimated \(f_{in}\) values are systematically lower than those from GS-MLE; the ML-estimated \(f_{iso}\) values are systematically higher than those from GS-MLE in GM regions with lower CSF contamination.
Figure 1: 2D distribution of estimated intra-neurite fraction \(f_{in}\) and free water fraction \(f_{iso}\) from noisy simulation of WM with (A) GS-MLE, (B) ML-MLE and (c) ML estimation. For each set of tissue parameters, the ground truth is marked as a red square. An example of outlier estimates of \(f_{in}\) from GS-MLE is indicated by the orange arrow. ML estimation generates a negative bias in the estimated \(f_{in}\) when there is CSF contamination, which is typical in the WM of the spinal cord.
Figure 2: 2D distribution of estimated intra-neurite fraction \(f_{in}\) and orientation dispersion index ODI from noisy simulation with (A) GS-MLE, (B) ML-MLE and (c) ML estimation. For each set of tissue parameters, the ground truth is marked as a red square. The estimation accuracy and precision of ODI is less affected by CSF contamination compared to \(f_{in}\) for all methods.
These findings are consistent across all subjects with varying SNR levels. Figure 4 shows the distribution of \(f_{in}\) estimates in both WM and GM. Outlier estimates of \(f_{in}\) are found for GS-MLE estimation in all subjects, which are largely eliminated by ML-MLE, bringing closer the mean and median values of the distribution. ML estimation, while improving the precision, gives systematically lower estimates of \(f_{in}\) in all subjects. Table 1 gives a summary of the means and standard deviations of parameter estimates in WM and GM from all subjects. With the elimination of outliers by ML-MLE in the WM, the standard deviations of \(f_{in}\) estimates are lower than with the GS-MLE method, indicating improved precision. In GM, the two methods give similar mean values and standard deviations. ML estimation gives systematically lower mean estimates of \(f_{in}\) in all subjects.
Figure 3: Example image slices of estimated \(f_{in}\), \(f_{iso}\) and ODI maps from a typical subject from different methods (A-C); WM and GM masks are overlayed on \(f_{in}\) maps estimated with GS-MLE. Outliers in the cord from the GS-MLE method are mostly in the WM as indicated by the white arrow. ML-MLE reduces these outliers in WM while giving similar estimation in GM compared to GS-MLE.
**\(f_{in}\)** (mean ± std)

| | WM: GS-MLE | WM: ML-MLE | WM: ML | GM: GS-MLE | GM: ML-MLE | GM: ML |
| --- | --- | --- | --- | --- | --- | --- |
| S1 | 0.617 ± 0.149 | 0.566 ± 0.102 | 0.519 ± 0.102 | 0.499 ± 0.100 | 0.494 ± 0.093 | 0.470 ± 0.112 |
| S2 | 0.571 ± 0.114 | 0.549 ± 0.083 | 0.489 ± 0.060 | 0.516 ± 0.098 | 0.516 ± 0.097 | 0.462 ± 0.070 |
| S3 | 0.567 ± 0.123 | 0.548 ± 0.094 | 0.491 ± 0.063 | 0.488 ± 0.098 | 0.480 ± 0.087 | 0.473 ± 0.064 |
| S4 | 0.662 ± 0.143 | 0.625 ± 0.107 | 0.551 ± 0.080 | 0.588 ± 0.126 | 0.574 ± 0.111 | 0.494 ± 0.079 |
| S5 | 0.585 ± 0.141 | 0.562 ± 0.106 | 0.492 ± 0.076 | 0.528 ± 0.118 | 0.523 ± 0.111 | 0.455 ± 0.068 |

**\(f_{iso}\)** (mean ± std)

| | WM: GS-MLE | WM: ML-MLE | WM: ML | GM: GS-MLE | GM: ML-MLE | GM: ML |
| --- | --- | --- | --- | --- | --- | --- |
| S1 | 0.167 ± 0.157 | 0.163 ± 0.136 | 0.131 ± 0.093 | 0.106 ± 0.115 | 0.104 ± 0.112 | 0.068 ± 0.051 |
| S2 | 0.089 ± 0.113 | 0.081 ± 0.099 | 0.113 ± 0.058 | 0.047 ± 0.076 | 0.048 ± 0.074 | 0.068 ± 0.036 |
| S3 | 0.101 ± 0.123 | 0.098 ± 0.111 | 0.104 ± 0.060 | 0.060 ± 0.088 | 0.058 ± 0.080 | 0.060 ± 0.038 |
| S4 | 0.145 ± 0.144 | 0.132 ± 0.134 | 0.150 ± 0.083 | 0.091 ± 0.112 | 0.087 ± 0.108 | 0.095 ± 0.062 |
| S5 | 0.157 ± 0.150 | 0.154 ± 0.140 | 0.129 ± 0.077 | 0.103 ± 0.121 | 0.104 ± 0.116 | 0.084 ± 0.061 |

**\(ODI\)** (mean ± std)

| | WM: GS-MLE | WM: ML-MLE | WM: ML | GM: GS-MLE | GM: ML-MLE | GM: ML |
| --- | --- | --- | --- | --- | --- | --- |
| S1 | 0.030 ± 0.044 | 0.021 ± 0.050 | 0.013 ± 0.047 | 0.108 ± 0.075 | 0.103 ± 0.079 | 0.082 ± 0.080 |
| S2 | 0.035 ± 0.032 | 0.026 ± 0.032 | 0.015 ± 0.019 | 0.146 ± 0.091 | 0.143 ± 0.094 | 0.116 ± 0.100 |
| S3 | 0.027 ± 0.024 | 0.020 ± 0.023 | 0.012 ± 0.013 | 0.073 ± 0.054 | 0.067 ± 0.056 | 0.056 ± 0.047 |
| S4 | 0.053 ± 0.068 | 0.044 ± 0.071 | 0.039 ± 0.097 | 0.116 ± 0.072 | 0.108 ± 0.078 | 0.077 ± 0.071 |
| S5 | 0.057 ± 0.092 | 0.045 ± 0.086 | 0.035 ± 0.101 | 0.138 ± 0.116 | 0.133 ± 0.117 | 0.104 ± 0.118 |

Table 1: Mean and standard deviation (std) of parameters in WM and GM from all subjects. ML-MLE reduces the std of \(f_{in}\) in WM due to the elimination of outlier estimates in GS-MLE.
Figure 4: Distributions of \(f_{in}\) from all subjects in the WM and GM. Outliers generated from GS-MLE in the WM are observed for all subjects, which are reduced in ML-MLE estimation, giving a closer mean and median in the distribution; ML estimation gives systematically lower \(f_{in}\) estimates than the MLE-based method in all subjects.
## 5 Discussion & Conclusion
In summary, this study proposes an ML-informed MLE approach to address the challenge of unreliable microstructure parameter estimation under low SNR spinal cord diffusion MRI data. In testing NODDI-derived parameters, the ML-informed method can reduce outlier estimates from conventional MLE and avoid high biases from pure ML estimation. The proposed method also speeds up the computation compared to GS-MLE, making it a promising tool for future applications.
MLE can provide a consistent and efficient approach to parameter estimation problems in diffusion MRI while being sensitive to the choice of starting points. On the positive side, MLE is known to provide unbiased estimates as the sample size increases; different diffusion models and the Rician noise model of DWI data can be accommodated. However, when the measured signals are very noisy, the variance of estimation can increase, decreasing the precision of estimation. As shown in our results, by providing starting points from ML estimation, some outlier estimates generated from such low-SNR data can be eliminated, and hence improved precision of estimates can be achieved.
Our study confirms the previous finding that the high precision of direct ML estimation can come with strong biases, especially under low SNR (37). In our case, when CSF contamination is low, ML can generate relatively accurate \(f_{in}\) estimates, but the biases grow quickly when there is CSF contamination: while the biases of \(f_{in}\) are always negative, the biases of \(f_{iso}\) can be both positive and negative. This non-uniform pattern of bias makes ML estimation unpredictable for pathological tissue and can hence hamper its clinical utility.
In finding a starting point for the MLE, the ML estimation, though biased, is likely to find a solution within the basin of attraction of the global minimum with a much shorter computation time than the GS method. For grid search methods, to find such a reliable starting point, the density of the searching grids needs to be increased which will lead to an even longer computation time.
Our method uses a three-layer ANN model trained on simulated data to generate the initial guesses. While the model already gives low mean square errors on the training and testing datasets, future work will explore bias-variance trade-offs and improvement of the overall estimation performance by further exploring factors like the network architecture, training sample distribution, choice of training labels (41), and separate optimisation for each parameter.
The proposed ML-MLE method includes direct ML estimation, providing an opportunity to combine them for certain tasks to improve the outcome. While the ML-MLE estimates are less biased, ML estimation gives higher precision in the parameter maps. The performance of some clinical tasks, such as classification, may not depend on parameter-estimation accuracy or precision alone. With a task-driven assessment of parameter estimation (42), we may be able to choose between ML-MLE and ML estimation or combine them together to improve the outcome.
**Supplementary Material**
**M1.** The distributions of \(f_{iso}\) in the whole cord, cord WM and cord GM from in vivo dataset and the distribution of \(f_{iso}\) used in training.
**M2.** The SNR levels for the images of the five in vivo datasets. The signal levels are estimated from the mean signals within the whole cord, cord WM and cord GM from the segmentation on the b=0 images and diffusion-weighted images. The noise levels are estimated from the standard deviations of signals outside the cord body (4 squares in the corners of the background).
**M3.** 2D distribution of estimated intra-neurite fraction \(f_{in}\) and free water fraction \(f_{iso}\) from noisy simulations; (A) GS-MLE.
## Acknowledgements
TG is supported by the Medical Research Council (MRC reference: MR/T046473/1). FG is supported by the investigator-initiated PREdICT study at the Vall d'Hebron Institute of Oncology (Barcelona), funded by AstraZeneca. AstraZeneca did not influence data acquisition, analysis, result interpretation and the decision to submit this work in its present form. FG receives the support of a fellowship from the "la Caixa" Foundation (ID 100010434). The fellowship code is "LCF/BQ/PR22/11920010".
The acquisition of the data used in this study was supported by the UCL Grand Challenges scheme, an initiative of the UCL School of Life and Medical Sciences, UCLH/UCL Biomedical Research Centre and the Specialist Biomedical Research Centres at Moorfields/UCL and Great Ormond Street/UCL, as well as by a programme grant from the UK MS Society (grant 892/08). The MRI scanner of the NMR Research Unit, Queen Square Multiple Sclerosis Centre, is supported by the National Institute for Health Research University College London Hospitals Biomedical Research Centre. CGWK receives grant funding from The Multiple Sclerosis Society (grant #77), Wings for Life (#169111), BRC(#BRC704/CAP/CGW), MRC (#MR/S026088/1), and Ataxia UK. CGWK is a shareholder in Queen Square Analytics Ltd.
|
2306.16055 | Neutrino spectrum and energy loss rates due to weak processes on hot
$^{56}$Fe in pre-supernova environment | Applying TQRPA calculations of Gamow--Teller strength functions in hot
nuclei, we compute the (anti)neutrino spectra and energy loss rates arising
from weak processes on hot $^{56}$Fe under pre-supernova conditions. We use a
realistic pre-supernova model calculated by the stellar evolution code MESA.
Taking into account both charged and neutral current processes, we demonstrate
that weak reactions with hot nuclei can produce high-energy (anti)neutrinos. We
also show that, for hot nuclei, the energy loss via (anti)neutrino emission is
significantly larger than that for nuclei in their ground state. It is found
that the neutral current de-excitation via the $\nu\bar\nu$-pair emission is
presumably a dominant source of antineutrinos. In accordance with other
studies, we confirm that the so-called single-state approximation for neutrino
spectra might fail under certain pre-supernova conditions. | Alan A. Dzhioev, A. V. Yudin, N. V. Dunina-Barkovskaya, A. I. Vdovin | 2023-06-28T09:37:39Z | http://arxiv.org/abs/2306.16055v1 | Neutrino Spectrum and Energy Loss Rates Due to Weak Processes on Hot \({}^{56}\)Fe in Pre-Supernova Environment
###### Abstract
Applying TQRPA calculations of Gamow-Teller strength functions in hot nuclei, we compute the (anti)neutrino spectra and energy loss rates arising from weak processes on hot \({}^{56}\)Fe under pre-supernova conditions. We use a realistic pre-supernova model calculated by the stellar evolution code MESA. Taking into account both charged and neutral current processes, we demonstrate that weak reactions with hot nuclei can produce high-energy (anti)neutrinos. We also show that, for hot nuclei, the energy loss via (anti)neutrino emission is significantly larger than that for nuclei in their ground state. It is found that the neutral current de-excitation via the \(v\bar{v}\)-pair emission is presumably a dominant source of antineutrinos. In accordance with other studies, we confirm that the so-called single-state approximation for neutrino spectra might fail under certain pre-supernova conditions.
**Keywords:** pre-supernova; hot nuclei; stellar evolution code MESA; (anti)neutrino spectra; (anti)neutrino energy loss rates
## 1 Introduction
It is well known that the production and propagation of (anti)neutrinos in stellar matter are important ingredients of the computer modeling of stellar evolution. According to the theory, in stellar interiors with both high temperature and density, neutrino emission makes a major contribution to energy loss, removes entropy from the stellar core and accelerates the evolution of the star [1; 2; 3]. The observation of neutrinos from supernova SN1987A confirmed and advanced our understanding of core-collapse supernova explosions. Recent remarkable progress in neutrino detection techniques may enable the detection of neutrinos from new sources. Some of the candidates are pre-supernova (anti)neutrinos emitted from the core of a massive star just before the collapse [4]. Although pre-supernova (anti)neutrinos have not been detected to date, their observation would offer a possibility for studying the physical processes that lead to core collapse and would provide a warning of an upcoming explosion.
In [5; 6], the role of charged current nuclear weak processes (electron and positron capture, \(\beta^{+}\)-decay) in the neutrino emission from a pre-supernova star was studied. It was found that, under certain conditions, nuclear processes compete with thermal processes (plasmon decay, pair annihilation, etc.) in their contribution to the (anti)neutrino flux or even dominate in the energy window relevant for detection. However, it was pointed out that, while total emissivities are relatively robust, the highest-energy tails of the neutrino spectrum, in the detectable window, are very sensitive to the details of the calculations. Specifically, the source of the error lies in the single-strength approximation [7] that was adopted in [5; 6] for nuclear processes. In [8], an exploratory study of this error was performed and it was shown that the specific neutrino spectrum obtained from the single-strength approximation could miss important features.
High-temperature stellar plasma allows nuclei to access excited states in accordance with the Boltzmann distribution. In [7], the single-strength approximation was derived assuming that (i) weak processes on a thermally excited state in the parent nucleus lead to the Gamow-Teller (GT) transition to a single state in the daughter nucleus and that (ii) the Brink hypothesis is valid, i.e., the GT strength function is the same for all excited states. The violation of the Brink hypothesis for thermally excited (hot) nuclei was demonstrated for both charge-exchange [9; 10; 11] and charge-neutral [12; 13] Gamow-Teller strength functions, and it was shown that, under certain stellar conditions, thermal effects on the GT strength significantly affect the rates and cross-sections of the nuclear weak process (as can also be seen in recent reviews [14; 15; 16]).
In this paper, we apply the formalism of [9; 10; 11; 12; 13; 14; 15; 16] to study electron (anti)neutrino spectra and energy loss rates arising from weak processes on hot \({}^{56}\)Fe under conditions realized in the pre-supernova environment. Besides the charged current weak nuclear processes considered in [5; 6], we also take into account the neutral current de-excitation of hot \({}^{56}\)Fe via neutrino-antineutrino pair emission. The main goal of the present work is to study how thermal effects on the GT strength function and \(\nu\bar{\nu}\)-pair emission affects the (anti)neutrino spectra and energy loss rates.
## 2 Method
To compute (anti)neutrino spectra and energy loss rates due to weak processes on hot nuclei, we apply a method which is based on the statistical formulation of the nuclear many-body problem at finite temperature. In this method, rather than compute GT strength distributions for individual thermally excited states, we determine a thermal averaged strength function for the GT operator
\[S_{\mathrm{GT}_{\pm,0}}(E,T)=\sum_{i,f}p_{i}(T)B_{if}^{(\pm,0)}\delta(E-E_{if }), \tag{1}\]
where \(p_{i}(T)=e^{-E_{i}/kT}/Z(T)\) is the Boltzmann population factor for a parent state \(i\) at a temperature \(T\), \(B_{if}^{\pm,0}=|\langle f\|\mathrm{GT}_{\pm,0}\|i\rangle|^{2}/(2J_{i}+1)\) is the reduced transition probability (transition strength) from the state \(i\) to the state \(f\) in the daughter nucleus; \(\mathrm{GT}_{0}=\bar{\sigma}t_{0}\) for neutral current reactions and \(\mathrm{GT}_{\mp}=\bar{\sigma}t_{\pm}\) for charged current reactions. The zero component of the isospin operator is denoted by \(t_{0}\), while \(t_{-}\) and \(t_{+}\) are the isospin-lowering (\(t_{-}|n\rangle=|p\rangle\)) and isospin-rising (\(t_{+}|p\rangle=|n\rangle\)) operators. Thus, '\(0\)' refers to the \(\nu\bar{\nu}\)-pair emission, '\(-\)' to positron capture (PC) and \(\beta^{-}\)-decay, and '\(+\)' to electron capture (EC) and \(\beta^{+}\)-decay. The transition energy between initial and final states is given by \(E_{if}=Q+E_{f}-E_{i}\), where \(E_{i}\) and \(E_{f}\) are the excitation energies of the parent and daughter nuclei, and \(Q=M_{f}-M_{i}\) is the ground-state reaction threshold (for neutral current reactions \(Q=0\)). The definition of \(S_{\mathrm{GT}}(E,T)\) implies that at \(T\neq 0\) the strength function is defined for both positive (\(E>0\)) and negative (\(E<0\)) energy domains. The latter corresponds to the de-excitation of thermally excited states to states at lower energies. In addition, low-energy transitions between excited states become possible at \(T\neq 0\).
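For orientation, a toy state-by-state evaluation of Eq. (1) can be sketched as below; the inputs are hypothetical discrete levels and transition strengths, not TQRPA output (energies in MeV, temperature given as \(kT\)):

```python
import numpy as np

def thermal_strength(parent_E, trans_parent, trans_B, trans_E, kT, bins):
    """Thermally averaged GT strength function of Eq. (1) on an energy grid.

    parent_E[i]      : excitation energy of parent state i
    trans_parent     : index of the parent state for each (i, f) transition
    trans_B, trans_E : strength B_if and transition energy E_if
    """
    w = np.exp(-np.asarray(parent_E) / kT)
    p = w / w.sum()                                # Boltzmann factors p_i(T)
    weights = p[np.asarray(trans_parent)] * np.asarray(trans_B)
    S, edges = np.histogram(trans_E, bins=bins, weights=weights)
    return S, edges
```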
Obviously, the explicit state-by-state calculation of \(S_{\mathrm{GT}_{\pm,0}}(E,T)\) is hardly possible due to the extremely large number of nuclear states thermally populated at stellar temperatures. To compute the temperature-dependent strength function (1), we apply the TQRPA framework, which is a technique based on the quasiparticle random phase approximation (QRPA) extended to finite temperature by the superoperator formalism in the Liouville space [14]. The central concept of the TQRPA framework is the thermal vacuum \(|0(T)\rangle\), a pure state in the Liouville space, which corresponds to the grand canonical density matrix operator for the hot nucleus. The time-translation operator in the Liouville space is the so-called thermal Hamiltonian \(\mathcal{H}\) constructed from the nuclear Hamiltonian after introducing particle creation and annihilation superoperators. Within the TQRPA, the strength function (1) is expressed in terms of the transition matrix elements from the thermal vacuum to eigenstates (thermal phonons) of the thermal Hamiltonian \(\mathcal{H}|Q_{i}\rangle=\omega_{i}|Q_{i}\rangle\):
\[S_{\mathrm{GT}_{\pm,0}}(E,T)=\sum_{i}\mathcal{B}_{i}^{(\pm,0)}\delta(E-\omega_{ i}\mp\Delta_{np}). \tag{2}\]
Here, \(\mathcal{B}_{i}^{(\pm,0)}=|\langle Q_{i}||\mathrm{GT}_{\pm,0}||0(T)\rangle|^{2}\) is the transition strength to the \(i\)th state of a hot nucleus and \(E_{i}^{(\pm,0)}=\omega_{i}\pm\Delta_{np}\) is the transition energy; \(\Delta_{np}=0\) for charge-neutral transitions, while for charge-exchange transitions \(\Delta_{np}=\delta\lambda_{np}+\delta M_{np}\), where \(\delta\lambda_{np}=\lambda_{n}-\lambda_{p}\) is the difference between neutron and proton chemical potentials in the nucleus, and \(\delta M_{np}=1.293\,\mathrm{MeV}\) is the neutron-proton mass splitting. Note that the eigenvalues of the thermal Hamiltonian, \(\omega_{i}\), take both positive and negative values. The latter contribute to the strength function only at \(T\neq 0\). We also stress that the strength function (2) obeys the detailed balance principle:
\[S_{\mathrm{GT}_{0}}(-E,T)=\mathrm{e}^{-E/kT}S_{\mathrm{GT}_{0}}(E,T) \tag{3}\]
for charge-neutral GT transitions, and
\[S_{\mathrm{GT}_{\mp}}(-E,T)=\mathrm{e}^{-(E\mp\Delta_{np})/kT}S_{\mathrm{GT}_ {\pm}}(E,T) \tag{4}\]
for charge-exchange GT transitions. This property makes the approach thermodynamically consistent.
In what follows, we assume that emitted (anti)neutrinos freely leave the star. Then, we can write the following expressions for electron (anti)neutrino spectra resulting from the GT transition from the thermal vacuum to the \(i\)th state of a hot nucleus:
* Electron or positron capture \[\lambda_{i}^{\mathrm{EC},\,PC}(E_{\nu})=\frac{G_{\mathrm{F}}^{2}V_{ \mathrm{ud}}^{2}(g_{A}^{*})^{2}}{2\pi^{3}h^{7}c^{6}}\mathcal{B}_{i}^{(\pm)}(E_ {\nu}+E_{i}^{(\pm)})[(E_{\nu}+E_{i}^{(\pm)})^{2}-m_{e}^{2}c^{4}]^{1/2}E_{\nu}^{2} \\ \times f_{e^{\mp}}(E_{\nu}+E_{i}^{(\pm)})F(\pm Z,E_{\nu}+E_{i}^{( \pm)})\Theta(E_{\nu}+E_{i}^{(\pm)}-m_{e}c^{2}),\] (5) where upper (lower) sign corresponds to EC (PC);
* \(\beta^{\mp}\)-decay \[\lambda_{i}^{\beta^{\mp}}(E_{\nu})=\frac{G_{\mathrm{F}}^{2}V_{ \mathrm{ud}}^{2}(g_{A}^{*})^{2}}{2\pi^{3}h^{7}c^{6}}\mathcal{B}_{i}^{(\mp)}(- E_{\nu}-E_{i}^{(\mp)})[(-E_{\nu}-E_{i}^{(\mp)})^{2}-m_{e}^{2}c^{4}]^{1/2}E_{\nu}^{2} \\ \times[1-f_{e^{\mp}}(-E_{\nu}-E_{i}^{(\mp)})]F(\pm Z+1,-E_{\nu}-E_ {i}^{(\mp)})\Theta(-E_{\nu}-E_{i}^{(\mp)}-m_{e}c^{2}),\] (6) where upper (lower) sign corresponds to \(\beta^{-}\)- (\(\beta^{+}\)-)decay;
* \(\nu\bar{\nu}\)-pair emission produces the same spectra for \(\nu_{e}\) and \(\bar{\nu}_{e}\) (the spectrum of other (anti)neutrino flavors is also given by (7)) \[\lambda_{i}^{\nu\bar{\nu}}(E_{\nu}) = \frac{G_{\mathrm{F}}^{2}g_{A}^{2}}{2\pi^{3}h^{7}c^{6}}\mathcal{B}_{i}^{(0)}(-E_{\nu}-E_{i}^{(0)})^{2}E_{\nu}^{2}\Theta(-E_{\nu}-E_{i}^{(0)}).\] (7)
In the above expressions, \(G_{\mathrm{F}}\) denotes the Fermi coupling constant, \(V_{\mathrm{ud}}\) is the up-down element of the Cabibbo-Kobayashi-Maskawa quark-mixing matrix and \(g_{A}=-1.27\) is the weak axial coupling constant. Note that, for charged current reactions, we use the effective coupling constant \(g_{A}^{*}=0.74g_{A}\) that takes into account the observed quenching of the \(\mathrm{GT}_{\pm}\) strength. The function \(f_{e^{-}(e^{+})}(E)\) is the Fermi-Dirac distribution for electrons (positrons), and the Fermi function \(F(Z,E)\) takes the distortion of the charged lepton wave function by the Coulomb field of the nucleus into account. It follows from energy conservation that, for capture reactions, the electron (positron) energy is given by \(E_{e^{\mp}}=E_{\nu}+E_{i}^{(\pm)}\), while for the \(\beta^{\pm}\)-decay, we have \(E_{e^{\mp}}+E_{\nu}=-E_{i}^{(\mp)}\), and \(E_{\nu}+E_{\bar{\nu}}=-E_{i}^{(0)}\) for \(\nu\bar{\nu}\)-pair
emission. Obviously, only negative-energy transitions (\(E_{i}^{(\pm,0)}<0\)) contribute to \(\beta^{\mp}\)-decay and \(\nu\bar{\nu}\)-pair emission.
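As a numerical illustration of Eq. (5), the shape of the EC neutrino spectrum for a single transition can be sketched as follows; all prefactors are lumped into `const`, the Coulomb factor is passed in as a function (defaulting to 1), energies are in MeV, and this is not the production code behind the figures:

```python
import numpy as np

MEC2 = 0.511  # electron rest energy, MeV

def fermi_dirac(E, mu, kT):
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

def ec_spectrum(E_nu, B_i, E_i, mu_e, kT,
                fermi_fn=lambda Z, E: 1.0, const=1.0):
    """Shape of the EC neutrino spectrum, Eq. (5), for one transition."""
    E_e = E_nu + E_i                         # electron energy from energy conservation
    p_term = np.sqrt(np.clip(E_e**2 - MEC2**2, 0.0, None))
    spec = (const * B_i * E_e * p_term * E_nu**2
            * fermi_dirac(E_e, mu_e, kT) * fermi_fn(26, E_e))
    return np.where(E_e > MEC2, spec, 0.0)   # theta function of Eq. (5)
```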
Summation over different contributions \(\mathbf{x}=\) EC, \(\beta^{+}\), \(\nu\bar{\nu}\) (PC, \(\beta^{-}\), \(\nu\bar{\nu}\)) and final states \(i\) of a hot nucleus gives us the total (anti)neutrino spectrum
\[\lambda(E_{\nu})=\sum_{\mathbf{x}}\sum_{i}\lambda_{i}^{\mathbf{x}}(E_{\nu}). \tag{8}\]
Then, the integration over \(E_{\nu}\) yields the neutrino emission (\(\Lambda\)) and energy-loss (\(P\)) rates
\[\Lambda=\int\lambda(E_{\nu})\,dE_{\nu},\quad P=\int\lambda(E_{\nu})E_{\nu}dE_{ \nu}. \tag{9}\]
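The quadratures in Eq. (9) are then straightforward; for instance, using the sketch above with a hypothetical transition at the conditions of point (1) of Table 1 (\(kT\simeq 0.844\) MeV for \(T_{9}=9.79\)):

```python
import numpy as np

E_nu = np.linspace(0.0, 30.0, 601)          # neutrino energy grid, MeV
lam = ec_spectrum(E_nu, B_i=1.0, E_i=-2.0,  # hypothetical transition
                  mu_e=8.451, kT=0.844)
Lambda = np.trapz(lam, E_nu)                # emission rate
P = np.trapz(lam * E_nu, E_nu)              # energy-loss rate
```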
## 3 Results
### Pre-Supernova Model
To study (anti)neutrino production and energy loss rates due to hot \({}^{56}\)Fe in the pre-supernova environment, we use the model 25_79_Op005_ml from Farmer et al. [17]. It is a typical pre-supernova model with a good mass resolution and a core temperature that is high enough for our estimates. Its name means that the initial mass of the model was 25\(M_{\odot}\), the nuclear reaction network was mesa_79.net, the maximum mass of a computational cell was 0.005\(M_{\odot}\), and the mass loss during the stellar evolution was taken into account (see details in [17]).
The authors of [17] employed the stellar evolution code MESA [18], version 7624. As output, MESA gives the time-evolving profiles of density \(\rho\) (in g/cm\({}^{3}\)), temperature \(T_{9}\equiv T(\mathrm{K})/10^{9}\), electron fraction \(Y_{\mathrm{e}}\) and mass fraction \(X_{i}\) of various isotopes. The profile that we use corresponds to the onset of core collapse, which is defined as the time when the infall velocity exceeds 1000 km/s anywhere in the star. The respective density, temperature and electron fraction profiles along the mass coordinate are demonstrated in the top panels of Figure 1. In the bottom panel of Figure 1, we show the mass fraction profiles of the most
Figure 1: **Top** panels: density, temperature and electron fraction profiles along the mass coordinate for the 25_79_Op005_ml pre-supernova model at the onset of the core collapse. **Bottom** panel: the respective mass fraction distribution of the most dominant isotopes.
dominant isotopes. We see that the \({}^{56}\)Fe isotope is dominant up to \(m<1.3M_{\odot}\). It is in this hot and dense central part of the star that the main neutrino flux is born.
We calculate (anti)neutrino spectra and energy loss rates due to hot \({}^{56}\)Fe at six specific points on the mass coordinate for which \(X_{{}^{56}Fe}>0.5\). To select these points, in the MESA output file, we first identify the mass coordinate \(m_{(6)}\) where \(X_{{}^{56}Fe}\) takes the value closest to 0.5. Then, the remaining five points are taken from the MESA output and are uniformly distributed over the interval \([0,m_{(6)}]\). The values of \(m_{(n)}\) with the respective values of the radial coordinate, \({}^{56}\)Fe mass fraction, density, temperature, electron fraction and electron chemical potential are given in Table 1. It is clearly seen from Table 1 that the range of temperature and density varies widely, while the electron fraction remains almost unchanged. The resulting chemical potential reduces four times when we move along the mass coordinate from point \(m_{(1)}\) to \(m_{(6)}\). Thus, the selected points enable us to consider weak nuclear processes under rather different representative pre-supernova conditions.
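The selection of the six points can be reconstructed from the MESA profile arrays as follows (a sketch; the variable names are ours):

```python
import numpy as np

def select_points(mass, x_fe56, n_points=6, threshold=0.5):
    """Pick n_points mass coordinates, uniformly spaced on [0, m_(n)],
    where m_(n) is the cell with X_56Fe closest to `threshold`."""
    m_outer = mass[np.argmin(np.abs(x_fe56 - threshold))]
    targets = np.linspace(0.0, m_outer, n_points)
    # snap each target to the nearest MESA cell
    return np.array([np.argmin(np.abs(mass - t)) for t in targets])
```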
### Thermal Effects on Gamow-Teller Strength Functions in \({}^{56}\)Fe
Before discussing (anti)neutrino spectra and energy loss rates, we consider the thermal evolution of the GT strength functions in \({}^{56}\)Fe. In Figure 2, the GT\({}_{0,\mp}\) strength functions are displayed at three temperatures relevant in the pre-supernova context. To emphasize thermal effects, the ground-state strength functions are also shown in each panel. The choice of the nuclear model and its parameters for TQRPA calculations is discussed in [15]. Here, we just mention that the strength functions in Figure 2 are obtained by applying self-consistent calculations based on the SkM* parametrization of the Skyrme effective force. As shown in [15], zero-temperature QRPA calculations with the SkM* force fairly accurately reproduce both experimental data and shell-model results on the GT\({}_{0,\mp}\) resonance in the ground state of \({}^{56}\)Fe (TQRPA calculations performed with Skyrme forces SLy4, SkO' and SGII [9; 10; 11; 12; 13; 14; 15] clearly demonstrate that thermal effects on the GT strength functions do not depend on a particular choice of the parametrization--for this reason, all the results presented below concerning the temperature dependence of (anti)neutrino spectra and energy loss rates remain valid for other Skyrme parametrizations.). According to the present calculations, the main contribution to the GT\({}_{0}\) resonance (\(E\approx 15\) MeV) in \({}^{56}\)Fe comes from proton and neutron charge-neutral single-particle transitions \(1f_{7/2}\to 1f_{5/2}\), while the GT\({}_{-}\) and GT\({}_{+}\) resonances at energies of \(E\approx 15\) MeV and \(E\approx 6\) MeV, respectively, are mainly formed by the \(1f_{7/2}\to 1f_{5/2}\) charge-exchange transitions.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(\mathbf{(n)}\) & \(m\left(M_{\odot}\right)\) & \(R\left(R_{\odot}\right)\) & \(X_{{}^{56}Fe}\) & \(T_{9}\) & \(\log(\rho)\) & \(Y_{e}\) & \(\mu_{e}\) (MeV)\({}^{1}\) \\ \hline \(\left(1\right)\) & \(1.953\times 10^{-6}\) & \(6.41\times 10^{-6}\) & 0.95822 & 9.79138 & 10.01954 & 0.46029 & 8.451 \\ \(\left(2\right)\) & \(0.26005\) & \(3.75\times 10^{-4}\) & 0.95791 & 9.33784 & 9.72765 & 0.46197 & 6.679 \\ \(\left(3\right)\) & \(0.51745\) & \(5.22\times 10^{-4}\) & 0.95872 & 8.95570 & 9.49678 & 0.46304 & 5.527 \\ \(\left(4\right)\) & \(0.77778\) & \(6.71\times 10^{-4}\) & 0.95356 & 8.49247 & 9.23424 & 0.46379 & 4.435 \\ \(\left(5\right)\) & \(1.03597\) & \(8.51\times 10^{-4}\) & 0.88566 & 7.86298 & 8.90200 & 0.46725 & 3.340 \\ \(\left(6\right)\) & \(1.29568\) & \(1.13\times 10^{-3}\) & 0.497 & 6.97018 & 8.39942 & 0.47616 & 2.126 \\ \hline \hline \end{tabular} \({}^{1}\) The chemical potential \(\mu_{e}\) is defined to include the rest mass so that \(\mu_{e}=-\mu_{e^{+}}\). The value of \(\mu_{e}\) is determined from the density \(\rho Y_{e}\) by inverting the relation \(\rho Y_{e}=(\pi^{2}h^{3}N_{\Lambda})^{-1}\int\limits_{0}^{\infty}(f_{e^{-}}- f_{e^{+}})^{2}dp\).
\end{table}
Table 1: Six specific points on the mass coordinate where the (anti)neutrino spectra and energy loss rates due to hot \({}^{56}\)Fe are computed.
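For concreteness, the inversion described in the table footnote can be sketched numerically as follows. The constants are standard values, while the integration cutoff and the root bracket are our own illustrative choices; this is not the authors' code.

```python
# Sketch: inverting rho*Y_e = (pi^2 hbar^3 N_A)^{-1} Int (f_e- - f_e+) p^2 dp
# for the electron chemical potential mu_e (rest mass included), as in the
# footnote to Table 1. Natural units (MeV, hbar = c = 1).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

ME = 0.511           # electron mass, MeV
HBARC = 197.327e-13  # hbar*c in MeV*cm
NA = 6.02214e23      # Avogadro's number, mol^-1

def electron_density(mu_e, T):
    """Net electron number density in cm^-3 at chemical potential mu_e."""
    def integrand(p):
        e = np.sqrt(p * p + ME * ME)
        f_minus = 1.0 / (np.exp((e - mu_e) / T) + 1.0)   # electrons
        f_plus = 1.0 / (np.exp((e + mu_e) / T) + 1.0)    # positrons
        return (f_minus - f_plus) * p * p
    cut = mu_e + 30.0 * T + 10.0     # safely beyond the Fermi surface
    val, _ = quad(integrand, 0.0, cut)
    return val / (np.pi ** 2 * HBARC ** 3)

def solve_mu_e(rho, ye, T9):
    T = 8.617e-11 * T9 * 1e9          # k_B*T in MeV
    target = rho * ye * NA            # rho*Y_e*N_A = n_e in cm^-3
    return brentq(lambda mu: electron_density(mu, T) - target, 0.01, 50.0)

# Point (1) of Table 1: log(rho)=10.02, Y_e=0.46, T9=9.79 -> mu_e ~ 8.5 MeV
print(solve_mu_e(10 ** 10.01954, 0.46029, 9.79138))
```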
As seen from the plots, the thermal effects can noticeably change the strength functions. First, since our TQRPA calculations do not support the Brink hypothesis, the GT strength for upward (\(E>0\)) transitions exhibits a temperature dependence. Namely, due to the vanishing of pairing correlations and thermal weakening of the residual particle-hole interaction, the GT\({}_{0,\mp}\) resonance moves to lower energies. Moreover, the thermal smearing of the nuclear Fermi surfaces unblocks the low-energy GT transitions. In charge-exchange strength functions \(S_{GT_{-}}\) and \(S_{GT_{+}}\), these transitions lead to the appearance of the GT strength below the ground-state reaction threshold \(Q\) (for \({}^{56}\)Fe \(\rightarrow\)\({}^{56}\)Mn reactions \(Q=4.207\) MeV, and for \({}^{56}\)Fe \(\rightarrow\)\({}^{56}\)Co reactions \(Q=4.055\) MeV), while in the GT\({}_{0}\) distribution, finite temperature unblocks a low-energy strength below the experimental energy of the first \(1^{+}\) state in \({}^{56}\)Fe (\(E_{1^{+}_{1}}\approx 3.12\) MeV). Second, a temperature rise increases the population of nuclear excited states and enables downward (\(E<0\)) transitions in accordance with the detailed balance relations (3) and (4). Comparing the GT\({}_{-}\) and GT\({}_{+}\) distributions at \(T\neq 0\), we see that the main contribution to the negative-energy GT\({}_{-}\) strength comes from the transition inverse to the GT\({}_{+}\) resonance. At the same time, the main contribution to the GT\({}_{+}\) strength at \(E<0\) comes from transitions inverse to low-energy GT\({}_{-}\) transitions, while the contribution of the transition inverse to the GT\({}_{-}\) resonance is small. The reason for this is that the GT\({}_{-}\) resonance is much higher in energy than the GT\({}_{+}\) resonance, and therefore its inverse transition is strongly suppressed by the Boltzmann exponential factor in the detailed balance relation (4). In the GT\({}_{0}\) distribution, negative-energy transitions inverse to low-energy ones and to the excitation of the GT\({}_{0}\) resonance contribute to the downward strength.
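Since the detailed balance relations (3) and (4) are not reproduced in this section, the Boltzmann suppression they imply can be illustrated schematically. The sketch below assumes the simplified charge-neutral form \(S(-E)=e^{-E/T}S(E)\); the actual relations include additional factors for the charge-exchange channels, so this is an illustration only.

```python
# Sketch of the Boltzmann suppression discussed above: downward (E < 0)
# strength obtained from upward strength via a schematic factor exp(-E/T).
import numpy as np

def downward_strength(energies, strengths, T):
    """Map upward transitions (E > 0) to downward ones at -E."""
    e = np.asarray(energies, dtype=float)
    s = np.asarray(strengths, dtype=float)
    return -e, s * np.exp(-e / T)

# Two toy GT transitions: a low-energy one and a resonance at 15 MeV.
E_up, S_up = np.array([2.0, 15.0]), np.array([0.5, 8.0])
T = 0.8  # k_B*T in MeV (T9 ~ 9 regime)
E_dn, S_dn = downward_strength(E_up, S_up, T)
print(S_dn)  # resonance piece suppressed by exp(-15/0.8) ~ 7e-9
```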
It is important to emphasize that, since upward and downward strengths are connected by the detailed balance relation, thermal effects on the upward GT strength also influence the downward strength. In [12], this influence was studied by comparing the running (cumulative) sums for the GT\({}_{0}\) downward strength calculated using and without
Figure 2: GT\({}_{0}\) (**left** column), GT\({}_{-}\) (**middle** column) and GT\({}_{+}\) (**right** column) strength functions \(S_{\rm GT}\) for \({}^{56}\)Fe calculated at \(T_{9}=7.0\) (**upper** row), \(T_{9}=8.5\) (**middle** row) and \(T_{9}=10.0\) (**lower** row). The blue bars represent the ground-state strength functions.
using the Brink hypothesis. In particular, it was shown that both the thermal unblocking of low-energy strength and the lowering of the GT resonance significantly enhance the strength of negative-energy transitions. Eventually, this enhancement should have important consequences for (anti)neutrinos emitted due to de-excitation and decay processes.
### (Anti)neutrino Spectra and Energy Loss Rates
We now demonstrate the \(\nu_{e}\) and \(\bar{\nu}_{e}\) energy spectra for six selected points inside the star. Figure 3 shows the contribution of different nuclear weak processes to neutrino spectra for each point from Table 1. Although the shape and intensity of the spectrum depend on the temperature, density and electron fraction, there are features common for all points. Namely, for all points, the spectra are dominated by the EC contribution that exhibits a low-energy peak and a high-energy tail. The latter gradually transforms into the second peak when we move from the center of the star. Our analysis shows that low-energy neutrinos are emitted after electron capture excites the GT\({}_{+}\) resonance state, while high-energy neutrinos are caused by thermally unblocked low- and negative-energy GT\({}_{+}\) transitions.
In Figure 3, the importance of thermal effects is illustrated by comparing neutrino spectra produced by hot \({}^{56}\)Fe with that produced by a cold nucleus, when only EC is possible. As clearly seen from Figure 3, the thermal unblocking of the GT\({}_{+}\) strength at \(E<0\) (see the right panels in Figure 2) leads to the appearance of high-energy neutrinos in the spectra, whose fraction increases when we move from point (1) to (6). Moreover, as shown in the figure, the temperature-induced lowering of the GT\({}_{+}\) resonance amplifies the low-energy (\(E_{\nu_{e}}<5\) MeV) part of the spectra and shifts its maximum to higher energies.
The contribution of different nuclear weak processes to the antineutrino spectra produced by hot \({}^{56}\)Fe is shown in Figure 4. Our calculations clearly demonstrate the dominance of \(\nu_{e}\bar{\nu}_{e}\)-pair emission in the antineutrino spectra under pre-supernova conditions when the \(\beta^{-}\)-decay is strongly blocked by the electron chemical potential. The obtained \(\nu_{e}\bar{\nu}_{e}\)-spectra have a narrow low-energy peak at \(E_{\nu}\approx 1-2\) MeV and a broad high-energy one peaking around \(E_{\nu}\approx 5\) MeV. By matching with the GT\({}_{0}\) strength function in Figure 2, we conclude that the former arises due to thermally unblocked low-energy downward GT\({}_{0}\) transitions, while high-energy antineutrinos are emitted from the \(\nu_{e}\bar{\nu}_{e}\)-decay of the thermally populated
Figure 3: Neutrino spectra produced by \({}^{56}\)Fe due to electron capture (EC), \(\beta^{+}\)-decay and \(\nu\bar{\nu}\)-pair emission. Each set of curves corresponds to a specific point (n) (n=1, 2, 3, 4, 5, 6) on the mass coordinate listed in Table 1. The dashed curves represent neutrino spectra arising from EC on the ground state of \({}^{56}\)Fe.
GT\({}_{0}\) resonance. Since the GT\({}_{0}\) resonance in \({}^{56}\)Fe is located at relatively high energy, its thermal population rapidly decreases at low temperatures, leading to a decrease in the fraction of high-energy antineutrinos. Nevertheless, amongst weak nuclear processes, it is the \(\nu\bar{\nu}\)-decay of the GT\({}_{0}\) resonance that produces the high-energy antineutrinos of all flavors under the pre-supernova conditions listed in Table 1. It is also seen from Figure 4 that temperature reduction has a modest impact on the intensity of low-energy antineutrinos emitted due to the \(\nu_{e}\bar{\nu}_{e}\)-decay. At the same time, the reduction in the chemical potential \(\mu_{e}\) unblocks \(\beta^{-}\)-decay, which also emits low-energy antineutrinos. As a result, when we move from the center of the star, the contributions of the \(\nu_{e}\bar{\nu}_{e}\)-pair emission and \(\beta^{-}\)-decay to the low-energy antineutrino spectrum become comparable.
Figure 5 shows the evolution of the total (anti)neutrino spectrum \(\lambda\) (8) as we move from the center of the star. Since electron capture is a dominant source of neutrinos, the reduction in the chemical potential \(\mu_{\rm e}\) below the GT\({}_{+}\) resonance energy decreases the low-energy peaks in \(\lambda\) by about four orders of magnitude, while the high-energy tail is reduced by approximately three orders of magnitude. For this reason, the relative fraction of high-energy neutrinos in the spectrum increases. As discussed above, contributions from the \(\nu\bar{\nu}\)-pair emission and \(\beta^{-}\)-decay to the emission of low-energy antineutrinos demonstrate opposite trends when we move from points (1) to (6). Therefore, the intensity of the low-energy antineutrino emission is rather insensitive to the change in pre-supernova conditions. At the same time, the intensity of high-energy antineutrino emission is reduced by more than two orders of magnitude as the temperature decreases from \(T_{9}\approx 9.8\) to \(T_{9}\approx 7.0\).
In Figure 6, the emission rates \(\Lambda\), energy-loss rates \(P\), and the average energy \(\langle E_{\nu}\rangle=P/\Lambda\) for the electron (anti)neutrinos emitted due to weak processes with hot \({}^{56}\)Fe are shown. Referring to the figure, the neutrino rates demonstrate a strong dependence on pre-supernova conditions. Comparing with the ground-state rates, we conclude that the severe reduction in the neutrino rates is mainly caused by the decrease in the chemical potential, with temperature lowering giving only a minor contribution. In contrast, since pair emission depends only on temperature and the \(\beta^{-}\)-decay rate increases when \(\mu_{\rm e}\) decreases, the computed antineutrino rates demonstrate a more modest dependence on pre-supernova
Figure 4: Antineutrino spectra produced by \({}^{56}\)Fe due to positron capture (PC), \(\beta^{-}\)-decay and \(\nu\bar{\nu}\)-pair emission. Each set of curves corresponds to a specific point (n) (n=1, 2, 3, 4, 5, 6) on the mass coordinate listed in Table 1. The dashed curves represent antineutrino spectra arising from PC on the ground state of \({}^{56}\)Fe.
conditions. We also see that the finite temperature of the nucleus plays a more important role for antineutrino rates than for neutrino ones.
As for the average energy, for emitted neutrinos, it varies rather weakly around \(\langle E_{\nu}\rangle\approx 4.7\) MeV. This stability is a result of the increasing fraction of high-energy neutrinos emitted by de-excitation processes, which compensates for the decrease in available electron energy when we move from the center of the star. This is clearly seen if we compute \(\langle E_{\nu}\rangle\) for the cold \({}^{56}\)Fe. In that case, \(\langle E_{\nu}\rangle\) is essentially lower and shows a decreasing trend. At the same time, the average energy of antineutrinos demonstrates non-monotonic behavior due to the competition between \(\nu\bar{\nu}\)-decay and \(\beta^{-}\)-decay. Moreover, since in decay processes the released energy is shared between the two emitted particles, the antineutrino average energy is smaller than that for neutrinos.
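As a simple illustration of the quantities plotted in Figure 6, the rates and the average energy follow from any spectrum \(\lambda(E)\) by numerical integration, \(\Lambda=\int\lambda\,dE\), \(P=\int E\,\lambda\,dE\), and \(\langle E_{\nu}\rangle=P/\Lambda\). The sketch below uses a toy spectrum, not the TQRPA output.

```python
# Sketch: emission rate, energy-loss rate and average energy from a spectrum.
import numpy as np

E = np.linspace(0.0, 20.0, 2001)           # MeV
spec = E ** 2 * np.exp(-E / 1.5)           # toy spectrum lambda(E), arb. units
dE = E[1] - E[0]
Lam = np.sum(spec) * dE                    # emission rate Lambda
P = np.sum(E * spec) * dE                  # energy-loss rate P
print("average energy <E> = %.2f MeV" % (P / Lam))   # ~ 4.5 MeV for this toy
```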
## 4 Discussion and Perspectives
Neutrino spectra shown in Figure 3 confirm the conclusion of Ref. [7] that the single-strength approximation can be applied under stellar conditions with the electron chemical potential high enough to allow the excitation of the GT\({}_{+}\) resonance by electron capture. Such conditions occur during the collapse phase. However, our calculations clearly demonstrate that this approximation can fail in the pre-supernova phase when negative-energy GT
Figure 5: Neutrino and antineutrino spectra \(\lambda\) due to weak processes with hot \({}^{56}\)Fe for specific points (n) (n=1, 2, 3, 4, 5, 6) on the mass coordinate listed in Table 1.
Figure 6: Neutrino (**top**) and antineutrino (**bottom**) emission rate \(\Lambda\), energy-loss rate \(P\) and average energy \(\langle E_{\nu}\rangle\) due to hot \({}^{56}\)Fe for specific points (n) (n=1, 2, 3, 4, 5, 6) on the mass coordinate listed in Table 1. The dashed curves show \(\Lambda\), \(P\), and \(\langle E_{\nu}\rangle\) calculated for the ground state of \({}^{56}\)Fe.
transitions from thermally excited states noticeably contribute to electron capture and the resulting neutrino energy spectrum is double-peaked. On the whole, the present thermodynamically consistent calculations of electron neutrino spectra performed without assuming the Brink hypothesis indicate that the thermal effects on the GT\({}_{+}\) strength function shift the spectrum to higher energies, and thus make the neutrino detection more likely.
Including \(\nu\bar{\nu}\)-pair emission in the consideration shows that this neutral current process might be a dominant source of high-energy antineutrinos emitted via the de-excitation of the GT\({}_{0}\) resonance. Considering that the energy of the GT\({}_{0}\) resonance is related to the spin-orbit splitting, the high-energy peak in antineutrino spectra can be easily parameterized. Moreover, since the \(\nu\bar{\nu}\)-pair emission only depends on temperature, the detection of high-energy pre-supernova antineutrinos might be a test for thermodynamic conditions in the stellar interior.
The next evident step in our study of the role of nuclear weak processes in pre-supernova (anti)neutrino production is to compute overall (anti)neutrino spectra and energy loss rates, as well as their time evolution, for different stellar progenitors. To this end, calculations such as those performed here for \({}^{56}\)Fe are needed for the isotopes abundant in the stellar core; the results should then be integrated over the whole core and repeated for several time steps. Concerning the possibility of (anti)neutrino detection, we should take into account (anti)neutrino flavor oscillations, which change the initial flavor composition of the pre-supernova (anti)neutrino flux.
Conceptualization: A.A.D. and A.V.Y.; formal analysis: A.A.D., A.V.Y., N.V.D.-B. and A.I.V.; software: A.A.D., A.V.Y. and N.V.D.-B.; writing--original draft preparation: A.A.D.; writing--review and editing: A.A.D., A.I.V., A.V.Y. and N.V.D.-B. All authors have read and agreed to the published version of the manuscript.
A.V.Y. thanks RSF 21-12-00061 grant for support.
The authors declare no conflict of interest.
|
2307.01600 | Surface relief grating near-eye display waveguide design | A near-eye display device (NED) is a visual optical system that places a
miniature display in front of the human eye to provide an immersive viewing
experience. NEDs have been playing an irreplaceable role in both early military
flight applications and today's civil and entertainment applications. In this
paper, we propose an easy-to-machine design of a near-eye display based on
surface relief grating waveguides, taking into account the experience of
previous designs of near-eye displays, the superior performance of the design,
and the accuracy level of existing grating processing. The design is designed
to meet the requirements of large field of view and large outgoing pupil
extension as much as possible. The design is insensitive to the incident angle
and achieves a full-field field-of-view angle of 40°, an angular
uniformity error of 20% for diffraction efficiency, and an average diffraction
efficiency of 80% for the full field of view. Based on the design, the overall
simulation of the optical path of the NED device is completed, and the
illumination uniformity of the outgoing pupil expansion of the device is
analyzed through simulation. | Haodong Wang, Donglin Ma | 2023-07-04T09:39:20Z | http://arxiv.org/abs/2307.01600v1 | # Surface relief grating near-eye display waveguide design
###### Abstract
A near-eye display device (NED) is a visual optical system that places a miniature display in front of the human eye to provide an immersive viewing experience. NEDs have been playing an irreplaceable role in both early military flight applications and today's civil and entertainment applications. In this paper, we propose an easy-to-machine design of a near-eye display based on surface relief grating waveguides, taking into account the experience of previous designs of near-eye displays, the superior performance of the design, and the accuracy level of existing grating processing. The design aims to meet the requirements of a large field of view and a large exit-pupil expansion as far as possible. The design is insensitive to the incident angle and achieves a full-field field-of-view angle of 40\({}^{\circ}\), an angular uniformity error of 20% for diffraction efficiency, and an average diffraction efficiency of 80% for the full field of view. Based on the design, the overall simulation of the optical path of the NED device is completed, and the illumination uniformity of the exit-pupil expansion of the device is analyzed through simulation.
## 1 Introduction
Augmented Reality (AR) technology is a technology that superimposes virtual images on real-world things for display and interaction. Augmented reality augments or expands the real scene by using image information generated by computer technology, allowing users of augmented reality devices to observe both the real scene around them and the computer-generated augmented information [1-4]. Unlike Virtual Reality (VR) technology, users of augmented reality devices obtain virtual augmented information without losing real-scene information. In 2012, the Google Glass augmented reality glasses launched by Google Inc. marked the official entry of augmented-reality near-eye display systems into the consumer market [5]. Among the various technical solutions for transmitting optical paths in near-eye display devices, the most common ones are Bird Bath, free-form surface [6-7], and optical waveguide [8-12]. The Bird Bath scheme is the most mature and the most widely adopted in the commercial field; it has an excellent display effect, but its size is too large compared with ordinary glasses, so it is generally not considered the future direction of AR glasses. The advantages of the free-form solution are similar to those of the Bird Bath solution, with an excellent display effect and higher optical efficiency, but the free-form fabrication process is complex, mass production is difficult, and the size is also larger than that of ordinary glasses. The optical waveguide solution is currently regarded as the most likely solution for the future of AR glasses, because its volume is comparable to that of ordinary glasses, which best matches the popular vision of future AR glasses. Optical waveguide technology can be classified into geometric optical waveguide technology and diffractive optical waveguide technology according to the waveguide type; Lumus launched the DK-40 augmented reality glasses based on geometric optical waveguide technology in 2013 [13]. Geometric optical waveguide technology inserts reflector arrays into the waveguide based on the principles of geometric optics; its advantage is good imaging quality, but its disadvantages are the complex preparation process of the reflector arrays, difficulty of replication, low product yield, and the presence of ghost images. In 2015, Microsoft launched the Hololens smart AR glasses based on a diffractive optical waveguide [14].
Diffractive waveguide technology is based on the diffraction principle of gratings, and there are mainly two technical solutions: the Surface Relief Grating (SRG) and the Volume Hologram Grating (VHG). The surface relief grating solution uses photolithography to make the master and nanoimprinting to replicate it; replication is simple and the product yield is high, which gives it great advantages in mass production. Its disadvantage is that, due to the dispersion of the grating, a rainbow effect appears in the image and the color uniformity is poor. Volume holographic gratings are prepared by holographic interference exposure; they can solve the dispersion problem of surface relief gratings, but their preparation process is complex and the stability of mass production is poor.
In summary, geometric optical technology solutions for Near-eye Display (NED) devices have difficulty solving the problems of small FOV, large volume, and poor illumination and imaging quality. Although optical waveguide technology still has obvious disadvantages at present, it emerged and developed relatively recently and already holds a mainstream position in the current consumer market. It can therefore be expected that optical waveguide technology for NED devices still has a lot of room for improvement and expansion and is expected to become the optimal solution for near-eye display devices. Taking into account the accuracy level of existing grating processing, and aiming to meet the needs of a large field of view and a large pupil expansion as far as possible, this paper proposes a design of near-eye display devices based on a surface relief grating waveguide.
## 2 Principle
### Near-Eye Display (NED) Optical System
As a visual system, the NED optical system has a special image-generation method and transmission path, and its principles and design requirements differ somewhat from those of a traditional visual system. For example, in waveguide-based NED devices, the coupled-in region of the waveguide is required to have high diffraction efficiency, and the device needs a large exit pupil area and a uniform illumination distribution. These requirements serve to achieve better performance and user experience.
The main technical solutions for the implementation of waveguide-based near-eye display systems are geometric optical waveguides and grating waveguides, as shown in Figure 1. Among them, the grating waveguides are classified as surface relief grating and body holographic grating, and the main difference is the type of coupled-in and coupled-out grating microstructure. For all three types of NEDs, the common infrastructure includes microdisplay, collimating lens set, and optical waveguide.
Figure 1: Waveguide type near-eye display schematic
### Surface Relief Grating Waveguide
The surface relief grating has a subwavelength-level periodic surface microstructure, and the diffracted beam modulated by it propagates along different diffraction orders and directions, as shown in Figure 2(a), depending on the wavelength of light, the incident angle, and the waveguide material. When monochromatic light is considered, the grating period needs to be optimized to obtain a defined diffraction angle. The diffraction efficiency, i.e., the energy ratio of the diffracted light at the target order (the \(\pm 1\) orders), is an important indicator of the energy efficiency of the whole NED system. The diffraction efficiency of a grating with a subwavelength periodic structure depends on the groove profile and structural parameters of the grating, such as the groove depth, the tilt angle of the grating, the duty cycle, and the grating and coating materials. The grating microstructure is shown in Figure 2(b).
The key to the design of surface relief grating NED is to design and optimize the structure of coupled-in and coupled-out gratings. In the coupled-in region, high diffraction efficiency and good angular uniformity are required. In the coupled-out region, it is necessary to achieve pupil expansion and uniform luminance distribution. These elements are crucial for the design of surface relief grating NEDs.
In order to meet the design requirements of large exit pupil area, large field of view, and uniform brightness in the extended exit pupil area, the optical waveguide and grating structure need to be designed accurately. Since the feature size of the surface relief grating is at subwavelength level, the scalar diffraction theory cannot accurately calculate the results, so a rigorous vector diffraction analysis method is required.
Figure 2: (a) Surface relief grating schematic; (b) Surface relief grating microstructure diagram
By using the vector diffraction analysis method, the diffraction effect of the surface relief grating can be calculated more accurately. This method takes into account the polarization characteristics of light and the details of the waveguide structure, and can provide more accurate results. Through rigorous vector diffraction analysis, the performance of the grating can be evaluated and the structural parameters of the grating can be optimized to achieve brightness uniformity in the large exit pupil region, large field of view and extended exit pupil region as required by the design.
To obtain the maximum diffraction efficiency while reducing stray light from other diffraction orders, the SRG is designed to concentrate the diffracted energy in the first diffraction order, as governed by the grating diffraction equation
\[\Lambda\left(\sin i+\sin\theta\right)=\pm k\lambda\ \ \ \left(k=1,2,3\cdots\right) \tag{1}\]
where \(\Lambda\) is the grating period, \(i\) is the incident angle, \(\theta\) is the diffraction angle, \(k\) is the diffraction order, and \(\lambda\) is the wavelength of the incident light. In order for the diffracted light to satisfy the total internal reflection condition and propagate in the optical waveguide, the grating period \(\Lambda\) should be less than the wavelength \(\lambda\). In a subwavelength grating, the coupling effect of the components of the electromagnetic field at the boundary surface is not negligible. In this case, the approximate solutions of scalar diffraction theories (e.g., Kirchhoff diffraction theory, Rayleigh-Sommerfeld theory) are no longer applicable. Therefore, a rigorous vector diffraction theory approach is needed to solve the system of Maxwell's equations and obtain accurate diffraction efficiency results for subwavelength gratings.
Rigorous Coupled Wave Analysis (RCWA) is a rigorous vector diffraction theory method proposed by M.G. Moharam and T.K. Gaylord in 1980 [15]. It solves the electromagnetic field vector at each diffraction level by expanding the electromagnetic field vector into coupled wave components at each level, using electromagnetic field boundary conditions at the boundary interface at each level, and by mathematical recursion.
In order to meet the proposed grating design requirements, an analysis of the grating diffraction optical path is required. The expression for the incident wave vector \(k\) is
\[k_{m}=\frac{2\pi}{\lambda}n_{1}\left(\sin\theta_{0}\cos\varphi_{0},\cos\theta_ {0},\sin\theta_{0}\sin\varphi_{0}\right) \tag{2}\]
where \(\theta\), \(\varphi\) are the vector angles in the Cartesian coordinate system, then the vector of the mth diffraction level is
\[k_{i,m}=\frac{2\pi}{\lambda}n_{i}\left(\sin\theta_{i,m}\cos\varphi_{i,m},\cos\theta_{i,m},\sin\theta_{i,m}\sin\varphi_{i,m}\right) \tag{3}\]
where i=1 indicates the incident layer and i=0 indicates the reflected layer. For the K-vector incident light, the grating equation is obtained as
\[n_{i}\sin\theta_{i,m}\sin\varphi_{i,m}=n_{1}\sin\theta_{0}\sin\varphi_{0}=\gamma \tag{4}\]
\[n_{i}\sin\theta_{i,m}\cos\varphi_{i,m}=n_{1}\sin\theta_{0}\cos\varphi_{0}+m\frac{\lambda}{\Lambda}=\alpha_{0}+m\frac{\lambda}{\Lambda} \tag{5}\]
At the same time, the diffracted light needs to satisfy the total reflection angle requirement in the waveguide, which yields
\[\Lambda<\min\left\{\frac{\lambda}{\sqrt{1-\gamma^{2}}-\alpha_{0}},\frac{\lambda}{\sqrt{1-\gamma^{2}}+\alpha_{0}}\right\} \tag{6}\]
The grating period \(\Lambda\) does not depend on the refractive index of the substrate, and the grating period and the refractive index \(n\) of the waveguide material jointly determine the field of view of the NED device. In the grating structure, the polarization mode of the incident light has a significant effect on the diffraction efficiency of the grating. According to the rigorous coupled wave analysis, the TE polarization mode has a higher diffraction efficiency compared to the TM polarization mode under the subwavelength grating condition. Therefore, we choose the TE polarization mode as the incident light.
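A minimal numerical sketch of the period constraint (6) is given below, assuming incidence from air (\(n_{1}=1\)) and illustrative field angles of \(\pm 20^{\circ}\) consistent with the 40\({}^{\circ}\) full field of view; the numbers are ours and this is not the authors' design code.

```python
# Sketch: evaluating the period constraint (6) over the field of view
# for illustrative design values (lambda = 525 nm, n = 1.74, +/-20 deg in air).
import numpy as np

lam = 525.0  # nm
n = 1.74     # waveguide refractive index (N-LAF2)

def max_period(theta_deg, phi_deg=0.0):
    th, ph = np.radians(theta_deg), np.radians(phi_deg)
    gamma = np.sin(th) * np.sin(ph)    # gamma  = n1*sin(theta0)*sin(phi0)
    alpha0 = np.sin(th) * np.cos(ph)   # alpha0 = n1*sin(theta0)*cos(phi0)
    root = np.sqrt(1.0 - gamma ** 2)
    return min(lam / (root - alpha0), lam / (root + alpha0))

# The tightest bound comes from the extreme field angles:
periods = [max_period(t) for t in np.linspace(-20, 20, 41)]
print("grating period must satisfy Lambda < %.0f nm" % min(periods))
# For Lambda near this bound, one should also check that the diffracted
# wave still propagates in the substrate: alpha0 + lam/Lambda must stay
# below n over the whole field.
```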
In the grating coupling, two cases, transmission coupling and reflection coupling, need to be considered. In surface relief grating design, reflection coupling usually achieves higher diffraction efficiency. Therefore, we choose to use reflection-coupled gratings.
To achieve higher reflection diffraction efficiency, a metal substrate is used at the bottom of the grating with a complex refractive index of silver at 525 nm of 0.130+3.159i. The smaller real part and larger imaginary part of silver help to improve the reflection diffraction efficiency of the TE polarization component and absorb more light from the TM polarization component.
To ensure a good uniformity of diffraction efficiency over the full field of view, we coated a titanium dioxide film on the grating surface. The refractive index of titanium dioxide at 525 nm is about 2.985, which is much higher than that of the silver substrate. With the phase-matching condition, the effect of the incident angle on the diffraction efficiency can be significantly reduced, and thus the uniformity of the diffraction efficiency over the full field of view can be obtained.
## 3 Analysis and Discussion
### Projection Lens Design
The projection optical lens collimates the light beam from the miniature display in this near-eye display device; in order to keep the whole device lightweight and control its cost, as few and as low-cost lenses as possible are used. The field of view of the projection lens system is set to 20\({}^{\circ}\)\(\times\)40\({}^{\circ}\).
The optimized projection lens system is shown in Figure 3.
The total length of the collimation system is 37 mm; the system obtains a large field of view with good image quality while keeping the overall length and lens sizes small. The image quality is analyzed as follows:
The Modulation Transfer Function (MTF) gives the modulation depth as a function of spatial frequency and is currently the most important way to evaluate the imaging performance of an optical system. Spatial frequency is usually expressed in line pairs per millimeter (lp/mm), and the modulation depth is expressed as the luminance contrast of the line pairs. As shown in Figure 4, the MTF curves for all fields of view at 20 lp/mm are greater than 0.2, meeting the requirements of the visual optical system.
### Diffraction Grating Waveguide Design
Fig. 4: Modulation Transfer Function curve for each field of view of the projection lens system
Fig. 3: Optimization of the resulting projection lens system
We write the RCWA algorithm in Matlab and use Matlab to calculate and optimize the diffraction grating waveguide. Figure 5 shows the design of a diffractive waveguide with a diagonal field of view of 40\({}^{\circ}\) and an aspect ratio of 16:9. The design wavelength is 525 nm, the substrate material is N-LAF2, and the refractive index is 1.74.
Matlab software is used to optimize the design of the coupled-in grating with the RCWA algorithm. Figure 6 shows the simulation results for the diffraction efficiency of the optimized coupled-in grating over the full field of view and the design wavelength range, where the curve is the optimized diffraction efficiency. The diffraction efficiency reaches more than 90% at the central field of view for the center wavelength of 525 nm and more than 60% at the edge of the field of view, and the system field-of-view angle reaches 40\({}^{\circ}\). Meanwhile, the average diffraction efficiency reaches 80%, with a uniformity error of 20%.
Fig. 5: Surface relief grating waveguide near-eye display system
Fig. 6: In-coupling grating diffraction efficiency optimization results
When optimizing the design of the coupled-out grating, the uniformity of energy in the exit pupil region needs to be considered, especially the illumination uniformity in the extended region of a large field of view and a large exit pupil. In this design, the optimizations of the six coupled-out gratings should therefore be treated jointly.
In practice, the diffraction efficiency of the grating cannot reach 100% and the reflectance cannot reach 0%. In addition, the diffraction efficiency cannot be exactly the same over the full field of view and the full operating band. Therefore, when performing design optimization, it is necessary to reasonably allocate the energy of each grating between the reflection order \(R_{0}\) and the diffraction order \(R_{-1}\).
Through reasonable energy allocation, the design of each coupled-out grating in the exit pupil region can be optimized to obtain better illumination uniformity. Such a design facilitates the subsequent realization of illumination uniformity in the extended region of a large field of view and a large exit pupil, while meeting the requirements of the NED device input coupler.
When considering the coupled-out grating design, the following strategies can be adopted:
1. Coupled-out grating 1 is optimized to have a high uniformity of the reflected energy \(R_{0}\) and the diffracted energy \(R_{-1}\) over the full field of view, which ensures a more uniform beam energy distribution over the full field of view in the coupled-out region. At the same time, in order to make the diffraction-order energies of the six coupled-out gratings consistent, the diffraction-order energy \(R_{-1}\) of coupled-out grating 1 should be reduced and the reflection energy \(R_{0}\) should be increased as much as possible.
2. For the remaining coupled-out gratings, a consistent energy distribution across the six coupled-out gratings can be achieved by increasing the diffraction-order energy \(R_{-1}\) and decreasing the reflection energy \(R_{0}\), while maintaining the diffraction energy distribution of each field of view as much as possible (an idealized version of this allocation is sketched after this list).
3. It is worth noting that the transmission efficiency of the full field of view should be considered when designing the coupled-out grating, which is related to the ambient light transmission rate of the whole near-eye display system.
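To make the energy-allocation idea concrete, the following idealized sketch equalizes the energy extracted by \(N\) successive out-coupling interactions, ignoring absorption and angular dependence. The rule \(\eta_{k}=1/(N+1-k)\) is a textbook simplification, not the optimization actually performed here, but it explains qualitatively why the grating depth in this design grows from the first to the sixth coupled-out grating.

```python
# Sketch: idealized allocation of out-coupling efficiencies so that each of
# the N interactions extracts the same energy (losses ignored).
N = 6
eta = [1.0 / (N + 1 - k) for k in range(1, N + 1)]
remaining, out = 1.0, []
for e in eta:
    out.append(remaining * e)   # energy extracted at this grating
    remaining *= 1.0 - e        # energy left in the waveguide
print([round(e, 3) for e in eta])  # [0.167, 0.2, 0.25, 0.333, 0.5, 1.0]
print([round(o, 3) for o in out])  # equal shares of 1/6 each
```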
However, it should be noted that in practice the light incident on the pupil exit region is not a plane wave of uniform energy. The natural vignetting and edge field illumination attenuation as well as the full-field diffraction efficiency inhomogeneity of the coupled-in grating can affect the design of the coupled-out grating. One of the most important indicators of the imaging performance of NED devices is the uniformity of the extended pupil illumination.
The coupled-in and coupled-out gratings are integrated with the waveguide, and a comparison of the simulation results of the forward-incidence system before and after the optimization strategy is shown in Figure 7.
After exit pupil uniformity optimization, the simulation results of the full-field incidence system are shown in Figure 8. The illuminances of the regions in the eye movement range are 0.044, 0.041 and 0.038 (V/m)\({}^{2}\), respectively, with a uniformity error of 12%, which meets the
Fig. 7: Display effect of pupil area after optimization of pupil expansion uniformity
human eye viewing requirements. It is worth pointing out that the main light can be optimized according to the specific structure of the waveguide, which can also achieve a better display effect in the pupil exit area.
The final coupled-in and coupled-out grating data obtained after optimization are shown in Table 1. From the data in the table, it can be seen that the ratio of modulation depth to line width is less than 2 for all the gratings in this design, which makes them highly processable and easy to implement. The diffraction efficiency of the coupled-out grating is shown in Fig. 9(a), its transmission rate is shown in Fig. 9(b), and the transmission rate over the full field of view is above 70%. The waveguide cross section is shown in Fig. 10.
\begin{table}
\begin{tabular}{c c c c c} \hline Gratings & Angle/\({}^{\circ}\) & Fill factor & Depth/nm & Coating/nm \\ \hline In-couple & 25 & 66\% & 415 & 140 \\ Out-couple1 & 0 & 18\% & 143 & 166 \\ Out-couple2 & 0 & 18\% & 158 & 166 \\ Out-couple3 & 0 & 18\% & 179 & 262 \\ Out-couple4 & 0 & 18\% & 232 & 221 \\ Out-couple5 & 0 & 18\% & 304 & 144 \\ Out-couple6 & 0 & 18\% & 384 & 205 \\ \hline \end{tabular}
\end{table}
Table 1: **Optimization of the final In-coupling and Out-coupling gratings**
Figure 8: Illumination of the pupil exit area in the full field of view
Fig. 10: Waveguide cross-section diagram
Fig. 9: (a) Out-coupling grating full-field diffraction efficiency; (b) Out-coupling grating full-field transmittance
### Design results and testing results
Based on our simulations in Zemax as well as VirtualLab, the proposed AR module has superior imaging performance, and the testing facility we built has validated the design results. The design performance based on software simulation, as well as the testing performance, can be listed in the following table.
## 4 Conclusion
In this paper, we focus on the design and optimization of near-eye augmented reality devices (NEDs), using surface relief grating waveguide technology to achieve lightweight, compact, and portable devices with a large field of view and an extended exit pupil area, as well as good imaging performance, so that users can enjoy a good augmented reality experience even in motion.
In this study, we propose a near-eye augmented reality device that is compact, portable, easy to process and prepare, and has a large field of view and extended pupil area.
For the design of grating couplers in NED devices, we designed the coupled-in grating and coupled-out grating systems respectively. The initial grating structure design is obtained by rigorous coupling wave analysis, and the coupled-in grating is optimized. We propose a coupled-in grating design that achieves large field-of-view diffraction efficiency uniformity. The design is insensitive to the angle of incidence, with a full field-of-view angle of 40\({}^{\circ}\), and with TE polarization incidence, the angular uniformity of diffraction efficiency reaches 80%, and the average diffraction efficiency of the full field of view reaches 80%.
To meet the requirement of illumination uniformity in the extended pupil area, we propose the design of six coupled-out gratings, reasonably allocate the reflected energy \(R_{0}\) and the diffraction-order energy \(R_{-1}\) for each grating, and optimize the diffraction efficiency of the coupled-out gratings in the pupil area by combining the designed collimation system and the illumination uniformity of the coupled-in gratings.
Finally, we performed the overall simulation of the optical path of the NED device and analyzed the illumination uniformity in the extended pupil region. The simulation results show that after the optimization, the eye movement range of the NED device is 20mm x 16mm, and the illumination uniformity of the extended pupil area is improved from 23% to 88%. At the same time, the imaging quality of the device in the full field of view meets the human eye viewing requirements, and the MTF is greater than 0.4 at 18 lp/mm.
|
2303.02826 | Quickest Change Detection in Statistically Periodic Processes with
Unknown Post-Change Distribution | Algorithms are developed for the quickest detection of a change in
statistically periodic processes. These are processes in which the statistical
properties are nonstationary but repeat after a fixed time interval. It is
assumed that the pre-change law is known to the decision maker but the
post-change law is unknown. In this framework, three families of problems are
studied: robust quickest change detection, joint quickest change detection and
classification, and multislot quickest change detection. In the multislot
problem, the exact slot within a period where a change may occur is unknown.
Algorithms are proposed for each problem, and either exact optimality or
asymptotic optimal in the low false alarm regime is proved for each of them.
The developed algorithms are then used for anomaly detection in traffic data
and arrhythmia detection and identification in electrocardiogram (ECG) data.
The effectiveness of the algorithms is also demonstrated on simulated data. | Yousef Oleyaeimotlagh, Taposh Banerjee, Ahmad Taha, Eugene John | 2023-03-06T01:42:32Z | http://arxiv.org/abs/2303.02826v1 | # Quickest Change Detection in Statistically Periodic Processes with Unknown Post-Change Distribution
###### Abstract
Algorithms are developed for the quickest detection of a change in statistically periodic processes. These are processes in which the statistical properties are nonstationary but repeat after a fixed time interval. It is assumed that the pre-change law is known to the decision maker but the post-change law is unknown. In this framework, three families of problems are studied: robust quickest change detection, joint quickest change detection and classification, and multislot quickest change detection. In the multislot problem, the exact slot within a period where a change may occur is unknown. Algorithms are proposed for each problem, and either exact optimality or asymptotic optimal in the low false alarm regime is proved for each of them. The developed algorithms are then used for anomaly detection in traffic data and arrhythmia detection and identification in electrocardiogram (ECG) data. The effectiveness of the algorithms is also demonstrated on simulated data.
Robust change detection, joint change detection and fault isolation, multislot change detection, anomaly detection, traffic data, arrhythmia detection and identification.
## 1 Introduction
In the classical problem of quickest change detection (see [21], [25], [27]), a decision maker observes a stochastic process with a given distribution. At some point in time, the distribution of the process changes. The problem objective is to detect this change in distribution as quickly as possible, with minimum possible delay, subject to a constraint on the rate of false alarms. This problem has applications in statistical process control ([23]), sensor networks ([6]), cyber-physical system monitoring ([11]), regime changes in neural data ([1]), traffic monitoring ([8]), and in general, anomaly detection ([7, 8]).
In many applications of anomaly detection, the observed process has statistically periodic behavior. Some examples are as follows:
1. _Arrhythmia detection in ECG Data_: The electrocardiography (ECG) data has an almost periodic waveform pattern with a series of P waves, QRS complexes, and ST segments. An arrhythmia can cause a change in this regular pattern ([14]).
2. _Detecting changes in neural spike data_: In certain brain-computer interface (BCI) studies ([29]), an identical experiment is performed on an animal in a series of trials leading to similar firing patterns in each trial. An event or a trigger (which is part of the experiment) can change the firing pattern after a certain trial.
3. _Anomaly detection in city traffic data_: The count of vehicles at a street intersection in New York City (NYC) has been found to show regular patterns of busy and quiet periods ([2; 3; 4; 7; 8]). Congestion or an accident can cause a drop or increase in these vehicle counts.
4. _Social network data_: The count of Instagram messages posted near a CCTV camera in NYC has also been found to show approximately periodic behavior ([2; 3; 4; 7; 8]).
5. _Congestion mode detection on highways_: In traffic density estimation problems, it is of interest to detect the mode (congested or uncongested) of the traffic first before deciding on a model to be used for estimation ([28]). Motivated by the NYC data behavior, the traffic intensity in this application can also be modeled as statistically periodic.
In [5], a new class of stochastic processes, called independent and periodically identically distributed (i.p.i.d.) processes, has been introduced to model statistically periodic data. In this process, the sequence of random variables is independent and the distribution of the variables is periodic with a given period \(T\).
Statistically periodic processes can also be modeled using cyclostationary processes ([13]). However, modeling using i.p.i.d. processes allows for sample-level detection and the development of a strong optimality theory.
In [5], a Bayesian theory is developed for quickest change detection in i.p.i.d. processes. It is shown that, similar to the i.i.d. setting, it is optimal to use the Shiryaev statistic, i.e., the _a-posteriori_ probability that the change has already occurred given the data, for change detection. However, in the i.p.i.d. setting, a change is declared when the sequence of Shiryaev statistics crosses a sequence of time-varying but periodic thresholds. It is also shown that a single-threshold test is asymptotically optimal, as the constraint on the probability of a false alarm goes to zero. The proposed algorithm can also be implemented recursively and using finite memory. Thus, the set-up of i.p.i.d. processes gives an example of a non-i.i.d. setting in which exactly optimal algorithm can be implemented efficiently. The results in [5] is valid when both pre- and post-change distributions are known.
In this paper, we consider the problem of quickest change detection in i.p.i.d. processes when the post-change law is unknown. We consider three different formulations of the problem in minimax and Bayesian settings:
1. _Robust quickest change detection_: In Section 2, we first consider the problem of robust quickest change detection in i.p.i.d. processes. In this problem, we assume that the post-change family of distributions is not known but belongs to an uncertainty class. We further assume that the post-change family has a distribution that is least favorable. We then show that the algorithm designed using the least favorable distribution is minimax robust for the Bayesian delay metric.
2. _Quickest detection and fault identification_: In Section 3, we consider the problem in which the post-change distribution is unknown but belongs to a finite class of distributions. For this setup, we solve the problem of joint quickest change detection and isolation in i.p.i.d. processes. We also apply the developed algorithm to real ECG data to detect heart arrhythmia.
3. _Multislot quickest change detection_: In Section 4, we consider the problem of multislot quickest change detection in i.p.i.d. processes. In this problem, the exact time slots in a given period where the change can occur are unknown. We show that a mixture-based test is asymptotically optimal.
A salient feature of our work is that in addition to developing the optimality theory for the proposed algorithms, we also apply them to real or simulation data to demonstrate their effectiveness. Specifically, in Section 5.1, we study anomaly detection in traffic data. In Section 5.4, we apply the developed algorithms to arrhythmia detection and isolation in ECG data. In Section 5.3, Section 5.2, and Section 5.4.5, we also apply our algorithms to simulated data to show their effectiveness.
## 2 Robust Quickest Change Detection
### Model and Problem Formulation
We first define the process that we will use to model statistically periodic random processes in this paper.
**Definition 2.1** ([5]).: A random process \(\{X_{n}\}\) is called independent and periodically identically distributed (i.p.i.d) if
1. The random variables \(\{X_{n}\}\) are independent.
2. If \(X_{n}\) has density \(f_{n}\), for \(n\geq 1\), then there is a positive integer \(T\) such that the sequence of densities \(\{f_{n}\}\) is periodic with period \(T\): \[f_{n+T}=f_{n},\quad\forall n\geq 1.\]
The law of an i.p.i.d. process is completely characterized by the finite-dimensional product distribution of \((X_{1},\ldots,X_{T})\) or the set of densities \((f_{1},\cdots,f_{T})\), and we say that the process is i.p.i.d. with the law \((f_{1},\cdots,f_{T})\). The change point problem of interest is the following. In the normal regime, the data is modeled as an i.p.i.d. process with law \((f_{1},\cdots,f_{T})\). At some point in time, due to an event, the distribution of the i.p.i.d. process deviates from \((f_{1},\cdots,f_{T})\). Specifically, consider another periodic sequence of densities \(\{g_{n}\}\) such that
\[g_{n+T}=g_{n},\quad\forall n\geq 1.\]
It is assumed that at the change point \(\nu\), the law of the i.p.i.d. process switches from \((f_{1},\cdots,f_{T})\) to \((g_{1},\cdots,g_{T})\):
\[X_{n}\sim\begin{cases}f_{n},&\quad\forall n<\nu,\\ g_{n}&\quad\forall n\geq\nu.\end{cases} \tag{1}\]
The densities \((g_{1},\cdots,g_{T})\) need not be all different from the set of densities \((f_{1},\cdots,f_{T})\), but we assume that there exists at least one \(i\) such that they are different:
\[g_{i}\neq f_{i},\quad\text{for some }i=1,2,\cdots,T. \tag{2}\]
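As an illustration of the model (1), the following sketch simulates a Gaussian i.p.i.d. process whose periodic mean sequence shifts at the change point; all parameter values are ours and purely illustrative.

```python
# Sketch: simulating the change-point model (1) for a Gaussian i.p.i.d.
# process with period T, where the change shifts the periodic mean sequence.
import numpy as np

rng = np.random.default_rng(0)
T = 24
mu_f = np.sin(2 * np.pi * np.arange(T) / T)   # pre-change periodic means
mu_g = mu_f + 0.75                            # post-change periodic means
nu, N = 120, 240                              # change point and horizon

# X_n ~ f_n for n < nu and X_n ~ g_n for n >= nu (densities repeat with T).
n = np.arange(1, N + 1)
means = np.where(n < nu, mu_f[(n - 1) % T], mu_g[(n - 1) % T])
X = means + rng.standard_normal(N)
```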
In this paper, we assume that the post-change law \((g_{1},\cdots,g_{T})\) is unknown. Further, there are \(T\) families of distributions \(\{\mathcal{P}_{i}\}_{i=1}^{T}\) such that
\[g_{i}\in\mathcal{P}_{i},\quad i=1,2,\ldots,T.\]
The families \(\{\mathcal{P}_{i}\}_{i=1}^{T}\) are known to the decision maker. Below, we use the notation
\[G=(g_{1},g_{2},\ldots,g_{T})\]
to denote the post-change i.p.i.d. law.
Let \(\tau\) be a stopping time for the process \(\{X_{n}\}\), i.e., a positive integer-valued random variable such that the event \(\{\tau\leq n\}\) belongs to the \(\sigma\)-algebra generated by \(X_{1},\cdots,X_{n}\). In other words, whether or not \(\tau\leq n\) is completely determined by the first \(n\) observations. We declare that a change has occurred at the stopping time \(\tau\). To find the best stopping rule to detect the change in distribution, we need a performance criterion. Towards this end, we model the change point \(\nu\) as a random variable with a prior distribution given by
\[\pi_{n}=\mathsf{P}(\nu=n),\quad\text{ for }n=1,2,\cdots.\]
For each \(n\in\mathbb{N}\), we use \(\mathsf{P}_{n}^{G}\) to denote the law of the observation process \(\{X_{n}\}\) when the change occurs at \(\nu=n\) and the post-change law is \(G\). We use \(\mathsf{E}_{n}^{G}\) to denote the corresponding expectation. Using this notation, we define the average probability measure
\[\mathsf{P}^{\pi,G}=\sum_{n=1}^{\infty}\pi_{n}\,\mathsf{P}_{n}^{G}.\]
To capture a penalty for the false alarms, in the event that the stopping time occurs before the change, we use the probability of a false alarm defined as
\[\mathsf{P}^{\pi,G}(\tau<\nu).\]
Note that the probability of a false alarm \(\mathsf{P}^{\pi,G}(\tau<\nu)\) is not a function of the post-change law \(G\). Hence, in the following, we suppress the mention of \(G\) and refer to the probability of false alarm only by
\[\mathsf{P}^{\pi}(\tau<\nu).\]
To penalize the detection delay, we use the average detection delay given by
\[\mathsf{E}^{\pi,G}\left[(\tau-\nu)^{+}\right],\]
where \(x^{+}=\max\{x,0\}\).
The optimization problem we are interested in solving is
\[\inf_{\tau\in\mathbf{C}_{\alpha}}\ \sup_{G:g_{i}\in\mathcal{P}_{i},i\leq T}\ \mathsf{E}^{\pi,G}\left[(\tau-\nu)^{+}\right], \tag{3}\]
where
\[\mathbf{C}_{\alpha}=\left\{\tau:\mathsf{P}^{\pi}(\tau<\nu)\leq \alpha\right\},\]
and \(\alpha\) is a given constraint on the probability of a false alarm.
In the case when the families of distributions \(\{\mathcal{P}_{i}\}_{i=1}^{T}\) are singleton sets, i.e., when the post-change law is known and equal to a fixed \(G\), a Lagrangian relaxation of this problem was investigated in [5]. Understanding the solution reported in [5] is fundamental to solving the robust problem in (3). In the next section, we discuss the solution provided in [5] and also its implication for the constrained version in (3).
### Exactly and Asymptotically Optimal Solutions for Known Post-Change Law
For known post-change law \(G=(g_{1},\ldots,g_{T})\) and geometrically distributed change point, it is shown in [5] that the exact optimal solution to a relaxed version of (3) is a stopping rule based on a periodic sequence of thresholds. It is also shown that it is sufficient to use only one threshold in the asymptotic regime of false alarm constraint \(\alpha\to 0\). Furthermore, the assumption of a geometrically distributed change point can be relaxed in the asymptotic regime. In the rest of this section, we assume that \(G\) is known and fixed.
#### 2.2.1 Exactly Optimal Algorithm
Let the change point \(\nu\) be a geometric random variable:
\[\mathsf{P}(\nu=n)=(1-\rho)^{n-1}\rho,\quad\text{ for }n=1,2,\cdots.\]
The relaxed version of (3) (for known \(G\)) is
\[\inf_{\tau}\ \mathsf{E}^{\pi,G}\left[(\tau-\nu)^{+}\right]+ \lambda_{f}\ \mathsf{P}^{\pi}(\tau<\nu), \tag{4}\]
where \(\lambda_{f}>0\) is a penalty on the cost of false alarms. Now, define \(p_{0}=0\) and
\[p_{n}=\mathsf{P}^{\pi,G}(\nu\leq n|X_{1},\cdots,X_{n}),\text{ for }n\geq 1. \tag{5}\]
Then, (4) is equivalent to solving
\[\inf_{\tau}\ \mathsf{E}^{\pi,G}\left[\sum_{n=0}^{\tau-1}p_{n}+ \lambda_{f}(1-p_{\tau})\right]. \tag{6}\]
The belief \(p_{n}\) can be updated recursively using the following equations: \(p_{0}=0\) and for \(n\geq 1\),
\[p_{n}=\frac{\tilde{p}_{n-1}\ g_{n}(X_{n})}{\tilde{p}_{n-1}\ g_{n}(X_{n})+(1- \tilde{p}_{n-1})f_{n}(X_{n})}, \tag{7}\]
where
\[\tilde{p}_{n-1}=p_{n-1}+(1-p_{n-1})\rho.\]
Since these updates are not stationary, the problem cannot be solved using classical optimal stopping theory [22] or dynamic programming [9]. However, the structure in (7) repeats after every fixed time \(T\). Motivated by this, in [5], a control theory is developed for Markov decision processes with periodic transition and cost structures. This new control theory is then used to solve the problem in (6).
**Theorem 2.2** ([5]).: _There exist thresholds \(A_{1}\), \(A_{2}\),..., \(A_{T}\), \(A_{i}\geq 0,\forall i\), such that the stopping rule_
\[\tau^{*}=\inf\{n\geq 1:p_{n}\geq A_{(n\bmod T)}\}, \tag{8}\]
_where \((n\bmod T)\) represents \(n\) modulo \(T\), is optimal for problem in (6). These thresholds depend on the choice of \(\lambda_{f}\)._
In fact, the solution given in [5] is valid for a more general change point problem in which separate delay and false alarm penalties are used for each time slot. We do not discuss it here.
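A minimal sketch of the recursion (7) combined with the threshold rule (8) is given below for the Gaussian i.p.i.d. process simulated earlier. For simplicity, the usage line passes a constant threshold sequence, the choice analyzed in the next subsection; the simulated change point is deterministic while the statistic assumes a geometric prior, so the example only illustrates the mechanics.

```python
# Sketch: the recursion (7) for p_n and the threshold rule (8), for Gaussian
# densities f_i = N(mu_f[i], 1) and g_i = N(mu_g[i], 1).
import numpy as np
from scipy.stats import norm

def shiryaev_ipid(X, mu_f, mu_g, rho, thresholds):
    T = len(mu_f)
    p = 0.0
    for n, x in enumerate(X, start=1):
        i = (n - 1) % T                       # slot within the period
        f, g = norm.pdf(x, mu_f[i], 1.0), norm.pdf(x, mu_g[i], 1.0)
        p_tilde = p + (1.0 - p) * rho
        p = p_tilde * g / (p_tilde * g + (1.0 - p_tilde) * f)
        if p >= thresholds[n % len(thresholds)]:
            return n                          # stopping time tau*
    return None

alpha = 0.01
tau = shiryaev_ipid(X, mu_f, mu_g, rho=0.01,
                    thresholds=[1.0 - alpha])  # single-threshold variant
print("declared change at n =", tau, "(true change at", nu, ")")
```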
#### 2.2.2 Asymptotically Optimal Algorithm
For large values of \(T\), which can easily be more than a million for certain applications, it is computationally not feasible to store \(T\) different threshold values. Thus, it is of interest to see if a single-threshold algorithm is optimal. While the strictly optimal algorithms of [5] use periodic thresholds, it is also shown in [5] that a single-threshold test is asymptotically optimal in the regime of a low probability of false alarm. We discuss this result below.
Let there exist \(d\geq 0\) such that
\[\lim_{n\rightarrow\infty}\frac{\log\mathsf{P}(\nu>n)}{n}=-d. \tag{9}\]
If \(\pi=\text{Geom}(\rho)\), then \(d=|\log(1-\rho)|\). Further, let
\[I=\frac{1}{T}\sum_{i=1}^{T}D(g_{i}\parallel f_{i}), \tag{10}\]
where \(D(g_{i}\parallel f_{i})\) is the Kullback-Leibler divergence between the densities \(g_{i}\) and \(f_{i}\):
\[D(g_{i}\parallel f_{i})=\int g_{i}(x)\log\frac{g_{i}(x)}{f_{i}(x)}dx.\]
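For the Gaussian example simulated earlier, \(D(\mathcal{N}(a,1)\,\|\,\mathcal{N}(b,1))=(a-b)^{2}/2\), so the information number (10) has a closed form; a short check:

```python
# Sketch: the information number I in (10) for the Gaussian example above,
# using the closed form D(N(a,1) || N(b,1)) = (a - b)^2 / 2.
import numpy as np

I = np.mean((mu_g - mu_f) ** 2 / 2.0)
print("I =", I)   # 0.75^2 / 2 = 0.28125 nats per sample
```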
The following theorem is proved in [5].
**Theorem 2.3** ([5]).: _Let the information number \(I\) be as defined in (10) and satisfy \(0<I<\infty\). Also, let \(d\) be as in (9). Then, with_
\[A_{1}=A_{2}=\cdots=A_{T}=1-\alpha,\]
\(\tau^{*}\in\mathbf{C}_{\alpha}\)_, and_
\[\begin{split}\mathsf{E}^{\pi,G}\left[(\tau^{*}-\nu)^{+}\right]& =\inf_{\tau\in\mathbf{C}_{\alpha}}\mathsf{E}^{\pi,G}\left[(\tau- \nu)^{+}\right](1+o(1))\\ &=\frac{|\log\alpha|}{I+d}(1+o(1)),\quad\text{ as }\alpha\to 0.\end{split} \tag{11}\]
_Here \(o(1)\to 0\) as \(\alpha\to 0\)._
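Continuing the running example, the first-order delay approximation in (11) can be evaluated directly; the numbers below are illustrative.

```python
# Sketch: the first-order delay approximation (11) for the running example.
import numpy as np

rho, alpha = 0.01, 0.01
d = abs(np.log(1.0 - rho))                 # d = |log(1 - rho)| for Geom(rho)
approx_delay = abs(np.log(alpha)) / (I + d)
print("E[(tau*-nu)^+] ~ %.1f samples as alpha -> 0" % approx_delay)
```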
#### 2.2.3 Solution to the Constraint Version of the Problem
We now argue that, just as in the classical case, the relaxed version of the problem (6) can be used to provide a solution to the constraint version of the problem (3). We provide proof for completeness.
**Lemma 2.4**.: _If \(\alpha\) is a value of the probability of false alarm achievable by the optimal stopping rule \(\tau^{*}\) in (6), then \(\tau^{*}\) is also optimal for the constraint problem (3) for this \(\alpha\)._
Proof.: By Theorem 2.2, we have
\[\mathsf{E}^{\pi,G}\left[(\tau^{*}-\nu)^{+}\right]+\lambda_{f}\,\mathsf{P}^{ \pi}(\tau^{*}<\nu)\leq\mathsf{E}^{\pi,G}\left[(\tau-\nu)^{+}\right]+\lambda_{f }\,\mathsf{P}^{\pi}(\tau<\nu). \tag{12}\]
If \(\mathsf{P}^{\pi}(\tau^{*}<\nu)=\alpha\) and \(\mathsf{P}^{\pi}(\tau<\nu)\leq\alpha\), then
\[\begin{split}\mathsf{E}^{\pi,G}&\left[(\tau^{*}- \nu)^{+}\right]+\lambda_{f}\,\mathsf{P}^{\pi}(\tau^{*}<\nu)=\mathsf{E}^{\pi,G} \left[(\tau^{*}-\nu)^{+}\right]+\lambda_{f}\,\alpha\\ &\leq\mathsf{E}^{\pi,G}\left[(\tau-\nu)^{+}\right]+\lambda_{f}\, \mathsf{P}^{\pi}(\tau<\nu)\leq\mathsf{E}^{\pi,G}\left[(\tau-\nu)^{+}\right]+ \lambda_{f}\,\alpha.\end{split} \tag{13}\]
Canceling \(\lambda_{f}\,\alpha\) from both sides we get
\[\mathsf{E}^{\pi,G}\left[(\tau^{*}-\nu)^{+}\right]\leq\mathsf{E}^{\pi,G}\left[ (\tau-\nu)^{+}\right].\]
The following lemma guarantees that a wide range of probability of false alarm \(\alpha\) is achievable by the optimal stopping rule \(\tau^{*}\).
**Lemma 2.5**.: _As we increase \(\lambda_{f}\to\infty\) in (6), the probability of false alarm achieved by the optimal stopping rule \(\tau^{*}\) goes to zero._
Proof.: As \(\lambda_{f}\to\infty\), if the probability of false alarm for \(\tau^{*}\) stays bounded away from zero, then the Bayesian risk
\[\mathsf{E}^{\pi,G}\left[(\tau^{*}-\nu)^{+}\right]+\lambda_{f}\; \mathsf{P}^{\pi}(\tau^{*}<\nu)\]
would diverge to infinity. This will contradict the fact that \(\tau^{*}\) is optimal because we can get a smaller risk at large enough \(\lambda_{f}\) by stopping at a large enough deterministic time.
### Optimal Robust Algorithm for Unknown Post-Change Law
We now assume that the post-change law \(G\) is unknown and provide the optimal solution to (3) under assumptions on the families of post-change laws \(\{\mathcal{P}_{i}\}_{i=1}^{T}\). Specifically, we extend the results in [26] for i.i.d. processes to i.p.i.d. processes. We assume in the rest of this section that all densities involved are equivalent to each other (absolutely continuous with respect to each other). Also, we assume that the change point \(\nu\) is a geometrically distributed random variable.
To state the assumptions on \(\{\mathcal{P}_{i}\}_{i=1}^{T}\), we need some definitions. We say that a random variable \(Z_{2}\) is stochastically larger than another random variable \(Z_{1}\) if
\[\mathsf{P}(Z_{2}\geq t)\geq\mathsf{P}(Z_{1}\geq t),\quad\text{for all }t\in \mathbb{R}.\]
We use the notation
\[Z_{2}\succ Z_{1}.\]
If \(\mathcal{L}_{Z_{2}}\) and \(\mathcal{L}_{Z_{1}}\) are the probability laws of \(Z_{2}\) and \(Z_{1}\), then we also use the notation
\[\mathcal{L}_{Z_{2}}\succ\mathcal{L}_{Z_{1}}.\]
We now introduce the notion of stochastic boundedness in i.p.i.d. processes. In the following, we use
\[\mathcal{L}(\phi(X),g)\]
to denote the law of some function \(\phi(X)\) of the random variable \(X\), when the variable \(X\) has density \(g\).
**Definition 2.6** (Stochastic Boundedness in i.p.i.d. Processes; Least Favorable Law).:
We say that the family \(\{\mathcal{P}_{i}\}_{i=1}^{T}\) is stochastically bounded by the i.p.i.d. law
\[\bar{G}=(\bar{g}_{1},\bar{g}_{2},\ldots,\bar{g}_{T}),\]
and call \(\bar{G}\) the least favorable law (LFL), if
\[\bar{g}_{i}\in\mathcal{P}_{i},\quad i=1,2,\ldots,T,\]
and
\[\mathcal{L}\left(\log\frac{\bar{g}_{i}(X_{i})}{f_{i}(X_{i})},g_{ i}\right)\succ\mathcal{L}\left(\log\frac{\bar{g}_{i}(X_{i})}{f_{i}(X_{i})},\bar{g}_ {i}\right),\quad\text{for all}\ \ g_{i}\in\mathcal{P}_{i},\quad i=1,2,\ldots,T. \tag{14}\]
Consider the stopping rule \(\bar{\tau}^{*}\) designed using the LFL \(\bar{G}=(\bar{g}_{1},\bar{g}_{2},\ldots,\bar{g}_{T})\):
\[\bar{\tau}^{*}=\inf\{n\geq 1:\bar{p}_{n}\geq A_{(n\bmod T)}\}, \tag{15}\]
where \(\bar{p}_{0}=0\), and
\[\bar{p}_{n}=\frac{\tilde{p}_{n-1}\ \bar{g}_{n}(X_{n})}{\tilde{p}_{n-1} \ \bar{g}_{n}(X_{n})+(1-\tilde{p}_{n-1})f_{n}(X_{n})}, \tag{16}\]
where
\[\tilde{p}_{n-1}=\bar{p}_{n-1}+(1-\bar{p}_{n-1})\rho.\]
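A minimal sketch of this recursion is given below; the pre-change and least favorable densities are assumed to be available as callables, and the function name and interface are illustrative.

```python
def robust_stopping_time(x, f_pdfs, g_bar_pdfs, rho, thresholds):
    """Run the rule (15)-(16). f_pdfs and g_bar_pdfs are length-T lists of
    density functions; thresholds is a length-T list of values A_0, ..., A_{T-1}."""
    T = len(f_pdfs)
    p = 0.0                                  # \bar{p}_0 = 0
    for n, xn in enumerate(x, start=1):
        i = (n - 1) % T                      # periodic slot index
        p_tilde = p + (1 - p) * rho          # prior update in (16)
        num = p_tilde * g_bar_pdfs[i](xn)
        p = num / (num + (1 - p_tilde) * f_pdfs[i](xn))
        if p >= thresholds[n % T]:           # threshold A_{(n mod T)} in (15)
            return n
    return None                              # no alarm within the data
```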
We now state our main result on robust quickest change detection in i.p.i.d. processes.
**Theorem 2.7**.: _Suppose the following conditions hold:_
1. _The family_ \(\{\mathcal{P}_{i}\}_{i=1}^{T}\) _is stochastically bounded by the i.p.i.d. law_ \[\bar{G}=(\bar{g}_{1},\bar{g}_{2},\ldots,\bar{g}_{T}).\]
2. _Let_ \(\alpha\in[0,1]\) _be a constraint such that_ \[\mathsf{P}^{\pi}(\bar{\tau}^{*}<\nu)=\alpha,\] _where_ \(\bar{\tau}^{*}\) _is the optimal rule designed using the LFL_ (_15_)_._
3. _All likelihood ratio functions involved are continuous._
4. _The change point_ \(\nu\) _is geometrically distributed._
_Then, the stopping rule \(\bar{\tau}^{*}\) in (15) designed using the LFL is optimal for the robust constraint problem in (3)._
Proof.: The key step in the proof is to show that for each \(k\in\mathbb{N}\),
\[\begin{split}\mathsf{E}_{k}^{\bar{G}}\left[(\bar{\tau}^{*}-k)^{+}|\mathcal{F}_{k-1}\right]&\geq\mathsf{E}_{k}^{G}\left[(\bar{\tau}^{*}-k)^{+}|\mathcal{F}_{k-1}\right],\\ \text{for all }G=(g_{1},\ldots g_{T}):g_{i}\in\mathcal{P}_{i},\;i\leq T,\end{split} \tag{17}\]
where \(\mathcal{F}_{k-1}\) is the sigma algebra generated by observations \(X_{1},\ldots,X_{k-1}\). If (17) is true then we have for each \(k\in\mathbb{N}\),
\[\begin{split}\mathsf{E}_{k}^{\bar{G}}\left[(\bar{\tau}^{*}-k)^{+} \right]&\geq\mathsf{E}_{k}^{G}\left[(\bar{\tau}^{*}-k)^{+}\right] \\ \text{for all }G=(g_{1},\ldots g_{T}):g_{i}\in\mathcal{P}_{i},\;i \leq T.\end{split} \tag{18}\]
Averaging over the prior on the change point, we get
\[\begin{split}\mathsf{E}^{\pi,\bar{G}}\left[(\bar{\tau}^{*}-\nu)^{+ }\right]&=\sum_{k}\pi_{k}\mathsf{E}_{k}^{\bar{G}}\left[(\bar{\tau} ^{*}-k)^{+}\right]\geq\sum_{k}\pi_{k}\mathsf{E}_{k}^{G}\left[(\bar{\tau}^{*}-k) ^{+}\right]=\mathsf{E}^{\pi,G}\left[(\bar{\tau}^{*}-\nu)^{+}\right],\\ \text{for all }G=(g_{1},\ldots g_{T}):g_{i}\in\mathcal{P}_{i},\;i \leq T.\end{split} \tag{19}\]
The last equation gives
\[\begin{split}\mathsf{E}^{\pi,\bar{G}}\left[(\bar{\tau}^{*}-\nu)^{+ }\right]&\geq\mathsf{E}^{\pi,G}\left[(\bar{\tau}^{*}-\nu)^{+} \right],\\ \text{for all }G=(g_{1},\ldots g_{T}):g_{i}\in\mathcal{P}_{i},\;i \leq T.\end{split} \tag{20}\]
This implies that
\[\mathsf{E}^{\pi,\bar{G}}\left[(\bar{\tau}^{*}-\nu)^{+}\right]=\sup_{G:g_{i} \in\mathcal{P}_{i},i\leq T}\mathsf{E}^{\pi,G}\left[(\bar{\tau}^{*}-\nu)^{+} \right], \tag{21}\]
where we have equality because the law \(\bar{G}\) belongs to the family considered on the right. Now, if \(\tau\) is any stopping rule satisfying the probability of false alarm constraint of \(\alpha\), then since \(\bar{\tau}^{*}\) is the optimal test for the LFL \(\bar{G}\), we have (see Theorem 2.2 and Lemma 2.4)
\[\begin{split}\sup_{G:g_{i}\in\mathcal{P}_{i},i\leq T}\mathsf{E}^ {\pi,G}\left[(\tau-\nu)^{+}\right]&\geq\mathsf{E}^{\pi,\bar{G}} \left[(\tau-\nu)^{+}\right]\geq\mathsf{E}^{\pi,\bar{G}}\left[(\bar{\tau}^{*}- \nu)^{+}\right]\\ &=\sup_{G:g_{i}\in\mathcal{P}_{i},i\leq T}\mathsf{E}^{\pi,G} \left[(\bar{\tau}^{*}-\nu)^{+}\right].\end{split} \tag{22}\]
The last equation proves the robust optimality of the stopping rule \(\bar{\tau}^{*}\) for the problem in (3).
We now prove the key step (17). Towards this end, we prove that for every integer \(N\geq 0\),
\[\begin{split}\mathsf{P}_{k}^{\bar{G}}\left[(\bar{\tau}^{*}-k)^{+} >N|\mathcal{F}_{k-1}\right]&\geq\mathsf{P}_{k}^{G}\left[(\bar{ \tau}^{*}-k)^{+}>N|\mathcal{F}_{k-1}\right],\\ \text{for all }G=(g_{1},\ldots g_{T}):g_{i}\in\mathcal{P}_{i},\;i \leq T.\end{split} \tag{23}\]
This is trivially true for \(N=0\) since the event \(\{(\bar{\tau}^{*}-k)^{+}>0\}\) is \(\mathcal{F}_{k-1}\)-measurable. So we
only prove it for \(N\geq 1.\) Towards this end, we first have
\[\begin{split}\mathsf{P}_{k}^{\bar{G}}\left[(\bar{\tau}^{*}-k)^{+} \leq N|\mathcal{F}_{k-1}\right]&=\mathsf{P}_{k}^{\bar{G}}\left[ \bar{\tau}^{*}\leq k+N|\mathcal{F}_{k-1}\right]\\ &\quad=\mathsf{P}_{k}^{\bar{G}}\left[f(h_{1}(X_{1}),h_{2}(X_{2}), \ldots,h_{k+N}(X_{k+N}))\;\geq\;0\;|\;\mathcal{F}_{k-1}\right],\end{split} \tag{24}\]
where
\[h_{i}(x)=\log\frac{\bar{g}_{i}(x)}{f_{i}(x)},\]
the function \(f(z_{1},z_{2},\ldots,z_{k+N})\) is given by

\[f(z_{1},z_{2},\ldots,z_{k+N})=\max_{1\leq n\leq k+N}\left(\sum_{j=1}^{n}(1-\rho)^{j-1}\rho\;\exp\left(\sum_{i=j}^{n}z_{i}\right)-B_{n}\right), \tag{25}\]
and
\[B_{n}=\frac{A_{n}}{1-A_{n}}(1-\rho)^{n}.\]
Now recall from (14) that
\[\mathcal{L}\left(h_{i}(X_{i}),g_{i}\right)\succ\mathcal{L}\left(h_{i}(X_{i}),\bar{g}_{i}\right),\quad\text{for all}\;\;g_{i}\in\mathcal{P}_{i},\quad i=1,2,\ldots,T. \tag{26}\]
Since the function \(f\) is continuous (being the maximum of continuous functions) and non-decreasing in each of its arguments, Lemma III.1 in [26] implies that
\[\begin{split}\mathsf{P}_{k}^{\bar{G}}\left[f(h_{1}(X_{1}),h_{2}(X _{2}),\ldots,h_{k+N}(X_{k+N}))\;\geq\;0\;|\;\mathcal{F}_{k-1}\right]\\ \leq\mathsf{P}_{k}^{G}\left[f(h_{1}(X_{1}),h_{2}(X_{2}),\ldots,h_ {k+N}(X_{k+N}))\;\geq\;0\;|\;\mathcal{F}_{k-1}\right],\\ \text{for all }G=(g_{1},\ldots g_{T}):g_{i}\in\mathcal{P}_{i},\; i\leq T.\end{split} \tag{27}\]
Equations (24) and (27) combined gives
\[\begin{split}\mathsf{P}_{k}^{\bar{G}}\left[(\bar{\tau}^{*}-k)^{+}\leq N\;|\;\mathcal{F}_{k-1}\right]&=\mathsf{P}_{k}^{\bar{G}}\left[\bar{\tau}^{*}\leq k+N\;|\;\mathcal{F}_{k-1}\right]\\ &=\mathsf{P}_{k}^{\bar{G}}\left[f(h_{1}(X_{1}),h_{2}(X_{2}),\ldots,h_{k+N}(X_{k+N}))\geq 0\;|\;\mathcal{F}_{k-1}\right]\\ &\leq\mathsf{P}_{k}^{G}\left[f(h_{1}(X_{1}),h_{2}(X_{2}),\ldots,h_{k+N}(X_{k+N}))\geq 0\;|\;\mathcal{F}_{k-1}\right]\\ &=\mathsf{P}_{k}^{G}\left[\bar{\tau}^{*}\leq k+N\;|\;\mathcal{F}_{k-1}\right]\\ &=\mathsf{P}_{k}^{G}\left[(\bar{\tau}^{*}-k)^{+}\leq N\;|\;\mathcal{F}_{k-1}\right],\\ &\quad\quad\quad\quad\quad\text{for all }G=(g_{1},\ldots g_{T}):g_{i}\in\mathcal{P}_{i},\;i\leq T.\end{split} \tag{28}\]
This proves (23) and hence (17).
## 3 Quickest Joint Detection and Classification
### Joint Detection and Classification Formulation
We assume that in a normal regime, the data can be modeled as an i.p.i.d. process with the law \((g_{1}^{(0)},\cdots,g_{T}^{(0)})\). At some point in time \(\nu\), the law of the i.p.i.d. process is governed not by the densities \((g_{1}^{(0)},\cdots,g_{T}^{(0)})\), but by one of the densities \((g_{1}^{(\ell)},\cdots,g_{T}^{(\ell)})\), \(\ell=1,2,\ldots,M\), with
\[g_{n+T}^{(\ell)}=g_{n}^{(\ell)},\quad\forall n\geq 1,\quad\ell=1,2,\ldots,M.\]
Specifically, at the time point \(\nu\), the distribution of the random variables change from \(\{g_{n}^{(0)}\}\) to \(\{g_{n}^{(\ell)}\}\): for some \(\ell=1,2,\ldots,M\),
\[X_{n}\sim\begin{cases}g_{n}^{(0)},&\quad\forall n<\nu,\\ g_{n}^{(\ell)}&\quad\forall n\geq\nu.\end{cases} \tag{29}\]
We want to detect the change described in (29) as quickly as possible, subject to constraints on the rate of false alarms and on the probability of misclassification. Mathematically, we are looking for a pair \((\tau,\delta)\), where \(\tau\) is a stopping time, i.e.,
\[\{\tau\leq n\}\in\sigma(X_{1},X_{2},\ldots,X_{n}),\]
and \(\delta\) is a decision rule, i.e., a map such that
\[\delta(X_{1},X_{2},\ldots,X_{\tau})\in\{1,2,\ldots,M\}.\]
Let \(\mathsf{P}_{\nu}^{(\ell)}\) denote the probability law of the process \(\{X_{n}\}\) when the change occurs at time \(\nu\) and the post-change law is \((g_{1}^{(\ell)},\cdots,g_{T}^{(\ell)})\). We let \(\mathsf{E}_{\nu}^{(\ell)}\) denote the corresponding expectation. When there is no change, we use the notation \(\mathsf{E}_{\infty}\). The problem of interest is as follows [17]:
\[\begin{split}\min_{\tau,\delta}&\max_{1\leq\ell\leq M }\sup_{\nu\geq 1}\,\mathrm{ess}\sup\,\mathsf{E}_{\nu}^{(\ell)}[(\tau-\nu+1)^{+}|X_{1},\cdots,X_{\nu-1}],\\ \mathrm{subj.\ to}&\mathsf{E}_{\infty}[\tau]\geq \beta,\\ \mathrm{and}&\mathsf{P}_{1}^{(\ell)}[\tau<\infty, \delta\neq\ell]\leq a_{\beta}\;\mathsf{E}_{1}^{(\ell)}[\tau],\quad\ell=1,2, \ldots,M,\\ &\quad\mathrm{where}\;\log a_{\beta}^{-1}\sim\log\beta,\;\;\mathrm{ as}\;\beta\to\infty.\end{split} \tag{30}\]
Here \(\mathrm{ess}\sup\) is the essential supremum of the random variable \(\mathsf{E}_{\nu}^{(\ell)}[(\tau-\nu+1)^{+}|X_{1},\cdots,X_{\nu-1}]\), i.e., the smallest constant dominating the random variable with probability one. Here and below, for two functions \(h(\beta)\) and \(f(\beta)\) of \(\beta\), we use \(f(\beta)\sim h(\beta)\), as \(\beta\to\infty\), to denote that the ratio of the two functions goes to \(1\) in the limit. Further motivation for this and other problem formulations for change point detection and isolation can be found in the literature [25], [17], [20].
### Algorithm for Detection when \(M=1\)
When \(M=1\), i.e., when there is only one post-change i.p.i.d. law, then an algorithm that is asymptotically optimal for detecting a change in the distribution is the periodic-CUSUM algorithm proposed in [3]. In this algorithm, we compute the sequence of statistics
\[W_{n+1}=W_{n}^{+}+\log\frac{g_{n+1}^{(1)}(X_{n+1})}{g_{n+1}^{(0)}(X_{n+1})} \tag{31}\]
and raise an alarm as soon as the statistic is above a threshold \(A\):
\[\tau_{c}=\inf\{n\geq 1:W_{n}\geq A\}. \tag{32}\]
Define
\[I_{10}=\frac{1}{T}\sum_{i=1}^{T}D(g_{i}^{(1)}\parallel g_{i}^{(0)}), \tag{33}\]
where \(D(g_{i}^{(1)}\parallel g_{i}^{(0)})\) is the Kullback-Leibler divergence between the densities \(g_{i}^{(1)}\) and \(g_{i}^{(0)}\). Then, the following result is proved in [3].
**Theorem 3.1** ([3]).: _Let the information number \(I_{10}\) as defined in (33) satisfy \(0<I_{10}<\infty\). Then, with \(A=\log\beta\),_
\[\mathsf{E}_{\infty}[\tau_{c}]\geq\beta,\]
_and as \(\beta\to\infty\),_
\[\begin{split}&\sup_{\nu\geq 1}\text{ess}\sup\mathsf{E}_{\nu}[(\tau_{c}-\nu+1)^{+}|X_{1},\cdots,X_{\nu-1}]\\ &\sim\inf_{\tau:\mathsf{E}_{\infty}[\tau]\geq\beta}\sup_{\nu\geq 1}\text{ess}\sup\mathsf{E}_{\nu}[(\tau-\nu+1)^{+}|X_{1},\cdots,X_{\nu-1}]\\ &\sim\frac{\log\beta}{I_{10}}.\end{split} \tag{34}\]
Thus, the periodic-CUSUM algorithm is asymptotically optimal for detecting a change in the distribution, as the false alarm constraint \(\beta\to\infty\). Further, since the pre- and post-change densities \((g_{1}^{(0)},\cdots,g_{T}^{(0)})\) and \((g_{1}^{(1)},\cdots,g_{T}^{(1)})\) are finite in number, the recursion in (31) can be computed using the finite memory needed to store these \(2T\) densities.
### Algorithm for Joint Detection and Classification
When the possible number of post-change distributions \(M>1\) and when we are also interested in accurately classifying the true post-change law, the periodic-CUSUM algorithm is not sufficient. We now propose an algorithm that can perform joint detection and classification.
For \(\ell=1,\ldots,M\), define the stopping times
\[\tau_{\ell}=\inf\left\{n\geq 1:\max_{1\leq k\leq n}\;\min_{0\leq m\leq M,m\neq \ell}\;\sum_{i=k}^{n}\log\frac{g_{i}^{(\ell)}(X_{i})}{g_{i}^{(m)}(X_{i})}\geq A \right\}. \tag{35}\]
The stopping time and decision rule for our detection-classification problem is defined as follows:
\[\begin{split}\tau_{dc}&=\min_{1\leq\ell\leq M}\; \tau_{\ell},\\ \delta_{dc}&=\arg\min_{1\leq\ell\leq M}\tau_{\ell}. \end{split} \tag{36}\]
A window-limited version of the above algorithm is obtained by replacing each \(\tau_{\ell}\) in (35) by
\[\tilde{\tau}_{\ell}=\inf\left\{n:\max_{n-L_{\beta}\leq k\leq n}\;\min_{0\leq m \leq M,m\neq\ell}\;\sum_{i=k}^{n}\log\frac{g_{i}^{(\ell)}(X_{i})}{g_{i}^{(m)}( X_{i})}\geq A\right\} \tag{37}\]
for an appropriate choice of window \(L_{\beta}\) (to be specified in the theorem below).
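A minimal sketch of the window-limited rule is given below; the log-densities are assumed to be available as callables, and the function name and interface are illustrative.

```python
import numpy as np

def wl_detect_classify(x, log_pdfs, A, L):
    """Window-limited rule (37) with the decision rule (36). log_pdfs[m][i](x)
    returns log g_{i+1}^{(m)}(x) for m = 0, ..., M and slot i = 0, ..., T-1."""
    M, T = len(log_pdfs) - 1, len(log_pdfs[0])
    cum = np.zeros((M + 1, len(x) + 1))   # cum[m, n] = sum_{i<=n} log g_i^{(m)}(X_i)
    for n, xn in enumerate(x, start=1):
        slot = (n - 1) % T
        for m in range(M + 1):
            cum[m, n] = cum[m, n - 1] + log_pdfs[m][slot](xn)
        for ell in range(1, M + 1):
            # sum_{i=k}^{n} log(g^{(ell)}/g^{(m)}) = (cum[ell]-cum[m])[n] - (cum[ell]-cum[m])[k-1]
            stat = max(min(cum[ell, n] - cum[m, n] - cum[ell, k] + cum[m, k]
                           for m in range(M + 1) if m != ell)
                       for k in range(max(0, n - L - 1), n))
            if stat >= A:
                return n, ell             # stopping time and decision as in (36)
    return None, None
```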
For \(1\leq\ell\leq M\) and \(0\leq m\leq M,\;m\neq\ell\), define
\[I_{\ell m}=\frac{1}{T}\sum_{i=1}^{T}D(g_{i}^{(\ell)}\parallel g_{i}^{(m)}), \tag{38}\]
and
\[I^{*}=\min_{1\leq\ell\leq M}\;\min_{0\leq m\leq M,m\neq\ell}\;I_{\ell m}. \tag{39}\]
Recall that we are looking for \((\tau,\delta)\) such that
\[\mathsf{E}_{\infty}[\tau]\geq\beta(1+o(1)),\;\text{as}\;\beta\to\infty, \tag{40}\]
and
\[\begin{split}\mathsf{P}_{1}^{(\ell)}[\tau<\infty,\delta\neq\ell] \leq a_{\beta}\;\mathsf{E}_{1}^{(\ell)}[\tau],\;\;\;\;\ell=1,2,\ldots,M,\\ \text{where}\;\log a_{\beta}^{-1}\sim\log\beta,\;\text{ as}\;\beta \to\infty.\end{split} \tag{41}\]
Let
\[C_{\beta}=\{(\tau,\delta):\text{conditions in (40) and (41) are satisfied}\}. \tag{42}\]

**Theorem 3.2**.: _Let the information number \(I^{*}\) defined in (39) satisfy \(0<I^{*}<\infty\). Then, with \(A=\log\beta\), we have \((\tau_{dc},\delta_{dc})\in C_{\beta}\)._
_Also,_
\[\begin{split}&\max_{1\leq\ell\leq M}\sup_{\nu\geq 1}\ \operatorname{\mathit{ess}sup}\operatorname{\mathsf{E}}_{\nu}^{(\ell)}[(\tau_{ dc}-\nu+1)^{+}|X_{1},\cdots,X_{\nu-1}]\\ &\sim\inf_{(\tau,\delta)\in C_{\beta}}\max_{1\leq\ell\leq M}\sup _{\nu\geq 1}\ \operatorname{\mathit{ess}sup}\operatorname{\mathsf{E}}_{\nu}^{(\ell)}[(\tau- \nu+1)^{+}|X_{1},\cdots,X_{\nu-1}]\\ &\sim\frac{\log\beta}{I^{*}},\ \text{ as }\beta\to\infty.\end{split} \tag{43}\]
_Finally, the window-limited version of the test (37) also satisfies the same asymptotic optimality property as long as_
\[\lim\inf_{\beta\to\infty}\frac{L_{\beta}}{\log\beta}>\frac{1}{I^{*}}.\]
_This condition is satisfied, for example, by_
\[L_{\beta}=\frac{\log\beta}{I^{*}}(1+\epsilon)\]
_for any fixed \(\epsilon>0\)._
Proof.: For \(1\leq\ell\leq M\) and \(0\leq m\leq M,\ m\neq\ell\), define
\[Z_{i}(\ell,m)=\log\frac{g_{i}^{(\ell)}(X_{i})}{g_{i}^{(m)}(X_{i})}\]
to be the log-likelihood ratio at time \(i\). In the rest of the proof, to write compact equations, we use \(X_{1}^{\nu-1}\) to denote the vector
\[X_{1}^{\nu-1}=(X_{1},X_{2},\ldots,X_{\nu-1}).\]
For each \(1\leq\ell\leq M\) and \(0\leq m\leq M,\ m\neq\ell\), we first show that the sequence \(\{Z_{i}(\ell,m)\}\) satisfies the following statement:
\[\begin{split}\sup_{\nu\geq 1}\operatorname{\mathit{ess}sup} \operatorname{\mathsf{P}}_{\nu}^{(\ell)}\left(\max_{t\leq n}&\sum _{i=\nu}^{\nu+t}Z_{i}(\ell,m)\geq I_{\ell m}(1+\delta)n\ \bigg{|}\ X_{1}^{\nu-1}\right)\\ &\to 0,\ \text{as }n\to\infty,\quad\forall\delta>0,\end{split} \tag{44}\]
where \(I_{\ell m}\) is as defined in (38).
Towards proving (44), note that as \(n\to\infty\)
\[\frac{1}{n}\sum_{i=\nu}^{\nu+n}Z_{i}(\ell,m)\to I_{\ell m},\quad\text{a.s.} \ \operatorname{\mathsf{P}}_{\nu}^{(\ell)},\ \ \forall\nu\geq 1. \tag{45}\]
The above display is true because of the i.p.i.d. nature of the observation process. This
implies that as \(n\to\infty\)
\[\max_{t\leq n}\frac{1}{n}\sum_{i=\nu}^{\nu+t}Z_{i}(\ell,m)\to I_{\ell m},\quad \text{a.s.}\ \ \mathsf{P}_{\nu}^{(\ell)},\ \ \forall\nu\geq 1. \tag{46}\]
To show this, note that
\[\max_{t\leq n}\!\frac{1}{n}\sum_{i=\nu}^{\nu+t}Z_{i}(\ell,m)=\max\left\{\max_{ t\leq n-1}\frac{1}{n}\sum_{i=\nu}^{\nu+t}Z_{i}(\ell,m),\ \ \frac{1}{n}\sum_{i=\nu}^{\nu+n}Z_{i}(\ell,m)\right\}. \tag{47}\]
For a fixed \(\epsilon>0\), because of (45), the LHS in (46) is greater than \(I_{\ell m}(1-\epsilon)\) for \(n\) large enough. Also, let the maximum on the LHS be achieved at a point \(k_{n}\), then
\[\max_{t\leq n}\frac{1}{n}\sum_{i=\nu}^{\nu+t}Z_{i}(\ell,m)=\frac{1}{n}\sum_{i= \nu}^{\nu+k_{n}}Z_{i}(\ell,m)=\frac{k_{n}}{n}\frac{1}{k_{n}}\sum_{i=\nu}^{\nu +k_{n}}Z_{i}(\ell,m).\]
Now \(k_{n}\) cannot be bounded because the left-hand side in the above equation is lower bounded by \(I_{\ell m}(1-\epsilon)\), and because of the presence of \(n\) in the denominator on the right-hand side of the above equation. This implies \(k_{n}>i\), for any fixed \(i\), and \(k_{n}\to\infty\). Thus, \(\frac{1}{k_{n}}\sum_{i=\nu}^{\nu+k_{n}}Z_{i}(\ell,m)\to I_{\ell m}\). Since \(k_{n}/n\leq 1\), we have that the LHS in (46) is less than \(I_{\ell m}(1+\epsilon)\), for \(n\) large enough. This proves (46). To prove (44), note that due to the i.p.i.d. nature of the processes
\[\begin{split}&\sup_{\nu\geq 1}\operatorname{ess}\sup\mathsf{P}_{ \nu}^{(\ell)}\left(\max_{t\leq n}\sum_{i=\nu}^{\nu+t}Z_{i}(\ell,m)\geq I_{ \ell m}(1+\delta)n\;\bigg{|}\;X_{1}^{\nu-1}\right)\\ &\quad=\sup_{1\leq\nu\leq T}\mathsf{P}_{\nu}^{(\ell)}\left(\max_ {t\leq n}\sum_{i=\nu}^{\nu+t}Z_{i}(\ell,m)\geq I_{\ell m}(1+\delta)n\right). \end{split} \tag{48}\]
The right-hand side goes to zero because of (46) and because the maximum on the right-hand side in (48) is over only finitely many terms.
Next, we show that the sequence \(\{Z_{i}(\ell,m)\}\), for each \(1\leq\ell\leq M\) and \(0\leq m\leq M,\ m\neq\ell\), satisfies the following statement:
\[\begin{split}\lim_{n\to\infty}\sup_{k\geq\nu\geq 1} \operatorname{ess}\sup\ \mathsf{P}_{\nu}^{(\ell)}&\left(\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}( \ell,m)\leq I_{\ell m}-\delta\;\bigg{|}\;X_{1}^{k-1}\right)\\ &\quad=0,\quad\forall\delta>0.\end{split} \tag{49}\]
To prove (49), note that due to the i.p.i.d nature of the process we have
\[\begin{split}\sup_{k\geq\nu\geq 1}&\operatorname{ess} \sup\;\mathsf{P}_{\nu}^{(\ell)}\left(\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}(\ell,m) \leq I_{\ell m}-\delta\;\middle|\;X_{1}^{k-1}\right)\\ &=\sup_{k\geq\nu\geq 1}\mathsf{P}_{\nu}^{(\ell)}\left(\frac{1}{n} \sum_{i=k}^{k+n}Z_{i}(\ell,m)\leq I_{\ell m}-\delta\;\right)\\ &=\sup_{\nu+T\geq k\geq\nu\geq 1}\mathsf{P}_{\nu}^{(\ell)} \left(\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}(\ell,m)\leq I_{\ell m}-\delta\right)\\ &=\max_{1\leq\nu\leq T}\max_{\nu\leq k\leq\nu+T}\mathsf{P}_{\nu}^ {(\ell)}\left(\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}(\ell,m)\leq I_{\ell m}-\delta \right).\end{split} \tag{50}\]
The right-hand side of the above equation goes to zero as \(n\to\infty\) for any \(\delta\) because of (45) and also because of the finite number of maximizations. The theorem now follows from Theorem 4 of [17] because Conditions A1 and A2 from [17] are satisfied.
**Remark 1**.: The algorithm and its optimality extend easily to multistream data as well, where there are a finite number of observation streams and only one stream is affected after the change. The goal is to detect the change and also to identify the affected stream with a low probability of misclassification.
## 4 Multislot Quickest Change Detection
Let \((f_{1},\cdots,f_{T})\) and \((g_{1},\cdots,g_{T})\) represent the laws of two i.p.i.d. processes with \(f_{i}\neq g_{i}\), \(\forall i\). We assume that the first- and second-order moments of the log-likelihood ratios are finite and positive:
\[\begin{split}\text{(M1)}& 0<\mathsf{E}_{1}\left( \left|\log\frac{g_{i}(X_{i})}{f_{i}(X_{i})}\right|\right)<\infty,\;i=1,2,\cdots,T\\ \text{(V1)}& 0<\mathsf{E}_{1}\left(\log\frac{g_{i}(X_{i} )}{f_{i}(X_{i})}\right)^{2}<\infty,\;i=1,2,\cdots,T.\end{split} \tag{51}\]
Here \(\mathsf{E}_{1}\) denotes the expectation when the change occurs at time \(\nu=1\).
In the multislot change detection problem, the change occurs in only a subset of the \(T\) time slots in each period. To capture this we now introduce a new notation for the post-change law emphasizing the slots where the density changes. For a subset \(S\subset\{1,2,\ldots,T\}\), define a possible post-change i.p.i.d. law as
\[g_{S}=(g_{S,1},g_{S,2},\ldots,g_{S,T}), \tag{52}\]
with
\[g_{S,i}=\begin{cases}g_{i},\text{ if }\;i\in S\\ f_{i},\text{ if }\;i\not\in S.\end{cases} \tag{53}\]
Note that
\[g_{S}=(g_{1},\cdots,g_{T}),\ \ \text{if}\ S=\{1,\ldots,T\}.\]
Thus, the set \(S\) denotes the slots in which the change occurs:
\[X_{n}\sim\begin{cases}f_{n},&\forall n<\nu\\ g_{S,n}&\forall n\geq\nu,\end{cases} \tag{54}\]
with the understanding that \(g_{S,n+T}=g_{S,n}\), \(\forall n\). This set is not known to the decision maker. However, it is known that
\[S\in\mathcal{S}\subset 2^{\{1,\ldots,T\}},\]
i.e., \(S\) belongs to a family \(\mathcal{S}\) of subsets of \(\{1,\ldots,T\}\). For example,
\[\mathcal{S}=\{S:|S|\leq m\},\]
where \(|S|\) denotes the size of set \(S\). The algorithms that we will propose for change detection will be especially useful when \(m\ll T\).
Let \(\tau\) be a stopping time for the process \(\{X_{n}\}\), i.e., a positive integer-valued random variable such that the event \(\{\tau\leq n\}\) belongs to the \(\sigma\)-algebra generated by \(\{X_{1},\cdots,X_{n}\}\). We model the change point \(\nu\) as a random variable with a prior \(\pi\):
\[\pi_{n}=\mathsf{P}(\nu=n),\ \ \ \ \text{for}\ n=1,2,\cdots.\]
Let \(\mathsf{P}_{n}^{S}\) denote the law of the observation process \(\{X_{n}\}\) when the change occurs in the slots \(S\) at time \(\nu=n\), and define
\[\mathsf{P}_{\pi}^{S}=\sum_{n=1}^{\infty}\pi_{n}\ \mathsf{P}_{n}^{S}.\]
We use \(\mathsf{E}_{n}^{S}\) and \(\mathsf{E}_{\pi}^{S}\) to denote the corresponding expectations. For each \(S\in\mathcal{S}\), we seek a solution to
\[\min_{\tau\in\mathbf{C}_{\alpha}}\mathsf{E}_{\pi}^{S}\left[\tau- \nu|\tau\geq\nu\right], \tag{55}\]
where
\[\mathbf{C}_{\alpha}=\{\tau:\mathsf{P}_{\pi}^{S}(\tau<\nu)\leq \alpha\}, \tag{56}\]
and \(\alpha\) is a given constraint on the probability of a false alarm. In fact, we seek an algorithm that solves the above problem uniformly over every \(S\).
### Algorithm for Multislot Change Detection
Consider a mixing distribution or a probability mass function on the set \(\mathcal{S}\):
\[p_{S}\geq 0,\ \forall S\in\mathcal{S},\quad\text{ and }\quad\sum_{S\in\mathcal{S}}p_{S}=1.\]
Define the mixture statistic
\[R_{n}=\frac{1}{\Pi_{n}}\sum_{k=1}^{n}\pi_{k}\sum_{S\in\mathcal{S}}p_{S}\prod_{i =k}^{n}\frac{g_{S,i}(X_{i})}{f_{i}(X_{i})}, \tag{57}\]
where \(\Pi_{n}=\mathsf{P}(\nu>n)\), and the stopping rule
\[\tau_{mps}=\inf\{n\geq 1:R_{n}>A\}. \tag{58}\]
In the following, we call this algorithm the mixture periodic Shiryaev or the MPS algorithm. Note that
\[R_{n}=\sum_{S\in\mathcal{S}}p_{S}\frac{1}{\Pi_{n}}\sum_{k=1}^{n}\pi_{k}\prod_{ i=k}^{n}\frac{g_{S,i}(X_{i})}{f_{i}(X_{i})}.\]
Thus, the statistic \(R_{n}\) is a mixture of \(|\mathcal{S}|\) periodic Shiryaev statistics (see [5] and Section 2.2), one for each \(S\in\mathcal{S}\). Thus, the statistic \(R_{n}\) can be computed recursively and using finite memory for geometric prior \(\pi=\text{Geom}(\rho)\) (see Lemma 5.1 in [5]).
### Lower Bound on Detection Delay
In this section, we obtain a lower bound on the average detection delay for any stopping time that satisfies the constraint on the probability of false alarm (56). We make the following assumptions.
1. Let there exist \(d\geq 0\) such that \[\lim_{n\to\infty}\frac{\log\mathsf{P}(\nu>n)}{n}=-d.\] (59)
2. Also, let \[\sum_{n=1}^{\infty}\pi_{n}|\log\pi_{n}|<\infty.\] (60)
If \(\pi=\text{Geom}(\rho)\), then
\[\frac{\log\mathsf{P}(\nu>n)}{n}=\frac{\log(1-\rho)^{n}}{n}=\frac{n\log(1-\rho )}{n}=\log(1-\rho).\]
Thus, \(d=|\log(1-\rho)|\). In addition,
\[\sum_{n=1}^{\infty}\pi_{n}|\log\pi_{n}|=\frac{1-\rho}{\rho}\log\frac{1}{(1-\rho) }+\log\frac{1}{\rho}<\infty.\]
Thus, conditions (A1) and (A2) above are satisfied by the geometric prior.
We first start with a lemma whose proof is elementary. Define
\[Z_{i}=\log\frac{g_{S,i}(X_{i})}{f_{i}(X_{i})}, \tag{61}\]
and
\[I_{S}=\frac{1}{T}\sum_{i\in S}D(g_{S,i}\parallel f_{i}). \tag{62}\]
**Lemma 4.1**.: _For \(Z_{i}\) defined in (61) and \(I_{S}\) defined in (62), for each \(S\in\mathcal{S}\) and as \(n\rightarrow\infty\), we have_
\[\frac{1}{n}\sum_{i=k}^{k+n-1}Z_{i} \rightarrow I_{S},\quad\mathsf{P}_{k}^{S}\ \text{ a.s.},\ \forall k\geq 1. \tag{63}\]
The lower bound is supplied by the following theorem.
**Theorem 4.2**.: _Let the information number \(I_{S}\) be as defined in (62). Also, let the prior \(\pi\) satisfy the condition (A1) in (59). Then, for any stopping time \(\tau\in\mathbf{C}_{\alpha}\), we have_
\[\mathsf{E}_{\pi}^{S}\left[\tau-\nu|\tau\geq\nu\right]\geq\frac{|\log\alpha|}{ I_{S}+d}(1+o(1)),\quad\text{ as }\alpha\to 0. \tag{64}\]
_Here \(o(1)\to 0\) as \(\alpha\to 0\)._
Proof.: The result follows from Lemma 4.1 above and Theorem 5.1 in [5] (see also Section 2.2).
### Optimality of the MPS Algorithm
We now show that the MPS algorithm (58) is asymptotically optimal for problem (55) for each post-change slots \(S\in\mathcal{S}\), as the false alarm constraint \(\alpha\to 0\). We first prove an important lemma.
Define
\[\gamma_{k}(\epsilon)=\sum_{n=1}^{\infty}\mathsf{P}_{k}^{S}\left(\left|\frac{1 }{n}\sum_{i=k}^{k+n-1}Z_{i}-I_{S}\right|>\epsilon\right), \tag{65}\]
where \(Z_{i}\) is as defined in (61).
**Lemma 4.3**.: _For every \(\epsilon>0\),_
\[\sum_{k=1}^{\infty}\pi_{k}\gamma_{k}(\epsilon)<\infty, \tag{66}\]
_where \(\gamma_{k}(\epsilon)\) is defined in (65)._
Proof.: The sequence
\[\{\gamma_{k}(\epsilon)\}_{k=1}^{\infty}\]
in (65) is periodic with period \(T\), and as a result, there are at most \(T\) distinct values in the above sequence:
\[\gamma_{1}(\epsilon),\;\cdots,\gamma_{T}(\epsilon).\]
Thus, if we show that these \(T\) numbers are finite for any \(\epsilon>0\), then we automatically have
\[\sum_{k=1}^{\infty}\pi_{k}\gamma_{k}(\epsilon)\leq\max_{1\leq k \leq T}\gamma_{k}(\epsilon)<\infty,\quad\forall\epsilon>0.\]
Furthermore, since we do not make any explicit assumptions on the actual values taken by the densities \((f_{1},\cdots,f_{T})\) and \((g_{1},\cdots,g_{T})\), we can exploit the i.p.i.d. nature of the processes to just show that
\[\gamma_{1}(\epsilon)<\infty,\quad\forall\epsilon>0.\]
Recall the definition of \(\gamma_{1}(\epsilon)\):
\[\gamma_{1}(\epsilon)=\sum_{n=1}^{\infty}\mathsf{P}_{1}^{S}\left( \left|\frac{1}{n}\sum_{i=1}^{n}Z_{i}-I_{S}\right|>\epsilon\right).\]
For \(\ell=1,2,\cdots,T\), define
\[Z_{i}^{(\ell)}=\begin{cases}Z_{i}&\text{if }i=mT+\ell\;\text{ for }m=0,1,2,\cdots\\ 0&\text{otherwise }.\end{cases}\]
Note that
\[Z_{i}^{(\ell)}=0,\;\text{ if }\ell\not\in S.\]
Also recall that
\[I_{S}=\frac{1}{T}\sum_{\ell\in S}I_{\ell},\]
where
\[I_{\ell}=D(g_{\ell}\parallel f_{\ell}).\]
Using these definitions, we can write
\[\frac{1}{n}\sum_{i=1}^{n}Z_{i}-I_{S}=\frac{1}{n}\sum_{i=1}^{n}\sum_{\ell\in S}Z_{ i}^{(\ell)}-\frac{1}{T}\sum_{\ell\in S}I_{\ell}=\sum_{\ell\in S}\left(\frac{1}{n} \sum_{i=1}^{n}Z_{i}^{(\ell)}-\frac{I_{\ell}}{T}\right).\]
This implies
\[\mathsf{P}_{1}^{S}\left(\left|\frac{1}{n}\sum_{i=1}^{n}Z_{i}-I_{S}\right|> \epsilon\right)\leq\sum_{\ell\in S}\mathsf{P}_{1}^{S}\left(\left|\frac{1}{n} \sum_{i=1}^{n}Z_{i}^{(\ell)}-\frac{I_{\ell}}{T}\right|>\frac{\epsilon}{|S|} \right).\]
Thus, to show the summability of the LHS in the above equation, we need to show the summability of each of the \(|S|\) terms on the RHS. Again, due to the i.p.i.d. nature of the processes, and because we have made no explicit assumptions about the densities \((f_{1},\cdots,f_{T})\) and \((g_{1},\cdots,g_{T})\), it is enough to establish the summability of any one of the terms on the right. That is, for \(\ell\in S\), we want to show that
\[\sum_{n=1}^{\infty}\mathsf{P}_{1}^{S}\left(\left|\frac{1}{n}\sum_{i=1}^{n}Z_{ i}^{(\ell)}-\frac{I_{\ell}}{T}\right|>\frac{\epsilon}{|S|}\right)<\infty.\]
Define for \(\ell=1,2,\cdots,T\),
\[I_{i}^{(\ell)}=\begin{cases}I_{\ell}&\text{ if }i=mT+\ell\ \text{ for }m=0,1,2,\cdots,\ \ell\in S\\ 0&\text{ otherwise.}\end{cases}\]
Using this definition we write for \(\ell\in S\),
\[\frac{1}{n}\sum_{i=1}^{n}Z_{i}^{(\ell)}-\frac{I_{\ell}}{T}=\frac{1}{n}\sum_{i =1}^{n}(Z_{i}^{(\ell)}-I_{i}^{(\ell)})+\frac{1}{n}\sum_{i=1}^{n}I_{i}^{(\ell) }-\frac{I_{\ell}}{T}.\]
Thus, with \(\tilde{Z}_{i}^{(\ell)}=Z_{i}^{(\ell)}-I_{i}^{(\ell)}\), we have
\[\begin{split}\mathsf{P}_{1}^{S}\left(\left|\frac{1}{n}\sum_{i=1}^ {n}Z_{i}^{(\ell)}-\frac{I_{\ell}}{T}\right|>\frac{\epsilon}{|S|}\right)& \leq\mathsf{P}_{1}^{S}\left(\left|\frac{1}{n}\sum_{i=1}^{n} \tilde{Z}_{i}^{(\ell)}\right|>\frac{\epsilon}{2|S|}\right)\\ &+\mathsf{P}_{1}^{S}\left(\left|\frac{1}{n}\sum_{i=1}^{n}I_{i}^{( \ell)}-\frac{I_{\ell}}{T}\right|>\frac{\epsilon}{2|S|}\right).\end{split} \tag{67}\]
Now, \(\frac{1}{n}\sum_{i=1}^{n}I_{i}^{(\ell)}=\frac{1}{n}I_{\ell}\lfloor\frac{n}{T}\rfloor\rightarrow\frac{I_{\ell}}{T}\), as \(n\rightarrow\infty\). Thus, for \(n\) large enough, the second term on the right in (67) is identically zero. Thus, we only need to show that
\[\sum_{n=1}^{\infty}\mathsf{P}_{1}^{S}\left(\left|\frac{1}{n}\sum_{i=1}^{n} \tilde{Z}_{i}^{(\ell)}\right|>\frac{\epsilon}{2|S|}\right)<\infty,\quad\ell \in S.\]
Towards this end, note that in the term \(\mathsf{P}_{1}^{S}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\tilde{Z}_{i}^{(\ell)} \right|>\frac{\epsilon}{2|S|}\right)\), the sum \(\sum_{i=1}^{n}\tilde{Z}_{i}^{(\ell)}\) is updated only once in \(T\) time steps. As a result, as a function of \(n\), the probability decreases monotonically between \(kT+\ell\) and \((k+1)T+\ell-1\), for every \(k=0,1,2,\dots\). Using this fact, we can write
\[\sum_{n=1}^{\infty}\mathsf{P}_{1}^{S}\left(\left|\frac{1}{n}\sum _{i=1}^{n}\tilde{Z}_{i}^{(\ell)}\right|>\frac{\epsilon}{2|S|}\right) \leq T+T\sum_{j=1}^{\infty}\mathsf{P}_{1}^{S}\left(\left|\frac{1} {jT+\ell}\sum_{i=1}^{jT+\ell}\tilde{Z}_{i}^{(\ell)}\right|>\frac{\epsilon}{2| S|}\right)\] \[\leq T+T\sum_{j=1}^{\infty}\mathsf{P}_{1}^{S}\left(\left|\frac{1 }{j}\sum_{i=1}^{jT+\ell}\tilde{Z}_{i}^{(\ell)}\right|>\frac{\epsilon}{2|S|} \right).\]
The rightmost summation is finite because the sum inside is a sum of \(j\) i.i.d. random variables with the distribution of \(Z_{\ell}\) under \(\mathsf{P}_{1}\)[25]. The summation is finite for i.i.d. random variables with finite variance. See also [24].
In words, the above lemma states that i.p.i.d. processes satisfy the complete convergence condition [25], [24].
**Theorem 4.4**.: _Let the prior satisfy the conditions (A1) and (A2). With \(A=\frac{1-\alpha}{\alpha}\) in (58), we have for each \(S\in\mathcal{S}\),_
\[\mathsf{P}_{\pi}^{S}(\tau_{mps}<\nu)\leq\alpha\]
_and_
\[\mathsf{E}_{\pi}^{S}\left[\tau_{mps}-\nu|\tau_{mps}\geq\nu\right] \leq\frac{|\log\alpha|}{I_{S}+d}(1+o(1)),\quad\text{ as }\alpha\to 0. \tag{68}\]
Proof.: The results follow directly from Lemma 4.3 and arguments provided in [24]. However, we provide the proof in detail for completeness.
Recall that the mixture statistic is defined as
\[R_{n}=\frac{1}{\Pi_{n}}\sum_{k=1}^{n}\pi_{k}\sum_{S\in\mathcal{S}}p_{S}\prod_ {i=k}^{n}\frac{g_{S,i}(X_{i})}{f_{i}(X_{i})}, \tag{69}\]
where \(\Pi_{n}=\mathsf{P}(\nu>n)\), and the stopping rule is defined as
\[\tau_{mps}=\inf\{n\geq 1:R_{n}>A\}. \tag{70}\]
We first note that for any discrete integer-valued random variable such as a stopping time \(\tau\),
\[\mathsf{E}[\tau]=\sum_{n=0}^{\infty}\mathsf{P}(\tau>n)\leq N+\sum_{n=N}^{ \infty}\mathsf{P}(\tau>n), \tag{71}\]
where \(N\) is any positive integer. The theorem follows by carefully choosing the value \(N\) above and obtaining an upper bound on \(\mathsf{P}(\tau>n)\).
For \(0<\epsilon<I_{S}+d\), set
\[N=N_{\alpha}=1+\Big{\lfloor}\frac{\log(A_{\alpha}/\pi_{k})}{I_{S}+d-\epsilon} \Big{\rfloor},\]
where
\[A_{\alpha}=\frac{1-\alpha}{\alpha}.\]
Using \(N=N_{\alpha}\) and \(\tau=(\tau_{mps}-k)^{+}\) in (71) we get
\[\begin{split}\mathsf{E}_{k}^{S}[(\tau_{mps}-k)^{+}]& \leq N_{\alpha}+\sum_{n\geq N_{\alpha}}\mathsf{P}_{k}^{S}(\tau_{mps} >k+n)\leq N_{\alpha}+\sum_{n\geq N_{\alpha}}\mathsf{P}_{k}^{S}(R_{k+n}<A_{ \alpha})\\ &=N_{\alpha}+\sum_{n\geq N_{\alpha}}\mathsf{P}_{k}^{S}(\log R_{k+ n}<\log A_{\alpha}).\end{split} \tag{72}\]
Now,
\[R_{k+n}=\frac{1}{\Pi_{k+n}}\sum_{t=1}^{k+n}\pi_{t}\sum_{S\in\mathcal{S}}p_{S} \prod_{i=t}^{k+n}\frac{g_{S,i}(X_{i})}{f_{i}(X_{i})}.\]
The MPS statistic is lower bounded by
\[R_{k+n}\geq\frac{1}{\Pi_{k+n}}\pi_{k}\;p_{S}\prod_{i=k}^{k+n}\frac{g_{S,i}(X_{ i})}{f_{i}(X_{i})}.\]
Here \(S\) on the right is the true post-change multislot set. Taking logarithms on both sides we get
\[\log(R_{k+n})\geq|\log\Pi_{k+n}|+\log(\pi_{k})+\log(p_{S})+\sum_{i=k}^{k+n}Z_{ i}. \tag{73}\]
Using (73) we can bound the probability in (72) for \(n\geq N_{\alpha}\),
\[\begin{split}\mathsf{P}_{k}^{S}(\log R_{k+n}<\log A_{\alpha})& \leq\mathsf{P}_{k}^{S}\left(|\log\Pi_{k+n}|+\log(p_{S})+\sum_{i=k }^{k+n}Z_{i}<\log(A_{\alpha}/\pi_{k})\right)\\ &=\mathsf{P}_{k}^{S}\left(\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}+\frac {|\log\Pi_{k+n}|}{n}+\frac{\log(p_{S})}{n}<\frac{\log(A_{\alpha}/\pi_{k})}{n} \right)\\ &\leq\mathsf{P}_{k}^{S}\left(\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}+ \frac{|\log\Pi_{k+n}|}{n}+\frac{\log(p_{S})}{n}<I_{S}+d-\epsilon\right)\\ &=\mathsf{P}_{k}^{S}\left(\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}<I_{S}+ d-\frac{|\log\Pi_{k+n}|}{n}-\frac{\log(p_{S})}{n}-\epsilon\right).\end{split} \tag{74}\]
Now select \(\alpha\) small enough so that for every \(n\geq N_{\alpha}\)
\[\Big{|}d-\frac{|\log\Pi_{k+n}|}{n}\Big{|} <\frac{\epsilon}{4},\] \[\Big{|}\frac{\log(p_{S})}{n}\Big{|} <\frac{\epsilon}{4}.\]
Specifically, select \(\alpha\) small enough such that the statements in the above display are true for all \(n\) satisfying
\[n\geq 1+\Big{\lfloor}\frac{\log(A_{\alpha})}{I_{S}+d-\epsilon}\Big{\rfloor}.\]
This ensures that the chosen small \(\alpha\) is not a function of the index \(k\). This gives us
\[\begin{split}\mathsf{P}^{S}_{k}&(\log R_{k+n}<\log A _{\alpha})\\ &\leq\ \mathsf{P}^{S}_{k}\left(\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}<I_{S}+d -\frac{|\log\Pi_{k+n}|}{n}-\frac{\log(p_{S})}{n}-\epsilon\right)\\ &\leq\ \mathsf{P}^{S}_{k}\left(\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}<I_{S}- \frac{\epsilon}{2}\right).\end{split} \tag{75}\]
Substituting this in (72) we get for \(\alpha\) small enough, uniformly over \(k\),
\[\begin{split}\mathsf{E}^{S}_{k}[(\tau_{mps}-k)^{+}]& \leq N_{\alpha}+\sum_{n\geq N_{\alpha}}\mathsf{P}^{S}_{k}(\log R _{k+n}<\log A_{\alpha})\\ &\leq N_{\alpha}+\sum_{n\geq N_{\alpha}}\mathsf{P}^{S}_{k}\left( \frac{1}{n}\sum_{i=k}^{k+n}Z_{i}<I_{S}-\frac{\epsilon}{2}\right)\\ &\leq N_{\alpha}+\sum_{n=1}^{\infty}\mathsf{P}^{S}_{k}\left( \Big{|}\frac{1}{n}\sum_{i=k}^{k+n}Z_{i}-I_{S}\Big{|}>\frac{\epsilon}{2}\right) \\ &=N_{\alpha}+\gamma_{k}(\epsilon/2).\end{split} \tag{76}\]
This gives us
\[\begin{split}\mathsf{E}^{S}_{\pi}[(\tau_{mps}-\nu)^{+}]& =\sum_{k=1}^{\infty}\pi_{k}\mathsf{E}^{S}_{k}[(\tau_{mps}-k)^{+}]\\ &\leq\sum_{k=1}^{\infty}\pi_{k}N_{\alpha}+\sum_{k=1}^{\infty}\pi _{k}\gamma_{k}(\epsilon/2)\\ &\leq\sum_{k=1}^{\infty}\pi_{k}\left(1+\frac{\log(A_{\alpha}/\pi_ {k})}{I_{S}+d-\epsilon}\right)+\sum_{k=1}^{\infty}\pi_{k}\gamma_{k}(\epsilon/2 )\\ &=\frac{\log(A_{\alpha})}{I_{S}+d-\epsilon}+\text{ constant}.\end{split} \tag{77}\]
The remaining term is a constant (not a function of \(\alpha\)) because of the assumptions
made in the theorem statement and due to Lemma 4.3. Finally,
\[\mathsf{E}_{\pi}^{S}[\tau_{mps}-\nu|\tau_{mps}\geq\nu] =\frac{\mathsf{E}_{\pi}^{S}[(\tau_{mps}-\nu)^{+}]}{\mathsf{P}_{\pi} ^{S}(\tau_{mps}\geq\nu)}\] \[\leq\frac{\frac{\log(A_{\alpha})}{I_{S}+d-\epsilon}+\text{ constant}}{1-\alpha} \tag{78}\] \[=\frac{|\log\alpha|}{I_{S}+d-\epsilon}(1+o(1)),\quad\text{ as }\alpha \to 0.\]
The result now follows because \(\epsilon\) can be made arbitrarily small.
The fact that setting \(A=A_{\alpha}\) guarantees that the false alarm constraint is satisfied follows from [24].
Thus, the MPS algorithm achieves the asymptotic lower bound given in Theorem 4.2 and is asymptotically optimal uniformly over \(S\).
## 5 Numerical Results
### Applying the Periodic-CUSUM Algorithm to Los Angeles Traffic Data
In this section, we demonstrate how to train and apply the periodic-CUSUM algorithm (31) on traffic flow indicator data of the Los Angeles' (LA) highway. For ease of reference, we reproduce the algorithm here.
\[W_{n+1}=W_{n}^{+}+\log\frac{g_{n+1}^{(1)}(X_{n+1})}{g_{n+1}^{(0)}(X_{n+1})};\quad W_{0}=0. \tag{79}\]
\[\tau_{c}=\inf\{n\geq 1:W_{n}\geq A\}. \tag{80}\]
We train the pre-change model using weekday traffic data and the post-change model using weekend or holiday data. We then apply the algorithm to the traffic data to detect weekend or holiday traffic. We now discuss the application in detail.
We applied the algorithm at selected stations along the westbound I-10 highway in LA County (see Fig. 1) by incorporating multiple historical traffic flow datasets that were freely accessible via the Performance Measurement System (PeMS website [https://pems.dot.ca.gov/](https://pems.dot.ca.gov/)) for the time span of August 2020 - September 2021. The traffic counts reported by PeMS act as a proxy for the traffic flow at each of the ten stations, spaced about 0.33 miles apart, as seen in Fig. 1. PeMS data in archive format is released daily.
The downloaded data for one station in LA has multiple fields in a comma-delimited text format. The fields contain useful traffic attributes such as timestamps, station IDs, the number of vehicles per 5-minute bin, and so on (see Fig. 2 for a sample of this data). In addition, we were able to locate each station precisely on the map of Los Angeles by using station_id (second column in Fig. 2) as a key from PeMS and merging those
Figure 1: Map of the selected stations. The numbers denote station IDs for westbound traffic; a light-grey hue highlights a segment of the I-10 highway (Christopher Columbus Transcontinental Highway), LA, CA.
Figure 2: Sample raw data of an eastbound vehicle detector station (VDS) with “station_id” (column two) ending in 8202. The highlighted “total_flow” column (column ten) gives the number of vehicles per five-minute bin.
with an additional table that contains geographical coordinates [lat, lon] for selected traffic stations as illustrated in Fig. 1.
We observed similar patterns on different days of the week, as well as across multiple stations within the same segment of interest (Fig. 3). For training, we chose the traffic data of the sensor with an ID ending in 7095 on random weekdays of August 2021, and we held out the month of September 2021 as the test set. Fig. 4 compares the smoothed sample paths of August Mondays with the Labor Day holidays of 2020 (label 249) and 2021 from the test set. Because the readings were noisy, we used the station's median moving average (MMA) of the previous hour's samples (we dropped duplicates while keeping the last sample in our station dataset for further analysis).
For applying the periodic-CUSUM algorithm, we set \(T=288\) (the number of five-minute bins in a day, \(12\times 24\)), and modeled
\[\begin{split} g_{i}^{(0)}&=\text{Poisson}(\lambda _{i}^{(0)}),\quad i=1,2,\ldots,T\\ g_{i}^{(1)}&=\text{Poisson}(\lambda_{i}^{(1)}), \quad i=1,2,\ldots,T.\end{split} \tag{81}\]
Figure 4: (a) Comparisons between some of the training sample paths in normal (August’s Mondays) vs Labor Day of 2021 and Labor day of 2020 (label 249). (b): Labor week periodic CUSUM test statistics and event labels for 09/06/2021.
Figure 3: (a): Illustration of sample path from the station with index three (7095) over a day (288 bins). (b): Average traffic counts on different days of Aug (Mon=0), 2021.
We then learned the Poisson parameters from the training data. In Fig. 4(b), we have plotted the periodic-CUSUM statistic for the test data. The red rectangular blocks indicate the locations of weekends and the blue curve is the test statistic. As seen in the figure, the test statistic rises sharply around the weekends, indicating that a change in the traffic flow has been detected.
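A minimal sketch of this detection step is given below. It assumes the per-slot Poisson rates have already been estimated from the training data (for Poisson models, the maximum-likelihood estimate of each rate is the sample mean of the counts in that bin across training days); the function name is illustrative.

```python
import numpy as np

def poisson_periodic_cusum(counts, lam0, lam1, A):
    """Periodic-CUSUM (79)-(80) with the per-slot Poisson models (81);
    lam0 and lam1 are length-T arrays of learned rates."""
    T, W = len(lam0), 0.0
    for n, x in enumerate(counts, start=1):
        i = (n - 1) % T                      # 5-minute bin within the day
        llr = x * np.log(lam1[i] / lam0[i]) - (lam1[i] - lam0[i])
        W = max(W, 0.0) + llr                # W_{n+1} = W_n^+ + log-likelihood ratio
        if W >= A:
            return n
    return None
```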
### Numerical result for Multislot Quickest Change Detection
In this section, we apply the MPS algorithm (57) on simulated noisy sinusoidal data. For ease of reference, we reproduce the algorithm here.
\[R_{n}=\frac{1}{\Pi_{n}}\sum_{k=1}^{n}\pi_{k}\sum_{S\in\mathcal{S}}p_{S}\prod_ {i=k}^{n}\frac{g_{S,i}(X_{i})}{f_{i}(X_{i})}, \tag{82}\]
where \(\Pi_{n}=\mathsf{P}(\nu>n)\), and the stopping rule
\[\tau_{mps}=\inf\{n\geq 1:R_{n}>A\}. \tag{83}\]
Specifically, we assume that we observe a noisy version of a sequence of sinusoidal waveforms, as shown in Fig. 5a. At the change point, the shape of the sinusoidal signal is distorted, as shown in Fig. 5b. The goal is to detect this distortion in real time. We assume that we know the type of distortion but do not know its precise location. Thus, the actual distortion can be any one of the five shown in Fig. 6a. We assume each of the distortions to be equally likely for the design of the MPS algorithm (82).
Let \(h(t)\) be the blue sinusoidal signal shown in Fig. 5b. Then, we assume that \(T=25\) and create the simulated data using
\[f_{i}=\mathcal{N}\left(\mu_{0,i},0.01\right),\quad i=1,2,\ldots,25,\]
Figure 5: (a): Depiction of four positive half sinusoidal waves sampled at 1k samples per cycle. (b): Illustration of the parameters that govern the PDFs of the pre-change and the actual post-change law, which is shifted up in the first time slot ([0, 4]).
where \(\mu_{0}\) is a 25-length vector given by
\[\mu_{0,i}=h(i),\quad i=1,2,\ldots,25.\]
The post-change data is generated from
\[g_{i}=\mathcal{N}\left(\mu_{1,i},0.01\right),\quad i=1,2,\ldots,25,\]
where
\[\mu_{1,i}=\begin{cases}\mu_{0,i},&\text{for i}\notin[0,4]:=[0,1,2,3,4]\\ \mu_{0,i}+0.6,&\text{for i}\in[0,4].\end{cases}\]
Thus, the post-change data is generated by assuming that the true post-change slots are
\[S=[0,4].\]
But, we assume that the multislot family \(\mathcal{S}\) is
\[\mathcal{S}=\{[0,4],[5,9],[10,14],[15,19],[20,24]\}\]
with
\[p_{S}=\frac{1}{5},\quad\text{for all }S\in\mathcal{S}.\]
We also assumed a geometric prior on the change point with parameter \(\rho=0.01\), i.e., \(\pi_{k}=(1-\rho)^{k-1}\rho.\) We plot the generated data and the MPS algorithm statistic in Fig. 6b. As can be seen from the figure, the algorithm detects the change quite effectively. We repeated the simulation with different change slots. Regardless of the slot index, the MPS algorithm was able to detect the changes with no false alarms.

Figure 6: (a): Depiction of all possible post-change waveforms. (b): Test statistics and sample path for pre/post-change distributions of Gaussian with change-point at time index 125 (at the end of the fifth cycle).
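A hypothetical driver for this experiment, reusing the `mps_stopping_time` sketch from Section 4.1, is given below; since \(h(t)\) is not specified numerically here, a half-sine stand-in is used.

```python
import numpy as np
from scipy.stats import norm

T, rho, alpha = 25, 0.01, 0.01
A = (1 - alpha) / alpha                               # threshold from Theorem 4.4
mu0 = np.sin(np.pi * np.arange(T) / (T - 1))          # stand-in for h(i)
family = [tuple(range(5 * j, 5 * j + 5)) for j in range(5)]
S_true = family[0]                                    # true post-change slots [0, 4]
p_S = {S: 1 / len(family) for S in family}

f_pdfs = [(lambda x, m=m: norm.pdf(x, m, 0.1)) for m in mu0]      # variance 0.01
g_pdfs_by_S = {S: [(lambda x, m=mu0[i]: norm.pdf(x, m + 0.6, 0.1)) if i in S
                   else f_pdfs[i] for i in range(T)] for S in family}

nu, rng = 125, np.random.default_rng(0)               # change point at index 125
x = [rng.normal(mu0[(n - 1) % T]
                + 0.6 * ((n >= nu) and ((n - 1) % T in S_true)), 0.1)
     for n in range(1, 301)]
print(mps_stopping_time(x, f_pdfs, g_pdfs_by_S, p_S, rho, A))     # alarm time
```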
### Numerical Results for Robust Quickest Change Detection Algorithm
In this section, we apply the robust algorithm defined in (15) to simulated data. For ease of reference, we reproduce the algorithm here.
\[\bar{\tau}^{*}=\inf\{n\geq 1:\bar{p}_{n}\geq A_{(n\bmod T)}\}, \tag{84}\]
where \(\bar{p}_{0}=0\), and
\[\bar{p}_{n}=\frac{\tilde{p}_{n-1}\;\bar{g}_{n}(X_{n})}{\tilde{p}_{n-1}\;\bar{g }_{n}(X_{n})+(1-\tilde{p}_{n-1})f_{n}(X_{n})}, \tag{85}\]
with
\[\tilde{p}_{n-1}=\bar{p}_{n-1}+(1-\bar{p}_{n-1})\rho.\]
Recall that here \((\bar{g}_{1},\ldots,\bar{g}_{T})\) is the least favorable i.p.i.d. law and \((f_{1},\ldots,f_{T})\) is the pre-change i.p.i.d. law.
In this numerical experiment, we assume that we observe a noisy version of a rectangular waveform; see Fig. 7a. Before the change point of \(500\), the rectangular waveform alternates between \(+1\) and \(-1\) for \(50\) time slots each (blue curve in Fig. 7a). After the change point, the waveform switches between \(+1.8\) and \(-0.2\) (red waveform in Fig. 7a). We assume that the decision maker is unaware of the exact post-change waveform, but knows that the deviation will be by at least \(0.1\) (dashed orange waveform in Fig. 7a). We assume that we observe the waveform after zero-mean Gaussian random variables with variance \(0.01\) have been added. In this setup, it can
Figure 7: (a): Illustration of the different rectangular waveforms involved in the robust detection of a change in distribution. (b): Depiction of a sample path of a five-square signal with a mean shift-up of \(0.8\) at change-point \(\nu=500\), and the calculated robust test statistic.
be shown that the Gaussian i.p.i.d. process with the orange mean level is the least favorable. The generated observation sequence and the robust change detection statistic (85) are shown in Fig. 7b. We used \(\rho=0.01\) to generate the statistic. As can be seen from the figure, the robust change detection algorithm effectively detects the change in the waveform pattern.
### Numerical Results for ECG Arrhythmia Detection and Fault Isolation
The majority of ECG data is periodic in nature due to the electrical activity of the heart muscle cells (internal dynamics) over the course of one heartbeat (a PQRS cycle, as illustrated in Fig. 8). Due to its diagnostic applications, there has recently been a large body of work on developing algorithms to automate the detection and classification of heart arrhythmia from ECG data using machine learning and statistical pattern recognition [10, 12, 16]. In this section, we apply the quickest change detection and fault isolation algorithm for i.p.i.d. processes developed in Section 3 to real ECG data and to simulated wavelet data. Again, for ease of reference, we reproduce the algorithm here.
For \(\ell=1,\ldots,M\), define the stopping times
\[\tau_{\ell}=\inf\left\{n\geq 1:\max_{1\leq k\leq n}\;\min_{0\leq m\leq M,m\neq \ell}\;\sum_{i=k}^{n}\log\frac{g_{i}^{(\ell)}(X_{i})}{g_{i}^{(m)}(X_{i})} \geq A\right\}. \tag{86}\]
The stopping time and decision rule for our detection-classification problem is defined as follows:
\[\begin{split}\tau_{dc}&=\min_{1\leq\ell\leq M}\; \tau_{\ell},\\ \delta_{dc}&=\arg\min_{1\leq\ell\leq M}\tau_{\ell}. \end{split} \tag{87}\]
A window-limited version of the above algorithm is obtained by replacing each \(\tau_{\ell}\) in (35) by
\[\tilde{\tau}_{\ell}=\inf\left\{n:\max_{n-L_{\beta}\leq k\leq n}\;\min_{0\leq m \leq M,m\neq\ell}\;\sum_{i=k}^{n}\log\frac{g_{i}^{(\ell)}(X_{i})}{g_{i}^{(m)}( X_{i})}\geq A\right\} \tag{88}\]
for an appropriate choice of window \(L_{\beta}\). Recall that here \((g_{1}^{(0)},\ldots,g_{T}^{(0)})\) is the normal i.p.i.d. law and \((g_{1}^{(\ell)},\ldots,g_{T}^{(\ell)})\), for \(\ell\neq 0\), is the post-change i.p.i.d. law representing anomaly or change of type \(\ell\).
#### 5.4.1 MIT-BIH Dataset
This paper uses the MIT-BIH (Massachusetts Institute of Technology - Beth Israel Hospital) arrhythmia dataset downloaded from the Research Resource for Complex Physiologic Signals (PhysioNet) website. The acquired dataset contains 48 recordings from 47 human subjects, in which each subject's data were recorded for about half an hour [19]. The data contain information in the form of a 2D array for the two-channel signals, a 1D array of expert annotations for the type of arrhythmia, and a 1D array for the location of
R-peaks, providing sufficient information for the interpretation of each ECG record. The 2D signal array consists of two-channel waveforms with an 11-bit resolution over a ten millivolt (mV) range, sourced from a standard 12-lead ECG device with a constant sampling rate of 360 samples/second (Hz) for all ECGs in the MIT-BIH database [18].
Since the modified limb lead II (MLII) channel was the common ECG recording for all patients, annotations are provided only for this lead out of the 12 leads. Thus, we analyzed this array, similarly to [10].
We used the four-class representation from the Association for the Advancement of Medical Instrumentation (AAMI) standard to re-cluster the different annotations of MIT-BIH into smaller clusters. The standard, which has four larger classes, namely 'N' (i.e., any 'N,' 'e,' 'j,' 'L,' or 'R' from MIT-BIH for a normal heartbeat), 'S' (supraventricular ectopic beat), 'V' (ventricular ectopic beat), and 'F' (fusion beat), was used to re-cluster the 12 classes of observed annotations in MIT-BIH into the four classes given in Tab. 1. Using this mapping, we grouped each label into its representative AAMI class.
We chose the patient with identification (ID) number 208 for a patient-specific analysis. As seen in Tab. 2, the corresponding size of the annotation array for this subject is around 3,000 out of about 112,000 annotations in the entire MIT-BIH dataset (which includes the patient with ID=208). As Tab. 2 suggests, we removed the supraventricular arrhythmias (the 'S' cluster) from the ECG wave.
\begin{table}
\begin{tabular}{c c c} \hline id & Class Rep. & Symbol \\ \hline
0 & N & ‘N’,‘e’,‘j’,‘L’,‘R’ \\
1 & V & ‘V’,‘E’ \\
2 & S & ‘S’, ‘A’,‘a’,‘J’ \\
3 & F & ‘F’ \\ \hline \end{tabular}
\end{table}
Table 1: The equivalent classes of AAMI for normal and abnormal heartbeats in the MIT-BIH dataset.
Figure 8: Depiction of morphological features of a normal heartbeat: peaks have been identified.
#### 5.4.2 Data Centering and Standardization
As illustrated in Fig. 8, one way of segmenting the MLII data of ECGs is to use the index of the mid R-R point between heartbeats. Applying this partitioning to the patient with ID=208 resulted in 2,951 heartbeats with R annotations for the MLII data; excluding the 'S' and 'Q' waves removed 88 of the human-annotated heartbeats. Because of the sampling rate of the ECG device, the heartbeats were bounded above by a length of 360, and the obtained heartbeats had different time lengths. A resampling function based on the Fast Fourier Transform (FFT) was applied to heartbeats whose lengths were less than 360 (see an example of this implementation in the lower plot of Fig. 9).
#### 5.4.3 Training and Test Splits
We randomly sampled 50% of all heartbeats in the three clusters 'N', 'V', and 'F' of the AAMI standard, which accounted for about 48% of the heartbeats, for training purposes (the number of heartbeats of each type is given in Tab. 3). To show the effectiveness of our algorithm, we used sequences of ten heartbeats. Because 'S' and 'Q' heartbeats might appear in any order in a real-time setting, we tested only on segments of the original ECG in which all annotations belong to one of the three studied classes; these are typically shown in batches of ten heartbeats for visualization, as in Fig. 9.
To train the i.p.i.d. models for each class, we assumed that the ECG waveforms are deterministic waveforms corrupted by Gaussian noise. We used the training data to learn the means and variances of the Gaussian i.p.i.d. processes, with \(T=360\), the heartbeat length obtained after resampling. The learned mean and variance parameters are shown in Fig. 10. The bold lines are the expected values and the dashed lines are one standard deviation away from the mean line. As shown in Fig. 10, there are only a few time slots in which two of the distributions can be separated. Thus, we focused only on the discrete time intervals [130, 155] and [200, 220] to improve the accuracy of the predictions.
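A minimal sketch of this training step is given below, assuming the resampled heartbeats of one class are stacked in a `(num_beats, 360)` array; the function name, variance floor, and default slot window are illustrative.

```python
import numpy as np

def fit_gaussian_ipid(beats, slots=range(130, 156)):
    """Per-slot Gaussian i.p.i.d. model for one heartbeat class, restricted
    to the informative slots. beats: (num_beats, 360) array."""
    mu = beats.mean(axis=0)
    var = beats.var(axis=0) + 1e-8        # small floor to avoid zero variance
    keep = np.array(list(slots))
    return mu[keep], var[keep]
```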
\begin{table}
\begin{tabular}{c c c c c c} \hline & ‘N’ & ‘V’ & ‘S’ & ‘F’ & ‘Q’\({}^{*}\) & Total \\ \hline Patient with ID=208 & 1,585 & 992 & 2 & 373 & 86 & 3,039 \\ MIT-BIH & 90,631 & 7,236 & 2,781 & 803 & 11,196 & 112,647 \\ \hline \multicolumn{6}{l}{\({}^{*}\) represents the cluster for all annotations that are not included in the first four clusters.} \\ \end{tabular}
\end{table}
Table 2: Number of re-clustered annotations for the patient with ID=208 vs all labels in MIT-BIH database.
\begin{table}
\begin{tabular}{c c c c c} \hline & ‘N’ & ‘V’ & ‘F’ & Total \\ \hline Training & 817 & 488 & 171 & 1,476 \\ Test & 789 & 477 & 177 & 1,443 \\ \hline \end{tabular}
\end{table}
Table 3: Number of different heartbeats for each cluster in training and test sets chosen from the patient with ID=208.
Figure 10: Depiction of pre/post-change parameters for different types of heartbeats for the patient with ID=208.
Figure 9: Illustration of the first ten raw heartbeats vs. the resampled, partitioned MLII data from a two-channel ECG of the patient with ID=208.
#### 5.4.4 Results of Applying I.P.I.D. Quickest Detection and Isolation Algorithm to ECG Data
In Fig. 11 we have plotted the test statistics obtained from an ECG segment of ten heartbeats, beginning with the 80th heartbeat of the test set. Specifically, we plot the statistic in (88) for every class. The red statistic is for arrhythmia of type ‘F’ and the green statistic is for arrhythmia of type ‘V’. A spike in the values of these statistics indicates that an arrhythmia of the corresponding type has been detected. As seen in the figure, the algorithm is quite accurate in detecting arrhythmias. We remark that we reset the test statistic to zero each time the statistic crosses a threshold.
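The exact fault-isolation statistic is defined in (88) earlier in the paper; as a rough, hedged illustration of the reset-on-crossing behavior described above, the sketch below assumes the statistic reduces to a CuSum-type recursion of Gaussian log-likelihood ratios of a fault class against the normal class, using the per-slot parameters learned in Sec. 5.4.2. The names and this simplified functional form are ours, not the paper's.

```python
import numpy as np

def fault_statistic(x, mu_n, var_n, mu_f, var_f, threshold):
    """CuSum-style recursion W_t = max(0, W_{t-1} + LLR_t), reset on crossing.

    x: concatenated ECG samples; (mu_n, var_n): per-slot normal-class
    parameters; (mu_f, var_f): per-slot fault-class parameters, period T.
    """
    T = len(mu_n)
    W, path = 0.0, []
    for t, xt in enumerate(x):
        s = t % T  # i.p.i.d.: the per-slot parameters repeat with period T
        llr = (0.5 * np.log(var_n[s] / var_f[s])
               + (xt - mu_n[s]) ** 2 / (2 * var_n[s])
               - (xt - mu_f[s]) ** 2 / (2 * var_f[s]))
        W = max(0.0, W + llr)
        path.append(W)
        if W > threshold:  # alarm raised: isolate this fault type, then reset
            W = 0.0
    return np.array(path)
```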
Next, we consider the segment starting at heartbeat index 1373, represented in Fig. 12. As can be seen from the figure, there are both false alarms and incorrect fault isolations. Finally, we apply the algorithm to another segment, shown in Fig. 13, that begins with a type ‘V’ heartbeat at index 235 and ends with a type ‘V’ heartbeat at index 244. It had only one misclassification error, in which type ‘F’ was isolated instead of type ‘V’.
#### 5.4.5 Results of Applying I.P.I.D. Quickest Detection and Isolation Algorithm to Wavelet Data
Due to the limited amount of multi-class data in the MIT-BIH dataset, we use data simulated with wavelets to show the effectiveness of our algorithm for three-class detection and classification. One noise-resistant wavelet transformation used on ECGs is based on the Ricker wavelet, also called the Marr wavelet and known as the Mexican-hat wavelet in the Americas [15]. For simulation purposes, we used the Mexican-hat wavelet, which resembles the morphological features of ECG heartbeats, with known pre- and post-change distribution parameters. In mathematical terms, the Marr wavelet is given by
\[\psi(t)=\frac{2}{\sqrt[4]{9\pi}}\left(1-t^{2}\right)e^{-t^{2}/2}\]
Figure 11: Illustration of a sample ECG path with ten heartbeats, starting with a normal heartbeat at index 80, together with the calculated i.p.i.d. fault isolation test statistics.
Figure 12: The evolution of the i.p.i.d. fault isolation test statistic for a segment in which an arrhythmia occurred at index 1374, followed by four arrhythmias of type ‘V’, and which ended with a type ‘V’ arrhythmia.
Figure 13: Depiction of an ECG segment of ten heartbeats starting with an arrhythmia at index 235 in the test set.
For discrete-time simulations, we sampled a 100-sample wave centered at zero using SciPy's Ricker-wavelet generating function. Different functional variations of the Mexican-hat wavelet, such as a vertical shift, scaling, a time delay, or a superposition of two perturbations, produced three types of anomalies. In total, we had four classes, as illustrated in Fig. 14. The actual data were generated by adding zero-mean Gaussian noise with variance 0.01 to the wavelets and then cascading the noisy waveforms together to make an ECG-like waveform pattern. The results are plotted in Fig. 15 and Fig. 16. As seen in the figures, our algorithm can detect and identify faults quite accurately in real time.
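The data generation might look as follows (a sketch; the perturbation amplitudes are our assumptions, while the 100-sample length, the wavelet formula displayed above, and the noise variance of 0.01 come from the text):

```python
import numpy as np

def marr(t):
    """Marr (Mexican-hat) wavelet, as given in the displayed equation."""
    return 2.0 / (9 * np.pi) ** 0.25 * (1 - t ** 2) * np.exp(-t ** 2 / 2)

t = np.linspace(-5, 5, 100)           # 100-sample wave centered at zero
normal = marr(t)
anomalies = {
    "shifted": normal + 0.3,          # vertical shift (assumed amplitude)
    "scaled": 1.5 * normal,           # scaling (assumed factor)
    "delayed": marr(t - 1.0),         # time delay (assumed shift)
}
rng = np.random.default_rng(0)
# Add zero-mean Gaussian noise with variance 0.01 to every class.
noisy = {k: v + rng.normal(0.0, np.sqrt(0.01), v.shape)
         for k, v in {"normal": normal, **anomalies}.items()}
```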
## 6 Conclusions
We developed algorithms for the quickest change detection in i.p.i.d. processes when the post-change i.p.i.d. law is unknown. We introduced the concept of a least favorable i.p.i.d. law and showed that a multi-threshold Shiryaev algorithm designed using the least favorable i.p.i.d. law is robust optimal. We then proposed an algorithm for quickest change detection and fault isolation in the i.p.i.d. setting and showed that it is asymptotically optimal as the rates of false alarms and misclassifications go to zero. We also showed that a mixture-based test is asymptotically optimal for the multislot quickest change detection problem. Finally, we showed that the developed algorithms can be successfully used to detect anomalies in real traffic data and real ECG data.
## 7 Acknowledgements
The work of Yousef Oleyaeimotlagh, Taposh Banerjee and Ahmad Taha was partially supported by the National Science Foundation under Grant 1917164. The work of Yousef Oleyaeimotlagh, Taposh Banerjee, and Eugene John was also partially supported by the National Science Foundation under Grant 2041327.
|
2303.05191 | A two-dimensional magneto-optical trap of dysprosium atoms as a compact
source for efficient loading of a narrow-line three-dimensional
magneto-optical trap | We report on a scheme for loading dysprosium atoms into a narrow-line
three-dimensional magneto-optical trap (3D MOT). Our innovative approach
replaces the conventional Zeeman slower with a 2D MOT operating on the broad
421-nm line to create a high-flux beam of slow atoms. Even in the absence of a
push beam, we demonstrate efficient loading of the 3D MOT, which operates on
the narrower 626-nm intercombination line. Adding push beams working at either
421 nm or 626 nm, significant enhancement of the loading rate is achieved. We
reach the best performance, with an enhancement factor of $3.6$, using a push
beam red-detuned to the 626-nm line. With loading rates greater than $10^8$
atoms/s achieved at a moderate oven reservoir temperature of $800\,^{\circ}$C,
our method offers similar or greater performance than Zeeman-slower-based
systems. Our 2D-MOT-based approach constitutes a promising first step for
state-of-the-art quantum gas experiments with several advantages over the
Zeeman-slower-based setup and is readily adaptable to other open-shell
lanthanides. | Shuwei Jin, Jianshun Gao, Karthik Chandrashekara, Christian Gölzhäuser, Joschka Schöner, Lauriane Chomaz | 2023-03-09T11:44:00Z | http://arxiv.org/abs/2303.05191v2 | # A 2D MOT of dysprosium atoms as a compact source
###### Abstract
We report on a new scheme for loading dysprosium atoms into a three-dimensional magneto-optical trap (3D MOT) working on the narrow 626-nm intercombination line. Our innovative approach replaces the conventional Zeeman slower with a 2D MOT operating on the broad 421-nm line to create a high-flux beam of slow atoms. Even in the absence of a push beam, we demonstrate efficient loading of the 3D MOT. Adding push beams working at either 421 nm or 626 nm, significant enhancement of the loading rate is achieved. We reach the best performance, with an enhancement factor of 3.6, using a push beam red-detuned from the 626-nm line. With loading rates greater than \(10^{8}\,\)atoms/s achieved at a moderate oven reservoir temperature of \(800\,^{\circ}\)C, our method offers similar or greater performance than Zeeman-slower-based systems. Our 2D-MOT-based approach constitutes a promising first step for state-of-the-art quantum gas experiments with several advantages over the Zeeman-slower-based setup and is readily adaptable to other open-shell lanthanides.
## I Introduction
Over the last decade, ultracold gases of open-\(f\)-shell lanthanide atoms, such as Er and Dy, have become a platform of choice for studying novel quantum phenomena [1]. Their electronic ground state's structure provides these atoms with remarkable properties: the presence of a closed outer \(6s\) shell yields electronic transitions with properties similar to those of Yb or Sr. The open \(4f\) shell confers on these atoms an even wider variety of transitions, a large effective spin, and a large magnetic moment, amongst the largest of the periodic table. In particular, the latter feature makes it possible to explore the quantum effects of long-range and anisotropic interactions using ultracold gases of open-shell lanthanides [1].
The spectral complexity brought about by the open-shell character of the magnetic lanthanides had, however, held back the scientific effort to laser-cool such species for years. In 2006, a breakthrough experiment by J. J. McClelland and J. L. Hanssen [2] demonstrated the possibility of laser-cooling Er on the broad transition at 401 nm, despite the existence of numerous decay channels. This work paved the way for many experiments bringing open-shell lanthanides to ultracold temperatures [1; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14] and quantum degeneracy [15; 16; 17; 18; 19].
A particular advantage of the lanthanides is the existence of closed narrow transitions. These transitions allow for reaching ultra-low temperatures even within the Doppler cooling regime and have been exploited in laser cooling schemes. In particular, magneto-optical traps (MOTs) of open-shell lanthanides working on ultra-narrow lines have been loaded from broad-line three-dimensional (3D) MOTs [3; 4; 5], similarly to Sr and Yb [20; 21]. Among all the closed narrow transitions of lanthanides, the intercombination line stands out by its intermediate linewidth of a few hundreds of kHz, which allows for MOTs with capture velocities on the order of 10 m/s and Doppler temperatures in the few \(\mu\)K range. Following works on Yb [21], MOTs working on the intercombination line have been directly loaded from slow atomic beams, enabling a simplified cooling scheme [1; 7; 8; 10; 11].
Another particular feature of open-shell lanthanides is their high melting point of \(\sim 1000\,^{\circ}\)C. To produce the slow atomic beam required for 3D-MOT loading, all previous open-shell-lanthanide experiments use a similar scheme based on a high-temperature oven aligned with an axial Zeeman slower working on the atom's broadest cooling transition [1; 4]. Besides Zeeman slowers, two-dimensional (2D) MOTs have been proven to be convenient high-flux sources of slow atoms for many of the laser-cooled atomic species [22]. This includes species requiring an oven for a first vaporization stage, like Li [23], Na [24], Sr [25; 26], and Yb [27; 28; 29]. Among others, the advantages of 2D-MOT sources over Zeeman slowers are the high compactness of the setups and the absence of a direct view between the science chamber and the oven output. This is particularly beneficial for species like open-shell lanthanides, for which Zeeman-slower light needs to be reflected from a mirror inside the vacuum setup in order to avoid material deposition on the viewport. In this case, a 2D MOT allows for better optical access around the science chamber and a simplified integration of a glass cell. Yet, despite all these benefits, a 2D MOT had not been realized for open-shell lanthanides.
In this paper, we report on a first apparatus in which a 3D MOT of Dy atoms working on the intercombination line is loaded from a 2D MOT operating on the broadest cooling transition. We observe efficient loading of the 3D MOT with rates \(\phi_{\rm 3D}\gtrsim 10^{8}\) atoms/s and saturation atom numbers \(N_{\rm sat}\approx 3\times 10^{8}\) at a moderate oven temperature. The 3D MOT can be loaded even in the absence of an additional beam that pushes the atoms from the 2D to the 3D MOT (hereafter called the push beam). Using a push beam working on either the broad or the narrow transition, the loading of the 3D-MOT is significantly enhanced with a maximal enhancement factor of
3.6 achieved using a red-detuned push beam close to the intercombination line.
The paper is organized as follows. In Sec. II, we briefly review the relevant characteristics of Dy atoms and the main components of our experimental setup, and present the 2D-MOT and 3D-MOT schemes. In Sec. III, the measurement scheme used for our optimization and characterization of the 2D/3D-MOT source is described. In Sec.IV, we report on the optimization procedure applied to our 2D-MOT source and its achievement in the absence of a push beam. In Sec. V, we further introduce a push beam and describe the observed enhancement of the 3D-MOT loading, comparing broad- and narrow-line push configurations. Finally, in Sec. VI, the optimal 3D-MOT parameters for its loading in the absence and presence of push beams are investigated, compared, and comprehended in relation to the velocity features of the 2D-MOT atomic beam and of the 3D-MOT capture process.
## II Experimental setup and 2D/3D-MOT scheme
The Dy electronic levels and transitions of interest for this work are depicted in Fig. 1 (a) [30]. Dy's ground level, \([Xe]4f^{10}6s^{2}(^{5}I_{8})\), has a total angular-momentum quantum number \(J=8\). The main cooling transitions connect
Figure 1: **Experimental Setup.** (a) Energy level diagram of Dy in wavelength \(\lambda\) for the levels of total electronic angular-momentum quantum number \(J=8\) and \(J=9\). The state’s color indicates its parity (blue, odd; red, even). The used transitions are marked by the arrows. (b) Sketch of the experimental setup showing the main vacuum components and optical paths. inset, typical absorption image of a 3D MOT. (c) Section view of the setup at the oven and 2D-MOT chamber. The aperture set at the effusion cell output is illustrated as well as an instance of magnet block. (d) Sketch of the 3D-MOT capture process within our experimental geometry. The atoms in the jet entering with \(v\leq v_{\mathrm{cap}}\) can be stopped within the capture region of radius \(R_{\mathrm{cap}}\). This region is limited by the beam extent, and therefore the viewports’ clear aperture.
it to excited levels with one electron of the \(6s\) shell excited to the \(6p\) shell and having a total angular-momentum quantum number \(J^{\prime}=9\). The transition that connects to the \(6s6p\)-singlet level, \([Xe]4f^{10}(^{5}I_{8})6s6p(^{1}P_{1})(8,1)_{9}\), has the broadest linewidth of \(\Gamma_{421}=2\pi\times 32.2\) MHz and a wavelength of \(\lambda_{421}=2\pi/k_{421}=421.291\) nm (saturation intensity \(I_{\rm sat}^{421}=564\) W/m\({}^{2}\)). The intercombination transition that connects to the \(6s6p\)-triplet level, \([Xe]4f^{10}(^{5}I_{8})6s6p(^{3}P_{1})(8,1)_{9}\), has an intermediate linewidth of \(\Gamma_{626}=2\pi\times 136\) kHz and a wavelength of \(\lambda_{626}=2\pi/k_{626}=626.082\) nm (saturation intensity \(I_{\rm sat}^{626}=720\) mW/m\({}^{2}\)). These three levels have similar \(g\) factors of \(g_{J}=1.24\), \(g_{J^{\prime}}^{(421)}=1.22\), and \(g_{J^{\prime}}^{(626)}=1.29\) respectively. Finally, we note that Dy has a large atomic mass, ranging from \(m=156u\) to \(164u\) depending on the isotope, with \(u\) the atomic mass unit.
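As a sanity check (ours, not from the paper), the quoted saturation intensities follow from the standard two-level relation \(I_{\rm sat}=\pi hc\Gamma/3\lambda^{3}\):

```python
# Numerical check of the quoted transition parameters.
import numpy as np

h, c = 6.62607015e-34, 2.99792458e8  # SI units
for lam, Gamma in [(421.291e-9, 2 * np.pi * 32.2e6),
                   (626.082e-9, 2 * np.pi * 136e3)]:
    I_sat = np.pi * h * c * Gamma / (3 * lam ** 3)
    print(f"{lam * 1e9:.0f} nm: I_sat = {I_sat:.3g} W/m^2")
# -> 421 nm: ~564 W/m^2;  626 nm: ~0.72 W/m^2, i.e. ~720 mW/m^2
```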
The 421-nm and 626-nm lights used to address these transitions are generated by two commercial frequency-doubled amplified diode lasers (DLC TA-SHG PRO) from TOPTICA Photonics AG. The lasers are operated with 800 mW and 1.6 W output power, respectively. Both lasers are frequency-locked to a commercial ultra-low-expansion cavity from Stable Laser Systems using the offset-sideband Pound-Drever-Hall scheme [31]. The cavity has a 1.46 GHz free spectral range and a finesse determined by means of cavity-ring-down measurements to be 21170(60) at 842 nm and 20760(40) at 626 nm, corresponding respectively to the wavelengths used to lock the 421-nm and 626-nm lasers.
The vacuum apparatus for our experiment is depicted in Fig. 1 (b). It consists of one high-temperature dual-filament effusion cell (DFC-40-25-WK-2B) from CreaTec Fischer & Co. GmbH, and of two chambers - the 2D-MOT chamber and the science chamber - connected via a differential pumping section. Most vacuum components have been produced by SAES Rial Vacuum out of stainless steel (grade 316L or 316LN) or titanium. Pressures on the order of \(10^{-9}\) mbar and \(10^{-11}\) mbar are achieved in the 2D-MOT and science chambers respectively.
Solid Dy pieces are placed in the reservoir region of the effusion cell and are vaporized by heating this region up to 800 \({}^{\circ}\)C. The relatively low temperature of the reservoir region was chosen to spare material and allow for a long lifetime of the source. Figure 1 (c) details the setup design from the effusion-cell output to the 2D-MOT chamber. The oven is inserted into the 2D-MOT chamber, transversely to the 2D-MOT axis (\(x\)-axis). The distance between the oven's last aperture and the center of the 2D-MOT chamber (40.8 mm) is minimized with the aim of maximizing the incoming atomic flux in the 2D MOT. A Dy atomic vapor jet is formed by a custom-designed set of apertures located in the hot-lip region of the effusion cell, which is heated up to 1100 \({}^{\circ}\)C. A last cold aperture, connected to the oven water-cooling system, makes it possible to filter out the part of the atomic jet exiting the oven at angles larger than 7.5 \({}^{\circ}\).
The atoms exiting the oven into the 2D-MOT chamber experience the forces induced by the cooling beams. The cooling beams are made up of a single 421-nm laser beam propagating through the chamber in a bow-tie \(\sigma_{+}\sigma_{-}\) retro-reflected configuration. The bow-tie plane (\(yz\)-plane), together with the magnetic-field configuration (see below), defines the 2D-MOT axis as the orthogonal \(x\) axis (see Fig. 1 (b,c)). The laser beam has a power of 430 mW and waist of 16 mm. The laser beam path does not include any active optical components in order to ensure a high effective power on the atoms. Therefore, the beam frequency can only be altered through the laser locking point, and its power via mechanical means.
Eight stacks of permanent magnets are placed symmetrically on both sides of the 2D-MOT chamber, as partially illustrated in Fig. 1 (c), to provide the 2D-MOT magnetic field. The magnetic field is zero-valued along the 2D-MOT axis and oriented along the cooling beams on their propagation axes. It has a roughly uniform gradient in the \(yz\) plane over the chamber's central region whose magnitude, \(b_{\rm 2D}^{\prime}\), can be adjusted by changing the number of magnets. Hereafter, the values of \(b_{\rm 2D}^{\prime}\) correspond to the theoretical expectations for a perfect arrangement of magnet stacks and no other magnetic source. Increasing the number of magnets by one in each of the 8 blocks increases the gradient by approximately 4.4 G/cm. A gradient of up to 44.4 G/cm can be generated with our current magnet-holder design. To optimize the performance of the 2D MOT, we adjust the divergence of the cooling beam, its detuning from resonance \(\delta_{\rm 2D}\), as well as the number and positions of the magnets. We note that due to the permanent magnets implemented, adjusting the 2D-MOT magnetic field involves an important manual aspect.
The atoms trapped in the 2D MOT can travel along the \(x\) direction and reach the center of the science chamber. The distance between the two chamber centers along \(x\) was minimized during our design process. It equals 347 mm and includes a 55.7 mm-long differential pumping section, which is inserted into the 2D-MOT chamber. At the center of the science chamber, a 3D MOT is formed using three orthogonal retro-reflected 626 nm laser beams and a pair of magnetic coils. The coils are aligned along the \(z\)-axis and connected in anti-Helmholtz configuration to provide a magnetic gradient \(b_{\rm 3D}^{\prime}\) up to 4 G/cm in the current configuration. Hereafter, the values of \(b_{\rm 3D}^{\prime}\) correspond to the gradient in the \(xy\) plane extracted from numerical calculations using our coil geometry. In the present setup an additional pair of magnetic coils, aligned along the \(z\) axis and connected in Helmholtz configuration can be used to apply a tunable offset magnetic field along the \(z\) direction, aligned with gravity.
An important constraint related to the narrow-linewidth transition used for our 3D-MOT scheme is the low capture velocity of the 3D MOT, \(v_{\rm cap}\). The principle of the capturing process is illustrated in Fig. 1 (d): atoms of the 2D-MOT jet with an initial axial velocity \(v_{x}<v_{\rm cap}\) have to be stopped by the 3D-MOT radiation pressure within the 3D-MOT capture region of radius \(R_{\rm cap}\). An upper bound on \(v_{\rm cap}\) can be estimated in the limit of an
infinitely saturated transition, where the atoms scatter photons at a rate \(\Gamma_{626}/2\) independent of the light detuning. In this approximation, a constant radiation pressure force is exerted onto the atoms over the 3D-MOT capture region, yielding \(v_{\rm cap}\leq\sqrt{2R_{\rm cap}\hbar\Gamma_{626}k_{626,x}/m}\), with \(k_{626,x}\) the recoil momentum transferred by one photon along the \(x\) axis (\(\hbar\) is the reduced Planck constant), see e.g. [32]. With our geometry (see Fig. 1 (d)), \(k_{626,x}=k_{626}/\sqrt{2}\) and even with MOT-beam sizes equal to the viewports' clear aperture (yielding \(R_{\rm cap}=35/\sqrt{2}\,\)mm), we estimate the maximum capture velocity of our 626-nm 3D MOT to be \(v_{\rm cap}\lesssim 11\,\)m/s. To favor the loading of atoms into the 3D MOT, we thus use relatively large 3D-MOT beam waists of \(w_{\rm 3D}=12\,\)mm with a power of \(P_{\rm 3D}\approx 85\,\)mW per beam, so as to nearly fulfill the above estimates of the capture radius and velocity. Note that the atoms fall under gravity when traveling from the 2D-MOT to the 3D-MOT chamber, which might compromise their capture. With a horizontal velocity of \(11\,\)m/s, the falling distance is \(4.9\,\)mm and is smaller than \(R_{\rm cap}\). Furthermore, if the atoms emerge from the 2D-MOT with an \(11\,\)m/s velocity oriented \(15\,\)mrad upward, the fall is suppressed. The relevant parameters for the optimization of the 3D-MOT loading are the detuning of the 3D-MOT beams, \(\delta_{\rm 3D}\), and the magnetic-field gradient, \(b^{\prime}_{\rm 3D}\).
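The numbers quoted above can be reproduced with a few lines (a back-of-the-envelope sketch, not from the paper):

```python
# Capture-velocity bound v_cap <= sqrt(2 R_cap hbar Gamma_626 k_626x / m)
# for 164Dy, with the geometry of Fig. 1 (d).
import numpy as np

hbar, u = 1.054571817e-34, 1.66053907e-27
m = 164 * u                                  # mass of 164Dy (kg)
Gamma = 2 * np.pi * 136e3                    # 626-nm linewidth (rad/s)
k = 2 * np.pi / 626.082e-9                   # 626-nm wavenumber (1/m)
k_x = k / np.sqrt(2)                         # projection on the jet axis
R_cap = 35e-3 / np.sqrt(2)                   # capture radius from viewports
v_cap = np.sqrt(2 * R_cap * hbar * Gamma * k_x / m)
print(f"v_cap <= {v_cap:.1f} m/s")           # ~10.8 m/s, i.e. <~ 11 m/s
```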
In the following, we focus on the isotope \({}^{164}\)Dy, which has the highest natural abundance. To gain first insights into the expected performance of our 2D-MOT-based source and the relevant parameter ranges, we perform Monte-Carlo simulations of the 2D- and 3D-MOT capture processes along the lines of ref. [26], see Appendix A and ref. [33] for details. Using an oven temperature of \(T=1000\,^{\circ}\)C, our simulations indicate that a maximal 2D-MOT flux is achieved for \(b^{\prime}_{\rm 2D}=31\,\)G/cm and \(\delta_{\rm 2D}=-2.1\,\Gamma_{\rm 421}\) and is estimated to be \(\phi_{\rm 2D}\approx 3\times 10^{10}\) atoms/s for \({}^{164}\)Dy. The 3D-MOT loading process can also be included in the simulation. However, due to the large fraction of unloaded trajectories over the full process and our limited number of simulated trajectories (see Appendix A), the extracted 3D-MOT loading rates show large fluctuations. In the simulations, loading into the 3D MOT is detected when a push beam is added, and 3D-MOT loading rates on the order of \(9\times 10^{8}\) atoms/s were extracted for \({}^{164}\)Dy. Based on these simulation results and their parameters, we started our experimental search. In the experiment, we observed 3D-MOT loading even without an additional push beam. This experimental observation served as the starting point for our optimization process.
## III Measurement protocol
In the following, we characterize the performance of our 2D-MOT-based atom source through the achieved 3D-MOT loading rates and atom numbers. Our experimental procedure is as follows: In a first step we switch on the 2D-MOT, optional push, and 3D-MOT beams, the offset-field and gradient coils, with fixed parameter values for a time \(t_{\rm load}\). We then switch off the 2D-MOT and push beams, and hold the cloud for \(70\,\)ms without changing any other parameter values except the offset field [34]. Finally, we switch off the 3D-MOT light and gradient field, let the cloud fall and expand for a short time of flight \(t_{\rm TOF}\) (typically \(t_{\rm TOF}=5\,\)ms), and take an absorption image using horizontally linearly polarized 421-nm light propagating along the \(y\) axis. The absorption signal is recorded on a CMOS camera (Hamamatsu Orca Spark) via an imaging lens providing a magnification of \(0.438\). The imaging pulse lasts \(25\,\mu\)s. The imaging beam is operated on resonance, with an intensity below \(0.2\,\)mW/cm\({}^{2}\) and a waist of about \(10\,\)mm. An exemplary image is shown in the inset of Fig. 1 (b).
For the present characterization, we do not compress the gas by decreasing the light detuning and intensity after the 3D MOT loading stage. Therefore the cloud has a relatively high temperature of about \(500\,\mu\)K while the remnant field is estimated to be around \(0.4\,\)G. At these temperature and magnetic-field values, the atomic population is expected to be depolarized, and all Zeeman substates occupied. Assuming an equal population of all substates, the light-scattering cross-section, \(\sigma\), is identical for any light polarization and is given by renormalizing the bare cross-section \(\sigma_{0}=3\lambda_{421}^{2}/2\pi\) by the average of the Clebsch-Gordan coefficients for our \(J=8\to J^{\prime}=9\) dipole transition, over the initial Zeeman substates. This yields \(\sigma=0.3725\sigma_{0}\). Experimentally we have observed that the absorption signal does not depend on the imaging light polarization, which experimentally supports the use of \(\sigma=0.3725\sigma_{0}\).
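The quoted factor of 0.3725 can be checked numerically (our sketch, using SymPy's Clebsch-Gordan routine): averaging the squared Clebsch-Gordan coefficients over the \(2J+1=17\) ground substates at fixed polarization gives \((2J^{\prime}+1)/(3(2J+1))=19/51\approx 0.3725\).

```python
from sympy.physics.wigner import clebsch_gordan

J, Jp, q = 8, 9, 0                    # q = 0: pi polarization (any q works)
avg = sum(clebsch_gordan(J, 1, Jp, m, q, m + q) ** 2
          for m in range(-J, J + 1)) / (2 * J + 1)
print(avg, float(avg))                # 19/51, 0.3725...
```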
By integrating over a region of interest in the absorption images, we extract the atom number \(N\) in the 3D MOT at the end of the sequence. To optimize our setup, we mostly rely on a simple scheme that consists of measuring \(N\) for a characteristic \(t_{\rm load}=4\,\)s, hereafter referred to as \(N_{\rm 4s}\). To further characterize our system, we record loading curves of \(N\) versus \(t_{\rm load}\), which we fit to an exponential growth function, \(N(t)=N_{\rm sat}(1-e^{-t/\tau})\), to extract the 3D-MOT loading rate, \(\phi_{\rm 3D}=N_{\rm sat}/\tau\), and the saturation atom number \(N_{\rm sat}\).
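The fit described above might look as follows (an illustrative sketch; the data arrays are placeholders, and SciPy's curve_fit stands in for whatever fitting routine was actually used):

```python
import numpy as np
from scipy.optimize import curve_fit

def loading(t, N_sat, tau):
    """Exponential growth model N(t) = N_sat * (1 - exp(-t/tau))."""
    return N_sat * (1 - np.exp(-t / tau))

t_load = np.array([0.5, 1, 2, 4, 8, 16])            # s (illustrative)
N = loading(t_load, 2.8e8, 2.5) * (1 + 0.03 * np.random.randn(t_load.size))
(N_sat, tau), _ = curve_fit(loading, t_load, N, p0=(N.max(), 1.0))
print(f"N_sat = {N_sat:.3g}, tau = {tau:.2f} s, phi_3D = {N_sat/tau:.3g} /s")
```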
We note that the resonance frequencies for the 421-nm and 626-nm lights have been extracted from the 3D-MOT images. The 421-nm resonance frequency is directly extracted from the maximum in the absorption signal when scanning the frequency of the imaging light. The resonance frequency for the 626-nm light is determined by shining an additional 626-nm beam onto the atoms during the first milliseconds of their time of flight and monitoring the atom-number depletion versus the beam frequency via absorption imaging after time of flight.
## IV Optimization of the 2D MOT without push beam
In our setup, the optimization of the 2D-MOT parameters has been performed, following the first observation of a 3D-MOT loading, in the absence of a push beam. Here we report on the protocol followed in the optimization process and the performance achieved in this configuration. We start the optimization process by setting the 2D-MOT parameters close to the simulated optimum (see Sec. II), where a 3D-MOT loading is detected (\(b^{\prime}_{\rm 2D}=35.4\,{\rm G/cm}\), \(\delta_{\rm 2D}=-2\Gamma_{\rm 421}\)). We scan the 3D-MOT gradient and detuning to maximize the 3D-MOT loading, yielding \(b^{\prime}_{\rm 3D}=0.9\,{\rm G/cm}\) and \(\delta_{\rm 3D}=-55.8(3)\Gamma_{\rm 626}\). Using this 3D-MOT setting, we go on by changing the 2D-MOT parameters as follows: choosing a given number of magnets per stack, we start by setting the magnet stacks at their design positions. In this configuration, we optimize the divergence of the 2D-MOT cooling beam [35] and its detuning \(\delta_{\rm 2D}\) by maximizing \(N_{\rm 4s}\). We then adjust the position of each magnet stack and iterate on the beam parameters.
For each number of magnets per stack implemented, we identify the best values of the 2D-MOT parameters and we finally record the full 3D-MOT loading curve in the optimized configuration. Examples of such loading curves and their fits (see Sec. III) are shown in Fig. 2 (a) inset. Figure 2 (a) displays the dependence of the fitted loading rate \(\phi_{\rm 3D}\) with the magnetic gradient \(b^{\prime}_{\rm 2D}\). A maximum loading rate of \(\phi_{\rm 3D}=2.7(2)\times 10^{7}\) atoms/s and a saturation atom number of \(N_{\rm sat}=9.9(2)\times 10^{7}\) is found for the configuration with 6 magnets per stack (\(b^{\prime}_{\rm 2D}\approx 26.7\,{\rm G/cm}\)) and \(\delta_{\rm 2D}=-1.95(1)\Gamma_{\rm 421}\). We compare these experimental observations to the expectations from our Monte Carlo simulations, see Sec. II, and Appendix A for details. So as to obtain reliable results despite limited sampling, we do not include the 3D-MOT loading step and simulate only the 2D-MOT loading process. Figure 2 (a) shows the simulated 2D-MOT flux as a function of \(b^{\prime}_{\rm 2D}\) with an oven at \(T=800\,^{\circ}{\rm C}\) and at the value of \(\delta_{\rm 2D}\) for which this rate is maximal, like in the experimental procedure. We observe that the optimal \(b^{\prime}_{\rm 2D}\) and \(\delta_{\rm 2D}\) are comparable yet respectively slightly smaller and larger in magnitude for the experimental \(\phi_{\rm 3D}\) compared to the simulated \(\phi_{\rm 2D}\) (optimum at \(b^{\prime}_{\rm 2D}=32\,{\rm G/cm}\) and \(\delta_{\rm 2D}=-2.1\Gamma_{\rm 421}\)). Both in experiment and theory, we find that the optimum value of \(|\delta_{\rm 2D}|\) slightly varies over the explored \(b^{\prime}_{\rm 2D}\)-range, by about \(0.2\,\Gamma_{\rm 421}\) and \(0.7\,\Gamma_{\rm 421}\) respectively. The similarity between the experiment and simulation results is remarkable given the approximations made in the simulations, see Appendix A. The simulations do not necessarily provide the quantitative optima for different parameters, but are a suitable tool to identify the relevant range of parameters.
In the optimized magnetic configuration identified above, we further investigate the influence of the 2D-MOT detuning and record the full 3D-MOT loading curve for different values of \(\delta_{\rm 2D}\). The extracted loading rates are shown in Fig. 2 (b). Over the investigated \(\delta_{\rm 2D}\) range (\(1\Gamma_{\rm 421}\)), the variations of \(\phi_{\rm 3D}\) are rather symmetric around its maximum, and its value changes by less than 50%. In Fig. 2 (b), we also show the simulated 2D-MOT flux at \(b^{\prime}_{\rm 2D}=26.7\,{\rm G/cm}\). The variations of the simulated \(\phi_{\rm 2D}\) match those of the experimental \(\phi_{\rm 3D}\) well. Experimentally, the maximum is found at \(\delta_{\rm 2D}=-1.95(1)\Gamma_{\rm 421}\) with a loading rate of \(\phi_{\rm 3D}=2.7(1)\times 10^{7}\) atoms/s and
Figure 2: **2D-MOT optimization without push beam.**(a) Experimental 3D-MOT loading rate \(\phi_{\rm 3D}\) (circles, left axis) and simulated 2D-MOT flux \(\phi_{\rm 2D}\) (green line and dots, right axis) as a function of the 2D-MOT magnetic gradient \(b^{\prime}_{\rm 2D}\). The 3D-MOT parameters were fixed to \(b^{\prime}_{\rm 3D}=0.9\,{\rm G/cm}\), \(\delta_{\rm 3D}=-55.8(3)\Gamma_{\rm 626}\) and the 2D-MOT configurations (including \(\delta_{\rm 2D}\)) were individually optimized, see text. The inset shows the experimental 3D-MOT loading curves and exponential fits from which the rates are extracted, and the errorbars show the standard deviation of three experimental runs. (b) Experimental \(\phi_{\rm 3D}\) (blue circles) and simulated \(\phi_{\rm 2D}\) (green line and dots) as a function of the 2D-MOT detuning \(\delta_{\rm 2D}\) at the experimental optimal gradient, \(b^{\prime}_{\rm 2D}=26.7\,{\rm G/cm}\). The 3D-MOT parameters were fixed to \(b^{\prime}_{\rm 3D}=0.42\,{\rm G/cm}\), \(\delta_{\rm 3D}=-42.6(3)\Gamma_{\rm 626}\). In (a) and (b), the errorbars are the 63% confidence interval from the fit. The shaded area shows the standard deviation of three simulation runs.
a saturation atom number of \(N_{\rm sat}=12.2(1)\times 10^{7}\). We note that different 3D-MOT parameters were used compared to Fig. 2 (a), which explains the slightly different loading performances (see also Sec.VI).
## V Push-beam enhancement
To further increase the loading rate of our 3D MOT, we implement a push-beam scheme, as typically done in 2D-MOT setups, see e.g. [22; 23; 24; 25; 26; 27; 28; 29]: we additionally shine a beam propagating through the apparatus along \(+x\), see Fig. 1 (b). This beam has a frequency close to one of the transitions of Dy and, via radiation pressure, pushes the atoms from the 2D-MOT to the science chamber in a velocity-selective way. In the case of Dy, similarly to Yb [27; 28; 29], two convenient choices are possible: the push beam can be near-resonant either with the 626-nm intercombination line or with the broad 421-nm one. To determine which choice is more beneficial, we implemented both sequentially and compared their experimental achievements. We note that, in both cases, a beam-walking optimization yields a configuration in which the push beam goes through the differential pumping stage and is detected on the opposite side of the science chamber. The push-beam waist is \(w_{\rm push}\approx 0.8\) mm, and, in order to control the effect of the push beam, we vary its power \(P_{\rm push}\) and detuning \(\delta_{\rm push}\). In either case (421-nm or 626-nm), an optimization of the push-beam parameters clearly improves the 3D-MOT loading.
We perform a systematic experimental study of the 3D-MOT atom number \(N_{\rm 4s}\) while varying the push-beam parameters in a range where improved 3D-MOT loading is found. Figure 3 (a,b) shows the enhancement factor in \(N_{\rm 4s}\) between a configuration with and without a (a) 421-nm and (b) 626-nm push beam. Here we use \(b^{\prime}_{\rm 3D}=0.42\) G/cm, \(\delta_{\rm 3D}=-42.6(3)\Gamma_{626}\), and the reference without push beam is \(N_{\rm 4s}=7.5(3)\times 10^{7}\) (see also Fig. 4 (a),(b),(c)). In both Fig. 3 (a) and (b), the enhancement factor at fixed \(P_{\rm push}\) shows a maximum for varying \(\delta_{\rm push}\). We now describe the power dependence of this maximum. In either case (421-nm and 626-nm), the \(\delta_{\rm push}\) value at which the maximum is found increases in absolute value for increasing \(P_{\rm push}\). Yet, the power dependence shows different features in the two cases. In particular, the value of the maximum enhancement factor itself shows distinct variations with \(P_{\rm push}\). In the case of the 421-nm push beam, this maximum enhancement factor is nearly independent of \(P_{\rm push}\). In contrast, in the 626-nm case, it shows an overall optimum in power, corresponding to \(P_{\rm push}=18\) mW. Overall, the push beam on the narrow transition is found to outperform the one on the broad transition and yields the maximal observed enhancement factor of 3.6(2).
These experimental findings can be comprehended from a description of the push-beam effect, which is rooted in the radiation pressure force,
\[\mathbf{F}_{\rm push}(v_{x})=\hbar k\,\mathbf{e}_{x}\,\frac{\Gamma}{2}\,\frac{s_{0}}{1+s_{0}}\,\frac{1}{1+4\frac{(\delta_{\rm push}-kv_{x})^{2}}{(1+s_{0})\Gamma^{2}}}. \tag{1}\]
Here \(k\) is the push-beam wavenumber, \(s_{0}=2P_{\rm push}/(\pi w_{\rm push}^{2}I_{\rm sat})\) is the saturation parameter, \(\Gamma\) and \(I_{\rm sat}\) are the associated transition's linewidth and saturation intensity. The force (1) is directed along the push-beam propagation direction of unit vector \(\mathbf{e}_{x}\), and its magnitude depends on the atom's velocity along \(x\), \(v_{x}\), through the Doppler effect. More precisely it is a Lorentzian function of \(v_{x}\), of center \(\delta_{\rm push}/k\), of width \(\sqrt{1+s_{0}}\,\Gamma/k\), and of amplitude on resonance \(\hbar k\frac{\Gamma}{2}\frac{s_{0}}{(1+s_{0})}\). The effects of the push-beam parameters are as follows: Varying \(\delta_{\rm push}\) changes the velocity class with which the force is resonant. Changing \(P_{\rm push}\) alters \(s_{0}\) and has a twofold effect: (i) it scales the resonant amplitude of the force up by the factor \(\frac{s_{0}}{1+s_{0}}\) and (ii) it broadens the range of velocities addressed by the force by \(\sqrt{1+s_{0}}\). The effects (i) and (ii) dominate at low and high saturation respectively. Note that Eq. (1) assumes no effect of the magnetic field, which theoretically cancels along the propagation axis of the push beam.
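For concreteness, Eq. (1) can be evaluated numerically. The sketch below (ours) uses the transition parameters of Sec. II, the 0.8-mm waist, and, as example operating points, the optimal push-beam settings reported later in the paper (0.26 mW at \(-8.3\Gamma_{421}\) and 18 mW at \(-82.3\Gamma_{626}\)):

```python
import numpy as np

hbar = 1.054571817e-34
w = 0.8e-3                                # push-beam waist (m)

def push_force(v_x, P, delta, lam, Gamma, I_sat):
    """Velocity-dependent radiation-pressure force of Eq. (1)."""
    k = 2 * np.pi / lam
    s0 = 2 * P / (np.pi * w**2 * I_sat)   # saturation parameter
    lorentz = 1 / (1 + 4 * (delta - k * v_x)**2 / ((1 + s0) * Gamma**2))
    return hbar * k * (Gamma / 2) * (s0 / (1 + s0)) * lorentz

v = np.linspace(-150, 50, 2000)           # m/s
G421, G626 = 2 * np.pi * 32.2e6, 2 * np.pi * 136e3
F421 = push_force(v, 0.26e-3, -8.3 * G421, 421.291e-9, G421, 564.0)
F626 = push_force(v, 18e-3, -82.3 * G626, 626.082e-9, G626, 0.720)
# Resonant velocity classes delta/k: roughly -113 m/s (421 nm), -7 m/s (626 nm).
```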
The drastically different linewidths of the 421-nm and 626-nm transition result in a different operation of the corresponding push beams. The 421-nm push beam may generate much larger forces than the 626-nm beam. In particular, a force magnitude corresponding to the saturated 626-nm case is generated with a 421-nm beam of saturation parameter as low as \(s_{0}=0.0021\). Furthermore, the bare velocity widths of the force \(\Gamma/k\) are also different, equal to \(0.085\) m/s and \(13.6\) m/s for the 626-nm and 421-nm transition respectively. This is to be compared with the expected spread of velocity distribution of the 2D MOT of about \(4\) m/s defined by its half width at half maximum.
These different linewidths, together with the powers used in experiment [36], result in broadly different force profiles in the optimal push configurations. Figure 3 (c) sketches such force profiles as expected from Eq. (1). In both the 421-nm and 626-nm cases, improved loading conditions are achieved for red detunings, \(\delta_{\rm push}<0\), which corresponds to a resonant pushing of atoms traveling against the push beam propagation (\(v_{x}<0\)). Yet, the resonant velocity classes are vastly different with \(v_{x}\sim-10\) m/s (\(v_{x}\sim-100\) m/s) for the 626-nm (421-nm) case. Therefore, in the 421-nm case, mostly the detuned "tail" of the radiative force is involved in pushing the atoms. On the contrary, the much weaker force generated by the 626-nm light is used on and close to its resonance.
The 421-nm and 626-nm push beams also operate at considerably different saturation parameters of \(s_{0}\leq 0.5\) and \(3\times 10^{3}<s_{0}<60\times 10^{3}\), respectively. Following the discussion on the effects (i)-(ii), a change in power thus affects the force profiles of the 421-nm and 626-nm lights differently. As illustrated in Fig. 3 (c) through two typical
situations of different \(P_{\text{push}}\), an increase in power for the 421-nm beam mostly yields a scaling up of the resonant force magnitude. In contrast, the 626-nm push operates at roughly constant (saturated) resonant force magnitude and the change in power mainly yields a broadening of the resonance.
Relevant to the push-beam effect on our 2D-MOT are the force profiles in the small \(|v_{x}|\) range encompassing the 2D-MOT velocity distribution, see Fig. 3 (c) inset. The power dependences of the force in this range explain the behaviors of the maximal enhancement observed in Fig. 3 (a) and (b). The overall shift to larger \(|\delta_{\text{push}}|\) values for increasing \(P_{\text{push}}\) is justified as follows: increasing \(P_{\text{push}}\) at fixed \(\delta_{\text{push}}\) yields a detrimental effect of pushing the atoms with positive \(v_{x}\) too much (via power scaling or broadening of the force) such that the final \(v_{x}\) may exceed the 3D-MOT capture velocity \(v_{\text{cap}}\). Instead, shifting \(\delta_{\text{push}}\) to larger negative values prevents this effect and additionally yields the benefit of pushing atoms with larger negative \(v_{x}\) back towards the 3D MOT.
This shift has however different impacts in the 421-nm and 626-nm cases due to their different operation regimes. In the 421-nm case, the small \(|v_{x}|\) range corresponds to a far-detuned regime in which the amplitude scaling is well compensated by a shift of the resonance. Therefore, when increasing \(P_{\text{push}}\), \(\delta_{\text{push}}\) is adjusted such that the force profile in this \(|v_{x}|\) range is kept basically unchanged. The push-beam effect is thus nearly power-independent and so is the enhancement factor. In the 626-nm case, instead, the force profile in the optimal-enhancement condition changes with \(P_{\text{push}}\) in the small \(|v_{x}|\) range, affecting the push efficiency. At low power, the range of velocity classes addressed by the force is small compared to the velocity distribution itself, yielding a low push efficiency. As described above, increasing \(P_{\text{push}}\), the 626-nm force broadens with a saturated resonant amplitude. Therefore if one tries to keep a constant push effect (i.e. force profile) in the small negative \(v_{x}\) range while increasing \(P_{\text{push}}\), the saturation and power broadening effects imply an increasing detrimental push effect on the positive \(v_{x}\) range. Hence, at too large power, either the push effect on \(v_{x}>0\) or on \(v_{x}<0\) is not optimal, and an optimum efficiency is expected at an intermediate power. At the optimum \(P_{\text{push}}=18\,\)mW, the 626-nm push force has an expected velocity width of about 7 m/s, comparable to the expected width of the 2D-MOT distribution. The fact that the 626-nm force profile can be adapted to strongly push the \(v_{x}\lesssim 0\) class with a reduced impact on the \(v_{x}\sim 10\,\)m/s one may explain the observed better performance of the 626-nm push beam.
For the case of the 626-nm push beam, we additionally observe a rather unexpected behavior in Fig. 3 (b): \(N_{\text{4s}}\) shows multiple local maxima when varying \(\delta_{\text{push}}\) at fixed \(P_{\text{push}}\). This effect may be explained by the possible presence of a remnant magnetic field along the push-beam propagation axis. Such a magnetic field makes the vectorial nature of the push-beam transition (\(J=8\to J^{\prime}=9\)) become relevant and modifies the simple picture of Eq. (1) and Fig. 3 (c). In particular, this yields different resonant conditions for the push-beam light components driving the \(\sigma_{+}\), \(\pi\), and \(\sigma_{-}\) transitions respectively. Therefore, the total force profile, given by the sum of these three contributions, would then present three distinct peaks with different resonant velocities, and amplitudes and widths set by the light-polarization composition. The relevance of this effect is supported by an observed change in the relative strengths of the maxima in \(N_{\text{4s}}\) when changing the push-beam polarization. We note that the best performance is found with a push beam of horizontal linear polarization.
Figure 3: **Push-beam Effect**. Enhancement factor in the atom number in 3D-MOT at \(t_{\text{load}}=4s\), \(N_{\text{4s}}\), obtained by the addition of a 421-nm (a) and 626-nm (b) push beam, as a function of the push-beam power \(P_{\text{push}}\) and detuning \(\delta_{\text{push}}\). The 3D-MOT parameters were set to \(b^{\prime}_{\text{3D}}=0.42\,\)G/cm, \(\delta_{\text{3D}}=-42.6(3)\Gamma_{626}\). The reference without push beam is \(N_{\text{4s}}=7.5(3)\times 10^{7}\). \(P_{\text{push}}\) is tuned through the amplitude of the driving signal of an acousto-optic modulator. The \(P_{\text{push}}\) values are extracted via a rescaling of the measured (for (a), frequency-dependent) power in front of the entrance viewport at fixed diffraction efficiency. (c) Velocity-dependent force (1) exerted by the 421-nm (blue lines) and 626-nm (red lines) push beams for the characteristic parameters highlighted by triangle (solid lines) and star (dashed lines) symbols in (a) and (b), respectively. The forces are calculated assuming no magnetic field. The inset shows a zoom-in in the relevant low-velocity region. The grey-shaded area illustrates a Gaussian approximation of the expected velocity distribution of the 2D-MOT atomic beam.
## VI 3D-MOT parameters and capture with and without push beam
In a final study, we investigate the optimal 3D-MOT settings and their variations without, with a 421-nm, and with a 626-nm push beam. We set the push-beam parameters to the values providing the optimal enhancement factors in Fig. 3. Figures 4 (a), (b), and (c) show the atom number \(N_{\text{4s}}\), while scanning the 3D-MOT detuning \(\delta_{\text{3D}}\) and magnetic gradient \(b^{\prime}_{\text{3D}}\) in these three push-beam configurations. The three plots differ in their overall magnitude but show qualitatively similar variations with \(\delta_{\text{3D}}\) and \(b^{\prime}_{\text{3D}}\). In all three cases, for each \(b^{\prime}_{\text{3D}}\) value, \(N_{\text{4s}}\) shows a maximum (noted \(N^{*}\)) when varying \(\delta_{\text{3D}}\). The optimum is found at a negative detuning, \(\delta^{*}_{\text{3D}}\), whose magnitude \(|\delta^{*}_{\text{3D}}|\) increases with \(b^{\prime}_{\text{3D}}\). An overall optimum in \(N_{\text{4s}}\) is found at a finite value of the \((b^{\prime}_{\text{3D}},\delta_{\text{3D}})\) set.
In the following, we compare the variations of \(\delta^{*}_{\text{3D}}\) and \(N^{*}\) with \(b^{\prime}_{\text{3D}}\) quantitatively in the three aforementioned configurations in order to understand the capture process. We extract \(\delta^{*}_{\text{3D}}\) from the three sets of experimental data of Fig. 4 (a), (b), and (c) and display them as a function of \(b^{\prime}_{\text{3D}}\) in Fig. 4 (d). We observe that the variations of \(\delta^{*}_{\text{3D}}\) versus \(b^{\prime}_{\text{3D}}\) are similar in the presence of 421-nm and 626-nm push beams but slightly differ from the case without push beam. Furthermore, in all three cases, \(\delta^{*}_{\text{3D}}\) appears to obey a roughly linear dependence on \(b^{\prime}_{\text{3D}}\): \(\delta^{*}_{\text{3D}}=\mu_{R}b^{\prime}_{\text{3D}}+\delta_{0}\). Based on a simple theory of the 3D-MOT-capture process inspired from ref. [32] that we develop in Appendix B (see also Fig. 1 (d)), we can interpret the linear-dependence parameters in terms of effective capture parameters (see Eq. (14)). The offset detuning relates to an effective capture velocity in the limit of small gradients (\(b^{\prime}\to 0\)), \(v_{0}\), via \(\delta_{0}=-k_{\text{626}}v_{0}/\sqrt{8}\), matching the relation expected in optical molasses. The slope relates to an effective capture radius at large gradients (\(b^{\prime}\rightarrow\infty\)): \(\mu_{R}\propto R_{\infty}\). A linear fit yields \(v_{0}=(7.8(1),7.5(1),5.6(2))\,\mathrm{m/s}\) and \(R_{\infty}=(17(1),19.1(7),32(2))\,\mathrm{mm}\) for (with 626-nm push, with 421-nm push, without push). The increase in \(v_{0}\) and decrease of \(R_{\infty}\) found when adding a push beam reveal a change in the velocity distribution of the atomic beam emerging from the 2D-MOT. It evidences the boost in velocities and the decrease in spreading provided by the push beams.
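The linear analysis can be reproduced along the following lines; the data arrays below are placeholders standing in for the extracted \(\delta^{*}_{\text{3D}}(b^{\prime}_{\text{3D}})\) values of Fig. 4 (d), and the \(\mu_{R}\to R_{\infty}\) conversion is omitted since it requires the proportionality constant of Appendix B.

```python
import numpy as np

Gamma_626 = 2 * np.pi * 136e3            # rad/s
k_626 = 2 * np.pi / 626.082e-9           # 1/m

# Placeholder data: optimal detuning (units of Gamma_626) vs gradient (G/cm).
b = np.array([0.2, 0.3, 0.42, 0.6, 0.8])
delta_star = np.array([-37.4, -39.9, -42.9, -47.4, -52.4])

mu_R, delta_0 = np.polyfit(b, delta_star * Gamma_626, 1)
v_0 = -np.sqrt(8) * delta_0 / k_626      # from delta_0 = -k_626 v_0 / sqrt(8)
print(f"v_0 = {v_0:.1f} m/s")            # ~7.8 m/s with these placeholder data
```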
Our theory of the 3D-MOT capture process implies
Figure 4: **3D-MOT with and without push beam**. (a-c) Experimental 3D-MOT atom number, \(N_{\text{4s}}\), as a function of the 3D-MOT detuning \(\delta_{\text{3D}}\) and gradient \(b^{\prime}_{\text{3D}}\), in (a) the absence, or the presence of a (b) 421-nm and (c) 626-nm push beam. The other parameters are set to the optimal values identified in Figs. 2 and 3. In particular, we set (b) \(\delta_{\text{push}}=-8.3\Gamma_{\text{421}}\), \(P_{\text{push}}=0.26\,\mathrm{m}\mathrm{W}\) and (c) \(\delta_{\text{push}}=-82.3(3)\Gamma_{\text{626}}\), \(P_{\text{push}}=12\,\mathrm{m}\mathrm{W}\). (d) 3D-MOT detuning, \(\delta^{*}_{\text{3D}}\), at which the maximal \(N_{\text{4s}}\) is achieved at fixed \(b^{\prime}_{\text{3D}}\). The values are extracted from the scans of (a, black square), (b, blue triangle), (c, red circle) and shown as a function of \(b^{\prime}_{\text{3D}}\). (e) Optimal values of \(N_{\text{4s}}\), \(N^{*}\), as a function of \(b^{\prime}_{\text{3D}}\). Same code as (d). (f) 3D-MOT loading rate as a function of the 3D-MOT beam power, \(P_{\text{3D}}\), with a 626-nm push beam of \(\delta_{\text{push}}=-82.3(3)\Gamma_{\text{626}}\), \(P_{\text{push}}=18\mathrm{m}\mathrm{W}\), and \(b^{\prime}_{\text{3D}}=0.42\,\mathrm{G}\mathrm{/}\mathrm{c}\mathrm{m}\), \(\delta_{\text{3D}}=-42.6(3)\Gamma_{\text{626}}\). The inset shows the corresponding full 3D-MOT loading curve (circles) and their exponential fits (lines). The different colors correspond to the different \(P_{\text{3D}}\) values. In (d-f), the error bars show the standard deviation of three experimental repetitions.
variations of the capture radius and velocity with \(b^{\prime}_{\text{3D}}\). In particular, \(v_{\text{cap}}\) increases linearly with \(b^{\prime}_{\text{3D}}\). Ultimately, this limits the validity of our description to an intermediate gradient range as the capture parameters are bounded by the geometry and maximal radiative force, see Sec. II and Appendix B. These variations also enable us to comprehend the changes with \(b^{\prime}_{\text{3D}}\) in the loading efficiency and therefore in \(N^{*}\). Let us first describe the experimental observations. Fig. 4 (e) depicts the variations of \(N^{*}\) with \(b^{\prime}_{\text{3D}}\), as extracted from Fig. 4 (a), (b), and (c). We observe that \(N^{*}\) varies with \(b^{\prime}_{\text{3D}}\) following a similar trend between the three configurations: Starting from small \(b^{\prime}_{\text{3D}}\), \(N^{*}\) sharply increases and then slowly decreases when increasing \(b^{\prime}_{\text{3D}}\). A maximum of \(N^{*}\) is found at an intermediate \(b^{\prime}_{\text{3D}}\) whose value depends on the push-beam configuration. The optimum \(b^{\prime}_{\text{3D}}\) is the lowest when no push beam is used and takes the value \(b^{\prime}_{\text{3D}}=0.31\,\text{G}/\text{cm}\). The optimum is shifted to larger values when using a push beam, namely \(b^{\prime}_{\text{3D}}=0.42\,\text{G}/\text{cm}\) (\(0.51\,\text{G}/\text{cm}\)) with the 626-nm (421-nm) push beam. We also observe a steeper decrease of \(N^{*}\) at large \(b^{\prime}_{\text{3D}}\) with a push beam compared to the case without push beam. Let us now understand this behavior based on the capture-process theory. The increase of \(N^{*}\) at small \(b^{\prime}_{\text{3D}}\) is justified by the corresponding increase of \(v_{\text{cap}}\), enabling the capture of a larger fraction of the atomic distribution. The gain earned by increasing \(v_{\text{cap}}\) saturates once it encompasses the full velocity distribution of the atomic beam or reaches the upper bound of \(v_{\text{cap}}\lesssim 11\,\text{m}/\text{s}\) imposed by our 3D-MOT configuration as introduced in Sec. II. For larger values of \(b^{\prime}_{\text{3D}}\), a decrease of the capture efficiency is foreseen since the radiation-pressure profile is no longer optimized for the low velocities of the atomic jet. The weaker dependence of \(N^{*}\) on \(b^{\prime}_{\text{3D}}\) at large \(b^{\prime}_{\text{3D}}\) in the absence of a push beam likely relates to the different velocity distributions in the atomic jet between the configurations. Using the relation of Appendix B, we can estimate the capture velocities for the optimal \(b^{\prime}_{\text{3D}}\) to \(v_{\text{cap}}=(10.1(2),10.7(2),8.8(3))\,\text{m}/\text{s}\) for (626-nm push, 421-nm push, without push). The different optimal \(b^{\prime}_{\text{3D}}\) can therefore be interpreted as a requirement to increase the capture velocities when introducing the push beams.
Overall, the largest \(N^{*}\) is found with the 626-nm push beam. The optimal values of the parameters determined within our optimization process are reported in Table 1. Finally, we study the full loading curve of the 3D MOT in this identified optimal configuration. Additionally, we investigate the effect of changing the power of the 3D-MOT beams \(P_{\text{3D}}\). The loading curves and the extracted loading rates are displayed in Fig. 4 (f). At the previously set value of \(P_{\text{3D}}\approx 85\,\text{mW}\), we measure a loading rate of \(\phi_{\text{3D}}=1.10(2)\times 10^{8}\,\text{atoms}/\text{s}\) and a saturation number of \(N_{\text{sat}}=2.80(2)\times 10^{8}\). By increasing \(P_{\text{3D}}\), both the saturation atom numbers and loading rates first sharply increase and then continue increasing at a slower rate. With the maximal power accessible in the present setup of \(\approx 130\,\text{mW}\) per beam, we find a maximal loading rate of \(\phi_{\text{3D}}=1.28(4)\times 10^{8}\) atoms/s and a saturation atom number of \(N_{\text{sat}}=3.76(9)\times 10^{8}\). We note that a further gain in the loading rate and saturation number still seems possible by increasing the power even more. In this work, we have simply relied on the power broadening of the 3D-MOT radiative force to increase the capture of our narrow-line 3D MOT. Additional schemes, such as spectral broadening [8; 28; 29; 10] or angled slowing beams [37; 38], could be considered for potential further performance enhancement. We also note that increasing the oven temperature drastically increases the loading rate. As stated earlier, we decided to proceed with the optimization of our setup at a relatively low oven reservoir temperature to enhance the lifetime of our source.
## VII Conclusions
We have demonstrated the successful operation of a Dy intercombination line 3D MOT loaded from a 2D MOT working on the broad 421-nm transition. The addition of a push beam operating close to the intercombination line allows for the best loading performance, with a more-than-three-fold increase in the loading rate. We observe loading rates of \(\phi_{\text{3D}}>1\times 10^{8}\) atoms/s and a saturation number of \(N_{\text{sat}}\approx 3\times 10^{8}\). This is similar or better compared to other cold-atom Dy setups based on Zeeman slowers despite the lower oven temperatures at which we operate, see e.g. [4; 5; 8; 10; 12]. We note that the loading of the intercombination-line 3D MOT is a promising first step for quantum gas experiments. In particular, following the loading, a compression step can be applied in which the power and the detuning absolute value of the 3D-MOT beams are decreased [8; 10; 12]. By applying such a step to our samples, temperatures of \(15\,\mu\text{K}\) have been achieved with negligible atom loss.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Parameter & Value & Unit \\ \hline \multirow{4}{*}{2D MOT} & \(\lambda_{\text{2D}}\) & 421 & nm \\ & \(P_{\text{2D}}\) & 430 & mW \\ & \(w_{\text{2D}}\) & 16 & mm \\ & \(b^{\prime}_{\text{2D}}\) & 26.7 & G/cm \\ & \(\delta_{\text{2D}}\) & -1.95 & \(\Gamma_{\text{421}}\) \\ \hline \multirow{4}{*}{Push} & \(\lambda_{\text{push}}\) & 626 & nm \\ & \(P_{\text{push}}\) & 18 & mW \\ & \(w_{\text{push}}\) & 0.8 & mm \\ & \(\delta_{\text{push}}\) & -82.3 & \(\Gamma_{\text{626}}\) \\ \hline \multirow{4}{*}{3D MOT (loading)} & \(\lambda_{\text{3D}}\) & 626 & nm \\ & \(P_{\text{3D}}\) & 85 & mW \\ \cline{1-1} & \(w_{\text{3D}}\) & 12 & mm \\ \cline{1-1} & \(b^{\prime}_{\text{3D}}\) & 0.42 & G/cm \\ \cline{1-1} & \(\delta_{\text{3D}}\) & -42.6 & \(\Gamma_{\text{626}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Values of the different relevant parameters for optimal operation of our 2D-/3D-MOT scheme. The waists \(w_{\text{2D}}\), \(w_{\text{push}}\), \(w_{\text{3D}}\), and the powers \(P_{\text{2D}}\), \(P_{\text{3D}}\) were simply set to their values and not optimized. Other values are the results of our optimization process described in this paper.
We note that our setup has several advantages compared to Zeeman-slower-based ones. These include its compactness (our system is less than 1 m long), its lower energy consumption thanks to the use of permanent magnets and a lower oven temperature, as well as the absence of a direct view between the oven and the center of the science chamber, which reduces collisions with hot atoms without the need for a mechanical shutter and allows for full optical switching of the atomic beam compatible with fast-cycling experiments. In future developments, a glass cell could directly be substituted for the metallic chamber and would therefore allow for even greater optical access and faster magnetic-field control without, e.g., the need for additional transport of the atomic cloud.
Another interesting advantage of intercombination-line MOTs of heavy atoms is the possibility to operate them with only five beams, removing the beam coming from the top [12]. In our setup, we have tested this configuration and a 3D MOT could be loaded, but we have not yet optimized this setting. We also realized and observed 2D MOTs of other Dy isotopes, namely \({}^{162}\)Dy and \({}^{163}\)Dy, and successfully loaded a 3D MOT of \({}^{163}\)Dy. \({}^{161}\)Dy could not be loaded on a first attempt; we suspect that a repumping frequency needs to be added to the 2D-MOT light.
Our novel scheme thus constitutes a very favorable platform on which to build more complex experiments. To cite only two examples, based on such a 3D MOT, one could proceed with (i) loading a dipole trap and performing evaporative cooling to quantum degeneracy, or (ii) loading single atoms in arrays of optical tweezers. Both platforms are highly promising candidates for quantum simulation or quantum computation purposes [39; 40; 41; 42]. Finally, we also note that this scheme should be readily adaptable to other open-shell lanthanide species such as Er.
_We note that another setup based on a similar 421-nm 2D-MOT loading a 626-nm 3D-MOT of Dy atoms has been developed in the group of I. Ferrier-Barbut (Bloch et al., in prep). We have greatly benefited from exchanges between our groups._
###### Acknowledgements.
First and foremost, we thank the Heidelberg quantum-gas community for their constant technical and scientific support along with the numerous fruitful discussions. This includes S. Jochim, M. Weidemuller, M. Oberthaler, F. Jendrzejewski, and their groups with a special mention to the HQA for sharing many of their design thoughts. We thank I. Ferrier-Barbut and his group for open exchanges and discussions especially during the design process. We thank M. Barbiero for providing his simulation code. We thank J. Wilson and J. Thompson for enlightening discussions based on their Yb 2D/3D-MOT setup. We further thank A. Patscheider, D. Petter, G. Natale, P. Ilzhofer, E. Kirilov, J. Beugnon, J. Dalibard, and C. Weitenberg for numerous early-stage discussions and technical advice. We thank T. Yefsah and M. Rabinovic for sharing technical designs. We thank L. Hoennen, P. Holzenkamp, V. Salazar Silva, B. Bader for their technical assistance. This work is funded by the European Research Council (ERC) under the European Union's Horizon Europe research and innovation program under grant number 101040688 (project 2DDip), and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through project-ID 273811115 (SFB1225 ISOQUANT) and under Germany's Excellence Strategy EXC2181/1-390900948 (the Heidelberg Excellence Cluster STRUCTURES). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. J. G. acknowledges support from the International Max Planck Research School for Quantum Dynamics (IMPRS-QD). \(\dagger\) these authors contributed equally. \(\star\) Correspondence and requests for materials should be addressed to [email protected].
|
2306.00637 | Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image
Diffusion Models | We introduce W\"urstchen, a novel architecture for text-to-image synthesis
that combines competitive performance with unprecedented cost-effectiveness for
large-scale text-to-image diffusion models. A key contribution of our work is
to develop a latent diffusion technique in which we learn a detailed but
extremely compact semantic image representation used to guide the diffusion
process. This highly compressed representation of an image provides much more
detailed guidance compared to latent representations of language and this
significantly reduces the computational requirements to achieve
state-of-the-art results. Our approach also improves the quality of
text-conditioned image generation based on our user preference study. The
training requirements of our approach consists of 24,602 A100-GPU hours -
compared to Stable Diffusion 2.1's 200,000 GPU hours. Our approach also
requires less training data to achieve these results. Furthermore, our compact
latent representations allows us to perform inference over twice as fast,
slashing the usual costs and carbon footprint of a state-of-the-art (SOTA)
diffusion model significantly, without compromising the end performance. In a
broader comparison against SOTA models our approach is substantially more
efficient and compares favorably in terms of image quality. We believe that
this work motivates more emphasis on the prioritization of both performance and
computational accessibility. | Pablo Pernias, Dominic Rampas, Mats L. Richter, Christopher J. Pal, Marc Aubreville | 2023-06-01T13:00:53Z | http://arxiv.org/abs/2306.00637v2 | # Würstchen: Efficient Pretraining of Text-to-Image Models
###### Abstract
We introduce Würstchen, a novel technique for text-to-image synthesis that unites competitive performance with unprecedented cost-effectiveness and ease of training on constrained hardware. Building on recent advancements in machine learning, our approach, which utilizes latent diffusion strategies at strong latent image compression rates, significantly reduces the computational burden typically associated with state-of-the-art models, while preserving, if not enhancing, the quality of generated images. Würstchen achieves notable speed improvements at inference time, thereby rendering real-time applications more viable. One of the key advantages of our method lies in its modest training requirements of only 9,200 GPU hours, slashing the usual costs significantly without compromising the end performance. In a comparison against the state-of-the-art, we found the approach to be strongly competitive. This paper opens the door to a new line of research that prioritizes both performance and computational accessibility, hence democratizing the use of sophisticated AI technologies. Through Würstchen, we demonstrate a compelling stride forward in the realm of text-to-image synthesis, offering an innovative path to explore in future research.
## 1 Introduction
State-of-the-art diffusion models (Ho et al., 2020; Saharia et al., 2022; Ramesh et al., 2022) have advanced the field of image synthesis considerably, achieving remarkable results that closely approximate photorealism. However, these foundation models, while impressive in their capabilities, carry a significant drawback: they are computationally demanding. For instance, Stable Diffusion 1.4, one of the most notable models in the field, used 150,000 GPU hours for training. Against this backdrop, we propose a novel approach, named "Würstchen", which drastically reduces the computational demands while maintaining competitive performance. Our method is based on a novel architecture that elegantly distributes the task of image synthesis across three distinct stages, thereby making the learning process more manageable and computationally efficient.
The approach uses three distinct stages for image synthesis (see Figure 2): initially, a text-conditional latent diffusion model is used to create a latent image of reduced resolution (Stage C), which is then decoded by another model into a vector-quantized latent space of higher resolution (Stage B). Finally, the quantized latent image is decoded to yield the full-resolution output image (Stage A).
Training is performed in reverse order to the inference (Figure 3): The initial training is carried out on Stage A and employs a Vector-quantized Generative Adversarial Network (VQGAN) to create a discretized latent space. As shown in earlier work, this compact representation facilitates learning and inference speed (Rombach et al., 2022; Chang et al., 2023; Rampas et al., 2023). In the next phase, Stage B is trained, which acts as a further compression stage, employing an encoder that projects images into an even more condensed space and a decoder that tries to reconstruct VQGAN latents from the encoded image. We employ a token predictor based on the Paella (Rampas et al., 2023) model for this task, conditioned on the representation of the encoded image, as it comes with the benefits of a small required number of sampling steps (which is especially beneficial to computational efficiency due to the comparatively highly resolved latent space) (Rampas et al., 2023), simple implementation, and straightforward training. Finally, for the construction of Stage C, the aforementioned image encoder is employed to project images into the condensed latent space, where a text-conditional latent diffusion model (Rombach et al., 2022) is trained. The significant reduction in spatial dimensions in Stage C allows for more efficient training of the diffusion model, considerably reducing both the computational resources required and the time taken for the process.
Our proposed Würstchen model thus introduces a thoughtfully designed approach to address the high computational burden of current state-of-the-art models, providing a significant leap forward in text-to-image synthesis. With this approach we are able to train a 1B-parameter Stage C text-conditional diffusion model within approximately 9,200 GPU hours, representing a 16x reduction in computation compared to the amount Stable Diffusion 1.4 used for training (150,000 GPU hours), while showing similar fidelity both visually and numerically. Throughout this paper, we provide a comprehensive evaluation of Würstchen's efficacy, demonstrating its potential to democratize the deployment & training of high-quality image synthesis models.
Our main contributions are the following:
1. We propose a novel architecture for text-to-image synthesis that substantially reduces computational demands while achieving state-of-the-art performance. This approach introduces an efficient pipeline following a three-stage paradigm, namely a text-conditioned diffusion model (Stage C), an image encoder/decoder (Stage B), and a VQGAN (Stage A).
2. Our architecture enables the training of a 1B parameter Stage C diffusion model with a significantly reduced compute budget. This level of efficiency is achieved without sacrificing the quality of the synthesized images.
3. We provide comprehensive experimental validation of the model's efficacy, opening the door to further research in the field of efficient, high-quality generative models by presenting a compelling paradigm that simultaneously prioritizes both performance and computational feasibility.
4. We are publicly releasing the source code and the entire suite of model weights.
Figure 2: Inference architecture for text-conditional image generation.
## 2 Related Work
### Conditional Image Generation
The field of image generation guided by text prompts has undergone significant progression in recent years. Initial approaches predominantly leveraged Generative Adversarial Networks (GANs) (Reed et al., 2016; Zhang et al., 2017). More recently, however, a paradigm shift in the field of image generation towards diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) has occurred. These approaches, in some cases, have not only met but even exceeded the performance of GANs in both conditional and unconditional image generation (Dhariwal and Nichol, 2021). Diffusion models put forth a score-based scheme that gradually eliminates perturbations (e.g., noise) from a target image, with the training objective framed as a reweighted variational lower bound. Next to diffusion models, transformers are another dominant choice for training text-to-image models. In their early stages, transformer-based models utilized an autoregressive approach, leading to a significant slowdown in inference due to the requirement for each token to be sampled individually. Current strategies, however, employ a bidirectional transformer (Ding et al., 2022; Chang et al., 2022; Chang et al., 2023) to address the challenges that traditional autoregressive models present. As a result, image generation can be executed using fewer steps, while also benefiting from a global context during the generative phase. Other recent work has shown that convolution-based approaches for image generation can yield similar results (Rampas et al., 2023).
### Compressed Latent Spaces
The majority of approaches in the visual modality of generative models use some way to train in a smaller space, followed by upscaling to high resolutions, as training at large pixel resolutions can become exponentially more expensive with the size of images. For text-conditional image generation, there are two established categories of approaches: encoder-based and upsampler-based. Latent diffusion models (Rombach et al., 2022), DALL-E (Ramesh et al., 2021), CogView (Ding et al., 2021; Ding et al., 2022), and MUSE (Chang et al., 2023) belong to the first category and employ a two-stage training process. Initially, an autoencoder (Rumelhart et al., 1985) is trained to provide a lower-dimensional, yet perceptually equivalent, representation of the data. This representation forms the basis for the subsequent training of a diffusion or a transformer model. Eventually, generated latent representations can be decoded with the decoder branch of the autoencoder to the pixel space. The result is a significant reduction in computational complexity for the diffusion / sampling process and efficient image decoding from the latent space using a single network pass. On the contrary, upsampler-based methods generate images at low resolution in the pixel space and use subsequent models for upscaling the images to higher resolution. unCLIP (Ramesh et al., 2022) and Imagen (Saharia et al., 2022) both generate images at 64x64 and upscale using two models to 256 and 1024 pixels. The former model is the largest in terms of parameter count, while the latter models are smaller due to working at higher resolution and only being responsible for upscaling.
### Conditional Guidance
The conditional guidance of models in text-based scenarios is typically facilitated through the encoding of textual prompts via a pretrained language model. Two major categories of text encoders are prevalently employed: contrastive text encoders and uni-modal text encoders. Contrastive Language-Image Pretraining (CLIP) (Radford et al., 2021) is a representative of the contrastive multimodal models that strives to align text descriptions and images bearing semantic resemblance within a common latent space. A host of image generation methodologies have adopted a frozen CLIP model as their exclusive conditioning method in recent literature. The hierarchical DALL-E 2 by Ramesh _et al_. (Ramesh et al., 2022) specifically harnesses CLIP image embeddings as input for their diffusion model, while a 'prior' performs the conversion of CLIP text embeddings to image embeddings. Stable Diffusion (Rombach et al., 2022), on the other hand, makes use of un-pooled CLIP text embeddings to condition its latent diffusion model. In contrast, the works of Saharia _et al_. (Saharia et al., 2022), Liu _et al_. (Liu et al., 2022) and Chang _et al_. (Chang et al., 2023) leverage a large, uni-modal language model such as T5 (Raffel et al., 2020) or ByT5 (Xue et al., 2022) that
can encode textual prompts with notable accuracy, leading to image generations of superior precision in terms of composition, style, and layout.
## 3 Method
Our method comprises three stages, all implemented as deep neural networks. For image generation, we first generate a latent image at a strong compression ratio using a text-conditional latent diffusion model (Stage C). Subsequently, this representation is transformed into an upsampled and quantized latent space by means of a secondary model tasked with this reconstruction (Stage B). Finally, the tokens that comprise the latent image at this intermediate resolution are decoded to yield the output image (Stage A). The training of this architecture is performed in reverse order, starting with Stage A, then following up with Stage B, and finally Stage C (see Figure 3).
### Stage A and B
It is a known and well-studied technique to reduce the computational burden by compressing data into a smaller representation [Rombach et al., 2022, Chang et al., 2022]. Our approach follows this paradigm, too, and makes use of Stages A & B to achieve a notably higher compression than usual. Let \(H\times W\times C\) be the dimensions of images. A spatial compression maps images to a latent representation with a resolution of \(h\times w\times z\) with \(h=H/f,w=W/f\), where \(f\) defines the compression rate. Common approaches for modelling image synthesis use a one-stage compression between f4 and f16 [Esser et al., 2021, Chang et al., 2023, Rombach et al., 2022], with higher factors usually resulting in worse reconstructions. Our Stage A consists of an f4 VQGAN [Esser et al., 2021] with parameters \(\Theta\) and initially encodes images \(\mathbf{x}\in\mathbb{R}^{3\times 512\times 512}\) into \(128\times 128\) discrete tokens from a learnt codebook of size 8,192.
\[\mathbf{x}_{q}=f_{\Theta}(\mathbf{x})\]
The network is trained as described by Esser _et al_. and tries to reconstruct the image based on the quantized latents, so that:
\[f_{\Theta}^{-1}\left(f_{\Theta}\left(\mathbf{x}\right)\right)=f_{\Theta}^{-1} \left(\mathbf{x}_{q}\right)\approx\mathbf{x}\]
where \(f_{\Theta}^{-1}\) denotes the decoder part of the VQGAN.
Afterwards, Stage B is learnt in the compressed and discrete VQGAN space to reconstruct images that were encoded with an additional model which utilizes an inherently higher compression ratio (see Figure 3). We make use of the large (L) configuration of an EfficientNet2 stem [Tan and Le, 2020] to encode images, and task the Stage B model to reconstruct the representation of the same image in the VQGAN space of Stage A. The EfficientNet2 \(e_{\phi}\) takes in images \(x\in\mathbb{R}^{3\times 384\times 384}\) and embeds them into a space of \(\mathbb{R}^{1280\times 12\times 12}\).
We use simple bicubic interpolation for the resizing of the images from \(512\times 512\) to \(384\times 384\). On top of that representation, we add a \(1\times 1\) convolutional head that normalizes and projects the embeddings to \(c_{\mathrm{eff}}\in\mathbb{R}^{16\times 12\times 12}\). This compressed representation of the images is given to the Stage B decoder as conditioning to guide the decoding process. We formulated this learning process in a typical noising/denoising framework and decided to use the architecture of Paella [Rampas et al., 2023] for that. The approach works on quantized tokens and is hence perfectly suitable for this task. Image tokens \(\mathbf{x}_{q}\) are noised by random token replacement with other tokens from the VQGAN codebook based on random timesteps. The noised representation \(\tilde{\mathbf{x}}_{q,t}\), together with the EfficientNet embeddings \(\mathbf{c}_{\mathrm{eff}}\), text conditioning \(c_{\mathrm{text}}\) and the timestep \(t\) are given to the model.
\[\tilde{\mathbf{x}}_{q,0}=f_{\vartheta}(\tilde{\mathbf{x}}_{q,t},\mathbf{c}_{ \mathrm{eff}},\mathbf{c}_{\mathrm{text}},t)\]
Its task is to predict the original tokens. Sampling is executed in an iterative fashion given new EfficientNet embeddings. After training, images \(\mathbf{x}\in\mathbb{R}^{3\times 512\times 512}\) can be decoded from a latent space of \(\mathbb{R}^{16\times 12\times 12}\), resulting in a total spatial compression of **f42** (\(512/12\approx 42.7\) per spatial dimension).
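To make the Stage B ingredients concrete, the sketch below illustrates (i) a semantic compressor in the spirit of the EfficientNet2-L stem with a \(1\times 1\) projection head, and (ii) the token-replacement noising. It is a minimal sketch under stated assumptions: torchvision's EfficientNetV2-L stands in for the paper's EfficientNet2-L stem, the BatchNorm normalization in the head is a guess at "normalizes and projects", and all module names are illustrative, not the released implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_v2_l

class SemanticCompressor(torch.nn.Module):
    """Sketch: images -> c_eff in R^{16x12x12} (normalization choice is an assumption)."""
    def __init__(self):
        super().__init__()
        # EfficientNetV2-L feature extractor: (B, 1280, 12, 12) for 384x384 inputs.
        self.backbone = efficientnet_v2_l(weights="IMAGENET1K_V1").features
        self.head = torch.nn.Sequential(
            torch.nn.BatchNorm2d(1280),                # "normalizes and projects" (assumed)
            torch.nn.Conv2d(1280, 16, kernel_size=1),
        )

    def forward(self, x):                              # x: (B, 3, 512, 512)
        x = F.interpolate(x, size=(384, 384), mode="bicubic", align_corners=False)
        return self.head(self.backbone(x))             # c_eff: (B, 16, 12, 12)

def noise_tokens(x_q, t, codebook_size=8192):
    """Noise VQGAN token maps (B, 128, 128) by random replacement; t in [0, 1] per sample."""
    random_tokens = torch.randint_like(x_q, codebook_size)
    # Each token is independently replaced with probability t (broadcast over space).
    replace = torch.rand(x_q.shape, device=x_q.device) < t[:, None, None]
    return torch.where(replace, random_tokens, x_q)
```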
Figure 4 shows depictions of images and their corresponding reconstructions. Because the EfficientNet encoder was trained on ImageNet data, which does not capture the broad distribution of images present in large text-image datasets, the model is initialized from a pretrained checkpoint, but also updated during the training of Stage B. We use Cross-Attention [Vaswani et al., 2017] for conditioning and project both \(\mathbf{c}_{\mathrm{eff}}\) (flattened) and \(\mathbf{c}_{\mathrm{text}}\) to the same dimension in each block of the
model and concatenate them. We refer to (Rampas et al., 2023) for more details on the training and sampling. Furthermore, during training of Stage B, we intermittently add noise to the EfficientNet embeddings, to teach the model to handle non-perfect embeddings, which is likely to be the case when generating these embeddings with Stage C. Lastly, we also randomly drop \(\mathbf{c}_{\text{eff}}\) and \(\mathbf{c}_{\text{text}}\) to enable classifier-free guidance (Ho and Salimans, 2022) during sampling.
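Dropping \(\mathbf{c}_{\text{eff}}\) and \(\mathbf{c}_{\text{text}}\) during training enables the standard classifier-free-guidance extrapolation at sampling time; a minimal sketch (the `model` interface and null conditioning are assumptions for illustration) could look like:

```python
def cfg_prediction(model, x_t, c_eff, c_text, t, w):
    """Classifier-free guidance: push the conditional prediction away from the
    unconditional one with guidance scale w (w = 1 recovers the conditional model)."""
    cond = model(x_t, c_eff, c_text, t)
    uncond = model(x_t, None, None, t)   # dropped conditioning, as during training
    return uncond + w * (cond - uncond)
```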
Figure 3: Training objectives of our model. Initially a VQGAN-based autoencoder is trained. Secondly, Stage B is trained as a latent image decoder, decoding an EfficientNet latent image to the original VQGAN latent space. Finally, Stage C is trained as a text-conditional latent diffusion model at a compression rate of f42.
### Stage C
After Stages A and B are trained, training of the final, text-conditional stage can begin. We follow a standard diffusion process, applied in the latent space of the finetuned EfficientNet encoder. Images are encoded into their latent representation \(\mathbf{x}_{\text{eff}}=\mathbf{c}_{\text{eff}}\), which now becomes the target instead of the conditioning. The latents are noised by using the following forward diffusion formula:
\[\mathbf{x}_{\text{eff},t}=\sqrt{\bar{\alpha}_{t}}\cdot\mathbf{x}_{\text{eff}}+ \sqrt{1-\bar{\alpha}_{t}}\cdot\epsilon\]
where \(\epsilon\) represents noise from a zero mean unit variance normal distribution. We use a cosine schedule (Nichol and Dhariwal, 2021) to generate \(\bar{\alpha}_{t}\) and use continuous timesteps. The diffusion model takes in the noised embeddings \(\mathbf{x}_{\text{eff},t}\), the text conditioning \(\mathbf{c}_{\text{text}}\) and the timestep \(t\). The model returns the prediction for the noise in the following form:
\[\bar{\epsilon}=\frac{\mathbf{x}_{\text{eff},t}-\mathbf{a}}{|1-\mathbf{b}|+10^{-5}}\]
where \(\mathbf{a}\) and \(\mathbf{b}\) result from:
\[\mathbf{a},\mathbf{b}=f_{\theta}(\mathbf{x}_{\text{eff},t},\mathbf{c}_{\text{ text}},t)\]
We decided to formulate the objective as such, since it made the training more stable. We hypothesize this occurs because the model parameters are initialized to predict \(\mathbf{0}\) at the beginning, enlarging the difference to timesteps with a lot of noise. By reformulating to the \(\mathbf{a}\) & \(\mathbf{b}\) objective, the model initially returns the input, making the loss small for very noised inputs. We use the standard mean-squared-error loss between the predicted noise and the ground truth noise. Additionally, we employ the p2 loss weighting (Choi et al., 2022):
\[p_{2}(t)\cdot\|\epsilon-\bar{\epsilon}\|^{2}\]
where \(p_{2}(t)\) is defined as \(\frac{1-\bar{\alpha}_{t}}{1+\bar{\alpha}_{t}}\), making higher noise levels contribute more to the loss. The text conditioning \(\mathbf{c}_{\text{text}}\) is dropped randomly 5% of the time and replaced with a null label in order to enable classifier-free guidance (Ho and Salimans, 2022).
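Putting the pieces above together, a sketch of the Stage C training objective might read as follows; the `model` signature and the cosine-schedule helper `alpha_bar` are assumptions for illustration, not the authors' code.

```python
import torch

def stage_c_loss(model, x_eff, c_text, alpha_bar):
    """Forward-noise c_eff latents, predict (a, b), and apply the p2-weighted MSE."""
    bsz = x_eff.shape[0]
    t = torch.rand(bsz, device=x_eff.device)               # continuous timesteps in (0, 1)
    abar = alpha_bar(t).view(bsz, 1, 1, 1)                 # cosine-schedule \bar{alpha}_t
    eps = torch.randn_like(x_eff)
    x_t = abar.sqrt() * x_eff + (1 - abar).sqrt() * eps    # forward diffusion
    a, b = model(x_t, c_text, t)                           # network outputs (a, b)
    eps_hat = (x_t - a) / ((1 - b).abs() + 1e-5)           # noise estimate from (a, b)
    p2 = (1 - abar) / (1 + abar)                           # p2 weighting
    return (p2 * (eps - eps_hat) ** 2).mean()
```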
### Image Generation (Sampling)
Sampling starts at Stage C from initial random noise \(\mathbf{x}_{\text{eff},T_{C}}\sim\mathcal{N}(0,\mathbf{I})\). We use the DDPM (Ho et al., 2020) algorithm to sample the EfficientNet latents conditioned on text-embeddings. To do so, we run the following operation for \(T_{C}\) steps:
\[\hat{\mathbf{x}}_{\text{eff},t-1}=\frac{1}{\sqrt{\alpha_{t}}}\cdot(\hat{ \mathbf{x}}_{\text{eff},t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}} \bar{\epsilon})+\sqrt{(1-\alpha_{t})\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha} _{t}}}\epsilon\]
We denote the outcome as \(\bar{\mathbf{x}}_{\text{eff}}\) which is of shape \(16\times 12\times 12\). This output is flattened to a shape of \(144\times 16\) and given as conditioning, along with the same text embeddings used to sample \(\bar{\mathbf{x}}_{\text{eff}}\), to Stage B. This Stage operates at the \(128\times 128\) VQGAN latent space. We initialize \(\mathbf{x}_{q,T_{B}}\) to random tokens drawn from the VQGAN codebook. We sample \(\bar{\mathbf{x}}_{q}\) by iteratively predicting all tokens for \(T_{B}\) steps.
\[\mathbf{x}_{q,t-1}=f_{\vartheta}(\mathbf{x}_{q,t},\mathbf{c}_{\text{eff}}, \mathbf{c}_{\text{text}},t)\]
and subsequently renoising a ratio of the tokens. Finally \(\bar{\mathbf{x}}_{q}\) will be projected back to the pixel space using the decoder \(f_{\Theta}^{-1}\) of the VQGAN (Stage A):
\[\bar{\mathbf{x}}=f_{\Theta}^{-1}(\bar{\mathbf{x}}_{q})\]
A depiction of the sampling pipeline can be seen in Figure 2.
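As a compact summary of the three-stage sampling just described, the following sketch strings the stages together. All module interfaces (`stage_c`, `stage_b`, `vqgan_decode`) and the `schedule` helper returning \((\alpha_t,\bar{\alpha}_t,\bar{\alpha}_{t-1})\) are assumptions for illustration, and Stage B's per-step renoising is folded into the `stage_b` call.

```python
import torch

def ddpm_step(x, eps_hat, alpha, abar, abar_prev):
    """One DDPM update, implementing the Stage C sampling equation above."""
    mean = (x - (1 - alpha) / (1 - abar) ** 0.5 * eps_hat) / alpha ** 0.5
    var = (1 - alpha) * (1 - abar_prev) / (1 - abar)
    return mean + var ** 0.5 * torch.randn_like(x)

@torch.no_grad()
def sample(stage_c, stage_b, vqgan_decode, c_text, schedule, T_C=60, T_B=8):
    x_eff = torch.randn(1, 16, 12, 12)                  # Stage C: start from pure noise
    for t in reversed(range(1, T_C + 1)):
        a, b = stage_c(x_eff, c_text, t)
        eps_hat = (x_eff - a) / ((1 - b).abs() + 1e-5)
        x_eff = ddpm_step(x_eff, eps_hat, *schedule(t))
    c_eff = x_eff.flatten(2).transpose(1, 2)            # (1, 144, 16) conditioning
    x_q = torch.randint(0, 8192, (1, 128, 128))         # Stage B: start from random tokens
    for t in reversed(range(1, T_B + 1)):
        x_q = stage_b(x_q, c_eff, c_text, t)            # predict tokens, renoise a ratio
    return vqgan_decode(x_q)                            # Stage A: (1, 3, 512, 512) image
```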
### Model Decisions
Many choices were required when setting up and training the different stages. One of the most important decisions concerned the image encoder. Theoretically, any visual model could be used, but three things should be kept in mind: the training objective the model was trained with, the parameter count, and the embedding dimension. We hypothesize that it is beneficial to use an encoder that already has a good feature representation of a wide variety of images. Furthermore, having a small and parameter-efficient model makes training of Stages B & C faster. Finally, the feature dimension of the encoder network is vital. If it is excessively small, it may fail to capture sufficient image details; conversely, if it is overly large, it may unnecessarily increase computational requirements and extend training duration. Moreover, the type of model for Stages A & B also represents a choice to be made. We decided to use the architecture of Paella for Stage B, due to its ability to handle quantized data and the very small number of inference steps it requires to sample images (Rampas et al., 2023). The latter attribute is crucial to maintain a low-latency pipeline, as sampling at a resolution of \(128\times 128\) could be computationally demanding if many steps were necessary. However, in theory a diffusion model could be used, too. A different architecture is needed for Stage C, since it requires a model capable of handling continuous data, unlike the one used for Stage B. Hence we decided to use a latent diffusion model (Rombach et al., 2022). While diffusion models require more inference steps, this demand is rendered more feasible within the context of a denser latent space.
## 4 Experiments
### Text-to-Image Training
To demonstrate Würstchen's capabilities on text-to-image generation, we trained an 18M-parameter Stage A, a 600M-parameter Stage B, and a 1B-parameter Stage C. We employed an EfficientNet2-Large stem (Tan and Le, 2020) in the training. Stages B and C are both conditioned on unpooled CLIP-H (Ilharco et al., 2021) text-embeddings. All models are optimized using AdamW (Loshchilov and Hutter, 2019) with a learning rate of \(1e^{-4}\) using a linear warm-up schedule for 10k steps. Stages B & C were trained for 0.25M and 0.8M steps using a batch size of 384 and 1280, respectively. All stages were trained on subsets of the improved-aesthetic LAION-5B (Schuhmann et al., 2022) dataset.
Figure 4: Reconstruction samples from Stage B using a total compression factor of f42.
### Text-to-Image Evaluation
Evaluations of text-to-image models in both supervised and zero-shot settings commonly use the COCO 2014 [Chen et al., 2015] validation set as a reference benchmark [Rombach et al., 2022, Saharia et al., 2022, Chang et al., 2023, Ramesh et al., 2022]. The primary automated metrics employed for performance assessment are the Fréchet Inception Distance (FID) [Heusel et al., 2017], which quantifies image fidelity, and the CLIP score [Hessel et al., 2022, Radford et al., 2021], which determines alignment between image and text. In line with prior studies, we provide the FID-30k metric in a zero-shot context, which involves randomly selecting 30K prompts and image pairs from the validation set and comparing the model's generated samples based on these prompts with the reference images from the validation set in the latent space of an independent third model (Inception V3 trained on ImageNet). The same generated images are used to calculate the CLIP score with the captions. The results can be seen in Table 1. All of the experiments use the standard DDPM [Ho et al., 2020] algorithm to sample latents in Stage C. Stage B uses the sampling as described in [Rampas et al., 2023]. Both stages also make use of classifier-free guidance [Ho and Salimans, 2022] with guidance scale \(w\). We fix the hyperparameters for Stage B sampling to \(T_{B}=8\) and \(w=2\). To find good sampling parameters for Stage C, we evaluate FID & CLIP score for different classifier-free-guidance weights \(w\). We choose to fix \(T_{C}=60\). Furthermore, we also compare to the most similar model, Stable Diffusion 1.4, in terms of trainable parameters and conditioning. The results are shown in Figure 5 as Pareto curves for the COCO [Chen et al., 2015] dataset. We observe similar results for the CLIP scores of both models, however, slightly worse results for FID values. We hypothesize this is partly caused by artifacts and inaccuracies produced by the image reconstruction of Stage B. In an attempt to validate this hypothesis, we computed the FID scores between original COCO images and reconstructed images using Stage B only, which gave a score of \(\mathrm{FID}=5.73\), thus highlighting the fact that, as we believed, the quality of the reconstructions is indeed a significant contributor to the FID score, and a clear target for improvement. Furthermore, Figures A.1–A.5 show visual comparisons between Würstchen and Stable Diffusion 1.4. All prompts are non-cherrypicked and all generations use the same seed. The prompts represent a diverse subset of the dalle-mini prompts [Dayma et al., 2021]. Visually, we observe similar fidelity and prompt-alignment and find both models to be on par.
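For reference, one possible way to compute the two automated metrics with off-the-shelf tooling is sketched below; this is not the authors' evaluation code, and the 30k prompt/image sampling is left to a caller-supplied iterator of batches.

```python
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore

def evaluate(batches):
    """Compute FID-30k and CLIP score over an iterable of (real, fake, captions) batches."""
    fid = FrechetInceptionDistance(feature=2048)
    clip_score = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")
    for real_imgs, fake_imgs, captions in batches:
        fid.update(real_imgs, real=True)        # uint8 image tensors of shape (B, 3, H, W)
        fid.update(fake_imgs, real=False)
        clip_score.update(fake_imgs, captions)  # alignment between generations and prompts
    return fid.compute(), clip_score.compute()
```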
### Computational Requirements
Table 1 shows the computational costs for training Würstchen compared to the original Stable Diffusion 1.4. Based on the evaluations in Section 4.2, it can be seen that the proposed setup of decoupling high-resolution image projection from the actual text-conditional generation can be leveraged even more than done in the past (Esser et al., 2021; Saharia et al., 2022; Ramesh et al., 2022), while still staying on par in terms of quality, fidelity and alignment. Stage C, being the most expensive stage to train, required only 9,200 GPU hours, compared to 150,000 GPU hours\({}^{2}\) for Stable Diffusion 1.4, making it a 16x improvement. Moreover, although sampling requires both Stage A & B to generate the VQGAN latents \(\overline{\mathbf{x}}_{q}\), the total inference is still very fast. Figure 6 shows sampling times for different batch sizes.

Figure 5: Pareto curves for FID and CLIP scores comparing Würstchen to Stable Diffusion 1.4. We observe the two models to be on par in terms of the CLIP score, but Stable Diffusion 1.4 achieves higher fidelity on the COCO dataset. We hypothesize the inferior FID performance to be strongly affected by Stage B, as reconstructions lack details.
Footnote 2: As reported in the model card at [https://huggingface.co/CompVis/stable-diffusion-v-1-4-original](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original)
## 5 Discussion
Several constraints persist in our current experimental setup. A principal issue arises when the model attempts to sample images in Stage B at resolutions higher than its training capacity, leading to the generation of repetitive patterns. This could potentially result from the model's inability to interpret larger images appropriately. We attribute this issue to the conditioning mechanism, where the EfficientNet embeddings are injected via cross-attention, causing them to be flattened, thereby losing their two-dimensional positional bias. This might account for the model's difficulties in generalizing to varying resolutions during inference. Figure 7 provides examples of sampling at different resolutions. Furthermore, the current design of Würstchen suffers from common limitations that are characteristic of models conditioned solely on CLIP text-embeddings, such as Stable Diffusion (Rombach et al., 2022), which include challenges in rendering text and compositional difficulties with complex scenes. However, the relatively inexpensive computational demands of this model open up possibilities for iterating on the model design at a faster pace. Moreover, training of Stage B with the current architecture turned out to be difficult due to instabilities. As a result,
we had to stop training early. After training Stage B for 250k steps, the model already performed well for the amount of compression it has to decode, but still shows flaws in the finer details of images. Figure 4 shows examples. We leave improving the stability of the training mechanism open for future work. On the other hand, training Stage C behaved significantly more stably and did not encounter issues during training or sampling.

\begin{table}
\begin{tabular}{|l|r|r|r|r|r|} \hline Model & Parameters & Sampling Steps & FID-COCO-30k \(\downarrow\) & open source & GPU hours @ A100 \\ \hline CogView (Ding et al., 2021) & 4B & 1024 & 27.1 & ✓ & – \\ DALL-E (Ramesh et al., 2021) & 12B & 256 & 17.89 & – & – \\ LDM (Rombach et al., 2022) & 1.45B & 250 & 12.63 & ✓ & – \\ GLIDE (Nichol et al., 2021) & 3.5B & 250 & 12.24 & – & – \\ Make-A-Scene (Gafni et al., 2022) & 4B & 1024 & 11.84 & – & – \\ Paella (Rampas et al., 2023) & 1B & **12** & 11.07 & ✓ & 64,000 \\ DALL-E 2 (Ramesh et al., 2022) & 3.5B & 250 & 10.39 & – & – \\ MUSE-3B (Chang et al., 2023) & 3B & 24 & 7.78 & – & – \\ Imagen (Saharia et al., 2022) & 2B & 1000 & 7.27 & – & – \\ Parti (Yu et al., 2022) & 20B & 1024 & **7.23** & – & – \\ \hline Würstchen (proposed) & 0.99B & 60 & 11.32 & ✓ & **9,200** \\ Stable Diffusion 1.4 (Rombach et al., 2022) & 0.8B & 50 & **8.27\({}^{*}\)** & ✓ & 150,000 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of the zero-shot Fréchet Inception Distance to other state-of-the-art text-to-image methods at \(256\times 256\) and \(512\times 512\) image resolutions. \({}^{*}\) own evaluation
Figure 6: Optimized sampling speeds for different batch sizes.
We anticipate that Stage B could see significant enhancements in terms of image reconstruction quality and its ability to handle images beyond the training resolution. Adjustments may be made within the conditioning mechanism, with positional embeddings on the cross-attention potentially mitigating the aforementioned issue. Alternative conditioning mechanisms, such as Modulated / Adaptive Layer Normalization (Chen et al., 2019; Perez et al., 2017) or simple concatenation, could also prove effective. Moreover, if the EfficientNet latents could be quantized, the possibility of implementing a sampling mechanism working on quantized latent spaces (such as (Chang et al., 2023; Rampas et al., 2023)) for Stage A would arise, which could further reduce computational demands. Conversely, training Stage B using Latent Diffusion Models (Rombach et al., 2022) might also be made more efficient by reducing the number of inference steps through approaches like distillation or consistency models (Song et al., 2023). It should be noted that the current design of Stage B also functions as an upsampler, encoding \(384\times 384\) images via EfficientNet and decoding into \(512\times 512\) images. It is conceivable that this ratio might be increased, allowing Stage B to serve as both a decoder and upsampler. Furthermore, considerable effort has been dedicated to enhancing the efficiency of text-to-image model training through tactics such as pre-calculating embeddings and using lower-precision number formats. Other than mixed-precision training (Micikevicius et al., 2018), we have not implemented any specific accelerative strategies, suggesting the potential for even greater computational efficiency and reduced resource requirements. Finally, the paradigm of further decoupling large-scale conditional training from high-resolution constraints could also be applied to the field of conditional video generation. Such an approach could yield even more significant accelerations in training & processing speed than for images.
## 6 Conclusion
In this work we presented our text-conditional image generation model Würstchen, which employs a three-stage process of decoupling text-conditional image generation from high-resolution spaces. The
proposed process enables efficient training of large-scale models, substantially reducing computational requirements while at the same time providing high-fidelity images. Our trained model achieved performance comparable to models trained using significantly more computational resources, illustrating the viability of this approach and suggesting potential for efficient scalability to even larger model parameters. We hope our work can serve as a starting point for further research into a more sustainable and computationally more efficient domain of generative AI, and open up more possibilities for training, finetuning & deploying large-scale models on consumer hardware. We provide all of our source code, including training and inference scripts, and trained models on GitHub 3.

Figure 7: Failure cases of Stage B: Decoding at resolutions unseen during training (in this case: 512\(\times\)768) represents a great challenge for the model and results in repetitive patterns. We hypothesize the reason can be found in the conditioning mechanism used to inject the EfficientNet embeddings.
Footnote 3: [https://github.com/dome272/wuerstchen](https://github.com/dome272/wuerstchen)
## Acknowledgements
The authors wish to express their thanks to Stability AI Inc. for providing generous computational resources for our experiments and LAION gemeinnütziger e.V. for dataset access and support.
|
2306.10424 | Mid-infrared trace detection with parts-per-quadrillion quantitation
accuracy: Expanding frontiers of radiocarbon sensing | Detection sensitivity is one of the most important attributes to consider
during selection of spectroscopic techniques. However, high sensitivity alone
is insufficient for spectroscopic measurements in spectrally congested regions.
Two-color cavity ringdown spectroscopy (2C-CRDS), based on intra-cavity
pump-probe detection, simultaneously achieves high detection sensitivity and
selectivity. The technique enables mid-infrared detection of radiocarbon
dioxide ($^{14}$CO$_2$) molecules in room-temperature CO$_2$ samples, with
better than 10 parts-per-quadrillion (ppq, 10$^{15}$) quantitation accuracy (4
ppq on average). These $highly$-$reproducible$ measurements, which are the most
$\it{sensitive}$ and $\it{quantitatively}$ $\it{accurate}$ in the mid-infrared,
are accomplished despite the presence of
$\it{orders}$-$\it{of}$-$\it{magnitude}$ stronger, one-photon signals from
other CO$_2$ isotopologues. This is a major achievement in laser spectroscopy.
A room-temperature-operated, compact, and low-cost 2C-CRDS sensor for
$^{14}$CO$_2$ benefits a wide range of scientific fields that utilize $^{14}$C
for dating and isotope tracing, most notably atmospheric $^{14}$CO$_2$
monitoring to track CO$_2$ emissions from fossil fuels. The 2C-CRDS technique
significantly enhances the general utility of high-resolution mid-infrared
detection for analytical measurements and fundamental chemical dynamics
studies. | Jun Jiang, A. Daniel McCartt | 2023-06-17T21:08:16Z | http://arxiv.org/abs/2306.10424v4 | # Mid-infrared trace detection
###### Abstract
Detection sensitivity is one of the most important attributes to consider during selection of spectroscopic techniques. However, high sensitivity alone is insufficient for spectroscopic measurements in spectrally congested regions. Two-color cavity ringdown spectroscopy (2C-CRDS), based on intra-cavity pump-probe detection, simultaneously achieves high detection sensitivity and selectivity. The technique enables mid-infrared detection of radiocarbon dioxide (\({}^{14}\)CO\({}_{2}\)) molecules in room-temperature CO\({}_{2}\) samples, with better than 10 parts-per-quadrillion (ppq, 10\({}^{15}\)) quantitation accuracy (4 ppq on average). These _highly-reproducible_ measurements, which are the most _sensitive_ and _quantitatively accurate_ in the mid-infrared, are accomplished despite the presence of _orders-of-magnitude_ stronger, one-photon signals from other CO\({}_{2}\) isotopologues. This is a major achievement in laser spectroscopy. A room-temperature-operated, compact, and low-cost 2C-CRDS sensor for \({}^{14}\)CO\({}_{2}\) benefits a wide range of scientific fields that utilize \({}^{14}\)C for dating and isotope tracing, most notably atmospheric \({}^{14}\)CO\({}_{2}\) monitoring to track CO\({}_{2}\) emissions from fossil fuels. The 2C-CRDS technique significantly enhances the general utility of high-resolution mid-infrared detection for analytical measurements and fundamental chemical dynamics studies.
LLNL-JRNL-850018
Quantifying light absorption is one of the most commonly used strategies to determine the concentration, transition frequencies, and transition cross sections for an analyte of interest. The most sensitive laser absorption techniques invariably utilize an optical cavity [1], which can provide \(>\)1 km light-matter interaction pathlengths. However, the increased detection sensitivity of cavity-based techniques applies equally to all resonant transitions of every molecular species inside the interaction volume. The lack of sufficient detection selectivity is problematic in the "molecular-fingerprint" mid-infrared (mid-IR) range. Because of the high density of strongly overlapping transitions, spectroscopic detection and assignment of weak mid-IR signals can be prohibitively difficult with conventional cavity-enhanced techniques.
The development of optical detection for the rare radiocarbon dioxide molecule (\({}^{14}\)CO\({}_{2}\)), with \(\sim\)1200 parts-per-quadrillion (10\({}^{15}\), ppq) \({}^{14}\)C/C natural abundance, exemplifies this need for a spectroscopic technique that simultaneously achieves high detection \(sensitivity\), \(selectivity\), and quantitation \(accuracy\) [2; 3]. Traditionally measured by accelerator mass spectrometry (AMS) [4; 5], the \({}^{14}\)C tracer (half-life of 5730\(\pm\)40 years) [6] has been used in a wide range of applications, such as archaeological dating [7], bio-medicine development [8; 9], earth carbon-cycle studies [10], and monitoring of fossil-fuel-CO\({}_{2}\) emissions [11; 12; 13]. \(In~{}situ\) field measurements of \({}^{14}\)C are not possible with AMS, which utilizes a room-size, mega-volt accelerator to filter the interfering molecular isobars of \({}^{14}\)C (e.g., \({}^{13}\)CH). Even for laboratory measurements, the investment and operational costs of AMS (multiple millions of dollars in equipment and staff) are too high for many applications.
Mid-IR detection of \({}^{14}\)CO\({}_{2}\), by measuring its \(\nu_{3}\)-band ro-vibrational transitions, has been proposed as a cheaper and potentially field-deployable \({}^{14}\)C sensing technique [2; 3; 14; 15; 16; 17; 18; 19; 20]. Quantifying fossil-fuel-CO\({}_{2}\) emission based on measurements of the total atmospheric CO\({}_{2}\) content is subject to the \(large\) and \(highly~{}variable\) CO\({}_{2}\) emissions from the biosphere [11; 12; 13]. Combustion of fossil fuels, which are depleted of \({}^{14}\)C, leads to location- and time-dependent
decrease in the atmospheric \({}^{14}\)CO\({}_{2}\):CO\({}_{2}\) ratio, with the measured dip typically \(<\)100 ppq (i.e., \(\lessapprox\)10\(\%\) of the natural \({}^{14}\)CO\({}_{2}\) concentration) in a mega-city [12]. This signature dip is an \(unambiguous\)\(gold\)-\(standard\) tracer for fossil-fuel-CO\({}_{2}\). Large-scale and year-long measurement campaigns of atmospheric \({}^{14}\)CO\({}_{2}\) have only been occasionally implemented in a very few locations in the world, because of the high costs of AMS measurements [12, 13]. A compact field-deployable \({}^{14}\)CO\({}_{2}\) sensor would provide accurate, permanent, and low-latency monitoring of fossil-fuel-CO\({}_{2}\) emissions, thereby facilitating evaluation of the efficacy of various carbon reduction programs [21].
Optical detection of \({}^{14}\)CO\({}_{2}\) is challenging at the concentration (\(\lesssim\) natural abundance) and accuracy level (1-100 ppq \({}^{14}\)C/C) required for many of the aforementioned applications. Mid-IR detection of \({}^{14}\)CO\({}_{2}\) in room-temperature samples with better than 10-ppq sensitivity and accuracy, demonstrated in this work with two-color cavity ringdown spectroscopy (2C-CRDS), pushes the limit of mid-IR laser absorption techniques in \(sensitivity\), \( selectivity\), and \(accuracy\). Our current 2C-CRDS setup allows measurements of a minimum absorption coefficient (\(k\)) of \(4\times 10^{-13}\) cm\({}^{-1}\) from \({}^{14}\)CO\({}_{2}\) in the presence of strong one-color (1C) hot-band absorption signals from other CO\({}_{2}\) isotopologues, with the background \(k\) typically \(>\)\(10^{-7}\) cm\({}^{-1}\) (4.55 \(\mu\)m, 20 torr, 300 K) [22]. This background/signal ratio (\(>\)10\({}^{5}\)) is too large for \({}^{14}\)CO\({}_{2}\) detection by other cavity-enhanced techniques based on single-photon absorption. To mitigate severe spectral overlap, gas-cooling to 170 K has been necessary for previous 10-ppq level measurements of \({}^{14}\)CO\({}_{2}\), which was achieved by a 1C variant of CRDS, the saturated-absorption cavity ringdown (SCAR) technique [15, 23]. The gas-cooling requirement for SCAR increases instrumental complexity and size, and is not ideal for field-work applications.
The built-in baseline compensation capability of 2C-CRDS detection leads to its significantly enhanced sensitivity, selectivity, and quantitation accuracy relative to conventional CRDS methods [2, 3]. In our experiment (Fig. 1a), the outputs from two quantum cascade lasers (QCL)
excite a pair of \(\nu_{3}=1\gets 0\) (pump) and \(\nu_{3}=2\gets 1\) (probe) ro-vibrational transitions of \({}^{14}\)CO\({}_{2}\) inside a three-mirror, traveling-wave cavity. With the pump radiation switched off during alternate probe ringdown events (Fig. 1b), the net 2C signals are immune to drift of the cavity ringdown rates and to signals from one-photon molecular transitions.

Figure 1: 2C-CRDS experimental schemes. (a) Experimental schematic. The counter-propagating pump and probe beams are coupled, respectively, to a \(p\)- (finesse=5300) and \(s\)-polarization (finesse=67700) mode of the three-mirror cavity. See Methods and SI Appendix, Section S1.2 for further details on the 2C-CRDS technique and the detection system. (b) Time traces of the pump and probe signals. (c) Diagrams that show two pump-probe detection schemes with 2C-CRDS, scheme (i) for our previous work [2, 3] and scheme (ii) for the present work.
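Conceptually, the net 2C signal is the difference between ringdown decay rates fitted to alternating pump-on and pump-off probe transients, and dividing the rate difference by the speed of light converts it to an absorption coefficient. The sketch below, with hypothetical data arrays standing in for measured transients, illustrates the bookkeeping; it is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

C_CM_PER_S = 2.998e10  # speed of light in cm/s

def ringdown_rate(t, v):
    """Fit a single-exponential decay v(t) = A exp(-k t) and return k in s^-1."""
    popt, _ = curve_fit(lambda t, A, k: A * np.exp(-k * t), t, v, p0=(v[0], 1.0 / t[-1]))
    return popt[1]

def two_color_absorption(t, pump_on, pump_off):
    """Net 2C absorption coefficient (cm^-1) from alternating ringdown transients.

    Drifts of the empty-cavity loss and one-photon absorption cancel in the difference.
    """
    k_on = np.mean([ringdown_rate(t, v) for v in pump_on])
    k_off = np.mean([ringdown_rate(t, v) for v in pump_off])
    return (k_on - k_off) / C_CM_PER_S
```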
The 2C-CRDS method has been previously applied by our group to achieve the first-ever room-temperature optical detection of \({}^{14}\)CO\({}_{2}\) below its natural abundance, with a measurement accuracy of \(\sim\)100 ppq (\(\sim\)8\(\%\) of natural abundance) [2]. A three-level excitation (scheme i in Fig. 1c) is used to quantify the \({}^{14}\)CO\({}_{2}\) concentrations of several combusted \({}^{14}\)C "standard" samples. The observed 2C-CRDS spectra are free of interference from one-photon hot-band transitions of other CO\({}_{2}\) isotopologues that lead to \(>\)10000 s\({}^{-1}\) ringdown rate loss [2]. However, small background 2C signals (\(\sim\)6.5 s\({}^{-1}\) ringdown rate loss) are observed near the \(\nu_{3}=2\gets 1\), R(13) probe transition of \({}^{14}\)CO\({}_{2}\). Collisional excitation of vibrationally excited levels of other CO\({}_{2}\) isotopologues, which are inadvertently populated by the strong intra-cavity pump radiation, is believed to be the cause of this 2C background.
These collision-induced signals are significantly more sensitive to changes in the experimental conditions than the 2C signals from \({}^{14}\)CO\({}_{2}\) (SI Appendix, Section S3.1). Unlike the pump-power saturated \({}^{14}\)CO\({}_{2}\) signals, the background signals have a linear dependence on the pump power. In addition, the background signals are sensitive to small fluctuations of the gas temperature (\(\sim\)1\(\%\) signal variation from a 0.1\({}^{\circ}\)C temperature change), because of the involvement of hot-band pump excitation of CO\({}_{2}\) levels in the 5000 cm\({}^{-1}\) energy region. Thanks to the relatively small magnitude of this background, equivalent to \({}^{14}\)CO\({}_{2}\) signals at 1.5\(\times\) natural abundance (1800 ppq), 2C-CRDS detection of \({}^{14}\)CO\({}_{2}\) was still feasible below its natural abundance in our previous work, given the moderately stable gas temperature (\(\sim\)0.1\({}^{\circ}\)C variation) and pump power (\(<\)5\(\%\) variation) during the experiments. However, to achieve ppq-level detection accuracy, significant further background reduction is imperative.
After achieving \(\sim\)\(10\times\) reduction in the background 2C signal (guided by a collision model) and \(25\times\) improvement in the detection signal-to-noise ratios, we have accomplished, by 100-s averaging at the maximum of the \({}^{14}\)CO\({}_{2}\) 2C peak, optical detection of \(room\)-\(temperature\)\({}^{14}\)CO\({}_{2}\) with 7-ppq accuracy. The accuracy further improves to 4 ppq after fitting the 2C-CRDS spectra (20-30 min data acquisition). This record sub-10-ppq measurement performance (the most \(sensitive\) and \(accurate\) in the mid-IR) has been reproducibly demonstrated with several rounds of measurements of combusted \({}^{14}\)C standards and low-\({}^{14}\)C-content bio-fuel samples (10-80 ppq). The high sensitivity, high selectivity, and high accuracy measurement capabilities of the 2C-CRDS technique will have significant impact on analytical trace measurements and fundamental gas-phase chemical physics studies, which are discussed at the end of this paper.
## Room-temperature ppq-level measurements of \({}^{14}\)CO\({}_{2}\)
The use of a three-level pump-probe scheme with a common intermediate level, such as our original \(\nu_{3}=1\gets 0\), P(14) pump and \(\nu_{3}=2\gets 1\), R(13) probe combination, is not necessary for \({}^{14}\)CO\({}_{2}\) detection in a static-gas cavity at \(\sim\)20 torr. The vibrational relaxation rate of the \(\nu_{3}=1\) state of \({}^{14}\)CO\({}_{2}\) (\(\sim\)30 ms\({}^{-1}\)torr\({}^{-1}\), determined from a pump-probe delay experiment similar to that on N\({}_{2}\)O) [3] is significantly slower than its rotational relaxation rate (on the order of 0.1 ns\({}^{-1}\)torr\({}^{-1}\)) [24]. As a result of facile rotational relaxation and negligible diffusion loss at 20 torr (\(\gg\)10\(\times\) slower than the \({}^{14}\)CO\({}_{2}\) vibrational relaxation loss), a population distribution that resembles a thermal distribution at 300 K exists among the \(\nu_{3}=1\) rotational levels under continuous pump excitation during the "pump on" cycle, even though only one \(J\)-level in \(\nu_{3}=1\) is directly populated by the pump.
A rotational-relaxation-assisted, four-level detection scheme of \({}^{14}\)CO\({}_{2}\), \(\nu_{3}=1\gets 0\), P(14) pump [25] and \(\nu_{3}=2\gets 1\), P(17) probe [26, 27], is used in the measurements presented here (scheme ii in Fig. 1c). The background 2C signal is \(\sim\)10\(\times\) smaller at the probe resonance
frequency for this P(14)-P(17) combination than the original P(14)-R(13) scheme.

Figure 2: 2C-CRDS measurements of combusted \({}^{14}\)C standard samples (20.1 torr) (SI Appendix, Section S2). (a) 2C-CRDS spectra (100-s averaging per data point), and their spectral fit models. Note that “ppt” stands for part-per-trillion (10\({}^{12}\)). (b) Comparison of the on-resonance, fixed-frequency 2C-CRDS signals with the sample \({}^{14}\)C contents. (c) Measurement errors for the sample \({}^{14}\)CO\({}_{2}\) concentrations, based on the residuals of the linear fit in panel (b). (d) Weighted spectral-fit errors for two 2C-CRDS spectra in panel (a). See SI Appendix, Section S1.1 for the statistical weights (\(\sigma\)) used in the fit. (e) Differences in the 2C-CRDS spectra of two “Tiriwood” samples from the “Aug 2022” and “Sep 2022” measurements (see Table 1). The arrow highlights the extra background signals present in the “Sep 2022” spectrum. (f) Variations of the background 2C signals from four overlapping sample types from “Aug 2022” and “Sep 2022”. The “Integrated area” values are obtained by numerical integration of the observed spectra, and the “Fit amplitude” values for the background 2C signals are derived from the spectral fit.
The 2C-CRDS technique allows, with very high signal-to-noise ratios, differentiation of six combusted \({}^{14}\)C standard samples, for which the \({}^{14}\)C content ranges from 0 to 1.5\(\times\) natural abundance (Fig. 2a). Similarly, despite the very low \({}^{14}\)C content (10-80 ppq), 2C-CRDS measurements of the four bio-fuel samples yield different signal levels at the \({}^{14}\)CO\({}_{2}\) transition region with only 60-s averaging per data point (Fig. 3a). The magnitude of the collision-induced background at the maximum of the \({}^{14}\)CO\({}_{2}\) 2C peak (determined from the "Coal" and "Petrogenic gas" samples) is equivalent to that from 210 ppq of \({}^{14}\)CO\({}_{2}\). Considering that the 2C baseline is essentially flat from a \({}^{12}\)C-enriched and \({}^{14}\)C-depleted CO\({}_{2}\) sample, collision-induced 2C transitions of at least one of the six \({}^{13}\)C isotopologues of CO\({}_{2}\) must be responsible for the observed
background signals in Figs. 2a and 3a. This observation agrees with the results of our model for the collision-induced processes relevant to 2C-CRDS detection, which suggests hot-band pump excitation of \({}^{13}\)C\({}^{16}\)O\({}_{2}\) as the cause of the remaining background.

Figure 3: 2C-CRDS measurements of combusted bio-fuel samples (20.1 torr) (SI Appendix, Section S2). (a) 2C-CRDS spectra (60-s averaging per data point), and their spectral fit models. (b) Comparison of the sample \({}^{14}\)C content determined by 2C-CRDS (spectral fit) and AMS. (c) Measurement errors for the sample \({}^{14}\)C content, based on the deviations from the line in panel (b). The errorbars indicate the standard errors of the spectral fit for the amplitudes of the \({}^{14}\)CO\({}_{2}\) signal in the observed 2C spectra. The \({}^{14}\)C contents for the four bio-fuel samples are calibrated based on the 2C-CRDS measurements of “Petrogenic gas” and combusted “ANU” (1.81 ppt \({}^{14}\)C/C) samples. These two types of samples were measured daily with these four bio-fuel samples.
For each of the six combusted \({}^{14}\)C samples in Fig. 2a, the signal at the maximum of the \({}^{14}\)CO\({}_{2}\) 2C transition is measured five times in 1.5 hours, each for a duration of 100 s. These 2C signals at fixed pump-probe frequencies scale linearly with the \({}^{14}\)C content of the corresponding samples (Fig. 2b). Residuals from a linear fit to the 100-s measurements (Fig. 2c) have a mean absolute error equivalent to 0.7 \(\%\) of the \({}^{14}\)CO\({}_{2}\) natural abundance (8.4 ppq). The measurement accuracy improves to 6.1 ppq after averaging the five 100-s measurements of each sample. Because of effective background compensation, the 2C signals measured with the fixed-frequency approach are highly repeatable, with month-to-month stability for their \(absolute\) intensities at the 10-ppq level. However, given that the amount of improvement in the measurement accuracy (8.4 ppq\(\rightarrow\)6.1 ppq) is smaller than expected based on the amount of increase in averaging time (100 s\(\rightarrow\)500 s), the fixed-frequency measurements must have suffered from small systematic errors. Certain types of errors, such as variability in the sample \({}^{13}\)C content (1-2\(\%\) typical) and variations of the background 2C signal due to changes in the experimental conditions (Figs. 2e and 2f), can be compensated by spectral fitting (SI Appendix, Section S1.1). For all four trial measurements of combusted \({}^{14}\)C samples, the spectral fit approach consistently yields improved measurement accuracy (4.0 ppq) compared to the fixed-frequency method (Table 1).
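The inference of residual systematics can be made explicit with the usual white-noise scaling: averaging five independent 100-s measurements should shrink the error by \(\sqrt{5}\). A minimal sketch (Python, using only the error figures quoted above):

```python
import math

# White-noise expectation for averaging five 100-s measurements per sample.
err_100s = 8.4                            # ppq, single 100-s mean absolute error
expected_500s = err_100s / math.sqrt(5)   # ~3.8 ppq if statistics-limited
observed_500s = 6.1                       # ppq, observed after averaging

print(f"expected (white noise): {expected_500s:.1f} ppq")
print(f"observed              : {observed_500s:.1f} ppq")
# The shortfall (6.1 vs ~3.8 ppq) is the signature of the residual
# systematic errors discussed in the text.
```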
Prior to our current results, the saturated-absorption cavity ring-down (SCAR) technique achieved the most sensitive measurements in the mid-IR. By utilizing the high intra-cavity power from a cavity-locked probe, SCAR allows simultaneous measurements of the empty-cavity ringdown rate and the gas-induced absorption. With baseline compensation and 2-hour signal averaging, SCAR achieved 10-ppq measurement of \({}^{14}\)CO\({}_{2}\) at 170 K [15, 23]. Room-temperature detection of \({}^{14}\)CO\({}_{2}\) is not possible with SCAR, even above the natural abundance, because of the overwhelmingly large background 1C signals from other CO\({}_{2}\) isotopologues.
## Current optical detection sensitivity
In our initial 2C-CRDS measurements [2, 3], the beginning of each probe ringdown transient was contaminated by random oscillations with amplitudes much larger than the detector noise [28]. We have shown that the shot-to-shot ringdown rate fluctuations (\(\sigma_{sts}\)) of our detection system can be reduced by 25\(\times\), after a small current is applied to the probe laser current driver concurrent with the trigger for the probe AOM. This fast current injection, which is achieved by temporarily setting an incorrect "zero" level for the locking servo of the probe laser, detunes the probe laser frequency from the original cavity resonance. The combination of this laser-frequency-jump and the usual AOM-controlled beam shut-off leads to a "cleaner" initiation of the ringdown events than the use of an AOM alone [29, 30]. The extra noise in the original setup is most likely caused by interference between the ringdown signal and the leaked probe radiation through the AOM due to its finite light extinction ratio [30]. A \(\sigma_{sts}\) value of 5 s\({}^{-1}\) for the 2C signal with our current detection system (i.e., a shot-to-shot \(k\) value of \(1.7\times 10^{-10}\) cm\({}^{-1}\)) is only 25\(\%\) higher than the \(single\)-shot noise-equivalent signal from \({}^{14}\)CO\({}_{2}\) at natural abundance. The short-term detection sensitivity reaches \(1.7\times 10^{-13}\) cm\({}^{-1}\) after 23-minute averaging, based on Allan deviation analysis of 2C-CRDS signals from multiple \({}^{14}\)CO\({}_{2}\) samples (SI Appendix, Fig. S1). To our knowledge, this "ultimate sensitivity" level, equivalent to \({}^{14}\)CO\({}_{2}\) signals at 1.4 ppq concentration, is better than any previous optical measurement in the mid-IR.
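For reference, a ringdown-rate fluctuation maps onto a noise-equivalent absorption coefficient through \(k=\gamma/c\); the short sketch below (Python) reproduces the conversion between the figures quoted above.

```python
# Ringdown-rate <-> absorption-coefficient conversion, k = gamma / c.
c_cm_s = 2.9979e10              # speed of light, cm/s

sigma_sts = 5.0                 # s^-1, shot-to-shot 2C ringdown-rate fluctuation
k_shot = sigma_sts / c_cm_s     # ~1.7e-10 cm^-1 single-shot noise-equivalent absorption

k_floor = 1.7e-13               # cm^-1, sensitivity after 23-minute averaging
gamma_floor = k_floor * c_cm_s  # ~5e-3 s^-1 equivalent ringdown-rate noise

print(f"single-shot k      : {k_shot:.2e} cm^-1")
print(f"23-min noise floor : {gamma_floor:.2e} s^-1")
```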
\begin{table}
\begin{tabular}{|l c c|c|c|c|} \hline Samples & Measurement & \({}^{14}\)C range & 100 s fixed & 500 s fixed & Spectral fit \\ & periods & (ppq) & (ppq) & (ppq) & (ppq) \\ \hline
6 standards & Aug. 2022 (3 days) & 0-1800 & 8.4 & 6.1 & 4.5 \\
4 standards & Sep. 2022 (1 day) & 0-1800 & 6.6 & 3.5 & 0.9 \\
7 standards & Nov. 2022 (3 days) & 0-1800 & 8.5 & 7.9 & 6.5 \\
4 bio-fuels & Dec. 2022 (4 days) & 10-80 & 6.1 & 6.1 & 4.2 \\ \hline & & \(Average\) & 6.9 & 5.1 & 4.0 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of 2C-CRDS measurements of combusted \({}^{14}\)C standards and bio-fuel samples (20.1 torr).
## Collision-induced 2C background
Adoption of the current P(14)-P(17) detection scheme for \({}^{14}\)CO\({}_{2}\) is guided by the results of a series of pump-probe experiments to study collision-induced background 2C signals from other CO\({}_{2}\) isotopologues. In each of those experiments, the pump directly populates a rotational level in one of the following six nearly-degenerate vibrational states of \({}^{13}\)CO\({}_{2}\) in the 5100-5300 cm\({}^{-1}\) energy region: \(04^{4}1(1)\), \(12^{2}1(1)\), \(12^{2}1(2)\), \(20^{0}1(1)\), \(20^{0}1(2)\), and \(20^{0}1(3)\) (see Fig. 4 caption for the \(v_{1}v_{2}^{l}v_{3}(P)\) notation used for vibrational assignments). Probe signals from two \({}^{13}\)CO\({}_{2}\) transitions, \(20^{0}2(1)\)\(\leftarrow\)\(20^{0}1(1)\) P(5\(e\)) (2213.90 cm\({}^{-1}\)) and \(11^{1}2(1)\)\(\leftarrow\)\(11^{1}1(2)\) P(16\(e\)) (2214.12 cm\({}^{-1}\)), are observed in all six experiments.
Three dominant collisional relaxation pathways (Fig. 4a) and propensity rules on the \(\nu_{3}\) and \(l\) quantum numbers can be identified from these experiments. Efficient \(J\)-changing collisions (Pathway 1) occur within each vibrational state. \(l\)-changing population transfer is observed from five of the pump-populated \({}^{13}\)CO\({}_{2}\) vibrational states into the nearly-degenerate \(20^{0}1(1)\) (Pathway 2, see Fig. 4b). The pump-populated level also exchanges quanta of \(\nu_{2}\) with the background \({}^{12}\)CO\({}_{2}\) bath (Pathway 3, see Fig. 4c) with a small energy change (\(\Delta E\)). For Pathways 2 and 3, the intensities of the 2C peaks, in general, decrease by \(\sim\)10\(\times\) for every 100-200 cm\({}^{-1}\) increase in \(\Delta E\) (Fig. 4d). Note that the \(\nu_{3}\) quantum number is \(conserved\) in all three observed pathways in Fig. 4a. The \(\nu_{3}\)-\(changing\) vibrational energy transfer between nearly-degenerate states, e.g., \(12^{2}1(1)\)\(\rightarrow\)\(32^{2}0(4)\) with \(\Delta E\sim 20\) cm\({}^{-1}\) (see Fig. 4b), is significantly less efficient than these three pathways.
Figure 4: Collision-induced 2C signals following pump excitation of selected hot-band transitions of \({}^{13}\)CO\({}_{2}\). (a) Overview of six 2C spectra. The amplitude of each spectrum is normalized based on the spectral line intensity of the corresponding pump transition [31] and the measured pump power during the experiment (assuming a linear pump power dependence). In the \(v_{1}v_{2}^{l}v_{3}(P)\) notation, \(v_{i}\)’s are the nominal quantum numbers in the three corresponding vibrational modes, \(l\) is the vibrational angular momentum quantum number, and \(P\) indicates the energy rank for vibrational states that belong to the same Fermi interaction polyad (i.e., with the same values of 2\(v_{1}\)+\(v_{2}\), \(v_{3}\), and \(l\)). (b) Level diagram for collision Pathway 2 and the forbidden \(\nu_{3}\)-changing pathway from the pump-populated \(12^{2}1(1)\) state. The species M in panel (b) is predominantly the \({}^{12}\)C\({}^{16}\)O\({}_{2}\) molecule. (c) Level diagram for collision Pathway 3 from the pump-populated \(12^{2}1(1)\) state. (d) 2C peak intensities in panel (a) as a function of the energy gap for the corresponding collisional mechanisms. (e) Experimental spectrum following pump excitation into the \(12^{2}1(1)\), J=29 \(e\) level. (f) Simulated spectrum for (e), with two assumptions regarding changes for the \(\nu_{3}\) quantum number.
The identification of three dominant collisional processes, together with the use of the "energy-gap law" and quantum-number propensity rules [24], allows us to model the collision-induced 2C spectra in Fig. 4a (SI Appendix, Sections S3.2 and S3.3). According to our simulation for pump excitation into \(12^{2}1(1)\), J=29\(e\) (Fig. 4f), additional collision-induced 2C peaks would have been observed in the corresponding probe spectrum (Fig. 4e), if the \(\nu_{3}\)-changing population transfer were allowed. The absence of these additional features strongly supports our proposed propensity rule on the conservation of the \(\nu_{3}\) quantum number during 2C-CRDS measurements of CO\({}_{2}\).
We have simulated the background 2C signals at various pump-probe combinations for \({}^{14}\)CO\({}_{2}\) detection, based on the fit model derived from Fig. 4 (SI Appendix, Section S3.3). The use of collision-assisted four-level detection significantly improves the likelihood of finding a pump-probe combination with reduced 2C background. With the same \(\nu_{3}=1\gets 0\), P(14) pump transition as in the original experiment, background signals of \(<\)10 s\({}^{-1}\) are predicted at the resonance frequencies of every \(\nu_{3}=2\gets 1\), P-branch transition from P(31) to P(7). We measure the background signals with six of these P-branch probes, P(27) to P(17), that fall within the tuning range of the available QCL in our laboratory. For four of these P-branch transitions, the background signal is considerably smaller than that from the original R(13) probe (6.5 s\({}^{-1}\)). The current P(17) probe yields the smallest observed background signal. Work is ongoing to investigate other pump-probe combinations for further reduction of background 2C signals.
## Implications and outlook
By monitoring the baseline fluctuations and background 1C absorption during alternating probe ringdown events, the 2C-CRDS method significantly enhances laser spectroscopic detection in \(sensitivity\), \(selectivity\), and \(quantitative\) \(accuracy\). In combination with recent advances in laser radiation sources, detectors, and mirror coatings in the mid-IR, the technique will greatly enhance the utility of high-resolution mid-IR detection for analytical and spectroscopic studies.
In addition to atmospheric \({}^{14}\)CO\({}_{2}\) measurements, \(in\)\(situ\) mid-IR detection of trace reactive radical molecules in the atmosphere, such as OH, HO\({}_{2}\), and NO\({}_{3}\), can be achieved with the 2C-CRDS method. Field measurements of atmospheric radicals provide valuable experimental inputs for evaluating different models of oxidation chemistry in the earth's troposphere [32, 33]. Among these radicals, detection of OH, often referred to as the "detergent of the atmosphere" for removing CH\({}_{4}\) and other harmful gases (e.g., CO and volatile organic compounds), is particularly challenging because of its very low steady-state concentration, in the range of 40-400 ppq by volume (ppqv) during daytime. A mid-IR instrument will be lighter, more compact, and cheaper than the existing UV spectrometers for direct OH detection, which is based on either the fluorescence assay by gas expansion (FAGE) technique or multi-pass differential optical absorption spectroscopy (DOAS) to measure the \(A^{2}\Sigma^{+}-X^{2}\Pi_{i}\) transitions. Large and heavy high-throughput vacuum pumps, necessary for FAGE to avoid detection of the OH artifacts generated by the probe UV pulses [32, 33], would not be needed for mid-IR detection. While OH artifacts do not affect DOAS, which uses a low-intensity probe, the DOAS technique is not compatible with airborne measurements because of its large footprint (\(>\)1 km total pathlength with 10-40 m mirror separation) [32].
Mid-IR detection of atmospheric OH encounters challenges similar to those for \({}^{14}\)CO\({}_{2}\). One-photon detection of OH (using \(X^{2}\Pi_{3/2}\), \(v=1-0\) transitions near 3570 cm\({}^{-1}\)) is not possible because of spectral overlap with the \(\nu_{3}\)-band transitions of water (3750 cm\({}^{-1}\)). An intra-cavity pump-probe scheme, e.g., \(X^{2}\Pi_{3/2}\), \(v=1-0\), P(2.5e/f) and \(v=2-1\), Q(1.5e/f), would allow accurate 2C-CRDS measurements of atmospheric OH concentrations without significant interference from nearby water transitions. Even though the transition dipole moments for OH ro-vibrational excitation are \(\sim\)10\(\times\) smaller than those for the \(\nu_{3}\)-band transitions of CO\({}_{2}\) [22], detection sensitivity of \(\sim\)50 ppqv OH could be achieved based on the sensitivity of our current setup. For detection of radical species, background 2C signals from closed-shell molecules (e.g., water and CO\({}_{2}\)) can be further filtered by taking advantage of the much larger Zeeman effects of radicals. The measurement sensitivity, selectivity, and accuracy of radical species will all be significantly enhanced with the incorporation of AC Zeeman modulation [34, 35, 36] in the 2C-CRDS method.
The 2C-CRDS technique provides a new mid-IR detection scheme for spectroscopic and chemical dynamics studies. The technique will enable probing chemical species at concentrations, internal energies, and conformations that are not easily accessible with other methods. The incorporation of a mid-IR frequency comb [37] as the probe is a particularly attractive direction for the 2C-CRDS technique. With a single-frequency pump and a broadband probe, the technique will enable high-sensitivity, high-selectivity, and \(multiplexed\) investigation of the molecular level structure in the high internal energy region of the electronic ground state of many molecules. Classic, fundamental problems in chemical physics, such as intramolecular vibrational energy redistribution, isomerization, and bond dissociation, will be systematically explored in a highly-\(sensitive\) and level-\(specific\) manner. In combination with a Chen-type hyperthermal nozzle [38], which produces a vibrationally-hot but rotationally-cold population distribution [39], the 2C-CRDS method and its cavity-enhanced 2C variants complement (and, in certain applications, exceed) the capabilities of the widely-used stimulated emission pumping technique for studying molecular dynamics in the ground electronic state [40, 41], in particular for molecules with only short-lived (\(<\)1 ns) electronically excited states.
## Methods
### 2C-CRDS detection
With a 67-cm round trip, the free spectral range (FSR) of the three-mirror cavity is 443.3 MHz. The \(s\)- and \(p\)-mode cavity resonances are interleaved with a spacing of \(\sim\)\(\frac{\text{FSR}}{2}\), because of a net \(\sim\)\(\pi\)-phase shift between the two polarizations upon reflection inside the three-mirror cavity [42]. Unlike the free-space pump-probe experiments, the pump laser frequency cannot be set at a fixed value in an intra-cavity excitation scheme. A change in the cavity FSR value leads to a shift in both the pump and probe laser frequencies [2, 3]. As a result, in general, the pump (\(p\)-polarized) and probe (\(s\)-polarized) frequencies will not be simultaneously on-resonance with their respective target molecular transitions in our 2C-CRDS experiments. For \({}^{14}\)CO\({}_{2}\) detection reported in this work, the pump radiation is coupled to the cavity \(p\)-mode resonance that lies closest to the resonance frequency of a target \({}^{14}\)CO\({}_{2}\) pump transition. Under this experimental scheme, 2C signals from \({}^{14}\)CO\({}_{2}\) are observed with a maximum absolute pump detuning frequency of \(\frac{\text{FSR}}{2}\) (221.65 MHz), regardless of the choice of the pump and probe transitions. Because of the strong cavity-enhanced pump power (20 W), which leads to significant power saturation and broadening (\(\sim\)300 MHz at full-width half-maximum) of the pump transition, the observed 2C signals are minimally affected by the absence of a double-resonance excitation condition.
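A quick consistency check on the cavity parameters quoted above (Python; the round-trip length is inferred here from the measured FSR):

```python
# Cavity geometry sanity checks from the quoted free spectral range.
c = 2.9979e8                 # m/s
fsr = 443.3e6                # Hz, measured FSR of the three-mirror cavity

round_trip = c / fsr         # ~0.676 m, consistent with the ~67-cm round trip
max_pump_detune = fsr / 2.0  # maximum absolute pump detuning, 221.65 MHz

print(f"implied round trip : {round_trip * 100:.1f} cm")
print(f"FSR/2              : {max_pump_detune / 1e6:.2f} MHz")
```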
The strong and sustained intra-cavity pump power is achieved by stabilizing the pump laser frequency to a cavity resonance with the Pound-Drever-Hall (PDH) method [43]. The output frequency from a third laser ("Mol" in Fig. 1a) is locked to the center of an N\({}_{2}\)O transition. The beatnote between this third laser and the pump is used to stabilize the cavity length, and to calibrate both the pump and probe laser frequencies. Further details on our 2C-CRDS detection system are provided in SI Appendix, Section S1.2.
The resonance frequencies of the \(\nu_{3}=1\gets 0\) P(14) (pump) and \(\nu_{3}=2\gets 1\) P(17) (probe) transitions of \({}^{14}\)CO\({}_{2}\)[25, 26, 27] are separated by nearly an exact odd integer multiple of the frequency spacing between neighboring \(p\)- and \(s\)-mode cavity resonances of our cavity (i.e., \(\sim\)\(3329\times\frac{\text{FSR}}{2}\)). As a result, the \({}^{14}\)CO\({}_{2}\) 2C transition from the current P(14)-P(17) pump-probe scheme occurs at the near double-resonance excitation condition, with a frequency detuning of \(\sim\)30 MHz and \(\sim\)0 MHz, respectively, for the pump and probe lasers. In addition, the \(J=17\) level is near the maximum of the room-temperature rotational distribution of CO\({}_{2}\). The observed \({}^{14}\)CO\({}_{2}\) signals in our current experiments are thus nearly maximized for intra-cavity pump-probe detection of the molecule.
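The near-odd-multiple condition fixes the pump-probe frequency separation; a one-line check (Python, illustrative):

```python
# Pump-probe separation implied by the ~3329 x (FSR/2) condition.
fsr_half_hz = 221.65e6
sep_hz = 3329 * fsr_half_hz                 # ~737.9 GHz
print(f"separation ~ {sep_hz / 1e9:.1f} GHz "
      f"= {sep_hz / 2.9979e10:.2f} cm^-1")  # divide by c in cm/s -> wavenumbers
```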
As in our previous work [2], 2C-CRDS measurements of the CO\({}_{2}\) gas samples are all taken at 20.1 torr. At this measurement pressure, the probe transition of \({}^{14}\)CO\({}_{2}\) is weakly saturated, considering that the magnitude of the \({}^{14}\)CO\({}_{2}\) 2C signals depends on the starting voltage (\(V_{0}\)) of the ringdown fit, e.g., a 50\(\%\) decrease of \(V_{0}\) leads to a \(\sim\)30\(\%\) increase in the 2C signals. Note that, in the strongly-saturated regime, the gas-absorption-induced ringdown rate (\(\gamma_{gas}\)) approaches zero, while \(\gamma_{gas}\) is independent of \(V_{0}\) in the non-saturated limit. The background 2C signals at the \({}^{14}\)CO\({}_{2}\) pump-probe transition region relevant to this work are significantly less saturated than the \({}^{14}\)CO\({}_{2}\) probe transition. While the degree of saturation of the \({}^{14}\)CO\({}_{2}\) probe transition can be reduced at higher gas pressures, collision-induced homogeneous broadening will lead to an increase in the background 2C signal level. We are currently working on optimizing various experimental conditions, such as the gas pressure and the selection of pump-probe transitions (for further reduction of the background 2C signal), to improve the sensitivity and accuracy of 2C-CRDS detection of \({}^{14}\)CO\({}_{2}\).
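The \(V_{0}\) dependence described above can be illustrated with a toy rate model in which the gas contribution to the ringdown saturates with the intra-cavity intensity, \(dI/dt=-[\gamma_{c}+\gamma_{gas}/(1+I/I_{sat})]\,I\). The sketch below (Python) is schematic only; all parameter values are hypothetical and this is not the analysis applied to the data.

```python
import numpy as np

# Toy model of a weakly saturated ringdown: the fitted decay rate depends on
# the fit starting amplitude V0. Schematic only; hypothetical parameters.
gamma_c, gamma_gas, I_sat = 5.0e3, 50.0, 1.0   # s^-1, s^-1, arb. intensity

def ringdown(V0, t):
    I, out = V0, []
    dt = t[1] - t[0]
    for _ in t:                  # forward-Euler integration of the rate model
        out.append(I)
        I += -(gamma_c + gamma_gas / (1.0 + I / I_sat)) * I * dt
    return np.array(out)

t = np.linspace(0.0, 4.0 / gamma_c, 4000)
for V0 in (1.0, 0.5):                            # halve the starting voltage
    rate = -np.polyfit(t, np.log(ringdown(V0, t)), 1)[0]   # exponential fit
    print(f"V0 = {V0:.1f}: fitted (rate - gamma_c) = {rate - gamma_c:.1f} s^-1")
```

Halving \(V_{0}\) increases the fitted gas-induced rate in this toy model, in qualitative agreement with the observed \(V_{0}\) dependence.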
## References
* [1] Romanini, D., Ventrillard, I., Méjean, G., Morville, J. & Kerstel, E. in _Cavity-enhanced Spectroscopy and Sensing_ (eds Gagliardi, G. & Loock, H.-P.) 1-51 (Springer, 2014).
* [2] McCartt, A. D. & Jiang, J. Room-temperature optical detection of \({}^{14}\)CO\({}_{2}\) below the natural abundance with two-color cavity ring-down spectroscopy. _ACS Sensors_**7,** 3258-3264 (2022).
* [3] Jiang, J. & McCartt, A. D. Two-color, intracavity pump-probe, cavity ringdown spectroscopy. _The Journal of Chemical Physics_**155,** 104201 (2021).
* [4] Bennett, C. L. _et al._ Radiocarbon dating using electrostatic accelerators: Negative ions provide the key. _Science_**198,** 508-510 (1977).
* [5] Nelson, D. E., Korteling, R. G. & Stott, W. R. Carbon-14: Direct detection at natural concentrations. _Science_**198,** 507-508 (1977).
* [6] Godwin, H. Half-life of radiocarbon. _Nature_**195,** 984 (1962).
* [7] Taylor, R. E. & Bar-Yosef, O. _Radiocarbon dating: An archaeological perspective_ (Routledge, 2016).
* [8] Turteltaub, K. W. & Vogel, J. S. Bioanalytical applications of accelerator mass spectrometry for pharmaceutical research. _Current Pharmaceutical Design_**6,** 991-1007 (2000).
* [9] Wong, S. G. & Ma, S. in _Overcoming Obstacles in Drug Discovery and Development_ 137-174 (Elsevier, 2023).
* [10] Heaton, T. J. _et al._ Radiocarbon: A key tracer for studying Earth's dynamo, climate system, carbon cycle, and Sun. _Science_**374,** eabd7096 (2021).
* [11] Sargent, M. _et al._ Anthropogenic and biogenic CO\({}_{2}\) fluxes in the Boston urban region. _Proceedings of the National Academy of Sciences_**115,** 7491-7496 (2018).
* [12] Miller, J. B. _et al._ Large and seasonally varying biospheric CO\({}_{2}\) fluxes in the Los Angeles megacity revealed by atmospheric radiocarbon. _Proceedings of the National Academy of Sciences_**117,** 26681-26687 (2020).
* [13] Basu, S. _et al._ Estimating US fossil fuel CO\({}_{2}\) emissions from measurements of \({}^{14}\)C in atmospheric CO\({}_{2}\). _Proceedings of the National Academy of Sciences_**117,** 13300-13307 (2020).
* [14] Galli, I. _et al._ Molecular gas sensing below parts per trillion: Radiocarbon-dioxide optical detection. _Physical Review Letters_**107,** 270802 (2011).
* [15] Galli, I. _et al._ Spectroscopic detection of radiocarbon dioxide at parts-per-quadrillion sensitivity. _Optica_**3,** 385-388 (2016).
* [16] McCartt, A. D., Ognibene, T. J., Bench, G. & Turteltaub, K. W. Quantifying carbon-14 for biology using cavity ring-down spectroscopy. _Analytical Chemistry_**88,** 8714-8719 (2016).
* [17] Fleisher, A. J., Long, D. A., Liu, Q., Gameson, L. & Hodges, J. T. Optical measurement of radiocarbon below unity fraction modern by linear absorption spectroscopy. _The Journal of Physical Chemistry Letters_**8,** 4550-4556 (2017).
* [18] Genoud, G., Vainio, M., Phillips, H., Dean, J. & Merimaa, M. Radiocarbon dioxide detection based on cavity ring-down spectroscopy and a quantum cascade laser. _Optics Letters_**40,** 1342-1345 (2015).
* [19] Terabayashi, R. _et al._ Mid-infrared cavity ring-down spectroscopy using DFB quantum cascade laser with optical feedback for radiocarbon detection. _Japanese Journal of Applied Physics_**59,** 092007 (2020).
* [20] Kratochwil, N. A. _et al._ Nanotracing and cavity-ring down spectroscopy: A new ultrasensitive approach in large molecule drug disposition studies. _PloS one_**13,** e0205435 (2018).
* [21] Mitchell, L. E. _et al._ A multi-city urban atmospheric greenhouse gas measurement data synthesis. _Scientific Data_**9,** 361 (2022).
* [22] Gordon, I. _et al._ The HITRAN2020 molecular spectroscopic database. _Journal of Quantitative Spectroscopy and Radiative Transfer_**277,** 107949 (2022).
* [23] Delli Santi, M. G. _et al._ Biogenic fraction determination in fuel blends by laser-based \({}^{14}\)CO\({}_{2}\) detection. _Advanced Photonics Research_**2,** 2000069 (2021).
* [24] Yardley, J. _Introduction to Molecular Energy Transfer_ (Elsevier, 2012).
* [25] Galli, I. _et al._ The \(\nu_{3}\) band of \({}^{14}\)C\({}^{16}\)O\({}_{2}\) molecule measured by optical-frequency-comb-assisted cavity ring-down spectroscopy. _Molecular Physics_**109,** 2267-2272 (2011).
* [26] Zak, E. J. _et al._ Room temperature line lists for CO\({}_{2}\) symmetric isotopologues with ab initio computed intensities. _Journal of Quantitative Spectroscopy and Radiative Transfer_**189,** 267-280 (2017).
* [27] Huang, X., Schwenke, D. W., Freedman, R. S. & Lee, T. J. Ames-2016 line lists for 13 isotopologues of CO\({}_{2}\): Updates, consistency, and remaining issues. _Journal of Quantitative Spectroscopy and Radiative Transfer_**203,** 224-241 (2017).
* [28] Lehmann, K. K. & Huang, H. in _Frontiers of Molecular Spectroscopy_ 623-658 (Elsevier, 2009).
* [29] Balslev-Clausen, D. M. _Application of cavity ring down spectroscopy to isotopic bio- geo- & climate-sciences & the development of a mid-infrared CRDS analyzer for continuous measurements of \(N_{2}\)O isotopomers_ PhD thesis (University of Copenhagen, 2011).
* [30] Huang, H. & Lehmann, K. Noise caused by a finite extinction ratio of the light modulator in CW cavity ring-down spectroscopy. _Applied Physics B_**94,** 355-366 (2009).
* [31] Huang, X., Schwenke, D. W., Freedman, R. S. & Lee, T. J. Ames-2021 CO\({}_{2}\) Dipole Moment Surface and IR Line Lists: Toward 0.1% Uncertainty for CO\({}_{2}\) IR Intensities. _The Journal of Physical Chemistry A_**126,** 5940-5964 (2022).
* [32] Heard, D. E. & Pilling, M. J. Measurement of OH and HO\({}_{2}\) in the troposphere. _Chemical Reviews_**103,** 5163-5198 (2003).
* [33] Stone, D., Whalley, L. K. & Heard, D. E. Tropospheric OH and HO\({}_{2}\) radicals: Field measurements and model comparisons. _Chemical Society Reviews_**41,** 6348-6404 (2012).
* [34] Zhao, W. _et al._ Sensitive and selective detection of OH radicals using Faraday rotation spectroscopy at 2.8 \(\mu\)m. _Optics Express_**19,** 2493-2501 (2011).
* [35] Pfeiffer, J., Kirsten, D., Kalkert, P. & Urban, W. Sensitive magnetic rotation spectroscopy of the OH free radical fundamental band with a colour centre laser. _Applied Physics B_**26,** 173-177 (1981).
* [36] Lewicki, R., Doty III, J. H., Curl, R. F., Tittel, F. K. & Wysocki, G. Ultrasensitive detection of nitric oxide at 5.33 \(\mu\)m by using external cavity quantum cascade laser-based Faraday rotation spectroscopy. _Proceedings of the National Academy of Sciences_**106,** 12587-12592 (2009).
* [37] Schliesser, A., Picqué, N. & Hänsch, T. W. Mid-infrared frequency combs. _Nature Photonics_**6,** 440-449 (2012).
* [38] Kohn, D. W., Clauberg, H. & Chen, P. Flash pyrolysis nozzle for generation of radicals in a supersonic jet expansion. _Review of Scientific Instruments_**63,** 4003-4005 (1992).
* [39] Changala, P. B., Baraban, J. H., Merer, A. J. & Field, R. W. Probing \(cis\)-\(trans\) isomerization in the S\({}_{1}\) state of C\({}_{2}\)H\({}_{2}\) via H-atom action and hot-band-pumped IR-UV double resonance spectroscopies. _The Journal of Chemical Physics_**143,** 084310 (2015).
* [40] Hamilton, C. E., Kinsey, J. L. & Field, R. W. Stimulated emission pumping: New methods in spectroscopy and molecular dynamics. _Annual Review of Physical Chemistry_**37,** 493-524 (1986).
* [41] Dai, H.-L. & Field, R. W. _Molecular Dynamics and Spectroscopy by Stimulated Emission Pumping_ (World scientific, 1995).
* [42] Saraf, S., Byer, R. L. & King, P. J. High-extinction-ratio resonant cavity polarizer for quantum-optics measurements. _Applied Optics_**46,** 3850-3855 (2007).
* [43] Drever, R. _et al._ Laser phase and frequency stabilization using an optical resonator. _Applied Physics B_**31,** 97-105 (1983).
## Acknowledgments
We thank Professor Robert W. Field (MIT), Stephan L. Coy (MIT), and Professor Kevin K. Lehmann (UVA) for insightful and encouraging comments on the manuscript, and Bruce Buchholz, Kari Finstad, Esther Ubick, and Ted Ognibene (LLNL) for their assistance with the sample preparations.
**Funding:** Research reported in this publication was supported by the National Institute Of General Medical Sciences of the National Institutes of Health (Award Number R01GM127573). This work was partially supported by the National Nuclear Security Administration's Office of Defense Nuclear Nonproliferation Research and Development. Work was performed in part at the National User Resource for Biological Accelerator Mass Spectrometry, which is operated at LLNL under the auspices of the U.S. Department of Energy under contract DE-AC52-07NA27344. The User Resource is supported by the National Institutes of Health, National Institute of General Medical Sciences under grant R24GM137748.
**Author contributions:** Conceptualization: J.J. and A.D.M.; Experimental design: J.J. and A.D.M.; Data acquisition: J.J. and A.D.M.; Modeling: J.J.; Funding acquisition: A.D.M.; Writing (original draft): J.J.; Writing (revision): J.J. and A.D.M.
**Competing interests:** J.J. and A.D.M. are employees of Lawrence Livermore National Laboratory, which is managed by Lawrence Livermore National Security (LLNS) LLC. A patent application based on the 2C-CRDS technique, which is applied to the \({}^{14}\)CO\({}_{2}\) measurements in this study, was filed by LLNS. The patent application was approved by the US Patent Office with Patent number US11585753.
**Data and materials availability:** The data that support the findings of this study are available from the corresponding authors upon reasonable request.
## Supplementary information
* Spectral fit
* Details of the 2C-CRDS detection system
* Sample preparation
* Supplementary text
* Figs. S1 to S5
## S1 Methods
### S1.1 Spectral fit
Traditional lineshape functions such as the Voigt profile are inadequate for fitting the intra-cavity 2C peaks, which are in general asymmetric with respect to the pump detuning frequencies [1, 2]; e.g., the maximum of the \({}^{14}\)CO\({}_{2}\) 2C transition from our current P(14)-P(17) pump-probe scheme occurs at a pump-detuning frequency of \(\sim\)30 MHz. The \({}^{14}\)CO\({}_{2}\) signals in Figs. 2a and 3a of the main text are modeled using the cavity-resonance-constrained Bloch equation formalism outlined in our previous papers [1, 2]. Considering the rapid rotational relaxation among various \(J\)-levels of \(\nu_{3}=1\), a three-level system equation is effectively applied here to treat the four-level problem (scheme ii in Fig. 1c of the main text), i.e., the lower level of the probe transition, \(\nu_{3}=1\) \(J=17\), is assumed to be populated immediately after pump excitation of \(\nu_{3}=1\) \(J=13\). Limited by the current signal-to-noise ratios, the background 2C spectra from the \({}^{14}\)C-depleted “dead” samples (i.e., “Petrogenic gas” and “Coal”) are modeled as a sum of two Voigt functions. The homogeneous and inhomogeneous linewidths are assumed to have the same value for these two Voigt profile peaks. The fit parameters for the background signals are determined from the “Coal” spectrum in Fig. 2a of the main text, and those for modeling the \({}^{14}\)CO\({}_{2}\) 2C contribution are determined from a fit to the 2C-CRDS spectrum of a highly enriched \({}^{14}\)CO\({}_{2}\) sample (40\(\times\) natural abundance) after subtraction of the background 2C contribution. The observed 2C spectra from all other samples are modeled as a linear combination of the simulated \({}^{14}\)CO\({}_{2}\) and dead spectra, by fitting only their respective amplitudes. The fit uncertainty for each data point (\(\sigma\)) is set according to the expected measurement precision, determined by the shot-to-shot 2C ringdown rate fluctuations (\(\sigma_{sts}\sim 5\) s\({}^{-1}\)) of the current detection system and the averaging time (\(t_{avg}\)), i.e., \(\sigma=\sigma_{sts}/\sqrt{t_{avg}}\). The fitted model reproduces the experimental spectra with an average \(\sigma\)-weighted error close to unity (see Fig. 2d of the main text).
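Since only the two amplitudes are floated, the spectral fit reduces to weighted linear least squares. A minimal sketch (Python), with synthetic placeholder arrays standing in for the measured spectrum and the two fixed templates:

```python
import numpy as np

# Amplitude-only spectral fit as weighted linear least squares (sketch).
# `tmpl_14` and `tmpl_dead` are the fixed 14CO2 and background templates on
# the same frequency grid as the observed spectrum `spec` (placeholders here).
def fit_amplitudes(spec, tmpl_14, tmpl_dead, sigma_sts=5.0, t_avg=100.0):
    sigma = sigma_sts / np.sqrt(t_avg)            # per-point weight, as above
    A = np.column_stack([tmpl_14, tmpl_dead]) / sigma
    amps, chi2, _, _ = np.linalg.lstsq(A, spec / sigma, rcond=None)
    return amps, np.sqrt(chi2[0] / (len(spec) - 2))   # sigma-weighted error

# Synthetic demonstration: true amplitudes (0.8, 1.2) plus noise of size sigma.
rng = np.random.default_rng(0)
nu = np.linspace(-1.0, 1.0, 200)
tmpl_14, tmpl_dead = np.exp(-nu**2 / 0.02), 0.3 + 0.1 * nu
spec = 0.8 * tmpl_14 + 1.2 * tmpl_dead + rng.normal(0.0, 0.5, nu.size)
print(fit_amplitudes(spec, tmpl_14, tmpl_dead))
```

With noise matched to \(\sigma\), the recovered \(\sigma\)-weighted error comes out near unity, mirroring the behavior of the actual fit noted above.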
### S1.2 Details of the 2C-CRDS detection system
The three-mirror cavity consists of two plano mirrors and a plano-concave mirror with 1-m radius of curvature (LohnStar). The two plano mirrors are glued onto an invar cavity spacer. The concave mirror is housed in a piezoelectric-transducer (PZT) assembly, which is attached to the cavity spacer. The laser incidence angle at the PZT mirror is \(\sim\)1.5\({}^{\circ}\). The pump, probe, and reference lasers are continuous-wave QCLs (Hamamatsu, HHL-package). The pump and probe lasers (1000 mA maximum current) are each driven by a battery-powered QubeCL system from ppqSense. For both the pump and probe lasers (each modulated at 6 MHz), light reflection off the cavity is measured with a HgCdTe (MCT) photodetector (Thorlabs PDAVJ8), and the MCT signal is demodulated with a frequency mixer (Mini-Circuit, ZRPD-1+). The resulting error signal is used as the input to a PID servo control loop (Vescent, D2-125-PL) to achieve laser frequency-locking to the cavity with the PDH method (\(\sim\)1 MHz locking bandwidth). The probe cavity ringdown signals are measured by a liquid-nitrogen-cooled InSb detector (InfraRed Associates, Model IS-0.25) coupled to a pre-amplifier with 1 MHz bandwidth (InfraRed Associates, INSB-1000). The intra-cavity pump power is stabilized based on the cavity-transmitted pump power, which is measured by an MCT detector (VIGO, PVI-4TE-6-1\(\times\)1/PIP-DC-20M-F-M8). Both the pump and probe signals are digitized on an oscilloscope (National Instrument, PXI-5922) that operates at 4 Ms/s sampling rate and 20-bit analog input resolution.
The reference QCL is driven by a current controller from Wavelength Electronics (QCL500 Laboratory Series). The temperature of this QCL is regulated with a PI servo control loop (Wavelength Electronics, PTC2.5K-CH). After a double-pass through an optical cell (10 cm, 4.5 torr N\({}_{2}\)O), the transmitted intensity of the reference laser (modulated at 3 MHz) is measured by an MCT photodetector (VIGO, PVI-4TE-6/PIP-DC-20M), and the signal is demodulated with a frequency mixer (Mini-Circuit, ZRPD-1+). The frequency of the reference laser is locked to the center of the \(\nu_{3}=1-0\), R(16) transition of \({}^{15}\)N\({}^{14}\)N\({}^{16}\)O at 2214.339 \(\pm\) 0.001 cm\({}^{-1}\) by a PI servo loop (New Focus, LB1005). The beatnote of the reference and pump lasers is measured by another MCT detector (VIGO PVI-4TE-10.6/FIP-1k-1G). This beatnote provides frequency calibrations for the pump and probe lasers. In our experiments, the probe laser frequency is first roughly measured using a wavemeter (Bristol 771) and then assigned a frequency using the beatnote and the cavity mode spacing (using Eq. 1 in Ref [2]). Timing for the experiment is controlled with a custom code implemented on a field programmable gate array (FPGA, National Instrument, PXIe-7976R and NI-5783). The FPGA system controls the AOMs (IntraAction Corp) for the pump and probe lasers, and provides corrections to the PDH servo.
The Allan deviation analysis of 2C-CRDS signals from multiple \({}^{14}\)CO\({}_{2}\) samples is provided in Fig. S1. The minimum of the sample-averaged Allan deviation curve is used to determine the "ultimate sensitivity" of our current 2C-CRDS setup for \({}^{14}\)CO\({}_{2}\) detection.
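A minimal non-overlapping Allan deviation routine of the kind used for such an analysis might look as follows (Python; a sketch demonstrated on synthetic white noise, not the actual analysis code):

```python
import numpy as np

# Non-overlapping Allan deviation of a time series sampled at fixed intervals.
def allan_deviation(y, block_sizes):
    out = []
    for m in block_sizes:
        n = len(y) // m
        means = y[: n * m].reshape(n, m).mean(axis=1)     # tau-averaged blocks
        out.append(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))
    return np.array(out)

# White-noise demonstration: sigma_A should fall as 1/sqrt(tau).
rng = np.random.default_rng(1)
y = rng.normal(0.0, 5.0, 20000)            # e.g., sigma_sts ~ 5 s^-1 per shot
blocks = [1, 4, 16, 64, 256]
for m, s in zip(blocks, allan_deviation(y, blocks)):
    print(f"tau = {m:4d} samples: sigma_A = {s:.3f}")
```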
## S2 Materials
The sample preparation procedures of the \({}^{14}\)C standard and bio-fuel samples are similar to those described in our previous work [1]. The “Petrogenic gas” sample is instrument-grade CO\({}_{2}\) from petroleum feedstock (Airgas). The other \({}^{14}\)C samples used in this work are either in the solid (\({}^{14}\)C standards) or liquid form (bio-fuels). The bio-fuel samples are transportation fuels containing 1-7\(\%\) of biologically-derived carbon-containing materials. The “Coal” sample is composed of coal. The “Tiriwood” sample is “Belfast Pine, Sample B” from the Third International Radiocarbon Intercomparison with designator Q7780 [3]. It was collected from a 5240-year-old tree (TIRI Wood, Pinus sylvestris). “Ox1” (Oxalic acid I, NIST designation SRM 4990 B) is a principal \({}^{14}\)C standard, and was derived from a crop of 1955 sugar beet [4, 5, 6]. “Ox2” (Oxalic acid II, NIST designation SRM 4990 C) was derived from a crop of 1977 French beet molasses [4, 5, 6]. “IAEAC3” is cellulose produced in 1989 from \(\sim\)40 year old trees in Sweden [7]. “ANU” (IAEAC6) is sucrose produced from sugar cane grown between September 1965 and June 1971 [7]. The “ANU” sample is named after the Australian National University, the home institution of H.A. Polach, who prepared the first batch of the sample [8]. The \({}^{14}\)C contents in the \({}^{14}\)C standard and bio-fuel samples have all been previously measured by AMS.
Each of these samples is sealed in a quartz tube along with an excess amount of copper oxide (\(>\)150 mg), and is combusted at 900\({}^{\circ}\)C for 2-4 hours. The quartz tube containing the CO\({}_{2}\) gas is cracked under vacuum inside a bellow tube attached to the gas manifold. The released gas first passes over an isopropanol/dry-ice water trap. The gas sample is then exposed to a liquid nitrogen cold finger for removal of the non-condensable gas components (O\({}_{2}\) and N\({}_{2}\)). The purified CO\({}_{2}\) gas is introduced to the optical cavity for 2C-CRDS measurements at 20.1 torr.
## S3 Supplementary text
### S3.1 2C-CRDS signal variations
We have investigated the pump power and temperature dependence of the 2C-CRDS signals (Fig. S2). In general, the 2C signals from \({}^{14}\)CO\({}_{2}\) are significantly less sensitive to changes in the experimental conditions than the background 2C signals, which result from inadvertent hot-band excitation of other CO\({}_{2}\) isotopologues. Among these observations, the strong dependence of the 2C signals on the ambient temperature is unexpected (Fig. S2c). A 2\({}^{\circ}\)C increase of the room temperature leads to \(\sim\)2\(\%\) and \(\sim\)13\(\%\) increase, respectively, for the 2C signals from \({}^{14}\)CO\({}_{2}\) and the background 2C process. Note that this 2\({}^{\circ}\)C change of the ambient temperature leads to only a modest increase in the temperature of the gas cavity (\(T_{\text{gas}}\) \(\sim\)0.1\({}^{\circ}\)C), which is housed inside a temperature-controlled experimental chamber. As can be inferred from Fig. S2b, \(>\)1\({}^{\circ}\)C increase of \(T_{\text{gas}}\) would be needed to cause the observed \(\sim\)13\(\%\) increase of the background 2C signal in Fig. S2c. Furthermore, the \({}^{14}\)CO\({}_{2}\) 2C signal from a hotter gas sample slightly decreases (by \(\sim\)0.6\(\%\) for a 0.8\({}^{\circ}\)C increase of \(T_{\text{gas}}\)) because of depletion of thermal population in the initial \(\nu_{3}=0\), \(J=14\) level of the pump-probe scheme. The observed ambient temperature dependence of the 2C signals in Fig. S2c must therefore be caused by temperature-induced changes in the experimental conditions external to the gas cavity.
We believe that changes in the ambient temperature lead to mis-control of the intra-cavity pump power, which is stabilized based on the photodiode measurement of the cavity-transmitted pump power (see Fig. 1a of the main text and Section S1.2). As can be seen from Figs. S2d and S2e, the temperature-related variations of the 2C-CRDS signals (from a \({}^{14}\)CO\({}_{2}\) sample with 2\(\times\) natural abundance concentration) correlate with changes of the control voltage for the pump laser power during the experiment. It appears that, as a result of the 2\({}^{\circ}\)C increase of the room temperature, our current pump power control scheme inadvertently leads to a nearly 20\(\%\) increase in the pump laser power. Based on the pump power dependence of the 2C-CRDS signals from the same sample (see the inset of Fig. S2a), this unexpected 20\(\%\) variation of the pump power could be responsible for the observed \(\sim\)4\(\%\) change of the 2C-CRDS signal in Fig. S2d.
We must point out that the applied changes to the experimental conditions in Fig. S2 are \(\sim\)10\(\times\) larger than those during any of the \({}^{14}\)C standards and bio-fuel measurements. As discussed in the main text, variations of the background 2C signals can be at least partially compensated by spectral fitting. It is more difficult to directly account for the much smaller but non-negligible variations of the \({}^{14}\)CO\({}_{2}\) signals. For samples with close to natural \({}^{14}\)C concentration, these variations could lead to \(\sim\)2 ppb measurement error, given the stability of the experimental conditions in our laboratory. A more reliable and robust pump power stabilization scheme will be implemented in a future generation of the 2C-CRDS setup.
### S3.2 Details of the intra-cavity pump-probe experiments on collision-induced population transfer
The purpose of the pump-probe experiments on \({}^{13}\)CO\({}_{2}\) in Fig. 4a of the main text is to gain insights into the collisional mechanisms responsible for the background 2C signals relevant to 2C-CRDS detection of \({}^{14}\)CO\({}_{2}\). In each of these experiments, the probe laser wavenumber spans continuously from approximately 2213.88 cm\({}^{-1}\) to 2214.13 cm\({}^{-1}\). This continuous wavenumber coverage is achieved by splicing together 16 sectional spectra, each of which covers a frequency span of the cavity FSR (443.3 MHz). During each sectional FSR scan, the pump radiation is coupled to the \(same\) cavity resonance (i.e., with the same absolute mode index) that tunes across the resonance frequency of a target pump transition at approximately the mid-point of the FSR scan. This pump-probe frequency tuning scheme ensures that, during each probe sectional scan, the maximum absolute pump detuning frequency is less than \(\frac{\text{FSR}}{2}\) from the target \({}^{13}\)CO\({}_{2}\) pump transition frequency. Both positive and negative 2C signals can occur depending on whether the pump-initiated population redistribution increases or decreases the population in the lower level of a specific probe transition.
The intensities of the collision-induced 2C spectra are, in general, \(not\) expected to be continuous across different sectional spectra, because of the discontinuity of the pump laser detuning frequency, and thereby pump excitation probability, at the splicing points of neighboring spectra. However, given that the resonance frequency of a target pump transition occurs, by design, close to the mid-point of each FSR scan in our experiments here, the pump excitation efficiency is largely equal at those splicing points. The intensities of the 2C signals due to excitation of a target pump transition are thus expected to be nearly continuous across neighboring spectra, which is largely found to be the case in Fig. S3. Clear discontinuities are, however, present between some of the neighboring sectional spectra, in particular from the experiment for which \(20^{0}1(1)\) J=31\(e\) is the target pump-populated \({}^{13}\)CO\({}_{2}\) level (e.g., at the 2214.1 cm\({}^{-1}\) region of the corresponding spectra in Fig. S3). The presence of these discontinuities indicates that at least part of the observed signals originate from a collision-induced four-level excitation process that does \(not\) involve the target pump transition. The resonance frequency of the pump transition in that additional four-level excitation process could be far from the center frequency of the pump frequency tuning range designed for the original target pump transition. For example, 2C signals due to far-off-resonant pump excitation of the \(11^{1}1(2)\gets 11^{1}0(2)\) P(42\(f\)) transition of \({}^{13}\)CO\({}_{2}\) (\(>\)500 MHz detuned) are believed to be the cause of the intensity jumps between neighboring spectra at the 2214.1 cm\({}^{-1}\) region of the “\(20^{0}1(1)\) J=31\(e\)” experiment in Fig. S3. As can be seen from Fig. S3, the non-continuous 2C spectra from that experiment are well reproduced by our simulation, which takes into account all possible pump excitation transitions within 8 GHz of the resonance frequency of a chosen target pump transition.
### S3.3 Modeling the collision-induced background signals
The collision model for the background 2C signals described in the main text is applied to simulate the collision-induced 2C spectra from all possible isotopologues of CO\({}_{2}\), based on the linelist in Ref [9]. The simulated spectra for different isotopologues, weighted by their natural abundances used in Ref [10], are combined to yield the simulated spectra. The model suggests that the observed 2C signals in Fig. 4a of the main text and Figs. S3-S5 here originate predominantly from the \({}^{13}\)C\({}^{16}\)O\({}_{2}\) species, although non-negligible contributions from \({}^{13}\)C\({}^{16}\)O\({}^{18}\)O are present in some of the spectra.
Figure S3: Simulation of the observed collision-induced 2C spectra following pump excitation of six hot-band ro-vibrational transitions of \({}^{13}\)CO\({}_{2}\). The six target \({}^{13}\)CO\({}_{2}\) levels for the pump are indicated on the figures.
In the model, the “\(J\)-changing collisions” pathway (Pathway 1) is treated as a special case of Pathway 2 in which the vibrational energy difference is zero. The intensity of the collision-induced signal is assumed to depend on the vibrational energy gap for the responsible collisional pathway. For simplicity, the simulated 2C signals decrease by 10\(\times\) for every 200 cm\({}^{-1}\) increase in \(|\Delta E|\). Furthermore, the effects on the collisional transfer rates due to differences in the basis state composition among different Fermi polyads and polyad members are ignored in our model. The collision-induced homogeneous linewidths are taken to be 4 MHz/torr (half-width at half maximum) for all the probe transitions. Their inhomogeneous linewidths are fixed to the room-temperature value.
Under these assumptions, we qualitatively reproduce the observed collision-induced 2C spectra in Fig. S3 by adjusting an overall intensity scaling factor and the power-broadened linewidth for the pump transitions (assumed to be the same for all pump transitions used in this work). Initially, we assume that the rotational distribution is thermal (300 K) for all the pump-populated vibrational states in the model. Later, we find that by further assuming a weak \(\Delta E\)-dependence for rotational relaxation, in the form of the exponential model of Polanyi and Woodall [11, 12] (i.e., \(\propto e^{-c\Delta E}\), where \(c=0.00355\) (cm\({}^{-1}\))\({}^{-1}\)), the simulation achieves better agreement with the experimental observations.
Figure S5: Investigation of collision-induced background 2C signals at the resonance frequency regions of various \(\nu_{3}=2\gets 1\), P-branch probe transitions of \({}^{14}\)CO\({}_{2}\) (with the same \(\nu_{3}=1\gets 0\), P(14) pump). The predicted background 2C spectra are based on the collision model derived from the pump-probe experiments shown in Fig. 4a of the main text. The background 2C signals are measured with a “Petrogenic gas” (dead) sample. The observed P-branch probe spectra from a \({}^{14}\)CO\({}_{2}\) sample at 40\(\times\) natural abundance concentration are also shown here. The double-headed right-angle arrows are used to indicate the background 2C signal levels at the resonance frequency of these P-branch transitions.
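To make the model's scaling assumptions concrete, a minimal sketch of the relative intensity factors (Python) follows; the energy gaps in the example are hypothetical, and the overall scaling factor and lineshape parameters of the actual fit are omitted.

```python
import math

# Relative intensity factors of the collision model described above (sketch).
C_ROT = 0.00355  # (cm^-1)^-1, Polanyi-Woodall exponent quoted in the text

def vibrational_factor(delta_e_vib_cm):
    """10x reduction per 200 cm^-1 of vibrational energy gap."""
    return 10.0 ** (-abs(delta_e_vib_cm) / 200.0)

def rotational_factor(delta_e_rot_cm):
    """Weak exponential energy-gap dependence of rotational relaxation."""
    return math.exp(-C_ROT * abs(delta_e_rot_cm))

# Hypothetical example: a pathway with a 40 cm^-1 vibrational gap and a
# 30 cm^-1 rotational gap.
weight = vibrational_factor(40.0) * rotational_factor(30.0)
print(f"relative collision-induced intensity ~ {weight:.3f}")
```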
The target pump-populated vibrational state in Fig. S4, 05\({}^{5}\)1(1), has one more quantum of excitation in \(\nu_{2}\) than those in Fig. S3. Following pump excitation into the six target \({}^{13}\)CO\({}_{2}\) levels in Fig. S3, only one quantum exchange of the \(\nu_{2}\) mode vibration with the \({}^{12}\)CO\({}_{2}\) bath is required to observe the probe transition at 2214.12 cm\({}^{-1}\). However, an overall two-quantum exchange is needed to observe the same probe signal from 05\({}^{5}\)1(1). The probe spectrum from this higher-energy 05\({}^{5}\)1(1) state in Fig. S4 thus allows us to estimate the effect of exchanging additional quanta of \(\nu_{2}\) on the observed probe signal intensities. The simulated spectra in Figs. S4 and S5 are generated by assuming a 10\(\times\) reduction in the probe signal for each additional \(\nu_{2}\) exchange.
Note that, our CO\({}_{2}\)-transition-based model fails to predict the \(negative\) 2C peak from the P(21) probe spectrum in Fig. S5. This observed negative signal can be explained by a "\(V\)-type" four-level excitation of the \({}^{14}\)N\({}_{2}\)\({}^{18}\)O contaminant in the gas sample, i.e., \(\nu_{3}=1\gets 0\), P(3) pump (with -1 GHz detuning) and \(\nu_{3}=1\gets 0\), P(34) probe. |
2303.15382 | Tensor triangulated category structures in the derived category of a
variety with big (anti-)canonical bundle | Let $X$ be a smooth projective variety over $\mathbb{C}$ with big
(anti-)canonical bundle. It is known that in this situation the Balmer spectrum
of the tensor triangulated category of perfect complexes $Perf(X)$ of $X$
equipped with the derived tensor product $\otimes_{X}^{\mathbb{L}}$ recovers
the space $X$. In this work we study the possible tensor triangulated category
structures one can put on $Perf(X)$. As an application we prove a monoidal
version of the well-known Bondal-Orlov reconstruction theorem. | Angel Israel Toledo Castro | 2023-03-27T16:55:56Z | http://arxiv.org/abs/2303.15382v1 | Tensor triangulated category structures in the derived category of a variety with big (anti-)canonical bundle
###### Abstract.
Let \(X\) be a smooth projective variety over \(\mathbb{C}\) with big (anti-)canonical bundle. It is known that in this situation the Balmer spectrum of the tensor triangulated category of perfect complexes \(Perf(X)\) of \(X\) equipped with the derived tensor product \(\otimes_{X}^{\mathbb{L}}\) recovers the space \(X\). In this work we study the possible tensor triangulated category structures one can put on \(Perf(X)\). As an application we prove a monoidal version of the well-known Bondal-Orlov reconstruction theorem.
###### Contents
* 1 Introduction
* 2 Derived categories and the Balmer reconstruction
* 2.1 Tensor triangulated geometry
* 3 TTC's and Picard groups
## 1. Introduction
In [1], Bondal and Orlov showed that if \(X\) is a smooth projective variety over \(\mathbb{C}\) with ample (anti-)canonical bundle then its bounded derived category \(D^{b}(X)\) completely recovers the space. More precisely, they showed that
**Theorem 1.1**.: _[_1_, Theorem 2.5]_ _Let X be an irreducible smooth projective variety with ample (anti-)canonical bundle. If \(D^{b}(X)\simeq D^{b}(Y)\) for some other smooth algebraic variety Y, then \(X\cong Y\)._
This theorem stood in contrast with the discovery by Mukai ([17]) that for an abelian variety \(A\) there exists an equivalence of triangulated categories \(D^{b}(A)\simeq D^{b}(\hat{A})\) between the bounded derived category of \(A\) and the bounded derived category of its dual \(\hat{A}\).
This observation sparked the study of what are now called Fourier-Mukai partners of a given variety \(X\): those varieties whose bounded derived category is equivalent, as a triangulated category, to that of \(X\).
Bondal and Orlov's reconstruction pointed out that a (birational) geometric condition on the variety can introduce some control on these derived equivalences. With this in mind, Kawamata generalized the theorem to varieties with big (anti-)canonical bundle, clarifying from a geometric point of view the role this condition plays in possible equivalences of derived categories. Namely, he showed:
**Theorem 1.2**.: _[_1_, Theorem 1.4]_ _Let \(X,Y\) be smooth projective varieties such that there is an equivalence_
\[\mathcal{F}:D^{b}(X)\stackrel{{\simeq}}{{\longrightarrow}}D^{b }(Y)\]
_as triangulated categories. Then:_
1. _dim X = dim Y._
2. _If the canonical divisor_ \(K_{X}\) _is nef, then so is_ \(K_{Y}\)_, and the numerical Kodaira dimensions coincide:_ \(\nu(X)=\nu(Y)\)_._
3. _If X is of general type, then X and Y are birational; furthermore, there is a smooth projective variety_ \(Z\) _with birational morphisms_ \(p:Z\to X\)_,_ \(q:Z\to Y\) _such that_ \(p^{*}K_{X}\simeq q^{*}K_{Y}\)_._
This theorem should be understood as a strong indication of a relationship between the birational geometry of a variety and its derived category.
On the other hand, Balmer showed in [1, 1] that when equipped with the derived tensor product \(\otimes^{\mathbb{L}}_{X}\), the derived category of perfect complexes \(Perf(X)\) of any coherent scheme \(X\) can recover the space \(X\) by what is now known as the Balmer spectrum \(Spc(Perf(X),\otimes^{\mathbb{L}}_{X})\).
The Balmer spectrum can be constructed for a general tensor triangulated category (a triangulated category equipped with a compatible monoidal structure) and produces a locally ringed space.
The existence of non-isomorphic Fourier-Mukai partners \(Y\) of a smooth variety \(X\) implies, via the Balmer spectrum construction, that the bounded derived category \(D^{b}(X)\) can be equipped with at least as many tensor triangulated category structures, up to monoidal equivalence, as there are non-isomorphic Fourier-Mukai partners.
In other words, if \(FM(X)\) is the set of isomorphism classes of Fourier-Mukai partners of \(X\) and \(TTS(X)\) is the set of equivalence classes of tensor triangulated category structures on the bounded derived category \(D^{b}(X)\) there exists an injection
\[FM(X) \to TTS(X)\] \[Y \mapsto(\otimes_{Y}^{\mathbb{L}},\mathscr{O}_{Y})\]
Where the pair \((\otimes_{Y}^{\mathbb{L}},\mathscr{O}_{Y})\) denotes the tensor triangulated category structure given by the derived tensor product \(\otimes_{Y}^{\mathbb{L}}\) with unit \(\mathscr{O}_{Y}\).
Our main interest in this work is the study of this function, its surjectivity and the properties that one can deduce about possible tensor triangulated category structures outside of the image of this injection, all under the condition that the (anti-)canonical bundle of \(X\) is big.
In Section 2 we give a brief general overview of the results we will need about general derived categories of quasi-coherent sheaves on a smooth projective variety, together with a reminder of the Balmer spectrum construction through Thomason's classification theorem.
In Section 3, given a tensor triangulated category structure \((D^{b}(X),\boxtimes,\mathbb{1})\) with unit \(\mathbb{1}\) on a bounded derived category \(D^{b}(X)\), we introduce the notion of an almost spanning class with respect to a thick subcategory \(I\) (Definition 3.9), and we show (Theorem 3.10) that if \(X\) is a smooth projective variety of general type then there exists a proper tensor ideal \(I_{X*}\) of \((D^{b}(X),\otimes_{X}^{\mathbb{L}},\mathscr{O}_{X})\) such that the set of tensor powers of \(\omega_{X}\) forms an almost spanning class with respect to this ideal \(I_{X*}\). This result is meant to highlight the more general behavior of almost spanning classes through the use of Thomason's classification theorem and properties of the Balmer spectrum. We see that this collection of objects can be used to prove the following:
**Lemma 1.3**.: _(Lemma 3.12) Suppose \(X\) is a smooth projective variety of general type. If \(\boxtimes\) is a tensor triangulated structure on \(D^{b}(X)\) with unit \(\mathscr{O}_{X}\), and \(U\) is a \(\boxtimes\)-invertible object such that \(U\boxtimes I_{X*}\subseteq I_{X*}\), then there is a natural equivalence between the functors induced by \(U\boxtimes-\) and \(U\otimes_{X}^{\mathbb{L}}-\) on \(D^{b}(X)/I_{X*}\)._
When the \(\otimes_{X}^{\mathbb{L}}\)-tensor ideal \(I_{X*}\) is also a \(\boxtimes\)-tensor ideal for a tensor triangulated category structure as described in the previous lemma, we obtain that the Picard group of \(\boxtimes\)-invertible objects is a subgroup of the Picard group of \(\otimes_{X}^{\mathbb{L}}\)-invertible objects (Corollary 3.15). This hypothesis holds true in particular when the (anti-)canonical bundle of \(X\) is ample.
With this observation, our main corollary is the following monoidal version of the Bondal-Orlov reconstruction theorem:
**Corollary 1.4**.: _(Corollary 3.18) Let \(X\) be a smooth projective variety with ample (anti-)canonical bundle. If \(\omega_{X}[n]\) is an invertible object for a tensor triangulated structure \(\boxtimes\) on \(D^{b}(X)\) with unit \(\mathscr{O}_{X}\), then \(\boxtimes\) and \(\otimes_{X}^{\mathbb{L}}\) coincide on objects._
The results in this work were obtained as part of the author's PhD thesis at the Laboratoire J.A. Dieudonné at the Université Côte d'Azur. The author would like to thank his advisor Carlos Simpson for many discussions, and Ivo Dell'Ambrogio and Bertrand Toën for their careful and valuable comments on the thesis manuscript. The PhD thesis was partially financed by the CONACyT-Gobierno Francés 2018 doctoral scholarship.
## 2. Derived categories and the Balmer reconstruction
Through the rest of this work we will be working exclusively with smooth projective varieties over \(\mathbb{C}\). We will from now on omit the mention of the base field in our exposition. Some of the material presented in this section can be found in deeper detail in [10].
The goal of this section is to introduce the basic results and notions we will be using for our results.
Let us start by recalling that if \(X\) is a smooth projective variety then there exists an equivalence as triangulated categories between the derived category \(Perf(X)\) of perfect complexes on \(X\) and the bounded derived category \(D^{b}(X)\). As a consequence of this whenever we work with such a variety we will at times make no distinction between these two categories.
One important feature of these categories is the existence of Serre functors; let us recall the definition:
**Definition 2.1**.: _Let \(\mathscr{T}\) be a triangulated category. An autoequivalence \(S:\mathscr{T}\to\mathscr{T}\) satisfying \(Hom(A,B)\cong Hom(B,S(A))^{*}\) for all objects \(A,B\in\mathscr{T}\) is called a Serre functor._
**Example 2.1**.: _If, for example, the triangulated category is the derived category of a smooth projective scheme \(X\) of dimension \(n\), then Grothendieck-Verdier duality implies that for every pair of objects \(M,N\in D^{b}(X)\) we have \(Hom(M,N)\cong Hom(N,M\otimes\omega_{X}[n])^{*}\), where \(\omega_{X}\) is the canonical bundle of \(X\)._
This notion was first defined by Kapranov and Bondal in [1]. The following two properties of the Serre functor are essential to our work:
**Lemma 2.2**.: _(Proposition 1.3 [1]) Let \(\mathscr{T}\) be a triangulated category with Serre functor S, and let \(\psi:\mathscr{T}\to\mathscr{T}\) be any autoequivalence, then \(\psi\circ S\cong S\circ\psi\)._
**Proposition 2.3**.: _(Proposition 3.4 [1]) Let \(\mathscr{T}\) be a triangulated category and let \(S\) be a Serre functor in \(\mathscr{T}\), then it is unique up to graded isomorphism._
This latter proposition implies that whenever the Serre functor exists it is part of the data of the given category. In our case of interest, as one can write this functor using the derived tensor product \(\otimes_{X}^{\mathbb{L}}\) we have now some possible control on the monoidal structure \(\otimes_{X}^{\mathbb{L}}\) directly from the category without knowledge of \(X\).
Another crucial notion we will use is that of spanning classes, we recall the definition:
**Definition 2.4**.: _A collection of objects \(\{X_{i}\}\subseteq\mathscr{T}\) of a triangulated category is called a spanning class if:_
1. _If_ \(Hom(X_{i},D[j])=0\) _for all_ \(i,j\) _then_ \(D\simeq 0\)__
2. _If_ \(Hom(D[j],X_{i})=0\) _for all_ \(i,j\) _then_ \(D\simeq 0\)__
However, whenever the Serre functor exists in the triangulated category we see that only one of the conditions is necessary and the other will be automatically satisfied by use of the Serre functor isomorphism. A general way to produce spanning classes in derived categories of abelian categories is from ample sequences:
**Definition 2.5**.: _We call a collection of objects \(\{L_{i}\}\subset\mathcal{A}\) of an abelian category \(\mathcal{A}\) an ample sequence if the following conditions are met for \(i\ll 0\) and all \(A\in\mathcal{A}\):_
1. \(Hom(L_{i},A)\otimes_{k}L_{i}\to A\) _is surjective._
2. \(Hom(A,L_{i})=0\)__
3. \(Ext^{j}(L_{i},A)=0\) _for_ \(j\neq 0\)__
As the name suggests, an important example of such sequences comes from collections of tensor powers of ample line bundles. The relation between the two notions of spanning class and ample sequence was shown by Bondal and Orlov in the following result:
**Lemma 2.6**.: _Let \(\mathcal{A}\) be an abelian category of finite homological dimension and let \(\{L_{i}\}\) be an ample sequence, then the collection \(\{L_{i}\}\) seen as objects of \(\mathcal{D}(\mathcal{A})\) form a spanning class._
The following example illustrates how we should be exploiting the existence of ample sequences.
**Example 2.2**.: _Let \(X\) be a smooth projective variety with ample canonical bundle. Then the set \(\{\omega_{X}^{\otimes i}\}_{i\in\mathbb{Z}}\) forms an ample sequence and so by the previous lemma it forms a spanning class in the derived category \(D^{b}(X)\). As a consequence, we see that any complex \(\mathscr{F}\) of coherent sheaves can be resolved by tensor powers of the canonical bundle \(\omega_{X}\). In other words, there exists a sequence:_
\[0\rightarrow\oplus_{j_{0}}(\omega_{X}^{\otimes i_{0}})\rightarrow\cdots \rightarrow\oplus_{j_{k}}(\omega_{X}^{\otimes i_{k}})\rightarrow\mathscr{F}\to 0\]
**Remark 2.7**.: _We remark too that in general for a triangulated category \(\mathscr{T}\) with a spanning class \(\Omega\subset\mathscr{T}\), if \(\phi:\mathscr{T}\rightarrow\mathscr{T}\) is an autoequivalence then the set \(\phi(\Omega)\) is again a spanning class. In the example above, this remark implies that one can resolve any complex \(\mathscr{F}\) by tensor powers of sheaves of the form \(\omega_{X}(i)[j]\) for a fixed \(i,j\in\mathbb{Z}\)._
### Tensor triangulated geometry
When dealing with derived categories of coherent sheaves on a variety one can equip this category with a monoidal structure given by the derived tensor product. One can axiomatize this sort of structure in what is known as a tensor triangulated category.
In this subsection we recall Balmer's spectrum construction which inputs a tensor triangulated category and outputs a locally ringed space which as we will see recovers a variety whenever we work with the derived category of perfect complexes on said variety.
**Definition 2.8**.: _A tensor triangulated category (TTC for short) \(\mathscr{T}\) is a triangulated category together with the following data:_
1. _A closed symmetric monoidal structure given by a functor_ \(\otimes:\mathscr{T}\times\mathscr{T}\rightarrow\mathscr{T}\) _additive and exact (with respect to the k-linear structure) in both entries._
2. _The internal Hom functor_ \(\underline{\text{hom}}:\mathscr{T}\times\mathscr{T}\rightarrow\mathscr{T}\) _sends triangles to triangles (up to a sign)._
3. _Coherent natural isomorphisms for each n and m in_ \(\mathbb{Z}\)_,_ \(r:x\otimes(y[n])\cong(x\otimes y)[n]\) _and_ \(l:(x[n])\otimes y\cong(x\otimes y)[n]\) _compatible with the symmetry, associative and unit coherence morphisms from
the symmetric monoidal category structure. (See for example_ _[_16_, Section 2.1.1]_ _for the explicit diagrams)._
We will refer to a TTC by the triple \((\mathscr{T},\otimes,\mathbb{1}_{\mathscr{T}})\) where \(\otimes\) refers to the monoidal structure and \(\mathbb{1}_{\mathscr{T}}\) to the unit object. Often if there is no confusion or the unit plays no role we will omit it and write \((\mathscr{T},\otimes)\) instead.
At times when we deal with a fixed underlying triangulated category \(\mathscr{T}\) we will write \(\otimes\) or \((\otimes,\mathbb{1})\) to refer to a tensor triangulated category structure on \(\mathscr{T}\). Let us remark however that the functor \(\otimes\) and unit \(\mathbb{1}_{\mathscr{T}}\) do not completely determine a tensor triangulated category since the compatibility conditions in the symmetric monoidal category structure can in principle change while maintaining the functor \(\otimes\) and unit \(\mathbb{1}\). As we will explain in the following this does not represent a problem for our purposes.
We proceed with a number of definitions.
**Definition 2.9**.: _Let \(\mathscr{T}\) be a triangulated category, and \(\mathscr{I}\subseteq\mathscr{T}\) a full triangulated subcategory, we say that it is thick if it is closed under direct summands. So that if \(A\oplus B\in\mathscr{I}\) then \(A,B\in\mathscr{I}\)._
**Definition 2.10**.: _Let \((\mathscr{T},\otimes)\) be a TTC. We will say that a thick subcategory \(\mathscr{I}\subset\mathscr{T}\) is a \(\otimes-\)ideal if for every \(A\in\mathscr{T}\) we have \(A\otimes\mathscr{I}\subset\mathscr{I}\)_
**Definition 2.11**.: _Let \((\mathscr{T},\otimes)\) be a tensor triangulated category. Let \(\mathscr{I}\) be a \(\otimes\)-ideal, we will say that it is prime if for any \(A,B\in\mathscr{T}\) with \(A\otimes B\in\mathscr{I}\) then \(A\in\mathscr{I}\) or \(B\in\mathscr{I}\)._
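As a concrete illustration (a standard example which we spell out only for orientation), every point \(x\) of a variety \(X\) determines a prime \(\otimes_{X}^{\mathbb{L}}\)-ideal of \(Perf(X)\), namely the perfect complexes whose derived stalk at \(x\) vanishes:
\[\mathfrak{p}(x):=\{A\in Perf(X)\ \mid\ A_{x}\simeq 0\}.\]
Primality follows since \(supp(A\otimes_{X}^{\mathbb{L}}B)=supp(A)\cap supp(B)\) for the homological support, so \(A\otimes_{X}^{\mathbb{L}}B\in\mathfrak{p}(x)\) forces \(A_{x}\simeq 0\) or \(B_{x}\simeq 0\).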
As in affine algebraic geometry we can define the spectrum of a tensor triangulated category.
**Definition 2.12**.: _Let \((\mathscr{T},\otimes,\mathbb{1})\) be a tensor triangulated category, the set of all prime \(\otimes\)-ideals will be denoted by \(Spc(\mathscr{T},\otimes,\mathbb{1})\) (alternatively \(Spc(\mathscr{T})\), \(Spc(\otimes,\mathbb{1})\) or \(Spc(\otimes)\) depending on which information is clear from context)._
Importantly, whenever the triangulated category \(\mathscr{T}\) is non-zero we have that \(Spc(\mathscr{T},\otimes)\neq\emptyset\) for any tensor triangulated category structure \(\otimes\) we can put on \(\mathscr{T}\) (see [1, Proposition 2.3]).
We next equip this set with a topology.
**Definition 2.13**.: _Let \((\mathscr{T},\otimes,\mathbb{1})\) be a TTC, the support of an object \(A\in\mathscr{T}\), denoted \(supp(A)\), is the set \(\{\mathfrak{p}\in Spc(\mathscr{T})\mid A\notin\mathfrak{p}\}\)._
**Lemma 2.14**.: _[_16_, Lemma 2.6]_ _The sets of the form \(\mathcal{Z}(S):=\bigcap_{A\in S}supp(A)\), for a family of objects \(S\subset\mathscr{T}\), form the closed sets of a topology on \(Spc(\mathscr{T})\)._
An important result regarding this topology is the following, which restricts the kind of spaces we should be expecting from the construction.
**Theorem 2.15**.: _[_1_, Propositions 2.15,2.18]_ _For any TTC \((\mathscr{T},\otimes,\mathbb{1})\), the space \(Spc(\mathscr{T})\) is a spectral space in the sense of Hochster, meaning it is sober and has a basis of quasi-compact open subsets._
Now that the topology on \(Spc(\mathscr{T})\) has been chosen, the next step is to equip this space with a sheaf of rings which will act as the structure sheaf.
To a subset \(Y\subset Spc(\mathscr{T})\) we can assign a thick \(\otimes\)-ideal denoted by \(\mathscr{I}_{Y}\) and defined as the subcategory supported on Y, meaning \(\mathscr{I}_{Y}:=\{A\in\mathscr{T}\mid supp(A)\subset Y\}\).
Finally, with Y as above, we denote by \(\mathbb{1}_{T_{Y}}\) the image of the unit \(\mathbb{1}\) of \(\mathscr{T}\) under the localization functor \(\pi:\mathscr{T}\to\mathscr{T}/\mathscr{I}_{Y}\).
**Definition 2.16**.: _Let \(\mathscr{T}\) be a nonzero TTC. We define the structure sheaf \(\mathcal{O}_{Spc(\mathscr{T})}\) over \(Spc(\mathscr{T})\) as the sheafification of the assignment \(U\mapsto End(\mathbb{1}_{T_{Z}})\), where \(Z:=Spc(\mathscr{T})\backslash U\), for an open subset \(U\subset Spc(\mathscr{T})\)._
It is not hard to see that the assignment \(Spc(F)\) respects composition of exact monoidal functors. Moreover, if \(F:\mathscr{T}\to\mathscr{T}^{\prime}\) is such a functor, we get a morphism of ringed spaces: for a closed \(Z=Spc(\mathscr{T})\backslash U\) we have \(F(\mathscr{I}_{Z})\subset\mathscr{I}_{Z^{\prime}}\), where \(Z^{\prime}=Spc(\mathscr{T}^{\prime})\backslash Spc(F)^{-1}(U)\), and this yields a morphism \(\mathcal{O}_{Spc(\mathscr{T})}\to Spc(F)_{*}\mathcal{O}_{Spc(\mathscr{T}^{\prime})}\). Hence \(Spc:\mathbb{T}\mathbb{T}\mathbb{C}\to RS\) is a functor, and under nice conditions (for example \(\mathscr{T}\) being rigid) this can be shown to be a functor \(Spc:\mathbb{T}\mathbb{T}\mathbb{C}\to LRS\).
With this construction in mind we can now state the anticipated reconstruction theorem of Balmer.
**Theorem 2.17**.: _[_1_, Corollary 5.6]_ _Let \(X\) be a quasi-compact and quasiseparated scheme. There is a homeomorphism_
\[f:X\stackrel{{\cong}}{{\longrightarrow}}Spc(Perf(X),\otimes_{X}^ {\mathbb{L}}).\]
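With the conventions of [1], this homeomorphism is given pointwise by the primes of the example above, sending a point to the complexes vanishing at it:
\[f(x)=\mathfrak{p}(x)=\{A\in Perf(X)\mid A_{x}\simeq 0\},\]
and under \(f\) the support \(supp(A)\) corresponds to the usual homological support \(\{x\in X\mid A_{x}\not\simeq 0\}\).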
This homeomorphism follows from Thomason's classification theorem [10, Theorem 3.15] which establishes a correspondence between certain subsets of a quasicompact and quasi-separated scheme \(X\) and \(\otimes_{X}^{\mathbb{L}}\)-ideals of \(Perf(X)\). The following is a general version of this classification for tensor triangulated categories as presented by Balmer in [1, Theorem 4.10]
**Theorem 2.18**.: _Let \((\mathscr{T},\otimes,\mathbb{1})\) be a TTC. Let \(\mathscr{S}\) be the set of those subsets \(Y\subset Spc(\mathscr{T})\) which are unions \(Y=\bigcup_{i\in I}Y_{i}\) where \(Y_{i}\) are closed subsets with quasi-compact complement for all \(i\in I\). Let \(\mathscr{R}\) be the set of radical \(\otimes\)-ideals of \(\mathscr{T}\). Then there is an order-preserving bijection \(\mathscr{S}\to\mathscr{R}\) given by the assignment which sends \(Y\) to the subcategory \(\mathscr{T}_{Y}:=\{A\in\mathscr{T}\mid supp(A)\subset Y\}\) and with inverse sending a radical \(\otimes\)-ideal \(\mathscr{I}\) to the subset \(S_{\mathscr{I}}:=\bigcup_{A\in\mathscr{I}}supp(A)\)._
Here by radical \(\otimes\)-ideal we mean a \(\otimes\)-ideal \(\mathscr{I}\) such that whenever \(A^{\otimes n}\) is in \(\mathscr{I}\) then \(A\) is in \(\mathscr{I}\).
In practice, every \(\otimes\)-ideal is often automatically a radical \(\otimes\)-ideal; whether this is the case certainly depends on the monoidal structure one can put on the triangulated category \(\mathscr{T}\). As pointed out by Balmer in [1], this condition is satisfied as soon as the tensor triangulated category is rigid, meaning that every object is dualizable.
When \(X\) is a variety the classification theorem can be specialized to a very simple form as pointed out by Rouquier in [14].
**Theorem 2.19**.: _Let \(X\) be a variety, there is a correspondence between the set of closed subsets of \(X\) and \(\otimes_{X}^{\mathbb{L}}\)-ideals of finite type, that is, those ideals generated by a single object._
Using the homeomorphism from Theorem 2.17 and the construction of the structure sheaf on \(Spc(Perf(X),\otimes_{X}^{\mathbb{L}})\) from Definition 2.16 we only need the following theorem to complete the reconstruction theorem of Balmer.
**Theorem 2.20**.: _[_1_]_ _Let \(X\) be a quasi-compact and quasiseparated scheme. There is an isomorphism \(\mathscr{O}_{X}\cong\mathscr{O}_{Spc(Perf(X),\otimes_{X}^{\mathbb{L}})}\)._
The following proposition tells us how localizations behave under taking \(Spc\).
**Proposition 2.21**.: _[_1_, Proposition 3.11]_ _Let \(\mathscr{I}\subset\mathscr{T}\) be a thick \(\otimes\)-ideal, then the localization functor \(\pi:\mathscr{T}\to\mathscr{T}/\mathscr{I}\) is an exact monoidal functor and induces a homeomorphism \(Spc(\mathscr{T}/\mathscr{I})\cong\{\mathfrak{p}\in Spc(\mathscr{T})\mid \mathscr{I}\subset\mathfrak{p}\}\)._
In particular, when combined with the classification theorem in the form of Theorem 2.19, we see that an open subvariety \(U\) of a variety \(X\) is isomorphic to \(Spc(Perf(X)/\mathscr{I}_{Z})\), where \(Z\) is the complement of \(U\) in \(X\).
We close this section with the following remark.
**Remark 2.22**.: _So far we have been dealing with tensor triangulated categories as described in the Definition 2.8, meaning we require there to be a closed symmetric monoidal category structure on \(\mathscr{T}\). However under closer inspection one sees that nowhere in the classification theorem nor in Balmer's construction one needs the full monoidal structure._
_In fact so far we really only need the data of a functor \(\otimes:\mathscr{T}\times\mathscr{T}\to\mathscr{T}\) covariant and exact in each variable, together with a unit object \(\mathbb{1}\) and isomorphisms corresponding to the symmetric, associative and unit conditions. In other words, if \((\mathscr{T},\otimes,\mathbb{1})\) and \((\mathscr{T},\boxtimes,\mathbb{1}^{\prime})\) are two tensor triangulated categories with underlying triangulated category \(\mathscr{T}\) such that \(\otimes\simeq\boxtimes\) for every pair of objects in \(\mathscr{T}\), and \(\mathbb{1}\simeq\mathbb{1}^{\prime}\), then the Balmer spectra \(Spc(\mathscr{T},\otimes,\mathbb{1})\cong Spc(\mathscr{T},\boxtimes,\mathbb{1}^{\prime})\) as locally ringed spaces. The associators, unitors and braidings of the monoidal categories have no influence on the resulting space. It is this that justifies our notation \((\mathscr{T},\otimes,\mathbb{1})\), as we have mentioned before. In the following we shall keep referring to tensor triangulated categories although our results apply for slightly more general but more awkward structures._
## 3. TTC's and Picard groups
While the Bondal-Orlov reconstruction (Theorem 1.1) tells us that one can directly recover a smooth projective variety \(X\) with ample (anti-)canonical bundle from the derived category \(D^{b}(X)\simeq Perf(X)\), there are plenty of smooth projective varieties which have non-isomorphic Fourier-Mukai partners, varieties \(Y\) such that \(D^{b}(X)\simeq D^{b}(Y)\), which implies that on a given derived category \(D^{b}(X)\) there might be many nonequivalent tensor triangulated category structures.
However, even in the case where our variety \(X\) has ample (anti-)canonical bundle as in the hypothesis of the Bondal-Orlov reconstruction theorem, it is not immediate that there is only one possible tensor triangulated category structure. It is, in principle, possible that there might be one such structure \((D^{b}(X),\boxtimes,\mathbb{1})\) such that \(D^{b}(Spc(\boxtimes,\mathbb{1}))\not\simeq D^{b}(X)\), so that Bondal-Orlov does not apply.
In some sense our motivating question is whether Balmer's reconstruction implies Bondal-Orlov. In this section we will be looking into this and related ideas by exploring the possible tensor triangulated categories one can equip on \(D^{b}(X)\) under the slightly more general hypothesis of \(X\) having a big (anti-)canonical bundle.
We start by mentioning the following result by Liu and Sierra from [13], which shows in particular that there are smooth projective varieties \(X\) with ample anti-canonical bundle, hence satisfying the hypothesis of Bondal-Orlov, for which the derived category \(D^{b}(X)\) admits a tensor triangulated category structure \((\boxtimes,\mathbb{1})\) such that \(Spc(\boxtimes,\mathbb{1})\not\cong X\).
Recall that there are varieties \(X\) that are known to have derived categories equivalent to the derived category of representations of a quiver (possibly with relations). For example, in the presence of a full strong exceptional collection \(\{E_{i}\}\) we have that \(D^{b}(X)\) is equivalent to \(D^{b}(mod-End(\bigoplus E_{i}))\), the derived category of finitely generated modules over the algebra \(End(\bigoplus E_{i})\). This latter algebra is in turn equivalent to the path algebra of a quiver, and so we obtain an equivalence between the derived category of \(X\) and the derived category of finite dimensional representations of a quiver \(Q=(P_{n},E_{ij})\).
The important point here is that this derived category of representations of a quiver comes with a tensor triangulated category structure induced by the tensor product of representations. To recall, let \((V_{i},p_{ik})\) and \((W_{j},q_{js})\) be two such representations; then the tensor product is given entry-wise: \((V_{i},p_{ik})\otimes_{rep}(W_{j},q_{js}):=(V_{i}\otimes W_{j},p_{ik}\otimes q_{js})\).
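To make the entry-wise formula concrete, here is a minimal computational sketch in Python (our own illustrative encoding, not taken from [13]): a representation is stored as vector-space dimensions at the vertices together with a matrix for each edge, and the tensor product is formed entry-wise using Kronecker products.

```python
import numpy as np

def tensor_rep(rep1, rep2):
    """Entry-wise tensor product of two quiver representations.

    A representation is encoded as (dims, mats), where dims maps each
    vertex to the dimension of its vector space and mats maps each
    edge (u, v) to a dims[v] x dims[u] matrix.
    """
    (dims1, mats1), (dims2, mats2) = rep1, rep2
    dims = {v: dims1[v] * dims2[v] for v in dims1}          # V_i (x) W_i
    mats = {e: np.kron(mats1[e], mats2[e]) for e in mats1}  # p (x) q
    return dims, mats

# Example on the A_2 quiver 1 --> 2:
V = ({1: 2, 2: 1}, {(1, 2): np.array([[1.0, 0.0]])})
W = ({1: 1, 2: 3}, {(1, 2): np.array([[1.0], [0.0], [2.0]])})
dims, mats = tensor_rep(V, W)
print(dims)            # {1: 2, 2: 3}
print(mats[(1, 2)])    # a 3 x 2 matrix, the Kronecker product of the two maps
```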
Let us denote by \((D^{b}(repQ),\otimes^{\mathbb{L}}_{rep},\mathbb{1}_{rep})\) the resulting tensor triangulated category structure on \(D^{b}(repQ)\) by deriving this tensor product and where \(\mathbb{1}_{rep}:=(k_{i},Id_{ij})\) is the representation given by putting \(k\) on every vertex and the identity morphism in each edge of the quiver.
Liu and Sierra consider quivers with relations satisfying a compatibility condition with the tensor product ([13, Definition 1.2.5]) and say that in this case the quiver has tensor relations.
**Theorem 3.1**.: _[_13_, Theorem 2.1.5.1]_ _Let \(Q\) be a finite ordered quiver with tensor relations. Then \(Spc(D^{b}(repQ),\otimes^{\mathbb{L}}_{rep})\) is the discrete space \(\{P_{n}\}\)._
They also describe completely the structure sheaf in this case.
**Theorem 3.2**.: _[_13_, Theorem 2.2.4.1]_ _Let \(Q\) be a finite ordered quiver with tensor relations. Then \(\mathscr{O}_{Q}:=\mathscr{O}_{Spc(\otimes^{\mathbb{L}}_{rep})}\) is the constant sheaf of algebras \(k\). So that for any open \(W\subset Spc(\otimes^{\mathbb{L}}_{rep})\) we have \(\mathscr{O}_{Q}(W)=k^{\oplus W}\)._
In particular, for \(X=\mathbb{P}^{n}\) we have by a well-known result of Beilinson ([1]) that \(D^{b}(X)\) is equivalent to the derived category of representations of a quiver with \(n+1\) vertices. Thus the derived category \(D^{b}(X)\) has a tensor triangulated category structure \((\otimes^{\mathbb{L}}_{rep},\mathbb{1}_{rep})\) such that \(Spc((\otimes^{\mathbb{L}}_{rep},\mathbb{1}_{rep}))\not\cong X=\mathbb{P}^{n}\). As \(\mathbb{P}^{n}\) is a smooth projective variety with ample anti-canonical bundle, this previous result implies that the study of tensor triangulated category structures on \(D^{b}(X)\) is not trivial even in the cases falling under the hypothesis of the Bondal-Orlov reconstruction theorem, and might shed some light on the internal structure of the derived category in itself.
In general, the interaction between the Balmer spectrum construction and taking derived categories can be complex. Since the Balmer spectrum is a locally ringed space, it has an abelian category of sheaves of modules which admits a tensor product, and we can derive this category as usual; however, the category of sheaves of modules is in general much more complicated than a category of coherent or even quasi-coherent sheaves.
Having said that, let us put ourselves in the slightly more general situation of derived categories of varieties of general type. Recall a variety is of general type if its canonical bundle is big. In particular varieties with ample canonical bundle are of general type.
One alternative characterization of bigness for a variety is the following:
**Theorem 3.3**.: _[_12_, Example 2.2.9]_ _A smooth projective variety is of general type if and only if, for any sheaf \(\mathscr{F}\in Coh(X)\), there exists an integer \(i_{0}\) depending on \(\mathscr{F}\) such that the sheaf \(\mathscr{F}\otimes_{X}\omega_{X}^{i}\) is generically globally generated for \(i>>i_{0}\)._
As a consequence of the Kodaira lemma (cf. [12, Prop 2.2.6]) we have the corollary:
**Corollary 3.4**.: _Let \(X\) be a smooth projective variety of general type, then there exists an open sub-variety \(X^{*}\) such that for any \(\mathscr{F}\in Coh(X)\), there exists a positive integer \(i_{0}\) such that for any \(i>>i_{0}\), the sheaf \(\mathscr{F}\mid_{X^{*}}\otimes_{X}\omega_{X^{*}}^{i}\) on \(X^{*}\) is globally generated._
Let us explain the previous corollary and the nature of the open sub-variety \(X^{*}\). We recall some basic definitions.
**Definition 3.5**.: _Let \(X\) be a projective variety and \(\mathscr{L}\) a line bundle on \(X\), the augmented base locus is the Zariski closed set_
\[B_{+}(\mathscr{L}):=\bigcap_{m\in\mathbb{N}}B(m\mathscr{L}-A)\]
_where \(A\) is any ample line bundle, and for any line bundle \(\mathscr{L}^{\prime}\) the set \(B(\mathscr{L}^{\prime})\) is defined as the intersection of the base loci of multiples of the line bundle, that is_
\[B(\mathscr{L}^{\prime}):=\bigcap_{m\in\mathbb{N}}Bs(m\mathscr{L}^{\prime})\]
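To fix ideas, here is a standard example, which we include only for intuition (with the usual conventions): if \(\pi:\widetilde{\mathbb{P}^{2}}\to\mathbb{P}^{2}\) is the blow-up of a point with exceptional curve \(E\), then \(\mathscr{L}=\pi^{*}\mathscr{O}(1)\) is big and globally generated but not ample, and
\[B(\mathscr{L})=\emptyset,\qquad B_{+}(\mathscr{L})=E,\]
since \(\mathscr{L}\) has degree zero on \(E\), so the maps defined by multiples of \(\mathscr{L}\) contract \(E\) and are isomorphisms exactly away from it.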
In [1] the following theorem characterizing the complement of the augmented base locus is proven:
**Theorem 3.6**.: _Let \(\mathscr{L}\) be a big line bundle on a normal projective variety \(X\) over an algebraically closed field. Then the complement
\(X\backslash B_{+}(\mathscr{L})\) of the augmented base locus is the largest Zariski open subset \(U\subseteq X\backslash B(\mathscr{L})\) such that for all large and sufficiently divisible \(m\in\mathbb{Z}\) the restriction of the morphism_
\[\phi_{m}:X\backslash B(\mathscr{L})\dashrightarrow\mathbb{P}H^{0}(X,m \mathscr{L})\]
_to U is an isomorphism onto its image._
The following couple of important observations follow immediately from the definition, the fact that the augmented base locus is independent of the choice of ample line bundle, and Kodaira's decomposition of big line bundles.
**Remark 3.7**.:
1. \(B_{+}(\mathscr{L})=\emptyset\) _if and only if_ \(\mathscr{L}\) _is ample._
2. \(B_{+}(\mathscr{L})\neq X\) _if and only if_ \(\mathscr{L}\) _is big._
From the remarks above and Thomason's classification theorem (Theorem 2.18), since there exists a correspondence between closed subsets of the Balmer spectrum and radical tensor ideals in the tensor triangulated category, there exists a radical tensor ideal corresponding to the augmented base locus \(B_{+}(\mathscr{L})\) for any given line bundle \(\mathscr{L}\). In particular the open sub-variety \(X^{*}\) from Corollary 3.4 is the complement of the augmented base locus, \(X\backslash B_{+}(\omega_{X})\), and corresponds to a \(\otimes_{X}^{\mathbb{L}}\)-ideal generated by a single object (using Theorem 2.19) whose homological support gives back the closed subset \(B_{+}(\omega_{X})\).
**Remark 3.8**.: _Let us denote by \(I_{X^{*}}\) the \(\otimes_{X}^{\mathbb{L}}\)-ideal corresponding to the open subvariety \(X^{*}\). By our previous Remark 3.7 we have that this ideal must be a proper \(\otimes_{X}^{\mathbb{L}}\)-ideal of \(D^{b}(X)\) and is the ideal \(0\) precisely when the (anti-)canonical bundle is ample._
We would like to understand the effect of the positivity of the canonical bundle (in this case the fact that the variety is of general type) on the tensor triangulated structure of the category. We know from Proposition 2.3 that the Serre functor in a triangulated category is unique up to graded isomorphism whenever it exists, and so it is a property of the category and not extra data. In our concrete case we know furthermore that the Serre functor is isomorphic to \(\_\otimes_{X}^{\mathbb{L}}\omega_{X}[n]\) where \(n\in\mathbb{N}\) is the dimension of the variety and \(\omega_{X}\) is the dualizing sheaf of \(X\).
Let us start with a definition mimicking that of spanning class:
**Definition 3.9**.: _Let \((\mathscr{T},\otimes)\) be a tensor triangulated category, let \(\mathscr{I}\subseteq\mathscr{T}\) be a thick subcategory and let us denote by \(\pi:\mathscr{T}\rightarrow\mathscr{T}/\mathscr{I}\) the localization functor. We say that a collection of objects \(\Omega\subset\mathscr{T}\) is an almost spanning class with respect to \(\mathscr{I}\) if the following two conditions hold._
1. _If_ \(X\in\mathscr{T}/\mathscr{I}\) _is such that_ \(Hom_{\mathscr{T}/\mathscr{I}}(\pi(B),X[j])=0\) _for all_ \(B\in\Omega\) _and_ \(j\in\mathbb{Z}\)_, then_ \(X\cong 0\)_._
2. _If_ \(X\in\mathscr{T}/\mathscr{I}\) _is such that_ \(Hom_{\mathscr{T}/\mathscr{I}}(X[j],\pi(B))=0\) _for all_ \(B\in\Omega\) _and_ \(j\in\mathbb{Z}\)_, then_ \(X\cong 0\)_._
It is immediate to see that the previous definition is equivalent to asking that the collection \(\Omega\) maps through \(\pi\) to a spanning class on the quotient \(\mathscr{T}/\mathscr{I}\). When the thick subcategory in question is the \(0\) subcategory then the definition reduces to that of a spanning class as in Definition 2.4.
Additionally when the triangulated category \(\mathscr{T}/\mathscr{I}\) has a Serre functor, only one of the conditions in the definition is necessary as the Serre duality implies the other automatically.
We would like to generalize Lemma 2.6, but for a big canonical bundle instead of an ample one, and see that a big bundle induces an almost spanning class in the derived category with respect to a \(\otimes_{X}^{\mathbb{L}}\)-ideal \(\mathscr{I}\).
**Theorem 3.10**.: _Let \(X\) be a smooth projective variety of general type. Then the collection of tensor powers \((\omega_{X}^{\otimes i})_{i\in\mathbb{Z}}\) forms an almost spanning class with respect to the tensor ideal \(I_{X*}\) in the tensor triangulated category \((D^{b}(X),\otimes_{X}^{\mathbb{L}})\)._
Proof.: We need to show that \(\pi(\{\omega_{X}^{\otimes i}\})\) forms a spanning class in the quotient \(D^{b}(X)/I_{X*}\). As \(I_{X*}\) is the ideal corresponding to the open smooth subvariety \(X^{*}\) from Corollary 3.4, we know that there is an isomorphism \(Spc(D^{b}(X)/I_{X*})\cong X^{*}\). Since \(\omega_{X}\) restricted to \(X^{*}\) is ample by the characterization of Theorem 3.6, we get by Lemma 2.6 that \(\{\omega_{X}^{\otimes i}\mid_{X*}\}\) forms a spanning class of the derived category of \(X^{*}\), which coincides with the quotient category \(D^{b}(X)/I_{X*}\) by Proposition 2.21.
The key point in our arguments is the fact that one can construct, as in the ample case, a resolution for any complex of coherent sheaves on \(X^{*}\) in terms of tensor powers of the canonical bundle \(\omega_{X*}\) of \(X^{*}\), with the advantage that one has a concrete description of the derived category of this space in terms of a quotient of the derived category of the larger variety \(X\).
Explicitly for any complex \(A\) of coherent sheaves over \(X^{*}\) there is a resolution:
\[\cdots\to\oplus_{j_{0}}(\omega_{X^{*}}^{\otimes i_{0}})\to\cdots\to\oplus_{j_ {k}}(\omega_{X^{*}}^{\otimes i_{k}})\to A\to 0.\]
Another thing to notice is that in the example given above of non-equivalent tensor triangulated category structures on \(D^{b}(\mathbb{P}^{n})\), one immediate issue with the two structures was that the units were non-isomorphic. For this reason we should proceed to work with tensor triangulated categories with a fixed unit isomorphic to \(\mathscr{O}_{X}\).
**Definition 3.11**.: _Let \((\mathscr{T},\otimes,\mathbb{1})\) be a TTC, an object \(X\in\mathscr{T}\) is \(\otimes\)-invertible if there exists \(X^{-1}\in\mathscr{T}\) such that \(X\otimes X^{-1}\cong\mathbb{1}\). We will denote by \(Pic(D^{b}(X),\boxtimes)\) the group of isomorphism classes of \(\boxtimes\)-invertible objects._
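For the standard structure this group is classical: as recalled in Remark 3.21 below, for \(X\) connected the \(\otimes_{X}^{\mathbb{L}}\)-invertible objects of \(D^{b}(X)\) are exactly the shifts of line bundles, so that
\[Pic(D^{b}(X),\otimes_{X}^{\mathbb{L}})\cong Pic(X)\times\mathbb{Z},\qquad\mathscr{L}[j]\longleftrightarrow(\mathscr{L},j).\]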
We will make use of the following lemma:
**Lemma 3.12**.: _Suppose \(X\) is a smooth projective variety of general type of dimension \(n\). If \(\boxtimes\) is a tensor triangulated structure on \(D^{b}(X)\) with unit \(\mathscr{O}_{X}\) and \(U\) is a \(\boxtimes\)-invertible object such that \(U\boxtimes I_{X^{*}}\subseteq I_{X^{*}}\), then there is a natural equivalence between the functors induced by \(U\boxtimes-\) and \(U\otimes^{\mathbb{L}}_{X}-\) in \(D^{b}(X)/I_{X^{*}}\)._
Proof.: By our previous discussion we know that any complex can be resolved in \(D^{b}(X)/I_{X}*\) by a resolution
\[\cdots\to\oplus_{j_{0}}(\omega_{X}^{\otimes i_{0}})\to\cdots\to\oplus_{j_{k}} (\omega_{X}^{\otimes i_{k}})\to A\to 0.\]
As the Serre functor in \(D^{b}(X^{*})\) is given by \(\_\otimes^{\mathbb{L}}_{X}\omega_{X^{*}}[n^{\prime}]\), where \(n^{\prime}\) is the dimension of \(X^{*}\), and we know any exact equivalence must commute with it, if we let \(U\widehat{\boxtimes}-\) and \(U\widehat{\otimes^{\mathbb{L}}_{X}}-\) denote the autoequivalences of \(D^{b}(X)/I_{X^{*}}\) induced respectively by \(U\boxtimes-\) and \(U\otimes^{\mathbb{L}}_{X}-\), then we have that
\[(U\widehat{\boxtimes}\widehat{A})\widehat{\otimes^{\mathbb{L}}_{X}}\omega_{X^{*}}[n^{\prime}]\simeq U\widehat{\boxtimes}(\widehat{A}\widehat{\otimes^{\mathbb{L}}_{X}}\omega_{X^{*}}[n^{\prime}]).\]
As \(\mathscr{O}_{X}\) is a unit for both \(\otimes_{X}\) and \(\boxtimes\), taking \(\widehat{A}=\mathscr{O}_{X^{*}}\) and shifting by \([-n^{\prime}]\) we deduce
\[U\widehat{\otimes^{\mathbb{L}}_{X}}\omega_{X^{*}}\simeq U\widehat{\boxtimes}\omega_{X^{*}}.\]
From this, the exactness of \(\otimes^{\mathbb{L}}\) and \(\boxtimes\), and the resolutions in terms of \(\omega_{X^{*}}^{\otimes i}\), we obtain the isomorphisms
\[U\widehat{\otimes^{\mathbb{L}}}A\cong U\widehat{\boxtimes}A.\]
**Remark 3.13**.: _Let us point out the slight abuse of notation in the autoequivalence \(U\widehat{\otimes^{\mathbb{L}}}-\). This functor would formally be denoted by \(\widehat{U}\otimes^{\mathbb{L}}_{D^{b}(X)/I_{X^{*}}}-\), as it is induced by the object \(\widehat{U}\) in the tensor triangulated category \((D^{b}(X)/I_{X^{*}},\otimes^{\mathbb{L}}_{D^{b}(X)/I_{X^{*}}})\), but as the only tensor ideal we are taking a quotient by in this section is \(I_{X^{*}}\), we believe our notation is lighter without losing sight of which functors they represent._
We have the following corollary:
**Corollary 3.14**.: _Let \(X\) be a variety of general type and let \(\boxtimes\) be a tensor triangulated category structure on \(D^{b}(X)\) with unit \(\mathscr{O}_{X}\). Then for any \(\boxtimes\)-invertible object \(U\) such that \(U\boxtimes I_{X*}\subseteq I_{X*}\), the equivalence \(U\widehat{\boxtimes}:D^{b}(X)/I_{X*}\to D^{b}(X)/I_{X*}\) induced by \(U\boxtimes\_\) is naturally isomorphic to an equivalence given by an object of the group \(Pic(D^{b}(X)/I_{X*},\widehat{\otimes^{\mathbb{L}}})\) of \(\widehat{\otimes^{\mathbb{L}}}\)-invertible objects._
Proof.: From Lemma 3.12 we have that if \(U^{-1}\) is such that \(U\boxtimes U^{-1}\cong\mathscr{O}_{X}\) then in the quotient \(D^{b}(X)/I_{X*}\),
\[U\widehat{\otimes^{\mathbb{L}}}\widehat{U^{-1}}\cong U\widehat{\boxtimes} \widehat{U^{-1}}\cong\mathscr{O}_{X*}.\]
As \((D^{b}(X)/I_{X*},\widehat{\otimes^{\mathbb{L}}})\) is a tensor triangulated category, we have that \(\widehat{U}\in D^{b}(X)/I_{X*}\) is a \(\widehat{\otimes^{\mathbb{L}}}\)-invertible object.
In Lemma 3.12 and Corollary 3.14 above, the ideal \(I_{X*}\) might not be a \(\boxtimes\)-tensor ideal and thus the quotient \(D^{b}(X)/I_{X*}\) does not necessarily carry a tensor triangulated category structure induced by \(\boxtimes\). However, our result guarantees that after passing to the quotient, the equivalences induced by the functors \(U\boxtimes-\) are equivalent to equivalences given by invertible objects in \((D^{b}(X)/I_{X*},\widehat{\otimes}^{\mathbb{L}}_{X})\) induced by the same object, under the condition that \(I_{X*}\) is stable under \(U\boxtimes-\).
In particular, we have:
**Corollary 3.15**.: _Let \(X\) be a variety of general type and \(\boxtimes\) a tensor triangulated structure on \(D^{b}(X)\) with unit \(\mathscr{O}_{X}\). If \(I_{X*}\) is a \(\boxtimes\)-ideal then the Picard group \(Pic(D^{b}(X)/I_{X*},\widehat{\boxtimes})\) is a subgroup of the Picard group \(Pic(D^{b}(X)/I_{X*},\widehat{\otimes^{\mathbb{L}}_{X}})\)._
Proof.: The proof is as in the previous two: if \(U\) is in \(Pic(D^{b}(X)/I_{X*},\widehat{\boxtimes})\) then it induces an autoequivalence of \(D^{b}(X)/I_{X*}\) and so it commutes with the Serre functor on \(D^{b}(X^{*})\simeq D^{b}(X)/I_{X*}\). By writing a resolution for any complex \(A\) in terms of direct sums of derived tensor powers of \(\omega_{X*}\) we can use the same argument as in the proof of Lemma 3.12 and we arrive at the isomorphisms
\[U\widehat{\otimes^{\mathbb{L}}}A\cong U\widehat{\boxtimes}A.\]
**Remark 3.16**.: _Let us point out that in the results above we have chosen to work with varieties of general type, but the same argument applies to varieties with big anti-canonical bundle._
The case when our variety has an ample (anti-)canonical bundle allows us to relate the Picard group of the full derived category to that of any other tensor triangulated category structure on it.
The following result follows from the previous argument.
**Corollary 3.17**.: _Let X be a variety with ample (anti-)canonical bundle. Then if \(\boxtimes\) is a tensor triangulated category structure on \(D^{b}(X)\) with unit \(\mathscr{O}_{X}\), the Picard group \(Pic(D^{b}(X),\boxtimes)\) is isomorphic to a subgroup of \(Pic(D^{b}(X),\otimes_{X})\)._
Proof.: We just need to notice that in this case the \(\otimes_{X}\)-ideal from Corollary 3.4 is the \(0\) ideal and thus we can resolve any object \(A\in D^{b}(X)\) by a sequence of tensor powers of \(\omega_{X}\), equivalently, powers of the Serre functor. By the same reasoning as above we see that
\[U\otimes^{\mathbb{L}}A\cong U\boxtimes A.\]
One thing to note here is that although Bondal and Orlov had already classified the group of autoequivalences of a derived category of a variety with ample (anti-)canonical bundle, we are working without the condition of an equivalence between the derived category of the Balmer spectrum of \(\boxtimes\) and the derived category \(D^{b}(X)\), and as such it is not immediate from their result that the Picard group of \(\boxtimes\) must involve invertible sheaves over \(X\).
In other words, as \(Spc(\boxtimes)\) is not necessarily isomorphic to \(X\) then understanding the autoequivalences of \(D^{b}(X)\) alone does not give us an immediate relationship to the Picard group of \(\boxtimes\).
We should think of the following corollary as a monoidal version of the Bondal-Orlov reconstruction theorem:
**Corollary 3.18**.: _Let \(X\) be as above, then if \(\omega_{X}[n]\) is an invertible object for a tensor triangulated structure \(\boxtimes\) on \(D^{b}(X)\) with unit \(\mathscr{O}_{X}\) then \(\boxtimes\) and \(\otimes_{X}^{\mathbb{L}}\) coincide on objects._
Proof.: As \(\omega_{X}\) is \(\boxtimes\)-invertible, Corollary 3.17 tells us that for any \(A\in D^{b}(X)\) we have
\[\omega_{X}\otimes_{X}^{\mathbb{L}}A\cong\omega_{X}\boxtimes A.\]
But we can resolve any other complex \(B\) in terms of derived powers of the canonical sheaf, by the exactness of \(\boxtimes\) we have then
\[B\otimes_{X}^{\mathbb{L}}A\cong B\boxtimes A.\]
The nature of this result comes precisely from the fact that the tensor triangulated category structure \((\boxtimes,\mathscr{O}_{X})\) shares its unit with \(\otimes_{X}^{\mathbb{L}}\), so that any object inducing a line bundle in \(Spc(\boxtimes)\) has to be \(\otimes_{X}^{\mathbb{L}}\)-invertible.
**Remark 3.21**.: _From Bondal-Orlov's original reconstruction proof we know that it is actually possible to fully characterize line bundles up to a shift from purely categorical properties. Given the importance of the Picard group of the variety, we can ask whether it is possible to reconstruct the derived tensor product in \(D^{b}(X)\) without having to pass through a reconstruction theorem. In [11] Antieau sketches a construction in which, by considering invertible objects (in the sense of Bondal and Orlov), one can define a collection of tensor products \(\otimes_{U}^{\mathbb{L}}\) by exploiting the resolution by derived tensor powers of \(\omega_{X}\). The idea is to pick an invertible object \(U\), which is shown in [1] to be isomorphic to a shift of a line bundle on \(X\); then, by use of the resolution, we only need to define the products \(\omega_{X}^{\otimes_{X}^{\mathbb{L}}i}[ni]\otimes_{U}^{\mathbb{L}}A^{\bullet}\) for any object \(A^{\bullet}\). As the Serre functor \(S\simeq\_\otimes_{X}^{\mathbb{L}}\omega_{X}[n]\) comes with the categorical structure alone, we can set these products to be simply \(S^{i}(A^{\bullet})\). These tensor products \(\otimes_{U}^{\mathbb{L}}\) have \(U\) as unit and all of them have \(X\) as Balmer spectrum._
In general, for a triangulated category \(\mathscr{T}\) we have an action of \(Aut(\mathscr{T})\) on the collection \(TTS(\mathscr{T})\) of tensor triangulated structures defined below.
If \((\otimes,\mathbb{1})\in TTS(\mathscr{T})\) and \(\phi\in Aut(\mathscr{T})\) we have a tensor structure defined by
\[X\otimes_{\phi}Y:=\phi^{-1}(\phi(X)\otimes\phi(Y)).\]
The unit is given by \(\phi^{-1}(\mathbb{1})\).
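The unit axiom is a one-line check, which we record for completeness:
\[X\otimes_{\phi}\phi^{-1}(\mathbb{1})=\phi^{-1}\big{(}\phi(X)\otimes\phi(\phi^{-1}(\mathbb{1}))\big{)}\simeq\phi^{-1}(\phi(X)\otimes\mathbb{1})\simeq\phi^{-1}(\phi(X))\simeq X.\]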
We have now given enough justification for the following definition:
**Definition 3.22**.: _Let \(\mathscr{T}\) be a triangulated category; denote by \(TTS(\mathscr{T})\) the collection of equivalence classes of tensor triangulated category structures on \(\mathscr{T}\), where we consider two tensor triangulated category structures to be equivalent if there is a monoidal equivalence between the two of them._
To keep some control and avoid counting the structures coming from autoequivalences as above, we should at least fix the unit object.
**Definition 3.23**.: _Let \(\mathscr{T}\) be a triangulated category and \(U\in\mathscr{T}\) an object. Then the set \(TTS_{U}(\mathscr{T})\) is the set of equivalence classes of tensor triangulated structures on \(\mathscr{T}\) where \(U\) is the unit._
It is this set that we are mainly interested in classifying.
Let us finish by discussing the original Bondal-Orlov reconstruction
theorem in terms of the results we have shown so far.
**Theorem 3.24**.: _Let \(X\) be a variety with ample (anti-)canonical divisor, and let \(\boxtimes\) be a tensor triangulated structure on \(D^{b}(X)\) with unit \(\mathscr{O}_{X}\). Suppose \(Spc(\boxtimes)\) is a smooth projective variety with ample (anti-)canonical bundle and that there is an equivalence \(D^{b}(X)\simeq D^{b}(Spc(\boxtimes))\). Then \(X\cong Spc(\boxtimes)\)._
Proof.: In fact the only thing to note here is that, as \(Spc(\boxtimes)\) has ample (anti-)canonical bundle, \(\omega_{X}\) has to be \(\boxtimes\)-invertible. Indeed, we recall that one can pick the equivalence \(D^{b}(X)\simeq D^{b}(Spc(\boxtimes))\) to send \(\omega_{X}\) to \(\omega_{Spc(\boxtimes)}\); applying Corollary 3.17 to \(Spc(\boxtimes)\), we obtain that \(Pic(D^{b}(X),\otimes_{X}^{\mathbb{L}})\) has to be isomorphic, via the assignment \(\mathscr{L}\mapsto\mathscr{L}\), to a subgroup of \(Pic(D^{b}(X),\boxtimes)\). Since \(\omega_{X}\) is \(\boxtimes\)-invertible, by Corollary 3.18 we obtain our result.
**Remark 3.25**.: _We need to explain our choice of hypotheses here. On the one hand, the assumption that \(Spc(\boxtimes)\) is a smooth projective variety is necessary just as in the original Bondal-Orlov theorem formulation. We have added a couple more assumptions, however. We suppose that the (anti-)canonical bundle of \(Spc(\boxtimes)\) is also ample to highlight the use of the monoidal structures in the theorem. This hypothesis is however not necessary, as it can be directly deduced from the derived equivalence between the two spaces, just as in the original proof of Bondal and Orlov. Alternatively, we can formulate the theorem as follows:_
**Theorem 3.26**.: _Let \(X\) be a variety with ample (anti-)canonical divisor, and let \(\boxtimes\) be a tensor triangulated structure on \(D^{b}(X)\) with unit \(\mathscr{O}_{X}\). Suppose \(Spc(\boxtimes)\) is a smooth projective space, and that we have an equivalence \(D^{b}(X)\simeq D^{b}(Spc(\boxtimes))\), then \(X\cong Spc(\boxtimes)\)._
_Of more importance is perhaps the choice of unit, as we have seen that there are tensor triangulated category structures on the derived category of such a variety which will produce very different spaces under the Balmer reconstruction. This choice of unit allows us to keep some control in the classification of structures producing the same space. A natural next step for future work would be to deal with the possible sort of objects which can be units for such a structure._
**Remark 3.27**.: _We wish to point out that there is some nuance in the way in which Bondal-Orlov follows from our results as we make use of some important technical results from the original proof. We expect however that the discussion in this work has provided enough of
a justification and motivation for looking at this problem in terms of monoidal structures._
We can close our discussion with the following theorem:
**Theorem 3.28**.: _Let \(X\) be a smooth projective variety with ample (anti-)canonical bundle. Consider a tensor triangulated category structure \(\boxtimes\) on \(D^{b}(X)\) such that \(\mathscr{O}_{X}\) is its unit and \(\mathit{Spc}(\boxtimes)\) is isomorphic to \(X\); then \(\boxtimes\) and \(\otimes_{X}^{\mathbb{L}}\) coincide on objects._
This however does not fully classify \(TTS_{\mathscr{O}_{X}}(D^{b}(X))\) as we require Balmer's spectrum to be a Fourier-Mukai partner, but there is no reason to expect in general a relationship between the derived category of the Balmer spectrum and the original triangulated category.
The lack of morphisms between a space \(X\) and the Balmer spectrum \(\mathit{Spc}(\boxtimes)\) for some tensor triangulated structure, and thus of functors between the derived categories of these two spaces is one of the obstacles to being able to understand the possible structures \(\boxtimes\).
|
2307.07271 | Universal lower bound for community structure of sparse graphs | We prove new lower bounds on the modularity of graphs. Specifically, the
modularity of a graph $G$ with average degree $\bar d$ is
$\Omega(\bar{d}^{-1/2})$, under some mild assumptions on the degree sequence of
$G$. The lower bound $\Omega(\bar{d}^{-1/2})$ applies, for instance, to graphs
with a power-law degree sequence or a near-regular degree sequence.
It has been suggested that the relatively high modularity of the
Erd\H{o}s-R\'enyi random graph $G_{n,p}$ stems from the random fluctuations in
its edge distribution, however our results imply high modularity for any graph
with a degree sequence matching that typically found in $G_{n,p}$.
The proof of the new lower bound relies on certain weight-balanced bisections
with few cross-edges, which build on ideas of Alon [Combinatorics, Probability
and Computing (1997)] and may be of independent interest. | Vilhelm Agdur, Nina Kamčev, Fiona Skerman | 2023-07-14T10:53:12Z | http://arxiv.org/abs/2307.07271v1 | # Universal lower bound for community structure of sparse graphs
###### Abstract
We prove new lower bounds on the modularity of graphs. Specifically, the modularity of a graph \(G\) with average degree \(\bar{d}\) is \(\Omega(\bar{d}^{-1/2})\), under some mild assumptions on the degree sequence of \(G\). The lower bound \(\Omega(\bar{d}^{-1/2})\) applies, for instance, to graphs with a power-law degree sequence or a near-regular degree sequence.
It has been suggested that the relatively high modularity of the Erdos-Renyi random graph \(G_{n,p}\) stems from the random fluctuations in its edge distribution, however our results imply high modularity for any graph with a degree sequence matching that typically found in \(G_{n,p}\).
The proof of the new lower bound relies on certain weight-balanced bisections with few cross-edges, which build on ideas of Alon [Combinatorics, Probability and Computing (1997)] and may be of independent interest.
## 1 Introduction
In numerous real-world examples of graphs, we anticipate a certain community structure - for instance, people form friend groups, neurons cluster into functional units, academic papers divide into subfields. To infer this structure from graph data, several metrics have been proposed to evaluate the quality of vertex partitions. One of the most widely used metrics is _modularity_, introduced by Newman and Girvan [15].
Each vertex partition \(\mathcal{A}\) of a graph is given a _modularity score_\(q_{\mathcal{A}}(G)\), with higher scores taken to indicate that a partition better captures the community structure of a graph. In practice, for large networks, community detection is performed through algorithms that iteratively try to optimise this score [10], such as the Louvain [1] or Leiden [14] algorithms.
The modularity score of a partition \(\mathcal{A}\) of a graph \(G\) with \(m\) edges is given by
\[q_{\mathcal{A}}(G)=\sum_{A\in\mathcal{A}}\frac{e(A)}{m}-\left(\frac{\mathrm{ vol}(A)}{2m}\right)^{2} \tag{1.1}\]
where the sum runs over parts \(A\subseteq V(G)\) of the partition, \(e(A)\) is the number of edges within part \(A\), and the volume \(\mathrm{vol}(A)\) is the sum of the degrees of the vertices in part \(A\).
As can be seen, the formula for modularity consists of two terms balancing against one another - one which rewards partitions with many edges within the parts, and a sum-of-squares term that rewards having many small parts, or for a fixed number of parts rewards parts of approximately equal volume. We call these terms the _coverage_ or _edge contribution_ and the _degree tax_ respectively.
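As a concrete reference point, the score (1.1) is straightforward to compute. The following short Python sketch (our own illustration, using only the definitions above) evaluates the coverage, the degree tax and the resulting modularity score of a given partition.

```python
def modularity(edges, partition):
    """Modularity q_A(G) of a vertex partition, per equation (1.1).

    edges: list of pairs (u, v); partition: list of sets of vertices.
    Returns (coverage, degree_tax, modularity).
    """
    m = len(edges)
    part_of = {v: i for i, part in enumerate(partition) for v in part}
    internal = [0] * len(partition)  # e(A): edges inside each part
    volume = [0] * len(partition)    # vol(A): sum of degrees in each part
    for u, v in edges:
        volume[part_of[u]] += 1
        volume[part_of[v]] += 1
        if part_of[u] == part_of[v]:
            internal[part_of[u]] += 1
    coverage = sum(internal) / m
    degree_tax = sum((vol / (2 * m)) ** 2 for vol in volume)
    return coverage, degree_tax, coverage - degree_tax

# Two triangles joined by one edge: the natural bisection scores well.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # (6/7, 1/2, ~0.357)
```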
The score of any partition is between \(-0.5\) and \(1\)[2]. The modularity of a graph, \(q^{*}(G)\), is the maximum of \(q_{\mathcal{A}}(G)\) over all partitions \(\mathcal{A}\) of \(G\). It is easy to see that the partition that puts all vertices in the same part gets a score of exactly zero, so that \(q^{*}(G)\) is always between zero and one.
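Indeed, for the trivial partition \(\mathcal{A}=\{V(G)\}\) we have \(e(V(G))=m\) and \(\operatorname{vol}(V(G))=2m\), so
\[q_{\mathcal{A}}(G)=\frac{m}{m}-\left(\frac{2m}{2m}\right)^{2}=1-1=0.\]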
While this gives us a reasonable way of comparing two different partitions of the same graph and telling which is better, it does not immediately give us a way of taking a graph and telling if it has a significant community structure or not. If you are given a graph, and compute that its modularity score is \(0.23\), does this mean the graph has or does not have a community structure?
One might initially hope that random graphs with no underlying structure would have modularity essentially zero, but this turns out not to be true, at least if the graph is sparse, which many real-world graphs are. The binomial random graph, \(G_{n,p}\), is likely to have high modularity so long as the average degree is bounded [1, 2, 1]. As proved by Ostroumova, Pralat and Raigorodskii [1, Theorem 6], any graph with maximum degree \(o(n)\) and average degree \(\bar{d}\) has modularity at least about \(2\bar{d}^{-1}\). For random graphs with a given bounded degree sequence (under some natural assumptions), this lower bound was improved to \((2+\varepsilon)\bar{d}^{-1}\) by Lichev and Mitsche [1].
We give another result in this direction, showing that any graph whose degree sequence is not too heavy-tailed will have modularity \(\Omega\left(\bar{d}^{-\frac{1}{2}}\right)\). One motivation for this result is its applications to graphs with a _power-law_ degree sequence, and specifically, to preferential-attachment graphs, which are discussed in Section 1.1, extending a result of [1]. The following statement is a concise version of the result as \(n\to\infty\) (so \(o(1)\)-terms are with respect to \(n\)), whereas the error terms are stated explicitly as Proposition 3.4.
**Theorem 1.1**.: _Let \(G\) be an \(n\)-vertex graph with average degree \(\bar{d}\geq 1\), \(L=\left\{v\in G\ \big{|}\ d_{v}<C\bar{d}\right\}\) for some \(C>1\), and assume that \(\operatorname{vol}(L)\geq(1+\gamma)m=(1+\gamma)\frac{n\bar{d}}{2}\) for some \(\gamma>0\)._
_If \(\Delta(G)n^{-1}=o(1)\) and \(\bar{d}^{10}n^{-1}=o(1)\), then_
\[q^{*}(G)\geq\frac{0.26\gamma}{\sqrt{C\bar{d}}}(1+o(1)).\]
One way to interpret the result is that, assuming \(\bar{d}=o(n^{1/10})\), the only obstruction to modularity \(\Omega(\bar{d}^{-1/2})\) is if we have a minority of vertices which contain at least half the volume of the graph. This happens for unbalanced bipartite graphs, as discussed below.
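For instance, as a direct instantiation of the theorem under its hypotheses: if \(G\) is \(d\)-regular with \(1\leq d=o(n^{1/10})\), then for any fixed \(C>1\) every vertex lies in \(L\), so \(\operatorname{vol}(L)=2m\) and we may take \(\gamma=1\), giving
\[q^{*}(G)\geq\frac{0.26}{\sqrt{Cd}}(1+o(1)).\]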
The bound \(\Omega\left(\bar{d}^{-1/2}\right)\) is the best lower bound we could hope for without imposing more conditions, because there exist families of graphs which achieve this bound. For example, random \(d\)-regular graphs \(G_{n,d}\), for large enough \(d\), have modularity \(q^{*}(G_{n,d})=\Theta\left(d^{-1/2}\right)\)[20]. Another example are Erdos-Renyi random graphs, see the following section, as well as the Chung-Lu model, see Section 5.
**Modularity from fluctuations in random graphs? Or automatically by average degree?** Guimera, Sales-Pardo and Amaral published a highly influential paper showing the binomial random graph \(G_{n,p}\) can have high modularity [1]. They estimated it to have modularity \(\Theta\left((np)^{-1/3}\right)\), meaning the modularity does not go to zero for constant average degree, the usual regime of interest for real networks. Using deep but non-rigorous insights from statistical physics Reichardt and Bornholdt [1] conjectured the modularity of \(G_{n,p}\) to be \(\Theta\left((np)^{-1/2}\right)\) whp, and this was confirmed to hold whp for \(1/n\leq p\leq 0.99\) in [20].
Notice that this matches the bound in Theorem 1.1. To be precise, since the average degree of \(G_{n,p}\) is tightly concentrated about \((n-1)p=np(1+o(1))\), [20] implies that for \(1/n\leq p\leq 0.99\) and for \(G\sim G(n,p)\) whp we have \(q^{*}(G)=\Theta(\bar{d}(G)^{-1/2})\).
Thus, our result shows that these lower bounds on the modularity of \(G_{n,p}\) hold simply because of its average degree and the well-behaved nature of its degree sequence, without needing any appeal to fluctuations or any other particular feature of the model - the same bound holds for any graph with a similar degree sequence.
Furthermore, our lower bounds offer a certain level of validation to the concept of modularity as a measure for community structure. It would be less than satisfactory if there existed graphs with considerably lower modularity than a random graph with a _similar_ degree sequence. Our
results, on the other hand, imply that random graphs do, in a sense, have the minimum achievable modularity.
**Modularity results.** The modularity of graphs from random graph models and the relationship between graph properties and modularity has received much recent attention. We have mentioned already the deterministic lower bound of \(2\bar{d}^{-1}\) for graphs with sublinear maximum degree [1] and results for random regular [13] and Erdos-Renyi graphs [13]. For random cubic graphs Lichev and Mitsche [11] proved the modularity whp lies in the interval \([0.667,0.79]\) and more generally that random graphs with a given degree sequence have modularity at least \((2+\varepsilon)\bar{d}^{-1}\). Preferential attachment graphs with \(h\geq 2\) edges added at each step whp have modularity at most \(15/16\) and at least \(\Omega(\bar{d}^{-1/2})\)[1], see also the next section. A random subgraph of a given graph, obtained by including each edge independently with probability \(p\), whp has modularity approximating that of the underlying graph for \(p\) such that the expected degree in the sampled graph is at least a large constant [13].
There are also graphs known to be 'maximally modular' [10], i.e. with modularity tending to \(1\) as the number of edges tends to infinity. It has been shown that graphs with sublinear maximum degree from a minor-closed class [12] and (whp) hyperbolic graphs [11] and spatial preferential attachment [1] are maximally modular. In another direction, see [1] for a geometric interpretation of modularity.
**Tightness of Theorem 1.1 - complete bipartite and random graphs.** The result is tight in two senses. Firstly, the condition \(\gamma>0\) is necessary, and secondly, \(\Omega(\bar{d}^{-1/2})\) is the best lower bound we could hope for without imposing more conditions.
To see that \(\gamma>0\) is necessary, we note that for any even \(d\) we may construct a graph \(G\) with average degree about \(d\), with \(q^{*}(G)=0\) and such that \(\operatorname{vol}(L)=\operatorname{vol}(G)/2\). It is known that complete bipartite graphs have modularity zero [1, 13]. Take the complete bipartite graph \(G=K_{d/2,t}\), and note the average degree is \(\bar{d}=dt/(d/2+t)\), and thus for sufficiently large \(t\) we have \(\bar{d}\approx d\). The graph has many vertices of degree \(d/2\), few vertices of degree \(t\), and both sets have volume \(\operatorname{vol}(G)/2\). Thus the set \(L\) in the theorem statement will consist of the vertices of degree \(d/2\) and have volume \(\operatorname{vol}(G)/2\), yielding \(\gamma=0\).
To see that \(\Omega(\bar{d}^{-1/2})\) is the best lower bound possible without imposing more conditions, we recall that a random \(d\)-regular graph has modularity at most \(2d^{-1/2}\)[13]. Moreover, Corollary 5.3 gives examples of graphs with a large family of possible degree sequences and modularity \(\Theta(\bar{d}^{-1/2})\).
### Application to power-law graphs
In this section, we discuss two applications of Theorem 1.1. Informally, graphs with a _power-law_ degree sequence and preferential-attachment graphs have modularity \(\Omega(\bar{d}^{-1/2})\), generalising a result of [1].
Many real-world graphs follow a power-law degree distribution, for instance the World Wide Web, genetic networks and collaboration networks [1, 2, 13]. This means that the proportion of vertices of degree \(k\) is \(O(k^{-\tau})\) for a parameter \(\tau\), called the _shape coefficient_. Most examples found in the literature have the shape coefficient \(\tau\) in the interval \((2,3]\) - for example roughly 2.2 for the Internet or 2.3 for the movie actors network [14, Chapter 8]. For \(\tau>2\), that is, as soon as the first moment of the sequence is well-defined, most of the volume is on the vertices whose degree is near average. This allows us to apply Theorem 1.1 and obtain the following lower bound.
**Theorem 1.2**.: _Let \(G\) be a graph with degree sequence \(\mathbf{d}=(d_{i})_{i\in[n]}\), with average degree \(\bar{d}\), satisfying_
\[\frac{1}{n}|\{i:d_{i}\geq k\}|\leq A\bar{d}^{\tau-1}k^{1-\tau} \tag{1.2}\]
_for all \(k\), with constants \(\tau>2\) and \(A>0\). For \(b=0.1\left(\frac{(\tau-2)}{8A}\right)^{\frac{1}{2(\tau-2)}}\) and sufficiently large \(n\),_
\[q^{*}(G)\geq b\bar{d}^{-1/2}.\]
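For intuition, condition (1.2) is easy to test on data. The following sketch (our own illustration, not part of the paper's argument) computes, for a given degree sequence and fixed shape coefficient \(\tau\), the smallest constant \(A\) for which (1.2) holds at every threshold \(k\).

```python
def power_law_constant(degrees, tau):
    """Smallest A with |{i : d_i >= k}| / n <= A * dbar^(tau-1) * k^(1-tau)
    for every k = 1, ..., max(degrees), i.e. condition (1.2)."""
    n = len(degrees)
    dbar = sum(degrees) / n
    A = 0.0
    for k in range(1, max(degrees) + 1):
        tail = sum(1 for d in degrees if d >= k) / n
        A = max(A, tail * k ** (tau - 1) / dbar ** (tau - 1))
    return A

# Rank-size degrees d_i ~ i^(-1/2) correspond to a tail exponent tau = 3.
degrees = [max(1, int(1000 / (i + 1) ** 0.5)) for i in range(1000)]
print(power_law_constant(degrees, tau=3.0))
```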
As mentioned, the best previously known lower bound for the modularity of graphs satisfying (1.2) is \(2\bar{d}^{-1}\)[10], since their maximum degree is sublinear. The modularity was already known to be \(\Omega(\bar{d}^{-1/2})\) for preferential attachment graphs (with \(\delta=0\)) [10], so our Theorems 1.2 and 4.4 generalise this.
**Modularity of random power-law graph models.** There are numerous random graph models which aim to model existing networks with a power-law degree distribution (often referred to as _scale-free networks_). They fall into two basic categories,
1. graphs whose degree sequence is specified a priori, and
2. graphs in which the degrees emerge from stochastic local growth rules, such as preferential-attachment graphs.
For (i), a lower bound on the modularity follows directly from Theorem 1.1, assuming that the empirical degree sequence is close to the prescribed one. Notice that this holds by definition for random graphs with a given _fixed_ degree sequence, so Theorem 1.1 trivially applies to the uniform model, extending the results of [11] for \(G_{n,d}\). It also implies a lower bound for the Chung-Lu model for graphs with a given _expected_ degree sequence. For the Chung-Lu model, we are also able to give a matching (up to a constant factor) upper bound on the modularity, see Section 5.
The models from category (ii) are usually based on the preferential attachment model (PAM) [1, 2], which is described in more detail in Section 4.1. The preferential attachment model was introduced in a seminal paper by Albert and Barabasi [1], which also demonstrated its ability to explain the emergence of _scale-free_ networks and laid the foundation for the study of complex networks. For more information on mathematical properties and applications of the PAM, see for instance [1, 2]. For PAM-type models (ii), it is not easy to prove rigorous results about the degree sequence, and controlling high-degree vertices seems particularly inaccessible (see, e.g., [1, Section 2.2] and Proposition 4.3 (i) in this paper). For this reason, Theorem 1.2 does not apply directly, but we demonstrate how Theorem 1.1 can be applied to the class of preferential attachment models presented in Section 4.
### Key Techniques
**Alon's bisection method.** Throughout this section, \(G\) is a given graph with average degree \(\bar{d}\) and we wish to find a bisection of \(G\) with modularity \(\Omega(\bar{d}^{-1/2})\). Central to our proof is the method of Alon [1] which gives a bisection of the graph with \(n(\bar{d}/4-\Omega(\bar{d}^{1/2}))\) edges between the two parts. Notice that the second term \(\Omega(\bar{d}^{1/2})\) is the deviation from a random bisection. A crucial idea in this method is to find a pairing of the vertices which does not _interfere_ with the edges of the graph in undesired ways, so that a randomised bisection of the vertices along those pairs can be analysed (see Lemma 3.1).
In obtaining bisections with _high_ modularity, we face two obstacles - the degree tax, and the fact that Alon's bisection technique only applies to graphs with maximum degree \(O(n^{1/9})\).
**Pairings which equalise the volume.** Regarding the degree-tax obstruction, by definition (1.1), if a bisection obtained above is to have high modularity, it needs to have degree tax as small as possible, i.e. the two parts need to have approximately the same volume, giving degree tax near a half (see the definition on p. 1). Thus our problem is to find a bisection of \(G\) with the same guarantee of few edges between parts, but also such that the volumes of the two parts are similar. The technical
result allowing us to partition a graph while controlling the volume is Lemma 2.3 where we find a pairing of almost all vertices such that the vertices of each pair are _near_ in the degree-ordering of the vertices, but the pairing is still suitable for Alon's bisection technique to apply. This together with a load-balancing result Lemma 2.4 yields Theorem 3.1 - informally, high modularity for graphs with maximum degree \(o(n^{1/9})\).
**Processing high-degree vertices.** However, the constraint that \(\Delta(G)=o(n^{1/9})\) is still too strong for many desired applications - for instance, graphs with a power-law degree sequence often have a significantly higher maximum degree (see Section 4). To circumvent this problem, we essentially apply the bisection method as above to the _bulk_ of the graph, that is, to the vertices whose degrees are not too far above the mean, which we denote by \(L\). Then we randomly divide \(H=V(G)\setminus L\), the high-degree vertices, into our two parts. With positive probability, such a partition will have modularity \(\Omega(\bar{d}^{-1/2})\) - the main contribution to positive modularity will come from partitioning \(L\), and for \(H\), we only need to show that they behave approximately _as expected_, even with respect to the previously found partition of \(L\).
## 2 Weight-balanced bisections
We now describe the bisection idea due to Alon [1]. The method starts from a convenient matching on \(V(G)\), which we now define.
**Definition 2.1**.: Given a graph \(G=([n],E(G))\) and a matching \(M\) disjoint from \(E(G)\), a _short loop_ of \(G\) and \(M\) is a loop of length at most twelve containing between one and three edges from \(M\) and never more than three consecutive edges of \(G\).
Note that in particular, this definition implies that \(M\) and \(G\) are edge-disjoint. Given such a matching \(M\), Alon proposed and analysed a simple randomised algorithm which splits the vertices of the graph \(G\) 'along' \(M\). We only describe the idea informally, as we will not explicitly use it in this paper; it will suffice to use Theorem 2.2 as a black box. The first step is to orient the edges of \(M\) independently and uniformly at random, which splits the vertex set into the set of sources and sinks in this orientation. An edge \(uv\) of \(M\) is marked _active_ if reorienting \(uv\) would not increase the number of 'cross-neighbours' of both \(u\) and \(v\) in the opposite part. The second step is to uniformly resample the orientations of the active edges, and to output the induced partition.
This partition is shown to have _very few_ cross-edges with positive probability, and the requirement for no short loops is important in the analysis. Below we state the result in a self-contained form. In [1], the computations are carried out for \(d\)-regular graphs, but the argument covers arbitrary degree sequences verbatim, and this is also stated in the concluding remarks in [1]. Throughout the paper, \(c=\frac{3}{8\sqrt{2}}\approx 0.265\) is a fixed constant.
**Theorem 2.2** ([1]).: _Given any graph \(G\), and any perfect matching \(M\) on \([n]\) disjoint from \(E(G)\) such that there exist no short loops of \(G\) and \(M\), there exists a \(U\subset[n]\) such that \(M\subset U\times U^{c}\), and_
\[e_{G}(U,U^{c})\leq\frac{1}{2}\sum_{i=1}^{n}d_{i}\left(\frac{1}{2}-\frac{c}{ \sqrt{d_{i}}}\right). \tag{2.1}\]
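As a quick illustration of the guarantee (2.1), the following helper (ours, not from [1]) evaluates the right-hand side for a given degree sequence; for a \(d\)-regular graph it returns \(n(d/4-c\sqrt{d}/2)\), exhibiting the \(\Omega(\sqrt{d})\)-per-vertex gain over a random bisection.

```python
import math

c = 3 / (8 * math.sqrt(2))  # the fixed constant used throughout the paper

def alon_cut_bound(degrees):
    """Right-hand side of (2.1): the guaranteed upper bound on the number
    of edges crossing the bisection of Theorem 2.2."""
    return 0.5 * sum(d * (0.5 - c / math.sqrt(d)) for d in degrees)
```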
It is convenient to identify the vertex set of our graphs with \([n]\), since this gives a natural ordering on the vertices of \(G\). Later, we will choose a specific vertex ordering to which the following lemma will be applied.
**Lemma 2.3**.: _Given any graph \(G=([n],E(G))\) with maximum degree \(\Delta>1\), there exists a partial matching \(M\) on \([n]\) disjoint from \(E(G)\) such that the following holds:_
1. _For any_ \(vw\in M\)_,_ \(|v-w|\leq\Delta^{9}\)_._
2. _There are no short loops of_ \(G\) _and_ \(M\)_._
3. \([n-\Delta^{9}]\subset V(M)\).
Note that the last statement in particular means the lemma is void when \(n<\Delta^{9}\).
Proof.: Let \(H=G^{3}\), the graph where there is an edge \((u,v)\) if there is a path of length at most \(3\) from \(u\) to \(v\) in \(G\). It is straightforward to verify that our condition 2 on the matching \(M\) is implied by
(2') There does not exist any cycle of length two, four or six consisting of alternating edges from \(H\) and \(M\). (In particular, \(H\) and \(M\) are edge-disjoint.)
Recalling that the maximum degree of \(G\) is \(\Delta>1\), we have that the maximum degree of \(H\) is
\[\Delta(H)\leq\Delta+\Delta(\Delta-1)+\Delta(\Delta-1)^{2}\leq\Delta^{3}-1.\]
Intuitively, the idea of the proof is to construct the matching greedily, taking the smallest currently unmatched vertex \(v\) and joining it to the first available vertex. It will be enough to show that until the very last rounds, the number of unavailable vertices will be not too large. Vertices are made unavailable when they are already incident to edges in the matching or when there is a particular _dangerous_ configuration of alternating edges (to ensure we do not violate property 2'). Loosely speaking, we maintain an upper bound on the number of unavailable vertices using property 1, which guarantees that the matched vertices are not too far from one another, and is established at the start of each step.
We construct this matching \(M\) using the following greedy algorithm, see Figure 1. We identify the graphs \(H\) and \(M\) with their edge sets. For a matching \(M\), we write \(V(M)\) for the set of vertices incident to an edge of \(M\).
```
\(M_{1}\leftarrow\emptyset\)
for \(v=1,2,\ldots,n-\Delta^{9}\) do
    if \(v\not\in V(M_{v})\) then
        Let \(F_{v}^{+}\) be the set of all \(u\in\{v+1,\ldots,n\}\) such that there is a path between \(u\) and \(v\)
            consisting of alternating edges from \(H\) and \(M_{v}\) of the form \(H\), \(HM_{v}H\) or \(HM_{v}HM_{v}H\).
        Pick the least \(w\in[n]\) such that \(w\not\in F_{v}^{+}\cup V(M_{v})\).
        \(M_{v+1}\leftarrow M_{v}\cup vw\)
    else
        \(M_{v+1}\leftarrow M_{v}\)
    end if
end for
```
**Algorithm 1** Construction of \(M\)
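For concreteness, here is a minimal Python rendering of Algorithm 1 (our own transcription, assuming the graph is given as a list `adj` of neighbour sets over vertices \(0,\ldots,n-1\), and favouring clarity over efficiency).

```python
def greedy_matching(adj, n, Delta):
    """Greedily build the partial matching M of Lemma 2.3; Delta is the
    maximum degree of G.  Returns a dict mapping each matched vertex to
    its partner."""
    # H = G^3: u ~ v iff they are joined by a path of length at most 3 in G.
    H = [set() for _ in range(n)]
    for v in range(n):
        reach, frontier = {v}, {v}
        for _ in range(3):
            frontier = {u for x in frontier for u in adj[x]} - reach
            reach |= frontier
        H[v] = reach - {v}

    match = {}  # encodes M_v: match[x] is the current partner of x
    for v in range(n - Delta ** 9):
        if v in match:
            continue
        # F = endpoints of alternating paths of the form H, HMH, HMHMH from v.
        F = set(H[v])
        for _ in range(2):
            F |= {u for x in F if x in match for u in H[match[x]]}
        w = v + 1
        while w in F or w in match:
            w += 1  # the Claim below guarantees w <= v + Delta^9
        match[v], match[w] = w, v
    return match
```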
We will first show that the argument terminates, i.e. that a suitable vertex \(w\) can be found as long as \(v\leq n-\Delta^{9}\), and then that the resulting matching \(M_{n-\Delta^{9}+1}\) has the desired properties.
_Claim:_ Let \(M_{v}^{+}=V(M_{v})\cap\{v+1,\ldots,n\}\). If \(v\notin V(M_{v})\) then \(|F_{v}^{+}\cup M_{v}^{+}|\leq\Delta^{9}-1\).
To begin the proof of the claim, similarly to \(F_{v}^{+}\), define \(F_{v}^{-}\) to be the set of \(x\in\{1,\ldots v-1\}\) such that there is a path between \(x\) and \(v\) of the form \(H\), \(HM_{v}H\) or \(HM_{v}HM_{v}H\) and define \(F_{v}=F_{v}^{+}\cup F_{v}^{-}\).
Let \(u\in M_{v}^{+}\), and note we have \(uw\in M_{v}\) for some \(w<v\). But given the greedy algorithm used to construct \(M\), and \(v<u\), this implies that \(v\) was not a valid choice of partner when \(w\) was processed; that is, it must
be the case that \(v\in F_{w}^{+}\cup V(M_{w})\). Since \(M_{w}\subseteq M_{v}\) and \(v\notin V(M_{v})\), this implies \(v\in F_{w}^{+}\). Thus there is a path between \(w\) and \(v\) of the form \(H\), \(HM_{w}H\) or \(HM_{w}HM_{w}H\) and so \(w\in F_{v}^{-}\), again using \(M_{w}\subseteq M_{v}\). In particular, we have shown that for each \(u\in M_{v}^{+}\) there is a distinct \(w\in F_{v}^{-}\) and hence
\[|F_{v}^{+}\cup M_{v}^{+}|\leq|F_{v}|. \tag{2.2}\]
The number of vertices \(u\) incident to \(v\) is at most \(\Delta(H)\). Similarly, the number of paths starting from \(v\) of the form \(HM_{v}H\) is at most \(\Delta(H)^{2}\) and of the form \(HM_{v}HM_{v}H\) is at most \(\Delta(H)^{3}\). Thus \(|F_{v}|\leq\Delta(H)+\Delta(H)^{2}+\Delta(H)^{3}\leq(\Delta(H)+1)^{3}-1\leq\Delta^{9}-1\) and by (2.2) we have shown the claim.
The claim above implies that \(\{v+1,\ldots,v+\Delta^{9}\}\setminus(V(M_{v})\cup F_{v})\neq\emptyset\) for \(v\leq n-\Delta^{9}\), so indeed, there is a valid choice for \(w\) in each step, and the algorithm terminates. Let \(M=M_{n-\Delta^{9}+1}\). For each vertex \(v\), \(V(M_{v+1})\) contains the initial segment \([v]\), so in particular, \(V(M)\) contains \([n-\Delta^{9}]\), certifying property 3.
Finally, let us show that \(M_{v}\) satisfies condition \(2^{\prime}\) for each \(v\). This clearly holds for \(M_{1}=\emptyset\), and suppose that some \(M_{v}\) satisfies condition \(2^{\prime}\). If \(M_{v+1}=M_{v}\), \(M_{v+1}\) clearly satisfies condition \(2^{\prime}\). Moreover, if \(M_{v+1}=M_{v}\cup\{vw\}\), then \(vw\) does not close an alternating cycle of length 2, 4 or 6 because \(w\) does not lie in \(F_{v}^{+}\). In either case, \(M_{v+1}\) satisfies condition \(2^{\prime}\), as required.
In particular, \(M_{n-\Delta^{9}+1}\) satisfies condition 2', which completes the proof.
We shall use the following load-balancing result to show that the two parts of our partition have similar volume - see [13, Lemma 2.2] or the thesis [16, Lemma 2.1.3].
Figure 1: An illustration of the process of constructing a matching with our greedy algorithm.
The graph \(H\) is on the left, with the edges of the resulting matching \(M\) in dashed green lines. To the right, we see each of the four time-steps in which a new edge is added, with indications of why the chosen new edge (in grey) is the one being added. In the first time-step, we cannot choose to add edge 12 to our matching, because there already is an edge of \(H\) between 1 and 2 – we illustrate this with a blue circle (\(\circ\)).
In the second time-step, we cannot add edge 23, because 3 is already in the matching (indicated by a green square, \(\Box\)), we cannot add edge 24 because that is already an edge in \(H\), and we cannot add edge 25, because that would add a path of length three of \(H\) and \(M\) – we illustrate this by highlighting the path, and a 3 underneath vertex 5.
The construction proceeds similarly for two more steps (vertex three is skipped because it is already in the matching when we get to it) – in the last step, two vertices are forbidden by length-five paths – before we finally arrive at vertex seven, for which the algorithm cannot find a match, and it terminates.
**Lemma 2.4**.: _Suppose that \(f:[n]\to\mathbb{R}\) is some non-increasing function. Then, for any perfect matching \(M\) on \([n]\) such that for every \((i,j)\in M\), \(|i-j|\leq L\), and for any orientation of the edges of \(M\), it holds that_
\[\bigg{|}\sum_{(i,j)\in M}f(i)-f(j)\bigg{|}\leq L|f(n)-f(1)|.\]
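The following throwaway test (ours) illustrates Lemma 2.4 numerically, here with a partial matching for simplicity; the same bound holds, since each point of \([n]\) is covered by at most \(L\) of the pair-intervals.

```python
import random

n, L = 1000, 10
f = sorted((random.random() for _ in range(n)), reverse=True)  # non-increasing
pairs = [(i, i + L) for i in range(0, n - L, 2 * L)]           # |i - j| <= L

# Orient each pair uniformly at random and accumulate the discrepancy.
gap = sum((f[i] - f[j]) if random.random() < 0.5 else (f[j] - f[i])
          for i, j in pairs)
assert abs(gap) <= L * abs(f[-1] - f[0]) + 1e-12
```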
We can assemble these lemmata into the following proposition.
**Proposition 2.5**.: _Let \(G=([n],E)\) be a graph with maximum degree \(\Delta\) satisfying \(\Delta^{9}\in\left[1,\frac{n}{2}\right)\), and let \(w_{\max}=w_{1}\geq\ldots\geq w_{n}=w_{\min}\) be non-negative vertex weights. Let \(\bar{w}\) be the average vertex weight, \(\bar{w}=\frac{1}{n}\sum_{u}w_{u}\), and for a set \(A\) of vertices, let \(w(A)=\sum_{u\in A}w_{u}\)._
_There exists a partition \(\{A,B,R\}\) of \(V(G)\) such that_
1. \(|A|=|B|\)_,_
2. \(R\subset\{n-\Delta^{9}+1,\ldots,n\}\)_,_
3. \(e(A,B)\leq\frac{1}{2}\sum_{v\in A\cup B}d_{v}\left(\frac{1}{2}-\frac{c}{\sqrt{d_{v}}}\right)\)_,_
4. \(|w(A)-w(B)|\leq\Delta^{9}(w_{\max}-w_{\min})\) _and_
5. \(\max_{v\in R}w_{v}\leq 2\bar{w}\)_._
Proof.: We may apply Lemma 2.3 to our graph - recall that we have assumed our weights \(w_{i}\) are non-increasing, which will imply that the unmatched vertices will be the ones of lowest weight. We obtain a matching \(M\) consisting only of edges \(ij\) with \(|i-j|\leq\Delta^{9}\). Let \(R\) be the set of vertices not matched by \(M\). Lemma 2.3 also tells us that \(|R|\leq\Delta^{9}\), and in fact, \(R\) is contained in the final segment \(\{n-\Delta^{9}+1,\ldots,n\}\).
Let \(G^{\prime}=G\setminus R\). The graph \(G^{\prime}\) along with the matching \(M\) fulfils the conditions of Theorem 2.2, by the construction of \(M\). Hence, we obtain a set \(U\subseteq[n]\) such that \(M\subseteq U\times U^{c}\), and
\[e_{G^{\prime}}(U,U^{c})\leq\frac{1}{2}\sum_{v\in G^{\prime}}d_{v}^{G^{\prime}}\left(\frac{1}{2}-\frac{c}{\sqrt{d_{v}^{G^{\prime}}}}\right)\leq\frac{1}{2}\sum_{v\in G^{\prime}}d_{v}\left(\frac{1}{2}-\frac{c}{\sqrt{d_{v}}}\right),\]
where the second inequality follows from \(c<\frac{1}{2}\).
Set \(A=U\) and \(B=V(G^{\prime})\setminus U\); since \(M\) is a perfect matching on \(V(G^{\prime})\) with \(M\subseteq A\times B\), we have \(|A|=|B|\). Since \(|i-j|\leq\Delta^{9}\) for all edges \(ij\) of \(M\), we can appeal to Lemma 2.4 and get a bound on the difference of the weights of the sets, namely \(|w(A)-w(B)|\leq\Delta^{9}(w_{\max}-w_{\min})\), as desired.
Finally, let us bound the weights of the unmatched vertices. We established that the remainder \(R\) will be among the \(\Delta^{9}\) vertices of lowest weight. Suppose for contradiction that \(\max_{v\in R}w_{v}\) is larger than \(2\bar{w}\) - then, by the ordering of the vertices, _all_ the vertices \(1,\ldots,n-\Delta^{9}\) must have weight at least \(2\bar{w}\). So we can compute
\[\bar{w}=\frac{1}{n}\sum_{i=1}^{n}w_{i}\geq\frac{1}{n}\sum_{i=1}^{n-\Delta^{9}} w_{i}\geq\frac{n-\Delta^{9}}{n}2\bar{w}=\left(1-\frac{\Delta^{9}}{n}\right)2 \bar{w},\]
and the fact that \(1-\frac{\Delta^{9}}{n}>\frac{1}{2}\) follows from our assumption that \(\Delta^{9}<\frac{n}{2}\), giving us the desired contradiction.
## 3 From weight-balanced partitions to modularity bounds
We now pivot back to considering modularity in particular, as our objective measure on partitions. We start by showing a weaker version of our main theorem, that only applies under the assumption
of a maximum degree bound, in order to illustrate the proof in a simpler setting. Then, to prove the main theorem, the main idea is essentially to apply Theorem 3.1 to the bulk of the graph, that is, to the vertices whose degrees are not too far from the mean (and so we have a bound on the max degree in this bulk), and then randomly divide the high-degree vertices of the graph into our two parts.
Thus the main term of our main theorem is essentially the same as the main term of Theorem 3.1, because this bulk is where the main term is gained; for the high degree vertices, we merely take a partition that yields roughly the expected number of cross-edges and does not interfere with the previous partition.
For technical reasons, it makes more sense to give a standalone proof of our main theorem that does not directly appeal to the following weaker theorem. We nevertheless include this theorem because its proof highlights the ideas at play without as many details to obscure them.
### Modularity bounds - an easy application of weight-balancing.
The following theorem will follow quickly from our weight-balancing result, Proposition 2.5.
**Theorem 3.1**.: _For any graph \(G\) such that \(\Delta^{9}\in\left[1,\frac{n}{6}\right)\), we have_
\[q^{*}(G)\geq\frac{c}{n}\sum_{i=1}^{n}\frac{\sqrt{d_{i}}}{\bar{d}}-\frac{\Delta^{20}}{2(n\bar{d})^{2}}.\]
To prove the theorem we will use Proposition 2.5, taking the vertex degrees as the weights, which gives us a volume-balanced near-bisection: two large sets \(A\) and \(B\) with similar volumes and a small remainder set \(R\). The modularity score of the partition into these three sets will be high if the number of edges between \(A\) and \(B\) is significantly less than half the edges of the graph, the volumes of \(A\) and \(B\) are sufficiently similar and \(R\) has a sufficiently small volume. The following lemma makes this precise.
**Lemma 3.2**.: _Let \(G\) be a graph and \(\mathcal{A}=\{A,B,R\}\) a vertex partition of \(G\) with \(\mathrm{vol}(R)\leq\mathrm{vol}(G)/3\). Then_
\[q_{\mathcal{A}}(G) \geq \frac{1}{2}-\frac{e(A,B)}{e(G)}-\frac{(\mathrm{vol}(A)-\mathrm{ vol}(B))^{2}}{2\mathrm{vol}(G)^{2}}\]
The proof of Lemma 3.2 is straightforward and so we defer the details to the appendix, see page 24.
Proof of Theorem 3.1.: Since \(G\) satisfies \(\Delta^{9}\in\left[1,\frac{n}{6}\right)\), it follows from Proposition 2.5, taking our vertex weights to be the degrees of the vertices, that there exists a partition \(\{A,B,R\}\) of the vertices of \(G\) with \(|A|=|B|\) and such that
\[e_{G}(A,B)\leq\frac{1}{2}\sum_{i=1}^{n}d_{i}\left(\frac{1}{2}-\frac{c}{\sqrt{ d_{i}}}\right), \tag{3.1}\]
\[|\mathrm{vol}_{G}(A)-\mathrm{vol}_{G}(B)|\leq\Delta^{9}(\Delta-\delta)\leq \Delta^{10} \tag{3.2}\]
and the remainder \(R\) satisfies \(|R|\leq\Delta^{9}\) and \(\max_{v\in R}d_{v}\leq 2\bar{d}\).
We now prove a lower bound on the modularity score of the partition \(\{A,B,R\}\) and hence a lower bound on the modularity value of \(G\). Recalling that \(\sum_{i}d_{i}=2e(G)\) and \(e(G)=n\bar{d}/2\) the bound (3.1) gives
\[e_{G}(A,B)\leq\frac{e(G)}{2}-\frac{c}{2}\sum_{i=1}^{n}\sqrt{d_{i}}=e(G)\bigg{(} \frac{1}{2}-\frac{c}{n}\sum_{i=1}^{n}\frac{\sqrt{d_{i}}}{\bar{d}}\bigg{)}. \tag{3.3}\]
To bound the volume of \(R\), we have
\[\operatorname{vol}(R)\leq|R|\max_{v\in R}d_{v}\leq 2\Delta^{9}\bar{d},\]
and thus for \(\Delta^{9}\leq n/6\), we have \(\operatorname{vol}(R)\leq\operatorname{vol}(G)/3\) and so we can apply Lemma 3.2. Now substituting the bounds in (3.2) and (3.3) into Lemma 3.2 gives the desired result.
### Modularity bounds without the max degree condition
In this section we prove our main theorem. The idea here is that we can apply the same method as we did for Theorem 3.1 for the bulk of the graph, and then deal with the high-degree vertices separately. By doing so, we can remove the condition on the maximum degree, instead replacing it with a mild condition on the upper tail.
The proof still uses the weight-balanced bisection with few edges across the parts to gain its main term, however we will now apply it just to a subgraph \(G[L]\), where \(L\) is the set of vertices whose degree is at most a constant multiple of the average degree.
We then assign the vertices in \([n]\setminus L\) randomly to the two parts of our partition, using one method for vertices with degree at most \(\sqrt{n}\) and one for the rest. Unlike in the bulk, where the weight-balanced bisection actually gains us our main term, in this part we can only hope to keep the additional error terms small, since our random assignments do not really use the structure of the graph.
Note the following simple expression for two-part modularity (which follows for example from Lemma 3.2 by taking \(R\) to be the empty set).
_Remark 3.3_.: For any graph \(G\) and partition \(\{A,B\}\) of its vertices, we have
\[q(G,\{A,B\})=\frac{1}{2}-\frac{e(A,B)}{m}-\frac{\left(\operatorname{vol}(A)-\operatorname{vol}(B)\right)^{2}}{8m^{2}}.\]
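In code, the expression of Remark 3.3 reads as follows (a small helper of ours; `edges` is a list of unordered vertex pairs and `A` is one side of the bipartition):

```python
def two_part_modularity(edges, A):
    """Modularity score of the bipartition {A, V \\ A} via Remark 3.3."""
    m = len(edges)
    vol_A = sum((u in A) + (v in A) for u, v in edges)  # sum of degrees in A
    vol_B = 2 * m - vol_A
    cross = sum((u in A) != (v in A) for u, v in edges)
    return 0.5 - cross / m - (vol_A - vol_B) ** 2 / (8 * m ** 2)
```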
We are now ready to prove Theorem 1.1, which we restate here as a proposition with explicit error terms.
**Proposition 3.4**.: _Let \(G\) be an \(n\)-vertex graph with average degree \(\bar{d}\geq 1\), \(L=\left\{v\in G\ \middle|\ d_{v}<C\bar{d}\right\}\) for some \(C>1\), and assume that \(\operatorname{vol}(L)\geq(1+\gamma)m=(1+\gamma)\frac{n\bar{d}}{2}\) for some \(\gamma>0\). If \(\vartheta=(C\bar{d})^{10}n^{-1}<\frac{1}{2}\left(1-\frac{1}{C}\right)\), then_
\[q^{*}(G)\geq\frac{0.26}{\sqrt{C\bar{d}}}\left(\gamma-\frac{2\vartheta}{C\bar{ d}}\right)-\frac{\vartheta^{2}}{2\bar{d}^{2}}-\frac{3}{8\sqrt{n}}-\frac{4\Delta(G) ^{2}}{n^{2}\bar{d}^{2}}.\]
_Moreover, if \(\frac{\Delta(G)}{n}=o(1)\) and \(\vartheta=o(1)\), then \(q^{*}(G)\geq\frac{0.26\gamma}{\sqrt{C\bar{d}}}(1-o(1))\)._
Proof.: To prove the bound, we randomly construct a bipartition with expected modularity score at least as claimed, and thus conclude that there exists a bipartition achieving at least that score. As in Theorem 3.1, we use the weight-balancing result, Proposition 2.5, this time applying it to just the low-degree vertices, \(L\), to get a partition into \(\{A,B,R\}\). For the random partitioning step, we take the remainder \(U=H\cup R\) and randomly divide it into two parts \(U_{A}\) and \(U_{B}\). (Here, we split \(U\) into the set \(U^{+}\) of vertices with degree at least \(n^{1/2}\) and the remainder \(U^{-}\), and use slightly different procedures for \(U^{+}\) and \(U^{-}\).)
Let \(G^{\prime}=G[L]\) and \(H=V(G)\setminus L\). Given a vertex \(v\in L\), let \(d^{\prime}_{v}\) be the degree of \(v\) in \(G^{\prime}\), that is, its number of neighbours in \(L\). We will apply Proposition 2.5 to the graph \(G^{\prime}\), using the degrees \(d^{\prime}_{v}\) as our vertex weights. This will require us to bound the maximum degree in \(G^{\prime}\) in terms of the number of vertices of \(G^{\prime}\), that is, in terms of \(|L|\).
Observe that \(|L|=n-|H|>n-\frac{n}{C}=n\left(1-\frac{1}{C}\right)\) by Markov's inequality, and so we get that
\[\left(\max_{v\in G^{\prime}}d^{\prime}_{v}\right)^{9}\leq\left(C\bar{d} \right)^{10}<\left(1-\frac{1}{C}\right)\frac{n}{2}<\frac{|L|}{2}\]
where we, for the first inequality, used that the maximum degree of \(G^{\prime}\) is at most \(C\bar{d}\) by construction, and the second inequality follows from our assumption that \(\vartheta=(C\bar{d})^{10}n^{-1}<\frac{1}{2}\left(1-\frac{1}{C}\right)\).
Thus \(G^{\prime}\) satisfies the condition of Proposition 2.5, and we get a partition \(\{A,B,R\}\) of \(V(G^{\prime})\), and thus a partition \(\{A,B,R,H\}\) of the vertices of \(G\). Since we cut off the vertices of the highest degree, we get the following guarantees on this partition:
1. \(|A|=|B|\),
2. \(|R|<(C\bar{d})^{9}\),
3. \(e(A,B)\leq\frac{1}{2}\sum_{v\in A\cup B}d_{v}^{\prime}\left(\frac{1}{2}-\frac {c}{\sqrt{d_{v}^{\prime}}}\right)\),
4. \(|\mathrm{vol}(A)-\mathrm{vol}(B)|\leq(C\bar{d})^{9}(C\bar{d}-\delta)\leq(C \bar{d})^{10}\),
5. \(\max_{v\in R}d_{v}\leq 2\bar{d}\).
Our strategy will be to divide the vertices we do not have degree bounds for - the ones in \(H\) and \(R\) - randomly into \(A\) and \(B\), and use this randomness to control their contribution to the modularity. As before, the positive contribution to the modularity score will come from the fact that there are relatively few edges between \(A\) and \(B\).
Let \(U=H\cup R\), and let \(\{U_{A},U_{B}\}\) be a partition of \(U\). We will first perform some general computations, and then just after (3.10) we specify the random procedure to partition \(U\) into \(\{U_{A},U_{B}\}\). By Remark 3.3, we see that
\[q(G,\{A\cup U_{A},B\cup U_{B}\})=\frac{1}{2}-\frac{e(A\cup U_{A},B\cup U_{B})} {m}-\frac{(\mathrm{vol}(A\cup U_{A})-\mathrm{vol}(B\cup U_{B}))^{2}}{8m^{2}}.\]
Now substituting \(m=e(A\cup B)+e(U)+e(A\cup B,U)\) and
\[e(A\cup U_{A},B\cup U_{B})=e(A,B)+e(U_{A},B)+e(A,U_{B})+e(U_{A},U_{B})\]
we can compute that
\[\frac{1}{2}-\frac{e(A\cup U_{A},B\cup U_{B})}{m} =\frac{e(A\cup B)-2e(A,B)}{2m}+\frac{e(U)+e(A\cup B,U)}{2m}\] \[\qquad-\frac{e(U_{A},U_{B})+e(U_{A},B)+e(A,U_{B})}{m}.\]
Then, we observe that
\[(\mathrm{vol}(A\cup U_{A})-\mathrm{vol}(B\cup U_{B}))^{2} =(\mathrm{vol}(A)-\mathrm{vol}(B))^{2}+2(\mathrm{vol}(A)-\mathrm{ vol}(B))(\mathrm{vol}(U_{A})-\mathrm{vol}(U_{B}))\] \[\quad+(\mathrm{vol}(U_{A})-\mathrm{vol}(U_{B}))^{2}\]
and so, taking this and our previous computations we have the following expression for the modularity score
\[q(G,\{A\cup U_{A},B\cup U_{B}\}) =\frac{e(A\cup B)-2e(A,B)}{2m} \tag{3.4}\] \[-\frac{(\mathrm{vol}(A)-\mathrm{vol}(B))^{2}}{8m^{2}}\] (3.5) \[+\frac{e(U)+e(A\cup B,U)}{2m}-\frac{e(U_{A},U_{B})+e(U_{A},B)+e(A,U_{B})}{m}\] (3.6) \[-\frac{(\mathrm{vol}(A)-\mathrm{vol}(B))(\mathrm{vol}(U_{A})- \mathrm{vol}(U_{B}))}{4m^{2}}\] (3.7) \[-\frac{(\mathrm{vol}(U_{A})-\mathrm{vol}(U_{B}))^{2}}{8m^{2}} \tag{3.8}\]
and thus we have five different terms which we will consider in turn. Firstly, for (3.4), we use that
\[e(A,B)\leq\frac{1}{2}\sum_{v\in A\cup B}d_{v}^{\prime}\left(\frac{1}{2}-\frac{c}{ \sqrt{d_{v}^{\prime}}}\right)=\frac{1}{2}e(A\cup B)-\frac{c}{2}\sum_{v\in A\cup B }\sqrt{d_{v}^{\prime}}\]
and so (3.4) is bounded below by
\[\frac{c}{2m}\sum_{v\in A\cup B}\sqrt{d_{v}^{\prime}}. \tag{3.9}\]
For (3.5) we use our bound on \(|\mathrm{vol}(A)-\mathrm{vol}(B)|\) to see that
\[\frac{\left(\mathrm{vol}(A)-\mathrm{vol}(B)\right)^{2}}{8m^{2}}\leq\frac{(C \bar{d})^{20}}{8m^{2}}. \tag{3.10}\]
Now, upon reaching the terms that involve \(U_{A}\) and \(U_{B}\), we specify how these sets are chosen.
_Random procedure for choosing \(U_{A}\) and \(U_{B}\) (see also Figure 2)._
Let \(U^{+}\subseteq H\subseteq U\) be the set of vertices of degree at least \(n^{1/2}\) in \(G\), with potentially one vertex less to make \(|U^{+}|\) even.
Firstly, pick a perfect matching \(\mathcal{M}\) on \(U^{+}\), matching the highest-degree vertex to the second-highest-degree, the third highest to the fourth, and so on. Secondly, for each edge \(xy\in\mathcal{M}\), choose uniformly at random whether to put \(x\) in \(U_{A}\) and \(y\) in \(U_{B}\), or vice versa. Thirdly, the vertices of \(U^{-}=U\setminus U^{+}\) get placed into \(U_{A}\) or \(U_{B}\) independently at random with probability \(\frac{1}{2}.\) We emphasise that \(U^{-}\) might contain one vertex \(\nu\) with \(d_{\nu}\geq n^{1/2}\), and the remaining vertices have degree at most \(n^{1/2}\).
Note that, since \(d_{v}\geq n^{1/2}\) for \(v\in U^{+}\), we have \(n^{1/2}|U^{+}|\leq\mathrm{vol}(U^{+})\). Moreover, \(U^{+}\subseteq H\) and so \(\mathrm{vol}(U^{+})\leq m\) and thus \(|U^{+}|\leq m/n^{1/2}\). Hence
\[|E(G)\cap\mathcal{M}|\leq\frac{|U^{+}|}{2}\leq\frac{m}{2n^{1/2}}.\]
Figure 2: An illustration of the procedure for choosing the partition \(\{A\cup U_{A},B\cup U_{B}\}\). (i) Let \(L\) be the set of (low degree) vertices, those of degree at most \(C\bar{d}\) and \(H\) be the (high degree) vertices, \(H=[n]\backslash L\). (ii) Apply the weight-balancing result to \(L\) to get a partition into \(A,B\) and a small (remainder) set \(R\). (iii) Let \(U=H\cup R\), and further divide this into (very high degree) \(U^{+}\), an even number of vertices of degree at least \(n^{1/2}\) and let \(U^{-}=U\backslash U^{+}\) and note \(U^{-}\) may contain one vertex \(\nu\) with degree at least \(n^{1/2}\). In \(U^{+}\), match the highest degree vertex to the second highest degree vertex, and the third highest to the fourth highest etc. (iv) For each pair in \(U^{+}\) randomly place one endpoint in \(U_{A}\) and the other in \(U_{B}\); for each vertex in \(U^{-}\) randomly place it in \(U_{A}\) or \(U_{B}\).
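The random procedure itself is short; the sketch below is our own rendering of the three steps (here `deg` maps vertices to their degrees in \(G\)).

```python
import random

def split_remainder(U, deg, n):
    """Randomly split U = H ∪ R into (U_A, U_B) as described above."""
    high = sorted((v for v in U if deg[v] >= n ** 0.5), key=lambda v: -deg[v])
    if len(high) % 2 == 1:
        high = high[:-1]                      # drop one vertex so |U^+| is even
    U_plus = set(high)
    U_A, U_B = set(), set()
    for x, y in zip(high[0::2], high[1::2]):  # pair consecutive degree-ranks
        if random.random() < 0.5:
            x, y = y, x                       # orient each pair at random
        U_A.add(x); U_B.add(y)
    for v in U:                               # U^-: a fair coin per vertex
        if v not in U_plus:
            (U_A if random.random() < 0.5 else U_B).add(v)
    return U_A, U_B
```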
Having defined our choice of \(U_{A}\) and \(U_{B}\), we can compute the expectation of the remaining terms of the modularity score, i.e. (3.6)-(3.8). Starting with the random part of (3.6), we get
\[\mathbb{E}\left[e(U_{A},B)\right]=\mathbb{E}\Bigg{[}\sum_{ub\in E(U,B)}\mathds{1 }_{u\in U_{A}}\Bigg{]}=\frac{1}{2}e(U,B)\]
and likewise \(\mathbb{E}\left[e(A,U_{B})\right]=\frac{1}{2}e(A,U)\). Moreover, since for \(x,y\in U\) that are not matched by \(\mathcal{M}\), their assignment to parts is independent, while for \(xy\in\mathcal{M}\) the endpoints \(x\) and \(y\) are deterministically assigned to different parts, we have
\[\mathbb{E}\left[e(U_{A},U_{B})\right]=\tfrac{1}{2}|E(U)\backslash\mathcal{M }|+|E(U)\cap\mathcal{M}|=\tfrac{1}{2}e(U)+\tfrac{1}{2}|E(U)\cap\mathcal{M}| \leq\frac{1}{2}e(U)+\frac{m}{4n^{1/2}}.\]
In total, the expectation of the subtracted (random) part of (3.6) is bounded above by
\[\frac{e(U,B)+e(A,U)+e(U)}{2m}+\frac{1}{4n^{1/2}}\]
which then cancels nearly exactly with the deterministic first half, leaving us with a lower bound on the expectation of (3.6) of form
\[-\frac{1}{4n^{1/2}}. \tag{3.11}\]
For (3.7), we compute that
\[\mathbb{E}\left[\operatorname{vol}(U_{A})-\operatorname{vol}(U_{B})\right]= \mathbb{E}\left[\sum_{v\in U_{A}}d_{v}-\sum_{v\in U_{B}}d_{v}\right]=\mathbb{E }\left[\sum_{v\in U}d_{v}(\mathds{1}_{v\in U_{A}}-\mathds{1}_{v\in U_{B}}) \right]=0.\]
and thus the term (3.7) has expectation zero. Finally, for (3.8), writing \(U_{A}^{+}=U_{A}\cap U^{+}\) and \(U_{A}^{-}=U_{A}\cap U^{-}\), and defining \(U_{B}^{+},U_{B}^{-}\) similarly, we first note
\[(\operatorname{vol}(U_{A})-\operatorname{vol}(U_{B}))^{2} =\big{(}\operatorname{vol}(U_{A}^{+})-\operatorname{vol}(U_{B}^{ +})+\operatorname{vol}(U_{A}^{-})-\operatorname{vol}(U_{B}^{-})\big{)}^{2}\] \[\leq 2\left(\operatorname{vol}(U_{A}^{+})-\operatorname{vol}(U_{B} ^{+})\right)^{2}+2\left(\operatorname{vol}(U_{A}^{-})-\operatorname{vol}(U_{B }^{-})\right)^{2}. \tag{3.12}\]
For the first term of (3.12), Lemma 2.4 (the load balancing lemma) gives the deterministic bound \(|\operatorname{vol}(U_{A}^{+})-\operatorname{vol}(U_{B}^{+})|\leq\Delta\), where \(\Delta\) is the maximum degree of \(G\), since we may take \(L=1\) in the application of Lemma 2.4. The contribution of \(U^{-}\) is
\[\mathbb{E}\left[(\operatorname{vol}(U_{A}^{-})-\operatorname{vol}(U_{B}^{-}))^{2}\right]=\mathbb{E}\Bigg{[}\bigg{(}\sum_{v\in U^{-}}d_{v}\Big{(}\mathds{1}_{v\in U_{A}}-\mathds{1}_{v\in U_{B}}\Big{)}\bigg{)}^{2}\Bigg{]}\] \[\quad=\sum_{v\in U^{-}}d_{v}^{2}\leq d_{\nu}^{2}+n^{1/2}\sum_{v\in U^{-}}d_{v}\leq\Delta^{2}+n^{1/2}m;\]
recalling that \(d_{\nu}\) comes from a potentially unmatched high-degree vertex \(\nu\). Thus by (3.12) and using \(m\geq n\), we conclude the expected value of (3.8) is at least
\[-\frac{\mathbb{E}\left[\left(\operatorname{vol}(U_{A})-\operatorname{vol}(U_{ B})\right)^{2}\right]}{8m^{2}}\geq-\frac{\Delta^{2}}{m^{2}}-\frac{n^{1/2}}{8m} \geq-\frac{\Delta^{2}}{m^{2}}-\frac{1}{8n^{1/2}}. \tag{3.13}\]
We may take an instance of the random partition, \(\tilde{U}_{A}\) and \(\tilde{U}_{B}\) say, for which the modularity score \(q(G,\{A\cup\tilde{U}_{A},B\cup\tilde{U}_{B}\})\) is at least our lower bound on the expected modularity score of the random partition. Gathering our calculations - terms (3.4) and (3.5) are bounded in (3.9)
and (3.10), and the expectation of terms (3.6)-(3.8) are bounded by line (3.11), \(0\) and line (3.13) respectively. Thus,
\[q(G,\{A\cup\tilde{U}_{A},B\cup\tilde{U}_{B}\})\geq\frac{c}{2m}\sum_{v\in A\cup B }\sqrt{d_{v}^{\prime}}-\frac{(C\bar{d})^{20}}{8m^{2}}-\frac{3}{8n^{1/2}}-\frac{ \Delta^{2}}{m^{2}}. \tag{3.14}\]
It remains to simplify the lower bounds in (3.14). We have
\[\frac{c}{2m}\sum_{v\in A\cup B}\sqrt{d_{v}^{\prime}} \geq\frac{c}{2m}\sum_{v\in A\cup B}\frac{d_{v}^{\prime}}{\max_{w \in A\cup B}\sqrt{d_{w}^{\prime}}}\] \[=\frac{c\operatorname{vol}_{G^{\prime}}(A\cup B)}{2m\left(\max_{ v\in A\cup B}\sqrt{d_{v}^{\prime}}\right)}\] \[\geq\frac{c\operatorname{vol}_{G^{\prime}}(A\cup B)}{2m\sqrt{C \bar{d}}}. \tag{3.15}\]
We now consider \(\operatorname{vol}_{G^{\prime}}(A\cup B)\). Recall that we defined \(G^{\prime}=G[L]\), and \(L=A\cup B\cup R\). We compute that
\[\operatorname{vol}_{G^{\prime}}(A\cup B) =\operatorname{vol}_{G^{\prime}}(G^{\prime})-\operatorname{vol}_{ G^{\prime}}(R)\] \[=\operatorname{vol}_{G}(L)-e_{G}(H,L)-\operatorname{vol}_{G^{ \prime}}(R)\] \[\geq\operatorname{vol}_{G}(L)-\operatorname{vol}_{G}(H)- \operatorname{vol}_{G}(R)\]
and so since \(\operatorname{vol}_{G}(L)\geq(1+\gamma)m\), and hence \(\operatorname{vol}_{G}(H)\leq(1-\gamma)m\), by assumption, we get that \(\operatorname{vol}_{G^{\prime}}(A\cup B)\geq 2\gamma m-\operatorname{vol}_{G}(R)\). Moreover, \(\operatorname{vol}_{G}(R)\leq|R|\max_{v\in R}d_{v}\leq(C\bar{d})^{9}\cdot 2\bar{d}=4m(C\bar{d})^{9}/n\) by Proposition 2.5. Substituting this into (3.15), we get that
\[\frac{c}{2m}\sum_{v\in A\cup B}\sqrt{d_{v}^{\prime}}\geq\frac{c\operatorname{ vol}_{G^{\prime}}(A\cup B)}{2m\sqrt{C\bar{d}}}\geq\frac{c\gamma}{\sqrt{C\bar{d}} }-\frac{2c(C\bar{d})^{9}}{n\sqrt{C\bar{d}}}.\]
Hence, (3.14) implies that
\[q(G,\{A\cup\tilde{U}_{A},B\cup\tilde{U}_{B}\})\geq\frac{c\gamma}{\sqrt{C\bar{ d}}}-\frac{2c(C\bar{d})^{9}}{n\sqrt{C\bar{d}}}-\frac{(C\bar{d})^{20}}{8m^{2}}- \frac{3}{8n^{1/2}}-\frac{\Delta^{2}}{m^{2}}, \tag{3.16}\]
and it remains to express two of the error terms in terms of \(\vartheta=(C\bar{d})^{10}n^{-1}\). Namely,
\[\frac{2c(C\bar{d})^{9}}{n\sqrt{C\bar{d}}}=\frac{2c\vartheta}{(C\bar{d})^{3/2} }\quad\text{and}\]
\[\frac{(C\bar{d})^{20}}{8m^{2}}=\frac{\vartheta^{2}n^{2}}{2n^{2}\bar{d}^{2}}= \frac{\vartheta^{2}}{2\bar{d}^{2}}.\]
Gathering terms and simplifying, we get the final form of our theorem, stating that
\[q^{*}(G)\geq q(G,\{A\cup\tilde{U}_{A},B\cup\tilde{U}_{B}\})\geq\frac{c}{\sqrt{C\bar{d}}}\left(\gamma-\frac{2\vartheta}{C\bar{d}}\right)-\frac{\vartheta^{2}}{2\bar{d}^{2}}-\frac{3}{8n^{1/2}}-\frac{\Delta^{2}}{m^{2}},\]
as desired. Recall that \(c>0.26\). It follows that if \(\frac{\Delta}{m}=o(1)\) and \(\vartheta=o(1)\), then
\[q^{*}(G)\geq\frac{0.26\gamma}{\sqrt{C\bar{d}}}(1-o(1)).\]
The following proposition may be useful to get better constants in some situations, mainly because we do not lose the \(1/\sqrt{C}\) in the main term.
**Proposition 3.5**.: _Let \(G\) be an \(n\)-vertex graph with average degree \(\bar{d}\), and let \(L=\{v\in[n]:d_{v}<C\bar{d}\}\) for some constant \(C\geq 2\). Let \((d_{v}^{\prime})_{v\in L}\) be the degree sequence of \(G[L]\), and \(\vartheta:=(C\bar{d})^{10}n^{-1}<1\). Then_
\[q^{*}(G)\geq\frac{c}{2n\bar{d}}\sum_{v\in L}\sqrt{d_{v}^{\prime}}-\frac{\vartheta^{2}}{2\bar{d}^{2}}-\frac{3}{8n^{1/2}}-\frac{4\Delta^{2}}{(n\bar{d})^{2}}. \tag{3.17}\]
Proof.: We follow the proof of Theorem 1.1 (with the same notation) down to (3.14). Recalling that \(L=A\cup B\cup R\), we have
\[q(G,\{A\cup U_{A},B\cup U_{B}\})\geq\frac{c}{n\bar{d}}\sum_{v\in A\cup B}\sqrt{d_{v}^{\prime}}-\frac{(C\bar{d})^{20}}{2(n\bar{d})^{2}}-\frac{3}{8n^{1/2}}-\frac{4\Delta^{2}}{(n\bar{d})^{2}}. \tag{3.18}\]
It remains to compare \(\sum_{v\in A\cup B}\sqrt{d_{v}^{\prime}}\) with \(\sum_{v\in A\cup B\cup R}\sqrt{d_{v}^{\prime}}\). To this end, note that \(|R|\leq(C\bar{d})^{9}\leq\frac{n}{4}\leq\frac{|A\cup B\cup R|}{2}\), and that \(d_{w}^{\prime}\leq d_{v}^{\prime}\) for all \(w\in R\) and \(v\in A\cup B\). Therefore,
\[\sum_{v\in A\cup B}\sqrt{d_{v}^{\prime}}\geq\sum_{v\in R}\sqrt{d_{v}^{\prime}},\]
so \(\sum_{v\in A\cup B}\sqrt{d_{v}^{\prime}}\geq\frac{1}{2}\sum_{v\in A\cup B\cup R }\sqrt{d_{v}^{\prime}}\), which together with (3.18) yields (3.17).
## 4 Lower bounds for power-law graphs
We will now apply our main result to deduce Theorem 1.2, as well as a more general lower bound in terms of the moments of the degree sequence. Notice that although Theorem 1.2 is stated for constant \(\bar{d}\), the bound actually holds for mildly increasing \(\bar{d}\), up to \(\bar{d}=n^{o(1)}\).
**Theorem 4.1** (restatement of Theorem 1.2).: _Let \(G\) be a graph with degree sequence \(\mathbf{d}=(d_{i})_{i\in[n]}\), with average degree \(\bar{d}\), satisfying_
\[\frac{1}{n}|\{i:d_{i}\geq k\}|\leq A\bar{d}^{\tau-1}k^{1-\tau} \tag{4.1}\]
_for all \(k\), with constants_ \(\tau>2\) _and_ \(A>0\)_. For_ \(b=0.1\left(\frac{(\tau-2)}{8A}\right)^{\frac{1}{2(\tau-2)}}\) _and sufficiently large \(n\),_
\[q^{*}(G)\geq b\bar{d}^{-1/2}.\]
Proof of Theorem 1.2.: For convenience, we may assume that \(\tau\leq 3\); indeed, if a sequence satisfies (1.2) with some \(\tau^{\prime}\), then it also satisfies it with some smaller value \(\tau<\tau^{\prime}\) (adjusting the constant \(A\) if necessary).
To verify the hypothesis of Proposition 3.4, let \(s_{j}=|\{i:d_{i}\geq j\}|\) and note that \(s_{j}-s_{j+1}=|\{i:d_{i}=j\}|\). We have
\[\sum_{i\in[n]:d_{i}\geq k}d_{i} =\sum_{j\geq k}j(s_{j}-s_{j+1})=\sum_{j\geq k}js_{j}-\sum_{j\geq k +1}(j-1)s_{j}=ks_{k}+\sum_{j\geq k+1}s_{j}\] \[\leq A\bar{d}^{\tau-1}k^{2-\tau}n+\sum_{j\geq k+1}A\bar{d}^{\tau- 1}j^{1-\tau}n,\]
where the second equality follows by changing the summation variable in the second sum, and the inequality uses the hypothesis on \(\mathbf{d}\).
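The summation-by-parts step is easy to sanity-check numerically (a throwaway test of ours):

```python
def tail_identity_holds(degrees, k):
    """Check: sum of d_i over d_i >= k equals k*s_k + sum_{j >= k+1} s_j."""
    s = lambda j: sum(1 for d in degrees if d >= j)
    lhs = sum(d for d in degrees if d >= k)
    rhs = k * s(k) + sum(s(j) for j in range(k + 1, max(degrees) + 1))
    return lhs == rhs

assert tail_identity_holds([3, 1, 4, 1, 5, 9, 2, 6], 3)  # 27 == 27
```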
Since
\[\sum_{j\geq k+1}j^{1-\tau}\leq\int_{k}^{\infty}x^{1-\tau}dx=\frac{1}{\tau-2}k^{2- \tau},\]
we have
\[\sum_{i\in[n]:d_{i}\geq k}d_{i}\leq\left(1+\frac{1}{\tau-2}\right)A\bar{d}^{ \tau-1}k^{2-\tau}n.\]
Inserting \(k=\left(4A\cdot\frac{\tau-1}{\tau-2}\right)^{1/(\tau-2)}\bar{d}\) and dividing by \(n\bar{d}/2\), we obtain
\[\frac{2}{n\bar{d}}\sum_{i\in[n]:d_{i}\geq k}d_{i}\leq\frac{2A(\tau-1)}{\tau-2} \cdot\left(4A\cdot\frac{\tau-1}{\tau-2}\right)^{-1}=\frac{1}{2}.\]
Hence
\[\sum_{i\in[n]:d_{i}<k}d_{i}\geq n\bar{d}-\frac{n\bar{d}}{4}=\frac{n\bar{d}}{2} \left(1+\frac{1}{2}\right),\]
and the hypothesis of our proposition is satisfied with \(\gamma=\frac{1}{2}\) and \(C=\left(4A\cdot\frac{\tau-1}{\tau-2}\right)^{\frac{1}{\tau-2}}\leq\left(\frac {8A}{\tau-2}\right)^{\frac{1}{\tau-2}}\). Proposition 3.4 then implies that
\[q^{*}(G)\geq 0.26\gamma\left(\frac{8A}{\tau-2}\right)^{\frac{-1}{2(\tau-2)}} \bar{d}^{-1/2}-O(\vartheta)-\frac{4\Delta(G)^{2}}{n^{2}\bar{d}^{2}}.\]
Now, recall \(\vartheta=(C\bar{d})^{10}/n\) for the value of \(C\) above, and thus \(\vartheta=o(1)\). For the other error term, note (4.1) implies that \(\Delta(G)\leq(An)^{\frac{1}{\tau-1}}\bar{d}\). It follows that, for sufficiently large \(n\),
\[q^{*}(G)\geq 0.1\left(\frac{8A}{\tau-2}\right)^{\frac{-1}{2(\tau-2)}}\bar{d}^{- 1/2}.\]
Let us also point out a more general statement, which controls modularity in terms of moments of the degree sequence. The moments are one way to capture an assumption that the degree distribution is still'reasonably smooth'. Note that in the statement below, \(\kappa\) can be an arbitrarily small positive real number, to circumvent the fact that for some graph classes occurring in practice, not even the second moment of the degree sequence is bounded. This statement formally implies Theorem 1.2, but verifying this implication is as difficult as proving Theorem 1.2 directly.
**Proposition 4.2**.: _Let \(G\) be a graph with degree sequence \(\mathbf{d}=(d_{1},\ldots,d_{n})\) whose mean is \(\bar{d}=O(1)\). Suppose for some \(\kappa>0\) and \(B>0\),_
\[\sum_{v\in[n]}d_{v}^{1+\kappa}\leq Bn\bar{d}^{1+\kappa} \tag{4.2}\]
_There is a constant \(c^{\prime}\) such that for sufficiently large \(n\), \(q^{*}(G)\geq c^{\prime}\bar{d}^{-1/2}\)._
Proof.: Let \(L\) be the set of vertices of degree at most \((4B)^{1/\kappa}\bar{d}\) and denote its complement by \(H=L^{c}\). We claim that then \(\operatorname{vol}(H)\leq\bar{d}n/4\). For,
\[B\bar{d}^{1+\kappa}n\geq\sum_{v}d_{v}^{1+\kappa}\geq\sum_{v\in H}d_{v}^{1+\kappa}\geq\left(\min_{v\in H}d_{v}\right)^{\kappa}\cdot\operatorname{vol}(H).\]
Noting that \(d_{v}^{\kappa}\geq 4B\bar{d}^{\kappa}\) for \(v\in H\) and rearranging gives
\[\frac{n\bar{d}}{4}\geq\operatorname{vol}(H),\]
as required.
Hence, \(\operatorname{vol}(L)\geq\frac{3n\bar{d}}{4}\), so we may apply Theorem 1.1 with \(C=(4B)^{1/\kappa}\) and \(\gamma=\frac{1}{2}\). It follows that
\[q^{*}(G)=\Omega\left(B^{-\frac{1}{2\kappa}}\bar{d}^{-\frac{1}{2}}\right),\]
where \(\Omega\) hides an absolute constant.
### Preferential attachment graphs and related models
Preferential attachment models (PAM) describe graphs which grow in time, that is, vertices are sequentially added to the graph. Given the graph at time \(t\), a vertex with label \(t+1\) is added to the graph and attached to older vertices according to a probability distribution under which it is more likely to attach to high-degree vertices. Thus the degree sequence of such a graph is not specified a priori, but emerges from the attachment rule. The degree sequence of the classical PAM considered for instance in [1, 2] typically follows a power-law with the exponent \(\tau=3\).
In this section, we demonstrate how Theorem 1.1 can be applied to an entire class of PAMs which _realise_ every power-law exponent \(\tau\) with \(\tau>2\). We will be working with the model presented in [1, Section 8.2], and we follow their notation. In a graph \(G\) on the vertex set \(\{v_{1},\ldots,v_{n}\}\) let \(D_{i}(n)\) denote the degree of \(v_{i}\), and let
\[P_{k}(n)=\frac{1}{n}\left|\{i\in[n]:D_{i}(n)=k\}\right|\]
be the proportion of vertices of degree \(k\). At time \(t\) the graph has vertex set \(\{v_{1},\ldots,v_{t}\}\) and vertex \(i\) has degree \(D_{i}(t)\).
The model has parameters \(m\in\mathbb{N}\), which governs the average degree, and \(-m<\delta<m\). It produces a graph sequence denoted by \(\operatorname{PA}_{n}^{(m,\delta)}\) which, at time \(n\), has \(n\) vertices and \(mn\) edges. The first vertex \(v_{1}\) has \(m\) loops. At time \(t\), the vertex \(v_{t}\) is added, along with \(m\) edges \(e_{1},\ldots,e_{m}\) incident to \(v_{t}\). The other endpoint of the edge \(e_{i}\) is a vertex \(v_{j}\in\{v_{1},\ldots,v_{t}\}\) with probability _roughly_ proportional to \(D_{j}(t)+\delta\) (that is, an affine function of the current degree of \(v_{j}\)). For a full description of the model see [13, 14] from which all the results which we use are taken. We remark that the average degree of this graph is \(2m\), which does conflict with the use of \(m\) (for the number of edges) in the previous section.
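The growth rule is straightforward to simulate naively. The sketch below is a simplified variant of our own (quadratic time, crude handling of the initial vertex, and not reproducing the exact attachment probabilities of [14]); it is meant only to illustrate how the affine weights \(D_{j}(t)+\delta\) enter.

```python
import random

def pa_graph(n, m, delta):
    """Toy preferential attachment: vertex t sends m edges backwards, each
    endpoint chosen with probability proportional to (degree + delta)."""
    deg = [2 * m]                 # v_1 modelled as carrying m loops
    edges = []
    for t in range(1, n):
        deg.append(0)
        for _ in range(m):
            weights = [deg[j] + delta for j in range(t)]  # all positive
            r = random.uniform(0, sum(weights))
            j, acc = 0, weights[0]
            while acc < r:
                j += 1
                acc += weights[j]
            edges.append((t, j))
            deg[t] += 1
            deg[j] += 1
    return edges, deg
```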
For this specific model, Ross [13] showed that the degree sequence follows a power-law with exponent \(\tau=3+\frac{\delta}{m}>2\). Such results were first obtained by Bollobas and Riordan [1], for the less general model with \(\delta=0\) and \(\tau=3\). Thus the results of the previous section in principle imply that such graphs have high modularity, but to prove a rigorous result, we need to deal with loops and multiple edges in the model, as well as with the fact that the results from [14] (and also [1, 13]) do not a priori give sufficient bounds on the number of vertices of degree, say \(n^{1/5}\).
Recall that \(P_{k}(n)\) is the proportion of vertices of degree \(k\) in \(\operatorname{PA}_{n}^{(m,\delta)}\). Let \(p_{k}=p_{k}(m,\delta)\) be the probability mass function defined in [14] and in Appendix B; \((p_{k})\) will be the _limiting degree distribution_ for \(\operatorname{PA}_{n}^{(m,\delta)}\), and for the present we will only use the estimates
\[p_{k}=k^{-\left(3+\frac{\delta}{m}\right)}\left(2+\frac{\delta}{m}\right)(m+\delta)^{3+\frac{\delta}{m}}\left(1+O(m^{-1})\right)\leq 2^{5}k^{-\left(3+\frac{\delta}{m}\right)}m^{3+\frac{\delta}{m}}, \tag{4.3}\]
where the second inequality follows from \(3+\frac{\delta}{m}\leq 4\). Let \(\tau=3+\frac{\delta}{m}\). We will need the following facts deduced from [14]; the proof is deferred to after the main theorem, see page 19.
**Proposition 4.3**.: _With high probability, the following holds in \(\mathrm{PA}_{n}^{(m,\delta)}\) with \(\delta\in(-m,m)\)._
1. _For_ \(k\in[n]\)_,_ \(k\leq n^{1/10}\)_, and some_ \(\varepsilon_{1}>0\)_,_ \[P_{k}(n)=p_{k}(1+O(n^{-\varepsilon_{1}})).\] (4.4)
2. _For_ \(A\leq n^{1/10}\log^{-1}n\)_,_ \(\sum_{k\geq Am}kP_{k}(n)\leq 2m\cdot 32A^{-\tau+2}/(\tau-2)\)_._
3. \(\sum_{k\in[n]}k^{2}P_{k}(n)\leq n^{1-\varepsilon_{2}}\) _for some_ \(\varepsilon_{2}>0\)_._
4. _The number of loops in_ \(\mathrm{PA}_{n}^{(m,\delta)}\) _is_ \(O(\log^{2}n)\)_, and the number of multiple edges is at most_ \(n^{1-\varepsilon_{3}}\) _for some_ \(\varepsilon_{3}>0\)_._
Now we can prove the desired bound. As mentioned, the case \(\delta=0\) was proven in [1].
**Theorem 4.4**.: _Let \(\tilde{G}\) be an \(n\)-vertex graph obtained from \(G\sim\mathrm{PA}_{n}^{(m,\delta)}\) after removing loops and multiple edges from \(G\), and let \(\delta\in(-m,m)\). There is a constant \(c^{\prime}\) such that whp \(\tilde{G}\) has average degree \(2m(1-o(1))\), and_
\[q^{*}(\tilde{G})\geq c^{\prime}m^{-1/2}.\]
Proof.: Assume that \(\mathrm{PA}_{n}^{(m,\delta)}\) satisfies the claims in Proposition 4.3, which occurs with high probability. Recall that \(\tau=3+\frac{\delta}{m}>2\). Recall \(D_{i}(n)\) is the degree of vertex \(i\) in \(G\). Let \(d_{\tilde{G}}(v_{i})\) denote the degree of \(v_{i}\) in \(\tilde{G}\), and clearly \(d_{\tilde{G}}(v_{i})\leq d_{G}(v_{i})=D_{i}(n)\).
Let \(A\) be a sufficiently large constant such that \(\frac{32A^{-\tau+2}}{\tau-2}<\frac{1}{8}\), let \(H\) be the set of vertices with degree at least \(Am\), and denote its complement by \(L=H^{c}\). By item (ii),
\[\mathrm{vol}(H)=\sum_{v\in H}d_{\tilde{G}}(v)\leq n\sum_{k\geq Am}kP_{k}(n) \leq 2mn\cdot\frac{32A^{-\tau+2}}{\tau-2}\leq\frac{1}{8}\cdot 2mn.\]
By item (iv), \(e(\tilde{G})=mn(1-o(1))\), so
\[\mathrm{vol}(L)\geq\frac{7}{8}\cdot 2mn(1-o(1))=\frac{7}{4}e(G)(1+o(1))\geq \frac{3}{2}e(\tilde{G}).\]
Hence we may apply Theorem 1.1 with \(C=A\) and \(\gamma=\frac{1}{2}\) to deduce that \(q^{*}(\tilde{G})\geq c^{\prime}m^{-1/2}\), as required.
_Remark 4.5_.: For the classical preferential attachment model, we have \(\delta=0\) and \(\tau=3\), so Theorem 1.1 can be applied with \(A=2^{8}\) to obtain an explicit value for \(c^{\prime}\).
Before proving Proposition 4.3, we need some properties of the sequence \(p_{k}\); the formal definition of \(p_{k}\) and the proof of the following lemma can be found in the Appendix.
**Lemma 4.6**.: _Let \(m\) be a positive integer, \(\delta\in(-m,m)\) and \(\tau=3+\frac{\delta}{m}\). The sequence \(p_{k}=p_{k}(m,\delta)\) satisfies \(\sum_{k=m}^{\infty}kp_{k}=2m\). Moreover, there is a constant \(b_{m,\delta}\) such that_
\[\sum_{k=Cm}^{\infty}kp_{k} \leq\frac{2^{5}}{\tau-2}C^{2-\tau}m\quad\text{and} \tag{4.5}\] \[\sum_{k=m}^{M}k^{2}p_{k} \leq b_{m,\delta}\max\{M^{3-\tau},\log M\}. \tag{4.6}\]
We can now prove Proposition 4.3.
Proof of Proposition 4.3.: Theorem 8.3 in [11] states that whp, for all \(k\),
\[|P_{k}(n)-p_{k}|\leq\frac{\log n}{\sqrt{n}}.\]
It follows from (4.3) that for \(k\leq n^{1/10}\) and \(\tau<4\), we have \(p_{k}\geq n^{-4/10}\). These two facts together imply (i) holds (for any fixed \(\varepsilon_{1}<1/10\)).
For item (ii), notice that Lemma 4.6 implies that
\[\sum_{k=m}^{Am}kp_{k}\geq 2m\left(1-\frac{2^{4}}{\tau-2}A^{2-\tau}\right).\]
Item (ii) will follow from the 'complementary inequality'
\[\sum_{k=m}^{Am}kP_{k}(n)\geq 2m\left(1-\frac{2^{5}}{\tau-2}A^{2-\tau}\right), \tag{4.7}\]
since \(\sum_{k\geq m}kP_{k}(n)=2m\) deterministically.
This estimate and (i) yield (4.7).
For (iii), we split into two ranges. For \(k\leq n^{1/11}\), by item (i), we have
\[\sum_{k\leq n^{1/11}}k^{2}P_{k}(n)\leq\sum_{k\leq n^{1/11}}2k^{2}p_{k}.\]
Thus by (4.6) we have
\[\sum_{k\leq n^{1/11}}k^{2}P_{k}(n)\leq b_{m,\delta}\max\{n^{\frac{1}{11}(3-\tau)},\log n\}\leq b_{m,\delta}n^{1/11}.\]
Now, by (ii), the sum of all vertex degrees in \(\mathrm{PA}_{n}^{(m,\delta)}\) which are higher than \(n^{1/11}\) is at most \(C^{\prime}_{m,\delta}n^{1+(2-\tau)/11}\leq n^{1-\varepsilon}\) for any fixed \(0<\varepsilon<\frac{\tau-2}{11}\) and sufficiently large \(n\). Hence, by convexity, the sum \(\sum_{k\geq n^{1/11}}k^{2}P_{k}(n)\) is maximised when there is a single vertex of degree \(\ell=\lfloor n^{1-\varepsilon}\rfloor\), so
\[\sum_{k\geq n^{1/11}}k^{2}P_{k}(n)\leq n^{2-2\varepsilon}\cdot\frac{1}{n}\leq n ^{1-2\varepsilon}.\]
Summing the two results gives the required bound.
For (iv), we let \(\mathcal{E}\) be the event that \(\mathrm{PA}_{n}^{(m,\delta)}\) satisfies (i)-(iii), and we may condition on \(\mathcal{E}\) as it occurs with high probability. Recall that \(D_{i}(t)\) denotes the degree of the vertex \(v_{i}\) in \(\mathrm{PA}_{t}^{(m,\delta)}\) (i.e., after \(t\) vertices are added to the preferential-attachment graph). For the purposes of the present proof, it suffices to use crude upper bounds on the attachment probabilities in \(\mathrm{PA}_{n}^{(m,\delta)}\); moreover, we will only use an upper bound \(D_{i}(t)\leq D_{i}(n)\) for \(t\leq n\). For the exact probabilities, see [11, page 258].
The first vertex \(v_{1}\) has \(m\) loops. When adding the vertex \(v_{t+1}\), \(m\) edges are attached to \(v_{t+1}\), and each of them is a loop with probability at most
\[\frac{2(m-1)}{mt},\]
where the numerator \(2(m-1)\) corresponds to the worst-case scenario where \(v_{t+1}\) already has \(m-1\) loops attached to it. Summing over the \(m\) edges attached to \(v_{t+1}\) (for \(t\geq 1\)) and over all \(t\), the expected number of loops is at most
\[m+\sum_{t=1}^{n}\frac{2m}{t}=O(m\log n).\]
So by Markov's inequality, the number of loops in \(\operatorname{PA}_{n}^{(m,\delta)}\) is at most \(\log^{2}n\) with high probability.
To control multiple edges, note that (iii) implies that conditional on \(\mathcal{E}\),
\[\sum_{i\in[n]}D_{i}^{2}(n)=n\sum_{k=m}^{n}k^{2}P_{k}(n)\leq n^{2-\varepsilon}.\]
Let \(Z_{t}\) denote the number of multiple edges \(v_{i}v_{t+1}\) with \(i\leq t\). The probability that one of the \(m\) edges attached to \(v_{t+1}\) is incident to a given vertex \(v_{i}\) (with \(i\neq t+1\)) is at most \(m\cdot\frac{D_{i}(n)+\delta}{mt(2+\delta)+(1+\delta)}\leq\frac{D_{i}(n)}{t}\). Hence the probability that \(v_{i}v_{t+1}\) is a multiple edge is at most \(\frac{D_{i}^{2}(n)}{t^{2}}\). Thus for \(t\geq n^{1-\varepsilon/4}\),
\[\mathbb{E}[Z_{t}\mid\mathcal{E}]\leq\frac{1}{t^{2}}\sum_{i\in[n]}D_{i}^{2}(n) \leq n^{-\varepsilon/2}.\]
Summing over \(t\), and using the trivial upper bound \(Z_{t}\leq m\) for \(t\leq n^{1-\varepsilon/4}\), we get that the expected number of multiple edges is at most
\[\sum_{t=1}^{n}\mathbb{E}[Z_{t}\mid\mathcal{E}]\leq n^{1-\varepsilon/4}m+n^{1- \varepsilon/2}\leq 2mn^{1-\varepsilon/4}.\]
Again, using Markov's Inequality, we have that \(\sum_{t}Z_{t}\leq n^{1-\frac{\varepsilon}{5}}\) with high probability.
## 5 Upper bounds on modularity
In this section, we show that for a large class of sequences \(\mathbf{d}\), _typical_ graphs with degree sequence _approximately_\(\mathbf{d}\) actually have modularity \(O(\bar{d}^{-1/2})\), matching the lower bound from Theorem 3.1 up to a constant factor.
We consider the Chung-Lu model of random graphs. Let \(\mathbf{w}=(w_{v})_{v\in[n]}\) where each \(w_{v}>0\) and denote \(\bar{w}=n^{-1}\sum_{v}w_{v}\) and \(w_{\min}=\min_{v}w_{v}\). We will also assume that for each \(v\) we have \(w_{v}^{2}=o(\bar{w}n)\). Generate the random graph \(G(n,\mathbf{w})\) by choosing each edge \(uv\) independently with probability (where \(u\neq v\) as we do not allow loops)
\[p_{uv}=\frac{w_{u}w_{v}}{\bar{w}n}.\]
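Sampling from the model is immediate; here is a minimal quadratic-time sketch (ours), assuming \(\max_{v}w_{v}^{2}\leq\sum_{v}w_{v}\) so that the expression is a valid probability.

```python
import random

def chung_lu(w):
    """Sample G(n, w): edge uv present independently with probability
    w_u * w_v / (w_bar * n)."""
    total = sum(w)  # equals w_bar * n
    return [(u, v)
            for u in range(len(w))
            for v in range(u + 1, len(w))
            if random.random() < w[u] * w[v] / total]
```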
We may see that the expected degree of \(v\) in \(G(n,\mathbf{w})\) is \(w_{v}(1-w_{v}\bar{w}^{-1}n^{-1})=w_{v}(1-o(1))\), i.e. approximately \(w_{v}\). This is why the Chung-Lu model is often referred to as the random graph with a given _expected_ degree sequence. In fact, for a large class of degree sequences, the empirical degree sequence of \(G(n,\mathbf{w})\) is _close to_ \(\mathbf{w}\); for details, see Theorems 6.10 and 6.19 in [11]. If the degree sequence of \(G(n,\mathbf{w})\) satisfies the assumptions of Theorem 1.1, then we can deduce that its modularity is \(\Omega(\bar{w}^{-1/2})\). We will now prove an upper bound of the same order of magnitude, assuming that \(w_{\min}\geq c\bar{w}\) for some constant \(c\).
Throughout this section, we write _whp_ to mean _with high probability_, i.e. with probability converging to \(1\) with \(n\). We recall the normalised Laplacian of a graph \(G\) is defined to be \(\mathcal{L}_{G}=I-D^{-1/2}AD^{-1/2}\) where \(A\) is the adjacency matrix of \(G\) and \(D\) is the diagonal 'degrees matrix'
where the \(u\)-th entry on the diagonal is \(d_{u}\). Let \(\lambda(G)\) be the spectral gap of \(\mathcal{L}_{G}\), i.e. its smallest non-trivial eigenvalue, and write \(\bar{\lambda}(G)=1-\lambda(G)\). A very nice result of Chung, Lu and Vu [13] is that whp
\[\lambda(G(n,\mathbf{w}))>1-4\bar{w}^{-1/2}(1+o(1))-w_{\min}^{-1}\ln^{2}n.\]
Now we recall that the modularity of a graph is bounded above by \(\bar{\lambda}(G)\); see for example Lemma 6.1 of [10]: \(q^{*}(G)\leq\bar{\lambda}(G)\). Thus the result of [13] immediately gives the following corollary. Also recall that the modularity value is robust to changes in the edge-set: if we may obtain \(H\) from \(G\) by deleting at most \(\varepsilon\cdot e(G)\) edges, then \(|q^{*}(G)-q^{*}(H)|<2\varepsilon\), by Lemma 5.1 of [10] (we will use this to obtain Corollary 5.3).
**Corollary 5.1**.: _Suppose \(\mathbf{w}\) is a degree sequence with \(w_{\min}=\omega(\ln^{2}n)\). Then_
\[q^{*}(G(n,\mathbf{w}))\leq 4\bar{w}^{-1/2}(1+o(1)).\]
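For small graphs the quantities in this argument can be computed directly; the helper below (ours, dense linear algebra, assuming no isolated vertices) returns the spectral gap \(\lambda(G)\) of \(\mathcal{L}_{G}\) and the resulting modularity upper bound \(\bar{\lambda}(G)=1-\lambda(G)\).

```python
import numpy as np

def spectral_modularity_bound(A):
    """Spectral gap of L = I - D^{-1/2} A D^{-1/2} and the bound 1 - gap."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt
    eigenvalues = np.sort(np.linalg.eigvalsh(L))
    gap = eigenvalues[1]  # eigenvalues[0] is (numerically) zero
    return gap, 1.0 - gap
```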
For a larger class of \(\mathbf{w}\), Coja-Oghlan and Lanka [13] show lower bounds on the spectral gap not on the entire graph \(G(n,\mathbf{w})\) but for an induced subgraph which comprises most of the volume of the graph.
**Theorem 5.2** ([13]).: _There exist constants \(c_{0}\) and \(w_{0}\) such that the following holds. If \(\mathbf{w}\) satisfies \(w_{0}\leq w_{\min}\leq w_{\max}\leq n^{0.99}\) then whp \(G\) contains an induced subgraph \(H\) with_
1. \(\lambda_{H}\geq 1-c_{0}w_{\min}^{-1/2}\)_, i.e._ \(\bar{\lambda}_{H}\leq c_{0}w_{\min}^{-1/2}\)_, and_
2. \(e(H)\geq e(G)-n\exp(-w_{\min}/c_{0})\)_._
**Corollary 5.3**.: _There exist constants \(c_{0}\), \(w_{0}\) such that the following holds. If \(\mathbf{w}\) satisfies \(w_{0}\leq w_{\min}\leq w_{\max}\leq n^{0.99}\), then whp_
\[q^{*}(G(n,\mathbf{w}))\leq c_{0}w_{\min}^{-1/2}.\]
Proof.: The corollary follows almost immediately from Theorem 5.2. Since \(\mathbb{E}\left[\operatorname{vol}(G)\right]=\bar{w}n=\omega(1)\) we get that whp \(\operatorname{vol}(G)=\bar{w}n(1+o(1))\geq\frac{2}{3}\bar{w}n\). Thus whp \(G(n,\mathbf{w})\) contains an induced subgraph \(H\) as in Theorem 5.2 with \(\frac{e(H)}{e(G)}\geq 1-\frac{n}{e(G)}\geq 1-\frac{3}{\bar{w}}\). Hence, by the spectral upper bound applied to \(H\) together with the robustness of modularity to edge deletions, with high probability,
\[q^{*}(G(n,\mathbf{w}))\leq c_{0}^{\prime}\ w_{\min}^{-1/2},\]
which implies the result.
## 6 Concluding remarks
For a large class of sequences \(\mathbf{d}\), we showed that any graph with degree sequence \(\mathbf{d}\) has modularity \(\Omega(\bar{d}^{-1/2})\), improving on the previously known lower bound of order \(\bar{d}^{-1}\). Specifically, this bound applies to graphs with a power-law degree sequence, which includes preferential-attachment graphs (under suitable models).
However, to our knowledge, the best known upper bound on the modularity of the preferential-attachment graph is \(\frac{15}{16}\)[1]. Preferential-attachment graphs are not sampled with an inherent community structure, so one might expect their modularity to decay with the average degree \(\bar{d}\), which is also suggested in [1] where they showed a lower bound of \(\Omega(\bar{d}^{-1/2})\). It would be very interesting to prove such an upper bound, and perhaps even a bound of order \(O(\bar{d}^{-1/2})\). |
2306.07158 | Riemannian Laplace approximations for Bayesian neural networks | Bayesian neural networks often approximate the weight-posterior with a
Gaussian distribution. However, practical posteriors are often, even locally,
highly non-Gaussian, and empirical performance deteriorates. We propose a
simple parametric approximate posterior that adapts to the shape of the true
posterior through a Riemannian metric that is determined by the log-posterior
gradient. We develop a Riemannian Laplace approximation where samples naturally
fall into weight-regions with low negative log-posterior. We show that these
samples can be drawn by solving a system of ordinary differential equations,
which can be done efficiently by leveraging the structure of the Riemannian
metric and automatic differentiation. Empirically, we demonstrate that our
approach consistently improves over the conventional Laplace approximation
across tasks. We further show that, unlike the conventional Laplace
approximation, our method is not overly sensitive to the choice of prior, which
alleviates a practical pitfall of current approaches. | Federico Bergamin, Pablo Moreno-Muñoz, Søren Hauberg, Georgios Arvanitidis | 2023-06-12T14:44:22Z | http://arxiv.org/abs/2306.07158v1 | # Riemannian Laplace approximations for Bayesian neural networks
###### Abstract
Bayesian neural networks often approximate the weight-posterior with a Gaussian distribution. However, practical posteriors are often, even locally, highly non-Gaussian, and empirical performance deteriorates. We propose a simple parametric approximate posterior that adapts to the shape of the true posterior through a Riemannian metric that is determined by the log-posterior gradient. We develop a Riemannian Laplace approximation where samples naturally fall into weight-regions with low negative log-posterior. We show that these samples can be drawn by solving a system of ordinary differential equations, which can be done efficiently by leveraging the structure of the Riemannian metric and automatic differentiation. Empirically, we demonstrate that our approach consistently improves over the conventional Laplace approximation across tasks. We further show that, unlike the conventional Laplace approximation, our method is not overly sensitive to the choice of prior, which alleviates a practical pitfall of current approaches.
## 1 Introduction
_Bayesian deep learning_ estimates the weight-posterior of a neural network given data, i.e. \(p(\theta|\mathcal{D})\). Due to the generally high dimensions of the weight-space, the normalization of this posterior is intractable and approximate inference becomes a necessity. The most common parametric choice approximates the posterior with a Gaussian distribution, \(p(\theta|\mathcal{D})\approx q(\theta|\mathcal{D})=\mathcal{N}(\theta|\mu,\Sigma)\), which is estimated _variationally_(Blundell et al., 2015), using _Laplace approximations_(MacKay, 1992) or with other techniques (Maddox et al., 2019). Empirical evidence, however, suggests that the log-posterior is not locally concave (Sagun et al., 2016), indicating that the Gaussian approximation is overly crude. Indeed, this approximation is known to be brittle as the associated covariance is typically ill-conditioned implying a suboptimal behavior (Daxberger et al., 2021; Farquhar et al., 2020), and for this reason, alternative approaches have been proposed to fix this issue (Mackay, 1992). Nonetheless, the Gaussian approximation is widely used due to the many benefits of parametric distributions, over e.g. _Monte Carlo sampling_(Neal, 1995) or _deep ensembles_(Lakshminarayanan et al., 2017).
**In this paper** we argue that the underlying issue is not with the Gaussian approximation, but rather with the weight-space over which the approximation is applied. We show that a Gaussian approximation can locally adapt to the loss by equipping the weight-space with a simple Riemannian metric and performing the approximation tangentially to the associated manifold. Practically, this ensures that samples from the Riemannian approximate posterior land in regions of weight-space yielding low training loss, which significantly improves over the usual Gaussian approximation. We obtain our
Figure 1: Our Riemannian Laplace approximation is a simple parametric distribution, which is shaped according to the local loss landscape through a Riemannian metric.
Riemannian approximate posterior using a generalization of the Laplace approximation (MacKay, 1992) to general Riemannian manifolds. Sampling from this distribution requires solving a system of ordinary differential equations, which we show can be performed efficiently by leveraging the structure of the used Riemannian metric and automatic differentiation. Empirically, we demonstrate that this significantly improves upon conventional Laplace approximations across tasks.
## 2 Background
Notation & assumptions.We consider independent and identically distributed (i.i.d.) data \(\mathcal{D}=\{\mathbf{x}_{n},\mathbf{y}_{n}\}_{n=1}^{N}\), consisting of inputs \(\mathbf{x}\in\mathbb{R}^{D}\) and outputs \(\mathbf{y}\in\mathbb{R}^{C}\). To enable _probabilistic modeling_, we use a likelihood \(p(\mathbf{y}|f_{\theta}(\mathbf{x}))\) which is either Gaussian (regression) or categorical (classification). This likelihood is parametrized by a deep neural network \(f_{\theta}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{C}\), where \(\theta\in\mathbb{R}^{K}\) represent the weights for which we specify a Gaussian prior \(p(\theta)\). The predictive distribution of a new test point \(\mathbf{x}^{\prime}\) equals \(p(\mathbf{y}|\mathbf{x}^{\prime})=\int p(\mathbf{y}|\mathbf{x}^{\prime},\theta )p(\theta|\mathcal{D})\mathrm{d}\theta\) where \(p(\theta|\mathcal{D})\) is the true weight-posterior given the data \(\mathcal{D}\). To ensure tractability, this posterior is approximated. This paper focuses on the Laplace approximation, though the bulk of the methodology applies to other approximation techniques as well.
### The Laplace approximation
The Laplace approximation (la) is widely considered in _probabilistic_ models for approximating intractable densities (Bishop, 2007). The idea is to perform a second-order Taylor expansion of an unnormalized log-probability density, thereby yielding a Gaussian approximation. When considering inference of the true posterior \(p(\theta|\mathcal{D})\), la constructs an approximate posterior distribution \(q_{\textsc{la}}(\theta|\mathcal{D})=\mathcal{N}(\theta|\theta_{*},\Sigma)\) that is centered at the _maximum a-posteriori_ (map) estimate
\[\theta_{*}=\operatorname*{arg\,max}_{\theta}\left\{\log p(\theta|\mathcal{D}) \right\}=\operatorname*{arg\,min}_{\theta}\underbrace{\left\{-\sum_{n=1}^{N} \log p(\mathbf{y}_{n}\mid\mathbf{x}_{n},\theta)-\log p(\theta)\right\}}_{ \mathcal{L}(\theta)}. \tag{1}\]
A Taylor expansion around \(\theta_{*}\) of the regularized loss \(\mathcal{L}(\theta)\) then yields
\[\hat{\mathcal{L}}(\theta)\approx\mathcal{L}(\theta_{*})+\langle\nabla_{\theta} \mathcal{L}(\theta)\big{|}_{\theta=\theta_{*}},(\theta-\theta_{*})\rangle+ \frac{1}{2}\langle(\theta-\theta_{*}),\mathrm{H}_{\theta}[\mathcal{L}](\theta )\big{|}_{\theta=\theta_{*}}(\theta-\theta_{*})\rangle, \tag{2}\]
where we know that \(\nabla_{\theta}\mathcal{L}(\theta)\big{|}_{\theta=\theta_{*}}\approx 0\), and \(\mathrm{H}_{\theta}[\mathcal{L}](\theta)\in\mathbb{R}^{K\times K}\) denotes the Hessian of the loss. This expansion suggests that the approximate posterior covariance should be the inverse Hessian \(\Sigma=\mathrm{H}_{\theta}[\mathcal{L}](\theta)\big{|}_{\theta=\theta_{*}}^{-1}\). The marginal likelihood of the data is then approximated as \(p(\mathcal{D})\approx\exp(-\mathcal{L}(\theta_{*}))(2\pi)^{K/2}\det(\Sigma)^{\nicefrac{{1}}{{2}}}\). This is commonly used for training hyper-parameters of both the likelihood and the prior (Immer et al., 2021a; Antoran et al., 2022). We refer to appendix A for further details.
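To make equations (1)-(2) concrete, here is a minimal one-dimensional sketch (our own illustration with a made-up loss, not code from the paper; with \(K=1\) the evidence factor is \((2\pi)^{1/2}\det(\Sigma)^{1/2}\)):

```
import numpy as np

# Our own 1-D illustration of eqs. (1)-(2); the loss L is made up.
L = lambda th: 0.5 * (th - 1.0) ** 2 + 0.1 * th ** 4

grid = np.linspace(-3.0, 3.0, 10001)
theta_star = grid[np.argmin(L(grid))]          # MAP estimate, eq. (1)
h = 1e-4                                       # central-difference Hessian
hess = (L(theta_star + h) - 2 * L(theta_star) + L(theta_star - h)) / h**2
Sigma = 1.0 / hess                             # LA covariance, eq. (2)

samples = np.random.default_rng(0).normal(theta_star, np.sqrt(Sigma), 100)
log_evidence = -L(theta_star) + 0.5 * np.log(2 * np.pi * Sigma)  # K = 1
```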
Tricks of the trade.Despite the simplicity of the Laplace approximation, its application to modern neural networks is not trivial. The first issue is that the Hessian matrix is too large to be stored in memory, which is commonly handled by approximately reducing the Hessian to being diagonal, low-rank, Kronecker factored, or only considered for a subset of parameters (see Daxberger et al. (2021) for a review). Secondly, the Hessian is generally not positive definite (Sagun et al., 2016), which is commonly handled by approximating the Hessian with the generalized Gauss-Newton approximation (Foresee and Hagan, 1997; Schraudolph, 2002). Furthermore, estimating the predictive distribution using Monte Carlo samples from the Laplace approximated posterior usually performs poorly (Lawrence, 2001, Chapter 5; Ritter et al., 2018) even for small models. Indeed, the Laplace approximation can place probability mass in low-probability regions of the posterior. A solution, already proposed by MacKay (1992, Chapter 4), is to consider a first-order Taylor expansion around \(\theta_{*}\) and to use samples only through the "linearized" function \(f_{\theta}^{\text{lin}}(\mathbf{x})=f_{\theta_{*}}(\mathbf{x})+\langle\nabla_{\theta}f_{\theta}(\mathbf{x})\big{|}_{\theta=\theta_{*}},\theta-\theta_{*}\rangle\) for prediction, where \(\nabla_{\theta}f_{\theta}(\mathbf{x})\big{|}_{\theta=\theta_{*}}\in\mathbb{R}^{C\times K}\) is the Jacobian. Recently, this approach has been justified by Khan et al. (2019), Immer et al. (2021), who proved that the generalized Gauss-Newton approximation is the exact Hessian of this new linearized model. Even though this is a linear function with respect to the parameters \(\theta\), empirically it achieves better performance than the classic Laplace approximation.
Although not theoretically justified, optimizing the prior precision post-hoc has been shown to play a crucial role in the Laplace approximation (Ritter et al., 2018; Kristiadi et al., 2020; Immer et al., 2021; Daxberger et al., 2021). This is usually done either using cross-validation or by maximizing the log-marginal likelihood. In principle, this regularizes the Hessian, and the associated approximate posterior concentrates around the map estimate.
Strengths & weaknesses.The main strength of the Laplace approximation is its simplicity in implementation due to the popularization of automatic differentiation. The Gaussian approximate posterior is, however, quite crude and often does not capture the local shape of the true posterior (Sagun et al., 2016). Furthermore, the common structural reductions of the Hessian do not correlate all model parameters, which limits the expressive power of the approximate posterior.
## 3 Riemannian Laplace approximations
We aim to construct a parametric approximate posterior that better reflects the local shape of the true posterior and captures nonlinear correlations between parameters. The basic idea is to retain the Laplace approximation but change the parameter space \(\Theta\) to locally encode the _training loss_. To realize this idea, we will first endow the parameter space with a suitable Riemannian metric (Sec. 3.1) and then construct a Laplace approximation according to this metric (Sec. 3.2).
### A loss-aware Riemannian geometry
For a given parameter value \(\theta\in\Theta\), we can measure the training loss \(\mathcal{L}(\theta)\) of the associated neural network. Assuming that the loss changes smoothly with \(\theta\), we can interpret the loss surface \(\mathcal{M}=\{g(\theta)=[\theta,\mathcal{L}(\theta)]\in\mathbb{R}^{K+1}\mid\theta\in\Theta\}\) as a \(K\)-dimensional manifold in \(\mathbb{R}^{K+1}\). The goal of Riemannian geometry (Lee, 2019; do Carmo, 1992) is to do calculations that are restricted to such manifolds.
The metric.We can think of the parameter space \(\Theta\) as being the _intrinsic coordinates_ of the manifold \(\mathcal{M}\), and it is beneficial to do all calculations directly in these coordinates. Note that a vector tangential to the manifold can be written as \(\mathbf{J}_{g}(\theta)\mathbf{v}\in\mathbb{R}^{K+1}\), where \(\mathbf{J}_{g}:\Theta\rightarrow\mathbb{R}^{(K+1)\times K}\) is the Jacobian of \(g\) that spans the tangent space \(\mathcal{T}_{g(\theta)}\mathcal{M}\) at the point \(g(\theta)\in\mathcal{M}\) and \(\mathbf{v}\in\mathbb{R}^{K}\) is the vector of _tangential coordinates_ for this basis of the tangent space. We can take inner products between two tangent vectors in the same tangent space as \(\langle\mathbf{J}_{g}(\theta)\mathbf{v}_{1},\mathbf{J}_{g}(\theta)\mathbf{v}_{2}\rangle=\langle\mathbf{v}_{1},\mathbf{J}_{g}(\theta)^{\mathsf{T}}\mathbf{J}_{g}(\theta)\mathbf{v}_{2}\rangle\), which, we note, is now expressed directly in the intrinsic coordinates. From this observation, we define the _Riemannian metric_ \(\mathbf{M}(\theta)=\mathbf{J}_{g}(\theta)^{\mathsf{T}}\mathbf{J}_{g}(\theta)\), which gives us a notion of a local inner product in the intrinsic coordinates of the manifold (see ellipsoids in Fig. 2). The Jacobian of \(g\) is particularly simple, \(\mathbf{J}_{g}(\theta)=[\mathbb{I}_{K},\nabla_{\theta}\mathcal{L}(\theta)]^{\mathsf{T}}\), such that the metric takes the form
\[\mathbf{M}(\theta)=\mathbb{I}_{K}+\nabla_{\theta}\mathcal{L}(\theta)\nabla_{ \theta}\mathcal{L}(\theta)^{\mathsf{T}}. \tag{3}\]
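A useful practical observation is that the metric (3) is a rank-one perturbation of the identity, so products with \(\mathbf{M}(\theta)\) and \(\mathbf{M}(\theta)^{-1}\) cost \(O(K)\) and never require forming a \(K\times K\) matrix: the inverse follows from the Sherman-Morrison formula \((\mathbb{I}_{K}+\mathbf{g}\mathbf{g}^{\mathsf{T}})^{-1}=\mathbb{I}_{K}-\mathbf{g}\mathbf{g}^{\mathsf{T}}/(1+\mathbf{g}^{\mathsf{T}}\mathbf{g})\). A small numpy sketch (our own illustration; `grad` stands in for \(\nabla_{\theta}\mathcal{L}(\theta)\)):

```
import numpy as np

def metric_apply(grad, v):
    """Apply M(theta) = I + grad grad^T to v without forming the matrix."""
    return v + grad * np.dot(grad, v)

def metric_inverse_apply(grad, v):
    """Apply M(theta)^{-1} to v via the Sherman-Morrison formula."""
    return v - grad * (np.dot(grad, v) / (1.0 + np.dot(grad, grad)))

rng = np.random.default_rng(1)
grad, v = rng.standard_normal(5), rng.standard_normal(5)
assert np.allclose(metric_inverse_apply(grad, metric_apply(grad, v)), v)
```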
The exponential map.A local inner product allows us to define the length of a curve \(c:[0,1]\rightarrow\Theta\) as \(\texttt{length}[c]=\int_{0}^{1}\sqrt{\langle\hat{c}(t),\mathbf{M}(c(t))\hat{ c}(t)\rangle}\mathrm{d}t\), where \(\hat{c}(t)=\partial_{t}c(t)\) is the velocity. From this, the _distance_ between two points can be defined as the length of the shortest connecting curve, where the latter is known as the _geodesic curve_. Such geodesics can be expressed as solutions to a system of second-order non-linear ordinary differential equations (odes), which is given in appendix B alongside further details on geometry. Of particular interest to us is the _exponential map_, which solves these odes subject to an initial position and velocity. This traces out a geodesic curve with a given starting point and direction (see Fig. 2). Geometrically, we can also think of this as mapping a tangent vector _back to the manifold_, and we write the map as \(\texttt{Exp}:\mathcal{M}\times T_{\theta}\mathcal{M}\rightarrow\mathcal{M}\).
The tangential coordinates \(\mathbf{v}\) can be seen as a coordinate system for the neighborhood around \(\theta\), and since the exponential map is locally a bijection we can represent any point locally with a unique tangent vector. However, these coordinates correspond to the tangent space that is spanned by \(\mathbf{J}_{g}(\theta)\), which implies that by changing this basis the associated coordinates change as well. By
Figure 2: The parameter space \(\Theta\) of the bnn together with examples of the Riemannian metric and the exponential map. Note that the Riemannian metric adapts to the shape of the loss which causes the geodesic to follow its shape.
orthonormalizing this basis we get the _normal coordinates_ where the metric vanishes. Let \(\mathbf{v}\) the tangential coordinates and \(\bar{\mathbf{v}}\) the corresponding normal coordinates, then it holds that
\[\langle\mathbf{v},\mathbf{M}(\theta)\mathbf{v}\rangle=\langle\bar{\mathbf{v}}, \bar{\mathbf{v}}\rangle\Rightarrow\mathbf{v}=\mathbf{A}(\theta)\bar{\mathbf{v }}\quad\text{with}\quad\mathbf{A}(\theta)=\mathbf{M}(\theta)^{-\nicefrac{{1}}{ {2}}}. \tag{4}\]
We will use the normal coordinates when doing Taylor expansions of the log-posterior, akin to standard Laplace approximations.
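Likewise, since \(\mathbf{M}(\theta)\) acts as \(1+s\) along the gradient direction and as the identity orthogonally to it, where \(s=\nabla_{\theta}\mathcal{L}(\theta)^{\mathsf{T}}\nabla_{\theta}\mathcal{L}(\theta)\), the map \(\mathbf{A}(\theta)=\mathbf{M}(\theta)^{-\nicefrac{{1}}{{2}}}\) in (4) has the closed form \(\mathbb{I}_{K}+\frac{(1+s)^{-\nicefrac{{1}}{{2}}}-1}{s}\nabla_{\theta}\mathcal{L}(\theta)\nabla_{\theta}\mathcal{L}(\theta)^{\mathsf{T}}\). A numpy sketch (our own illustration, not code from the paper):

```
import numpy as np

def metric_inv_sqrt_apply(grad, v):
    """Apply A(theta) = M(theta)^{-1/2} to v, for M = I + grad grad^T."""
    s = grad @ grad
    if s == 0.0:
        return v
    coeff = (1.0 / np.sqrt(1.0 + s) - 1.0) / s
    return v + coeff * grad * (grad @ v)

rng = np.random.default_rng(2)
g, v = rng.standard_normal(4), rng.standard_normal(4)
M = np.eye(4) + np.outer(g, g)
Av = metric_inv_sqrt_apply(g, v)
# A M A = I, so applying A to M @ Av must recover v.
assert np.allclose(metric_inv_sqrt_apply(g, M @ Av), v)
```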
### The proposed approximate posterior
In order to Taylor-expand the loss according to the metric, we first express the loss in normal coordinates of the tangent space at \(\theta_{*}\), \(h(\bar{\mathbf{v}})=\mathcal{L}(\text{Exp}_{\theta_{*}}(\mathbf{M}(\theta_{* })^{-\nicefrac{{1}}{{2}}}\bar{\mathbf{v}}))\). Following the standard Laplace approximation, we perform a second-order Taylor expansion of \(h\) as
\[\hat{h}(\bar{\mathbf{v}})\approx h(0)+\langle\partial_{\bar{\mathbf{v}}}h( \bar{\mathbf{v}})\big{|}_{\bar{\mathbf{v}}=0},\bar{\mathbf{v}}\rangle+\frac{1 }{2}\langle\bar{\mathbf{v}},\text{H}_{\bar{\mathbf{v}}}[h](\bar{\mathbf{v}}) \big{|}_{\bar{\mathbf{v}}=0}\bar{\mathbf{v}}\rangle, \tag{5}\]
where \(\partial_{\bar{\mathbf{v}}}h(\bar{\mathbf{v}})\big{|}_{\bar{\mathbf{v}}=0}=\mathbf{A}(\theta_{*})^{\mathsf{T}}\nabla_{\theta}\mathcal{L}(\theta)\big{|}_{\theta=\theta_{*}}\approx 0\) as \(\theta_{*}\) minimizes the loss and \(\text{H}_{\bar{\mathbf{v}}}[h](\bar{\mathbf{v}})\big{|}_{\bar{\mathbf{v}}=0}=\mathbf{A}(\theta_{*})^{\mathsf{T}}\,\text{H}_{\theta}[\mathcal{L}](\theta)\big{|}_{\theta=\theta_{*}}\mathbf{A}(\theta_{*})\) with \(\text{H}_{\theta}[\mathcal{L}](\theta)\) the standard Euclidean Hessian matrix of the loss. Further details about this step can be found in appendix B.
**Tangential Laplace.** Similar to the standard Laplace approximation, we get a Gaussian approximate posterior \(\bar{q}(\bar{\mathbf{v}})=\mathcal{N}(\bar{\mathbf{v}}\mid 0,\ \overline{\Sigma})\) on the tangent space in the normal coordinates with covariance \(\overline{\Sigma}=\text{H}_{\bar{\mathbf{v}}}[h](\bar{\mathbf{v}})\big{|}_{\bar{\mathbf{v}}=0}^{-1}\). Note that changing the normal coordinates \(\bar{\mathbf{v}}\) to tangential coordinates \(\mathbf{v}\) is a linear transformation and hence \(\mathbf{v}\sim\mathcal{N}(0,\mathbf{A}(\theta_{*})\overline{\Sigma}\mathbf{A}(\theta_{*})^{\mathsf{T}})\), which means that this covariance is equal to \(\text{H}_{\theta}[\mathcal{L}](\theta)\big{|}_{\theta=\theta_{*}}^{-1}\) since \(\mathbf{A}(\theta_{*})\) is a symmetric matrix, and hence, it cancels out. The approximate posterior \(q_{\mathcal{T}}(\mathbf{v})=\mathcal{N}(\mathbf{v}\mid 0,\Sigma)\) in tangential coordinates, thus, matches the covariance of the standard Laplace approximation.
**The predictive posterior.** We can approximate the predictive posterior distribution using Monte Carlo integration as \(p(y|\mathbf{x}^{\prime},\mathcal{D})=\int p(y|\mathbf{x}^{\prime},\mathcal{D},\theta)q(\theta)\mathrm{d}\theta=\int p(y|\mathbf{x}^{\prime},\mathcal{D},\text{Exp}_{\theta_{*}}(\mathbf{v}))q_{\mathcal{T}}(\mathbf{v})\mathrm{d}\mathbf{v}\approx\frac{1}{S}\sum_{s=1}^{S}p(y|\mathbf{x}^{\prime},\mathcal{D},\text{Exp}_{\theta_{*}}(\mathbf{v}_{s})),\ \mathbf{v}_{s}\sim q_{\mathcal{T}}(\mathbf{v})\). Intuitively, this generates tangent vectors according to the standard Laplace approximation and maps them back to the manifold by solving the geodesic ode. This lets the Riemannian approximate posterior take shape from the loss landscape, which is largely ignored by the standard Laplace approximation. We emphasize that this is a general construction that applies to the same Bayesian inference problems as the standard Laplace approximation and is not exclusive to Bayesian neural networks.
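In code, this predictive loop is a direct transcription of the Monte Carlo average above. The sketch below is deliberately schematic and entirely our own: `expmap` is assumed to integrate the geodesic ode (as discussed in Sec. 3.3) and `net` to evaluate \(p(y|\mathbf{x}^{\prime},\theta)\); neither name comes from the paper.

```
import numpy as np

def predictive(x, theta_star, Sigma_chol, expmap, net, S, rng):
    """Monte Carlo estimate of p(y | x', D) with S Riemannian LA samples.

    Sigma_chol is a Cholesky factor of the LA covariance; expmap (assumed)
    integrates the geodesic ode; net (assumed) evaluates p(y | x', theta)."""
    probs = 0.0
    for _ in range(S):
        v = Sigma_chol @ rng.standard_normal(theta_star.shape[0])
        theta_s = expmap(theta_star, v)        # Exp_{theta_*}(v_s)
        probs += net(theta_s, x)
    return probs / S
```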
The above analysis also applies to the linearized Laplace approximation. In particular, when \(f_{\theta}^{\text{lin}}(\mathbf{x})\) is considered instead of \(f_{\theta}(\mathbf{x})\), the loss function in (1) changes to \(\mathcal{L}^{\text{lin}}(\theta)\). Consequently, our Riemannian metric is computed under this new loss, and \(\nabla_{\theta}\mathcal{L}^{\text{lin}}(\theta)\) appears in the metric (3).
**Example.** To build intuition, we consider a logistic regressor on a linearly separable dataset (Fig. 3). The likelihood of a point \(\mathbf{x}\in\mathbb{R}^{2}\) to be in one class is \(p(C=1|\mathbf{x})=\sigma(\mathbf{x}^{\mathsf{T}}\theta+b)\), where \(\sigma(\cdot)\) is the sigmoid function, \(\theta\in\mathbb{R}^{2}\) and \(b\in\mathbb{R}\). After learning the parameters, we fix \(b_{*}\) and show the posterior with respect to \(\theta\) together with the corresponding standard Laplace approximation (Fig. 3(a)).
Figure 3: The la assigns probability mass to regions where the true posterior is nearly zero, and a sample from this region corresponds to a poor classifier. Considering this sample as the initial velocity for the exponential map, the generated sample falls within the true posterior and the associated classifier performs well. As a result, our model quantifies better the uncertainty.
We see that the approximation assigns significant probability mass to regions where the true posterior is near-zero, and the result of a corresponding sample is a poor classifier (Fig. 3(b)). Instead, when we consider this sample as the initial velocity and compute the associated geodesic with the exponential map, we generate a sample at the tails of the true posterior which corresponds to a well-behaved model (Fig. 3(c)). We also show the predictive distribution for both approaches, and even if both easily solve the classification problem, our model better quantifies uncertainty (Fig. 3(e)).
### Efficient implementation
Our approach is a natural extension of the standard Laplace approximation, which locally adapts the approximate posterior to the true posterior. The caveat is that computational cost increases since we need to integrate an ode for every sample. We now discuss partial alleviations.
Integrating the ode.In general, the system of second-order nonlinear odes (see appendix B for the general form) is non-trivial as it depends on the geometry of the loss surface, which is complicated in the over-parametrized regime (Li et al., 2018). In addition, the dimensionality of the parameter space is high, which makes the solution of the system even harder. Nevertheless, due to the structure of our Riemannian metric (3), the ode simplifies to
\[\ddot{c}(t)=-\nabla_{\theta}\mathcal{L}(c(t))\left(1+\nabla_{\theta}\mathcal{ L}(c(t))^{\intercal}\nabla_{\theta}\mathcal{L}(c(t))\right)^{-1}\langle\dot{c}(t ),\mathrm{H}_{\theta}[\mathcal{L}](c(t))\dot{c}(t)\rangle, \tag{6}\]
which can be integrated reasonably efficiently with standard solvers. In certain cases, this ode can be further simplified, for example when we consider the linearized loss \(\mathcal{L}^{\text{lin}}(\theta)\) and Gaussian likelihood.
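As a concrete sanity check, ODE (6) can be integrated directly with a generic solver. The following Python sketch is our own illustration with a made-up quadratic loss; for a real network, `grad` and `hess_vec` would be supplied by automatic differentiation, as discussed next.

```
import numpy as np
from scipy.integrate import solve_ivp

# Toy loss L(theta) = 0.5 theta^T Q theta with a made-up curvature Q;
# here the gradient and Hessian-vector product are available in closed form.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
grad = lambda th: Q @ th
hess_vec = lambda th, v: Q @ v

def geodesic_rhs(t, state):
    """Right-hand side of ODE (6); state stacks (c, c_dot)."""
    c, c_dot = state[:2], state[2:]
    g = grad(c)
    c_ddot = -g * (c_dot @ hess_vec(c, c_dot)) / (1.0 + g @ g)
    return np.concatenate([c_dot, c_ddot])

theta_star = np.zeros(2)                  # MAP of the toy loss
v = np.array([1.0, -0.5])                 # tangent sample = initial velocity
sol = solve_ivp(geodesic_rhs, (0.0, 1.0), np.concatenate([theta_star, v]),
                rtol=1e-8, atol=1e-10)
sample = sol.y[:2, -1]                    # Exp_{theta_*}(v)
```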
Automatic-differentiation.The ode (6) requires computing both the gradient and the Hessian, which are high-dimensional objects for modern neural networks. While we need to compute the gradient explicitly, we do not need to compute and store the Hessian matrix, which is infeasible for large networks. Instead, we rely on modern automatic-differentiation frameworks to compute the Hessian-vector product between \(\mathrm{H}_{\theta}[\mathcal{L}](c(t))\) and \(\dot{c}(t)\) directly. This reduces memory use, increases speed, and simplifies the implementation.
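For example, a Hessian-vector product can be obtained from two differentiation passes; a minimal JAX sketch (our own illustration; `loss` is a stand-in scalar function, not the paper's):

```
import jax
import jax.numpy as jnp

def loss(theta):                          # stand-in scalar loss
    return jnp.sum(jnp.sin(theta) ** 2)

def hvp(theta, v):
    """Hessian-vector product H_theta[L](theta) v, forward-over-reverse."""
    return jax.jvp(jax.grad(loss), (theta,), (v,))[1]

theta, v = jnp.arange(4.0), jnp.ones(4)
print(hvp(theta, v))                      # never materialises the K x K Hessian
```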
Mini-batching.The cost of computing the metric, and hence the ode, scales linearly with the number of training data, which can be expensive for large datasets. A reasonable approximation is to mini-batch the estimation of the metric when generating samples, i.e. construct a batch \(\mathcal{B}\) of \(B\) random data points and use the associated loss in the ode (6). As usual, we assume that \(\mathcal{L}(\theta)\approx(\nicefrac{{N}}{{B}})\mathcal{L}_{\mathcal{B}}( \theta)\). Note that we only mini-batch the metric and not the covariance of our approximate posterior \(q_{\mathcal{T}}(\mathbf{v})\).
We analyze the influence of mini-batching in our methods and provide empirical evidence in Fig. 4. In principle, the geometry of the loss surface \(\mathcal{L}(\theta)\) controls the geodesics via the associated Riemannian metric, so when we consider the full dataset we expect the samples to behave similarly to \(f_{\theta_{*}}(\mathbf{x})\). In other words, our approximate posterior generates weights near \(\theta_{*}\) resulting in models with similar or even better loss. When we consider a batch, the geometry of the associated loss surface \(\mathcal{L}_{\mathcal{B}}(\theta)\) controls the generated geodesic. So if the batch represents the structure of the full dataset well, then the resulting model will be meaningful with respect to the original problem, and in addition, it may exhibit some variation that is beneficial from the Bayesian perspective for the quantification of the uncertainty. The same concept applies in the linearized version, with the difference that when the full dataset is considered the geometry of \(\mathcal{L}^{\text{lin}}(\theta)\) may over-regularize the geodesics. Due to the linear nature of \(f_{\theta}^{\text{lin}}(\mathbf{x})\) the associated Riemannian metric is small only close to \(\theta_{*}\), so the generated samples are similar to \(f_{\theta_{*}}(\mathbf{x})\). We relax this behavior and potentially introduce variations in the resulting models when we consider a different batch whenever we generate a sample. More details can be found in appendix D.
Figure 4: Analysis of mini-batching
## 4 Related work
**Bayesian neural networks.** Exact inference for bnn is generally infeasible when the number of parameters is large. Several methods rely on approximate inference, which differs in their trade-off between computational cost and the goodness of the approximation. These techniques are usually based on the Laplace approximation (MacKay, 1992), variational inference (Graves, 2011; Blundell et al., 2015; Khan et al., 2018), dropout (Gal and Ghahramani, 2016), stochastic weight averaging (Izmailov et al., 2018; Maddox et al., 2019) or Monte Carlo based methods (Neal, 1995), where the latter is often more expensive.
**Laplace approximations.** In this work, we are primarily focused on Laplace approximations, although the general geometric idea can be used in combination with any other inference approach listed above. Particularly, Laplace's method for bnn was first proposed by MacKay (1992) in his _evidence_ framework, where a closed-form approximation of predictive probabilities was also derived. This approximation uses a first-order Taylor expansion, also known as _linearization_, around the map estimate. For a long time, Laplace's method was infeasible for modern architectures with large networks due to the exact computation of the Hessian. The seminal works of Martens and Grosse (2015) and Botev et al. (2017) made it possible to approximate the Hessian of large networks, which made Laplace approximations feasible once more (Ritter et al., 2018). More recently, the Laplace approximation has become a go-to tool for turning trained neural networks into bnn in a _post-hoc_ manner, thanks to easy-to-use software (Daxberger et al., 2021) and new approaches to scale up computation (Antoran et al., 2022). In this direction, other works have only considered a subset of the network parameters (Daxberger et al., 2021; Sharma et al., 2023), especially the last layer. This is _de facto_ the only current method competitive with _ensembles_ (Lakshminarayanan et al., 2017).
**Posterior refinement.** Much work has gone into building more expressive approximate posteriors. Recently, Kristiadi et al. (2022) proposed to use normalizing flows to get a non-Gaussian approximate distribution using the Laplace approximation as a base distribution. Although this requires training an additional model, they showed that few bijective transformations are enough to improve the last-layer posterior approximation. Immer et al. (2021), instead, propose to refine the Laplace approximation by using Gaussian variational Bayes or a Gaussian process. This still results in a Gaussian distribution, but it has proven beneficial for linearized Laplace approximations. Other approaches rely on a mixture of distributions to improve the goodness of the approximation. Miller et al. (2017) expand a variational approximation iteratively adding components to a mixture, while Eschenhagen et al. (2021) use a weighted sum of posthoc Laplace approximations generated from different pre-trained networks. Havasi et al. (2021), instead, introduces auxiliary variables to make a local refinement of a mean-field variational approximation.
**Differential geometry.** Differential geometry is increasingly playing a role in inference. Arvanitidis et al. (2016) make a Riemannian normal distribution locally adapt to data by learning a suitable Riemannian metric from data. In contrast, our metric is derived from the model. This is similar in spirit to work that investigates pull-back metrics in latent variable models (Tosi et al., 2014; Arvanitidis et al., 2018; Hauberg, 2018). In addition to that, the geometry of the latent parameter space of neural networks was recently analyzed by Kristiadi et al. (2023) focusing on the invariance of flatness measures with respect to re-parametrizations. Finally, we note that Hauberg (2018) considers Laplace approximations on the sphere as part of constructing a recursive Kalman-like filter.
## 5 Experiments
We evaluate our Riemannian la (riem-la) using illustrative examples, image datasets where we use a convolutional architecture, and real-world classification problems. We compare our method and its linearized version to standard and linearized la. All predictive distributions are approximated using Monte Carlo (MC) samples. Although last-layer la is widely used lately, we focus on approximating the posterior of all the weights of the network. In all experiments, we maximize the marginal log-likelihood to tune the hyperparameters of the prior and the likelihood as proposed in (Daxberger et al., 2021). To evaluate the performance in terms of uncertainty estimation we considered the standard metrics in the literature: negative log-likelihood (NLL), the Brier score (BRIER), the expected calibration error (ECE), and the maximum calibration error (MCE). More experiments are available in appendix D together with the complete training and modeling details.
### Regression problem
We consider the toy-regression problem proposed by Snelson and Ghahramani (2005). The dataset contains 200 data points, and we randomly pick 150 examples as our training set and the remaining 50 as a test set. As shown by Lawrence (2001) and Ritter et al. (2018), using samples from the LA posterior performs poorly in regression even if the Hessian is not particularly ill-conditioned, i.e. when the prior precision is optimized. For this reason, the linearization approach is necessary for regression with standard LA. Instead, we show that even our basic approach fixes this problem when the prior is optimized. We tested our approach by considering two fully connected networks, one with one hidden layer with 15 units and one with two layers with 10 units each, both with tanh activations. Our approach approximates the true posterior well locally, so the resulting function samples follow the data. Of course, if the Hessian is extremely degenerate our approach also suffers, as the initial velocities are huge. When we consider the linearized version of our approach the result is the same as the standard LA-linearization, which we include in appendix D, where we also report results for in-between uncertainty as proposed by Foong et al. (2019).
### Classification problems
Illustrative example.We consider a 2-dimensional binary classification problem using the banana dataset which is shown in Fig. 6. We train a 2-layer fully connected neural net with 16 hidden units per layer and tanh activation. For all methods, we use 100 MC samples for the predictive distribution.
As in regression, direct samples from the vanilla la lead to a really poor model (Fig. 6(a)) with high uncertainty both within and away from the data support. Instead, the other three methods (Fig. 6(b)-6(d)) show a better-behaved confidence that decreases outside of the data support. This is also supported by the metrics in Table 1, where remarkably riem-la performs better in terms of NLL and Brier score on a separate test set.
As we discussed in Sec. 3.3, using a subset of the dataset for computing the exponential map can be beneficial for our linearized manifold in addition to speeding up computation. In Fig. 6(d) we plot the confidence for our linearized approach using batches, while in appendix D we show the confidence of the same approach using the full data for solving the odes. We can see that our linearized riem-la tends to be overconfident outside the data region and also close to the decision
Figure 5: Posterior samples under a simple _(top)_ and an overparametrized model _(bottom)_. Vanilla la is known to generate bad models, while our samples from riem-la quantify well the uncertainty.
Figure 6: Binary classification confidence estimate using \(100\) Monte-Carlo samples on the banana dataset. Vanilla la underfit, while the other three methods are able to be certain within the data and uncertain far away. Note, for linearized riem-la we solve the expmap using a different subset of the data. Confidence plots of all different methods can be found in the supplementary material. Black lines are the decision boundaries.
boundary. This behaviour is reflected in the higher NLL that linearized riem-la attains compared to our vanilla approach and linearized la.
UCI datasets.We compare our approach against the standard la on a set of six UCI classification datasets using a fully connected network with a single layer, 50 hidden units and tanh activation. The predictive distribution is estimated using MC with 30 samples from the approximate posterior of each approach. In Table 2 we compare the methods in terms of their negative log-likelihood (NLL) on the test set. All other metrics are reported in appendix D. We consider the setting where we optimize the prior-precision post-hoc, which is the optimal setting for la and linearized la. We consider our standard approaches without using batches, which, as we have seen, may lead to sub-optimal performance specifically for our linearized approach.
From the results in Table 2 we see that our riem-la consistently performs better in terms of negative log-likelihood than vanilla and linearized la. We also observe that in two datasets the performance of our linearized riem-la is not optimal. This implies that the loss surface of the linearized loss potentially over-regularizes the geodesics as we analyzed in Sec. 3.3, and in this case, considering mini-batching could have been beneficial.
Image classification.We consider a small convolutional neural network on MNIST and FashionMNIST. Our network consists of two convolutional layers followed by average pooling layers and three fully connected layers. We consider a model of this size as the high dimensionality of the parameter space is one of the main limitations of the ode solver. For training the model, we subsample each dataset and consider 5000 observations, keeping the label proportions, and we test on the full test set containing 8000 examples. In Table 3 we compare the different methods with the prior precision optimized, as this is the ideal setting for the linearized la. We refer to appendix D for the setting with the prior precision not optimized.
From the results we observe that our standard riem-la performs better than all the other methods in terms of NLL and Brier score, meaning that the models are better calibrated, while it also leads to a more accurate classifier than the MAP. In terms of ECE, considering the linearized approach appears beneficial for producing better-calibrated models in both datasets. This holds both for our linearized riem-la and the standard la. Optimizing the prior precision post-hoc is crucial for the vanilla la, and associated results can be seen in appendix D. Instead, both our methods appear to be robust and consistent, as they achieve similar performance whether or not the prior precision is optimized.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{6}{c}{banana dataset} \\ \cline{2-7} & \multicolumn{3}{c}{prior optimized} & \multicolumn{3}{c}{prior not optimized} \\ \cline{2-7} method & Accuracy \(\uparrow\) & NLL\(\downarrow\) & Brier\(\downarrow\) & Accuracy \(\uparrow\) & NLL\(\downarrow\) & Brier\(\downarrow\) \\ \hline MAP & \(86.69\pm 0.34\) & \(0.333\pm 0.005\) & \(0.0930\pm 0.0015\) & \(86.69\pm 0.34\) & \(0.333\pm 0.005\) & \(0.0930\pm 0.0015\) \\ VANILLA-LA & \(95.50\pm 5.07\) & \(0.678\pm 0.009\) & \(0.2426\pm 0.0046\) & \(48.58\pm 2.32\) & \(0.700\pm 0.003\) & \(0.2534\pm 0.0017\) \\ LIN-LA & \(86.59\pm 0.37\) & \(0.325\pm 0.008\) & \(0.0956\pm 0.0023\) & \(86.92\pm 0.40\) & \(0.403\pm 0.012\) & \(0.1196\pm 0.0044\) \\ RIEM-LA & \(\mathbf{87.57\pm 0.07}\) & \(\mathbf{0.287\pm 0.002}\) & \(\mathbf{0.0886\pm 0.0006}\) & \(\mathbf{87.14\pm 0.20}\) & \(\mathbf{0.285\pm 0.001}\) & \(\mathbf{0.0878\pm 0.0006}\) \\ RIEM-LA (BATCHES) & \(87.30\pm 0.08\) & \(\mathbf{0.286\pm 0.001}\) & \(\mathbf{0.0890\pm 0.0000}\) & \(\mathbf{87.32\pm 0.17}\) & \(\mathbf{0.294\pm 0.002}\) & \(\mathbf{0.0895\pm 0.0004}\) \\ LIN-RIEM-LA & \(87.02\pm 0.38\) & \(0.415\pm 0.029\) & \(0.0067\pm 0.0024\) & \(85.33\pm 0.31\) & \(0.884\pm 0.037\) & \(0.1252\pm 0.0022\) \\ LIN-RIEM-LA (BATCHES) & \(\mathbf{87.72\pm 0.24}\) & \(0.298\pm 0.006\) & \(\mathbf{0.0887\pm 0.0012}\) & \(86.16\pm 0.21\) & \(0.352\pm 0.002\) & \(0.0934\pm 0.0011\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: In-distribution results in the banana dataset. We use 100 MC samples both for the la and our variants. ECE and MCE are computed using \(M=10\) bins. We report mean and standard error over 5 different seeds.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{5}{c}{prior precision optimized} \\ \cline{2-6} dataset & map & vanilla la & riem-la & linearized la & linearized riem-la \\ \hline vehicle & \(0.975\pm 0.081\) & \(1.209\pm 0.020\) & \(\mathbf{0.454\pm 0.024}\) & \(0.875\pm 0.020\) & \(\mathbf{0.494\pm 0.044}\) \\ glass & \(2.084\pm 0.323\) & \(1.737\pm 0.037\) & \(\mathbf{1.047\pm 0.242}\) & \(1.365\pm 0.058\) & \(1.359\pm 0.299\) \\ ionosphere & \(1.032\pm 0.175\) & \(0.673\pm 0.013\) & \(\mathbf{0.344\pm 0.068}\) & \(0.497\pm 0.015\) & \(0.625\pm 0.110\) \\ waveform & \(1.076\pm 0.110\) & \(0.888\pm 0.030\) & \(\mathbf{0.459\pm 0.057}\) & \(0.640\pm 0.002\) & \(0.575\pm 0.065\) \\ australian & \(1.306\pm 0.146\) & \(0.684\pm 0.011\) & \(\mathbf{0.541\pm 0.053}\) & \(\mathbf{0.570\pm 0.016}\) & \(0.833\pm 0.108\) \\ breast cancer & \(\mathbf{0.225\pm 0.076}\) & \(0.594\pm 0.030\) & \(\mathbf{0.176\pm 0.092}\) & \(0.327\pm 0.022\) & \(\mathbf{0.202\pm 0.073}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Negative log-likelihood (lower is better) on UCI datasets for classification. Predictive distribution is estimated using 30 MC samples. Mean and standard error over 5 different seeds.
Note that for the mini-batches in our approaches, we consider 20% of the data by randomly selecting 1000 observations, respecting the label frequencies of the full dataset. Clearly, the batch-size is a hyperparameter for our methods and can be estimated systematically using cross-validation. Even though we do not optimize this hyperparameter, we see that our batched versions of riem-la and lin-riem-la perform better than the standard la and on par with our lin-riem-la without batches, implying that a well-tuned batch-size can potentially further improve the performance. Nevertheless, this also shows that our method is robust with respect to the batch-size.
## 6 Conclusion
We propose an extension to the standard Laplace approximation, which leverages the natural geometry of the parameter space. Our method is parametric in the sense that a Gaussian distribution is estimated using the standard Laplace approximation, but it adapts to the true posterior through a nonparametric Riemannian metric. This is a general mechanism that, in principle, can also apply to, e.g., variational approximations. In a similar vein, while the focus of our work is on Bayesian neural networks, nothing prevents us from applying our method to other model classes.
Empirically, we find that our Riemannian Laplace approximation is better or on par with alternative Laplace approximations. The standard Laplace approximation crucially relies on both linearization and on a fine-tuned prior to give useful posterior predictions. Interestingly, we find that the Riemannian Laplace approximation requires neither. This could suggest that the standard Laplace approximation has a rather poor posterior fit, which our adaptive approach alleviates.
Limitations.The main downside of our approach is the computational cost involved in integrating the ode, which is a common problem in computational geometry (Arvanitidis et al., 2019). The cost of evaluating the ode scales linearly with the number of observations, and we have considered the 'obvious' mini-batching solution. Empirically, we find that this introduces some stochasticity in the sampling, which can actually be helpful in the posterior exploration. The computational cost also grows with the dimensionality of the parameter space, predominantly because the number of necessary solver steps increases as well. Our implementation relies on an off-the-shelf ode solver, and we expect that significant improvements can be obtained using a tailor-made numerical integration method.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{5}{c}{CNN on MNIST - prior precision optimized} \\ \cline{2-6} method & Accuracy \(\uparrow\) & NLL\(\downarrow\) & Brier\(\downarrow\) & ECE\(\downarrow\) & MCE\(\downarrow\) \\ \hline map & \(95.02\pm 0.17\) & \(0.167\pm 0.005\) & \(0.0075\pm 0.0002\) & \(1.05\pm 0.14\) & \(39.94\pm 14.27\) \\ vanilla la & \(88.69\pm 1.84\) & \(0.871\pm 0.026\) & \(0.0393\pm 0.0013\) & \(42.11\pm 1.22\) & \(50.52\pm 1.45\) \\ lin-la & \(94.91\pm 0.26\) & \(0.204\pm 0.006\) & \(0.0087\pm 0.0003\) & \(6.30\pm 0.8\) & \(39.30\pm 16.77\) \\ riem-la & \(\mathbf{96.74\pm 0.12}\) & \(\mathbf{0.115\pm 0.003}\) & \(\mathbf{0.0052\pm 0.0002}\) & \(2.48\pm 0.06\) & \(38.03\pm 15.02\) \\ riem-la (batches) & \(95.67\pm 0.19\) & \(0.170\pm 0.005\) & \(0.0072\pm 0.0002\) & \(5.40\pm 0.06\) & \(22.40\pm 0.51\) \\ lin-riem-la & \(95.44\pm 0.18\) & \(0.149\pm 0.004\) & \(0.0068\pm 0.0003\) & \(\mathbf{0.66\pm 0.03}\) & \(39.40\pm 14.75\) \\ lin-riem-la (batches) & \(95.14\pm 0.20\) & \(0.167\pm 0.004\) & \(0.0076\pm 0.0002\) & \(3.23\pm 0.04\) & \(\mathbf{18.10\pm 2.50}\) \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{5}{c}{CNN on FashionMNIST - prior precision optimized} \\ \cline{2-6} method & Accuracy \(\uparrow\) & NLL\(\downarrow\) & Brier\(\downarrow\) & ECE\(\downarrow\) & MCE\(\downarrow\) \\ \hline map & \(79.88\pm 0.09\) & \(0.541\pm 0.002\) & \(0.0276\pm 0.0000\) & \(\mathbf{1.66\pm 0.07}\) & \(24.07\pm 1.50\) \\ vanilla la & \(74.88\pm 0.83\) & \(1.026\pm 0.046\) & \(0.0482\pm 0.0019\) & \(31.63\pm 1.28\) & \(43.61\pm 2.95\) \\ lin-la & \(79.85\pm 0.13\) & \(0.549\pm 0.001\) & \(0.0278\pm 0.0000\) & \(3.23\pm 0.44\) & \(37.88\pm 17.98\) \\ riem-la & \(\mathbf{83.33\pm 0.17}\) & \(0.472\pm 0.001\) & \(\mathbf{0.0237\pm 0.0001}\) & \(3.13\pm 0.48\) & \(10.94\pm 2.11\) \\ riem-la (batches) & \(81.65\pm 0.18\) & \(0.525\pm 0.004\) & \(0.0263\pm 0.0002\) & \(5.80\pm 0.73\) & \(35.30\pm 18.40\) \\ lin-riem-la & \(81.33\pm 0.10\) & \(0.521\pm 0.004\) & \(0.0261\pm 0.0002\) & \(\mathbf{1.59\pm 0.40}\) & \(25.53\pm 0.10\) \\ lin-riem-la (batches) & \(80.49\pm 0.13\) & \(0.529\pm 0.003\) & \(0.0269\pm 0.0002\) & \(2.10\pm 0.42\) & \(\mathbf{6.14\pm 1.42}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Image classification results using a CNN on MNIST and FashionMNIST. The network is trained on \(5000\) examples and we test the in-distribution performance on the test set, which contains 8000 examples. We use \(25\) Monte Carlo samples to approximate the predictive distribution and \(1000\) datapoints per batch in our batched manifolds. Calibration metrics are computed using \(M=15\) bins. We report mean and standard error for each metric over 3 different seeds.
## Acknowledgments and Disclosure of Funding
This work was funded by the Innovation Fund Denmark (0175-00014B) and the Novo Nordisk Foundation through the Center for Basic Machine Learning Research in Life Science (NNF20OC0062606). It also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (757360). SH was supported in part by a research grant (42062) from VILLUM FONDEN.
|
2308.12947 | Counting Distinct Elements Under Person-Level Differential Privacy | We study the problem of counting the number of distinct elements in a dataset
subject to the constraint of differential privacy. We consider the challenging
setting of person-level DP (a.k.a. user-level DP) where each person may
contribute an unbounded number of items and hence the sensitivity is unbounded.
Our approach is to compute a bounded-sensitivity version of this query, which
reduces to solving a max-flow problem. The sensitivity bound is optimized to
balance the noise we must add to privatize the answer against the error of the
approximation of the bounded-sensitivity query to the true number of unique
elements. | Alexander Knop, Thomas Steinke | 2023-08-24T17:36:03Z | http://arxiv.org/abs/2308.12947v3 | # Counting Distinct Elements Under Person-Level Differential Privacy
###### Abstract
We study the problem of counting the number of distinct elements in a dataset subject to the constraint of differential privacy. We consider the challenging setting of person-level DP (a.k.a. user-level DP) where each person may contribute an unbounded number of items and hence the sensitivity is unbounded.
Our approach is to compute a bounded-sensitivity version of this query, which reduces to solving a max-flow problem. The sensitivity bound is optimized to balance the noise we must add to privatize the answer against the error of the approximation of the bounded-sensitivity query to the true number of unique elements.
## 1 Introduction
An elementary data analysis task is to count the number of distinct elements occurring in a dataset. The dataset may contain private data and even simple statistics can be combined to leak sensitive information about people [15]. Our goal is to release (an approximation to) this count in a way that ensures the privacy of the people who contributed their data. As a motivating example, consider a collection of internet browsing histories, in which case the goal is to compute the total number of websites that have been visited by at least one person.
Differential privacy (DP) [14] is a formal privacy standard. The simplest method for ensuring DP is to add noise (from either a Laplace or Gaussian distribution) to the true answer, where the scale of the noise corresponds to the sensitivity of the true answer - i.e., how much one person's data can change the true value.
If each person contributes a single element to the dataset, then the sensitivity of the number of unique elements is one. However, a person may contribute multiple elements to the dataset and our goal is to ensure privacy for all of these contributions simultaneously. That is, we seek to provide person-level DP (a.k.a. user-level DP2).
Footnote 2: We prefer the term “person” over “user,” as the latter only makes sense in some contexts and could be confusing in others.
This is the problem we study: We have a dataset \(D=(u_{1},u_{2},\cdots,u_{n})\) of person records. Each person \(i\in[n]\) contributes a finite dataset \(u_{i}\in\Omega^{*}\), where \(\Omega\) is some (possibly infinite) universe of potential elements (e.g., all finite-length binary strings) and \(\Omega^{*}:=\bigcup_{\ell\in\mathbb{N}}\Omega^{\ell}\) denotes all subsets of \(\Omega\) of finite size. Informally, our goal is to compute the number of unique elements
\[\mathrm{DC}(D):=\left|\bigcup_{i\in[n]}u_{i}\right| \tag{1}\]
in a way that preserves differential privacy. A priori, the sensitivity of this quantity is infinite, as a single person can contribute an unbounded number of unique elements.
In particular, it is not possible to output a meaningful upper bound on the number of distinct elements subject to differential privacy. This is because a single person could increase the number of distinct elements arbitrarily and differential privacy requires us to hide this contribution. It follows that we cannot output a differentially private unbiased estimate of the number of distinct elements with finite variance. However, it is possible to output a lower bound. Thus our formal goal is to compute a high-confidence lower bound on the number of distinct elements that is as large as possible and which is computed in a differentially private manner.
### Our Contributions
Given a dataset \(D=(u_{1},\cdots,u_{n})\in(\Omega^{*})^{n}\) and an integer \(\ell\geq 1\), we define
\[\mathrm{DC}(D;\ell):=\max\left\{\left|\bigcup_{i\in[n]}v_{i}\right|:\forall i \in[n]\;\,v_{i}\subseteq u_{i}\wedge|v_{i}|\leq\ell\right\}. \tag{2}\]
That is, \(\mathrm{DC}(D;\ell)\) is the number of distinct elements if we restrict each person's contribution to \(\ell\) elements. We take the maximum over all possible restrictions.
It is immediate that \(\mathrm{DC}(D;\ell)\leq\mathrm{DC}(D)\) for all \(\ell\geq 1\). Thus we obtain a lower bound on the true number of unique elements. The advantage of \(\mathrm{DC}(D;\ell)\) is that its sensitivity is bounded by \(\ell\) (see Lemma A.1 for a precise statement) and, hence, we can estimate it in a differentially private manner. Specifically,
\[\mathcal{M}_{\ell,\varepsilon}(D):=\mathrm{DC}(D;\ell)+\mathrm{Lap}\left(\ell /\varepsilon\right) \tag{3}\]
defines an \(\varepsilon\)-DP algorithm \(\mathcal{M}_{\ell,\varepsilon}:(\Omega^{*})^{n}\to\mathbb{R}\), where \(\mathrm{Lap}\left(b\right)\) denotes Laplace noise scaled to have mean \(0\) and variance \(2b^{2}\). This forms the basis of our algorithm. Two challenges remain: setting the sensitivity parameter \(\ell\) and computing \(\mathrm{DC}(D;\ell)\) efficiently.
To obtain a high-confidence lower bound on the true distinct count, we must compensate for the Laplace noise, which may inflate the reported value. We can obtain such a lower bound from \(\mathcal{M}_{\ell}(D)\) using the cumulative distribution function (CDF) of the Laplace distribution: That is, \(\forall b>0\;\forall\beta\in(0,1/2]\;\,\mathbb{P}\left[\mathrm{Lap}\left(b \right)\geq b\cdot\log\left(\frac{1}{2\beta}\right)\right]=\beta\), so
\[\mathbb{P}\left[\underbrace{\mathcal{M}_{\ell,\varepsilon}(D)-\frac{\ell}{ \varepsilon}\cdot\log\left(\frac{1}{2\beta}\right)}_{\text{lower bound}}\leq \mathrm{DC}(D)\right]\geq\underbrace{1-\beta}_{\text{confidence}}. \tag{4}\]
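As a sketch of how (3) and (4) combine in practice (our own illustration, not reference code from the paper; `dc_ell` is assumed to hold the precomputed value \(\mathrm{DC}(D;\ell)\)):

```
import numpy as np

def release(dc_ell, ell, eps, beta, rng=np.random.default_rng()):
    """Return the eps-DP estimate (3) and the (1 - beta)-confidence
    lower bound on DC(D) from (4)."""
    noisy = dc_ell + rng.laplace(scale=ell / eps)   # Lap(ell / eps) noise
    lower = noisy - (ell / eps) * np.log(1.0 / (2.0 * beta))
    return noisy, lower
```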
Choosing the sensitivity parameter \(\ell\).Any choice of \(\ell\geq 1\) gives us a lower bound: \(\mathrm{DC}(D;\ell)\leq\mathrm{DC}(D)\). Since \(\forall D\;\,\lim_{\ell\to\infty}\mathrm{DC}(D;\ell)=\mathrm{DC}(D)\), this lower bound can be arbitrarily tight. However, the larger \(\ell\) is, the larger the sensitivity of \(\mathrm{DC}(D;\ell)\) is. That is, the noise we add scales linearly with \(\ell\).
Thus there is a bias-variance tradeoff in the choice of \(\ell\). To make this precise, suppose we want a lower bound on \(\mathrm{DC}(D)\) with confidence \(1-\beta\in[\frac{1}{2},1)\), as in Equation (4). To obtain the tightest possible lower bound with confidence \(1-\beta\), we want \(\ell\) to maximize the expectation
\[q(D;\ell):=\mathrm{DC}(D;\ell)-\frac{\ell}{\varepsilon}\cdot\log\left(\frac{1} {2\beta}\right)=\operatorname*{\mathbb{E}}_{\mathcal{M}_{\ell,\varepsilon}} \left[\mathcal{M}_{\ell,\varepsilon}(D)-\frac{\ell}{\varepsilon}\cdot\log \left(\frac{1}{2\beta}\right)\right]\!. \tag{5}\]
We can use the exponential mechanism [10] to privately select \(\ell\) that approximately maximizes \(q(D;\ell)\). However, directly applying the exponential mechanism is problematic because each score has a different sensitivity - the sensitivity of \(q(\cdot;\ell)\) is \(\ell\). Instead, we apply the Generalized Exponential Mechanism (GEM) of Raskhodnikova and Smith [13] (see Algorithm 3). Note that we assume some a priori maximum value of \(\ell\) is supplied to the algorithm; this is \(\ell_{\max}\).
Our main algorithm attains the following guarantees.
**Theorem 1.1** (Theoretical Guarantees of Our Algorithm).: _Let \(\varepsilon>0\) and \(\beta\in(0,\frac{1}{2})\) and \(\ell_{\max}\in\mathbb{N}\). Define \(\mathcal{M}:(\Omega^{*})^{*}\to\mathbb{N}\times\mathbb{R}\) to be \(\mathcal{M}(D)=\textsc{DPDistinctCount}(D;\ell_{\max},\varepsilon,\beta)\) from Algorithm 1. Then \(\mathcal{M}\) satisfies all of the following properties._
* _Privacy:_ \(\mathcal{M}\) _is_ \(\varepsilon\)_-differentially private._
* _Lower bound:_ _For all_ \(D\in(\Omega^{*})^{n}\)_,_ \[\underset{(\ell,\hat{\nu})\leftarrow\mathcal{M}(D)}{\mathbb{P}} \left[\hat{\nu}\leq\mathrm{DC}(D)\right]\geq 1-\beta.\] (6)
* _Upper bound:_ _For all_ \(D\in(\Omega^{*})^{n}\)_,_ \[\underset{(\ell,\hat{\nu})\leftarrow\mathcal{M}(D)}{\mathbb{P}}\left[\hat{ \nu}\geq\max_{\ell\in[\ell_{\max}]}\mathrm{DC}(D;\ell)-\frac{10\ell+18\ell_{A }^{*}}{\varepsilon}\log\left(\frac{\ell_{\max}}{\beta}\right)\right]\geq 1-2\beta,\] (7) _where_ \(\ell_{A}^{*}=\arg\max_{\ell\in[\ell_{\max}]}\mathrm{DC}(D;\ell)-\frac{\ell}{ \varepsilon}\log\left(\frac{1}{2\beta}\right)\)_._
* _Computational efficiency:_ \(\mathcal{M}(D)\) _has running time_ \(O\left(|D|^{1.5}\cdot\ell_{\max}^{2}\right)\)_, where_ \(|D|:=\sum_{i}|u_{i}|\)_._
The upper bound guarantee (7) is somewhat difficult to interpret. However, if the number of items per person is bounded by \(\ell_{*}\), then we can offer a clean guarantee: If \(D=(u_{1},\cdots,u_{n})\in(\Omega^{*})^{n}\) satisfies \(\max_{i\in[n]}|u_{i}|\leq\ell_{*}\leq\ell_{\max}\), then combining the upper and lower bounds of Theorem 1.1 gives
\[\underset{(\hat{\ell},\hat{\nu})\leftarrow\mathcal{M}(D)}{\mathbb{P}}\left[ \mathrm{DC}(D)\geq\hat{\nu}\geq\mathrm{DC}(D)-\frac{28\ell_{*}}{\varepsilon} \log\left(\frac{\ell_{\max}}{\beta}\right)\right]\geq 1-3\beta. \tag{8}\]
Note that \(\ell_{*}\) is not assumed to be known to the algorithm, but the accuracy guarantee is able to adapt. We only assume \(\ell_{*}\leq\ell_{\max}\), where \(\ell_{\max}\) is the maximal sensitivity considered by the algorithm.
In addition to proving the above theoretical guarantees, we perform an experimental evaluation of our algorithm.
```
1:procedureSensitiveDistinctCount(\(D\!=\!(u_{1},\cdots,u_{n})\!\in\!(\Omega^{*})^{n}\); \(\ell\!\in\!\mathbb{N}\)) \(\triangleright\mathrm{DC}(D;\ell)\)
2: Let \(U_{\ell}=\bigcup_{i\in[n]}\left(\{i\}\times[\min\{\ell,|u_{i}|\}]\right)\subset[ n]\times[\ell]\).
3: Let \(V=\bigcup_{i\in[n]}u_{i}\subseteq\Omega\).
4: Define \(E_{\ell}\subseteq U_{\ell}\times V\) by \(((i,j),v)\in E_{\ell}\iff v\in u_{i}\).
5: Let \(G_{\ell}\) be a bipartite graph with vertices partitioned into \(U_{\ell}\) and \(V\) and edges \(E_{\ell}\).
6:\(m_{\ell}\leftarrow\textsc{MaximumMatchingSize}(G_{\ell})\). \(\triangleright\)[14, 15]
7:return\(m_{\ell}\in\mathbb{N}\)
8:endprocedure
9:procedureDPDistinctCount(\(D\!=\!(u_{1},\cdots,u_{n})\!\in\!(\Omega^{*})^{n}\); \(\ell_{\max}\!\in\!\mathbb{N}\), \(\varepsilon\!>\!0\), \(\beta\!\in\!(0,\frac{1}{2})\))
10:for\(\ell\in[\ell_{\max}]\)do
11: Define \(q_{\ell}(D):=\textsc{SensitiveDistinctCount}(D;\ell)-\frac{2\ell}{ \varepsilon}\cdot\log\left(\frac{1}{2\beta}\right)\).
12:endfor
13:\(\hat{\ell}\leftarrow\textsc{GEM}(D;\{q_{\ell}\}_{\ell\in[\ell_{\max}]},\{ \ell\}_{\ell\in[\ell_{\max}]},\varepsilon/2,\beta)\). \(\triangleright\) Algorithm 3
14:\(\hat{\nu}\gets q_{\ell}(D)+\mathrm{Lap}\left(2\hat{\ell}/\varepsilon\right)\).
15:return\((\hat{\ell},\hat{\nu})\in[\ell_{\max}]\times\mathbb{R}\).
16:endprocedure
```
**Algorithm 1** Distinct Count Algorithm
Efficient computation.The main computational task for our algorithm is to compute \(\mathrm{DC}(D;\ell)\). By definition (2), this is an optimization problem. For each person \(i\in[n]\), we must select a subset \(v_{i}\) of that person's data \(u_{i}\) of size at most \(\ell\) so as to maximize the size of the union of the subsets \(\left|\bigcup_{i\in[n]}v_{i}\right|\).
We can view the dataset \(D=(u_{1},\cdots,u_{n})\in(\Omega^{*})^{n}\) as a bipartite graph. On one side we have the \(n\) people and on the other side we have the elements of the data universe \(\Omega\).3 There is an edge between \(i\in[n]\) and \(x\in\Omega\) if and only if \(x\in u_{i}\).
Footnote 3: The data universe \(\Omega\) may be infinite, but we can restrict the computation to the finite set \(\bigcup_{i\in[n]}u_{i}\). Thus there are at most \(n+\mathrm{DC}(D)\leq n+|D|\) item vertices in the graph.
We can reduce computing \(\mathrm{DC}(D;\ell)\) to a max-flow problem: Each edge in the bipartite graph has capacity one. We add a source vertex \(s\) which is connected to each person \(i\in[n]\) by an edge with capacity \(\ell\). Finally we add a sink \(t\) that is connected to each \(x\in\Omega\) by an edge with capacity \(1\). The max flow through this graph is precisely \(\mathrm{DC}(D;\ell)\).
Alternatively, we can reduce computing \(\mathrm{DC}(D;\ell)\) to bipartite maximum matching. For \(\ell=1\), \(\mathrm{DC}(D;1)\) is exactly the maximum cardinality of a matching in the bipartite graph described above. For \(\ell\geq 2\), we simply create \(\ell\) copies of each person vertex \(i\in[n]\) and then \(\mathrm{DC}(D;\ell)\) is the maximum cardinality of a matching in this new bipartite graph.4
Footnote 4: We need only create \(\min\{\ell,|u_{i}|\}\) copies of the person \(i\in[n]\). Thus the number of person vertices is at most \(\min\{n\ell,|D|\}\).
Using this reduction, standard algorithms for bipartite maximum matching [13, 14] allow us to compute \(\mathrm{DC}(D;\ell)\) with \(O(|D|^{1.5}\cdot\ell)\) operations. We must repeat this computation for each \(\ell\in[\ell_{\max}]\).
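To make the reduction concrete, the following Python sketch computes \(\mathrm{DC}(D;\ell)\) via augmenting paths (Kuhn's algorithm). The function name and the representation of \(D\) as a list of sets are our own illustrative choices; this simple variant runs in roughly \(O(|D|^{2})\) rather than the \(O(|D|^{1.5}\cdot\ell)\) of Hopcroft-Karp, but it realizes the same matching reduction.
```python
def distinct_count_bounded(D, ell):
    """DC(D; ell): max distinct items coverable when each person
    contributes at most ell items, via bipartite maximum matching."""
    # Left vertices: (person i, slot j) for j < min(ell, |u_i|).
    slots = [(i, j) for i, u in enumerate(D) for j in range(min(ell, len(u)))]
    match = {}  # item -> slot currently matched to it

    def augment(slot, seen):
        i, _ = slot
        for v in D[i]:
            if v in seen:
                continue
            seen.add(v)
            # take v if it is free, or reroute the slot currently holding it
            if v not in match or augment(match[v], seen):
                match[v] = slot
                return True
        return False

    for s in slots:
        augment(s, set())
    return len(match)  # matching size = DC(D; ell)

# e.g. distinct_count_bounded([{"a", "b"}, {"b"}], ell=1) == 2
```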
```
1:procedureDPApproxDistinctCount(\(D\)=\((u_{1},\cdots,u_{n})\)\(\in(\Omega^{*})^{n}\); \(\ell_{\max}\)\(\in\)\(\mathbb{N}\), \(\varepsilon\)\(>\)\(0\), \(\beta\)\(\in\)\((0,\frac{1}{2})\))
2:\(S\leftarrow\emptyset\).
3:for\(\ell\in[\ell_{\max}]\)do
4:for\(i\in[n]\) with \(u_{i}\setminus S\neq\emptyset\)do
5: Choose lexicographically first \(v\in u_{i}\setminus S\). \(\triangleright\) Match \((i,\ell)\) to \(v\).
6: Update \(S\gets S\cup\{v\}\).
7:endfor
8: Define \(q_{\ell}(D):=|S|-\frac{2\ell}{\varepsilon}\cdot\log\left(\frac{1}{2\beta} \right)\). \(\triangleright\) This loop computes \(\{q_{\ell}(D)\}_{\ell\in[\ell_{\max}]}\).
9:endfor
10:\(\hat{\ell}\leftarrow\) GEM\((D;\{q_{\ell}\}_{\ell\in[\ell_{\max}]},\{\ell\}_{\ell\in[\ell_{\max}]},\varepsilon/2,\beta)\). \(\triangleright\) Algorithm 3
11:\(\hat{\nu}\gets q_{\hat{\ell}}(D)+\mathrm{Lap}\left(2\hat{\ell}/\varepsilon\right)\).
12:return\((\hat{\ell},\hat{\nu})\in[\ell_{\max}]\times\mathbb{R}\).
13:endprocedure
```
**Algorithm 2** Linear-Time Approximate Distinct Count Algorithm
Linear-time algorithm.Our algorithm above is polynomial-time. However, for many applications the dataset size \(|D|\) is enormous. Thus we also propose a linear-time variant of our algorithm. However, we must trade accuracy for efficiency.
There are two key ideas that differentiate our linear-time algorithm (Algorithm 2) from our first algorithm (Algorithm 1) above. First, we compute a maximal bipartite matching instead of a maximum bipartite matching.5 This can be done using a linear-time greedy algorithm and gives a 2-approximation to the maximum matching. (Experimentally we find that the approximation is better than a factor of 2.) Second, rather than repeating the computation from scratch for each \(\ell\in[\ell_{\max}]\), we incrementally update our maximal matching while increasing \(\ell\). The main challenge here is ensuring that the approximation to \(\mathrm{DC}(D;\ell)\) has low sensitivity; that is, we must ensure that our approximation algorithm does not inflate the sensitivity. Note that \(\mathrm{DC}(D;\ell)\) having low sensitivity does not automatically ensure that an approximation to it has low sensitivity.
Footnote 5: To clarify the confusing terminology: A matching is a subset of edges such that no two edges have a vertex in common. A maximum matching is a matching of the largest possible size. A maximal matching is a matching such that no edge could be added to the matching without violating the matching property. A maximum matching is also a maximal matching, but the reverse is not true.
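To illustrate the incremental greedy idea, here is a minimal Python sketch that produces the approximate count for every \(\ell\in[\ell_{\max}]\) in a single pass; keeping a persistent per-person iterator means each item is examined at most once, which is what makes the overall running time linear in \(|D|\) (names and data layout are ours, not the paper's).
```python
def greedy_counts(D, ell_max):
    """Approximate DC(D; ell) for ell = 1..ell_max via a greedy
    maximal matching, updated incrementally as ell grows."""
    streams = [iter(sorted(u)) for u in D]  # lexicographic per-person streams
    S = set()       # items matched so far
    counts = []
    for ell in range(1, ell_max + 1):
        for stream in streams:
            # give this person one more slot: match it to the first
            # not-yet-matched item (skipped items are already in S and
            # stay there, so advancing the stream past them is safe)
            for v in stream:
                if v not in S:
                    S.add(v)
                    break
        counts.append(len(S))
    return counts
```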
**Theorem 1.2** (Theoretical Guarantees of Our Linear-Time Algorithm).: _Let \(\varepsilon>0\), \(\beta\in(0,\frac{1}{2})\), and \(\ell_{\max}\in\mathbb{N}\). Define \(\widehat{\mathcal{M}}:(\Omega^{*})^{*}\rightarrow\mathbb{N}\times\mathbb{R}\) to be \(\widehat{\mathcal{M}}(D)=\)DPApproxDistinctCount\((D;\ell_{\max},\varepsilon,\beta)\) from Algorithm 2. Then \(\widehat{\mathcal{M}}\) satisfies all of the following properties._
* _Privacy:_\(\widehat{\mathcal{M}}\) _is_ \(\varepsilon\)_-differentially private._
* _Lower bound:_ _For all_ \(D\in(\Omega^{*})^{n}\)_,_ \[\mathop{\mathbb{P}}_{(\hat{\ell},\hat{\nu})\leftarrow\widehat{\mathcal{M}}(D)}\left[\hat{\nu}\leq\mathrm{DC}(D)\right]\geq 1-\beta.\] (9)
* _Upper bound:_ _If_ \(D=(u_{1},\cdots,u_{n})\in(\Omega^{*})^{n}\) _satisfies_ \(\max_{i\in[n]}|u_{i}|\leq\ell_{*}\leq\ell_{\max}\)_, then_ \[\mathop{\mathbb{P}}_{(\hat{\ell},\hat{\nu})\leftarrow\widehat{\mathcal{M}}(D)}\left[\hat{\nu}\geq\frac{1}{2}\mathrm{DC}(D)-\frac{28\ell_{*}}{\varepsilon}\log\left(\frac{\ell_{\max}}{\beta}\right)\right]\geq 1-2\beta.\] (10)
* _Computational efficiency:_\(\widehat{\mathcal{M}}(D)\) _has running time_ \(O\left(|D|+\ell_{\max}\log\ell_{\max}\right)\)_, where_ \(|D|:=\sum_{i}|u_{i}|\)_._
The factor \(\frac{1}{2}\) in the upper bound guarantee (10) is the main loss compared to Theorem 1.1. (The win is \(O(|D|)\) runtime.) This is a worst-case bound, and our experimental results show that for realistic data the performance gap is much smaller.
The proofs of Theorems 1.1 and 1.2 are in Appendix A.
## 2 Related Work
Counting the number of distinct elements in a collection is one of the most fundamental database computations. This is supported as the COUNT(DISTINCT...) operation in SQL. Hence, unsurprisingly, the problem of computing the number of unique elements in a differentially private way has been extensively investigated.
In the case where we assume each person contributes only one element (a.k.a. event-level privacy or item-level privacy), the number of distinct elements has sensitivity \(1\) and, hence, we can simply release it using Laplace (or Gaussian) noise addition. However, it may not be possible to compute the number of distinct elements exactly due to space, communication, or trust constraints (e.g., in the local model of DP [11]).
Most efforts have been focused on creating differentially private algorithms for counting distinct elements under space constraints (and assuming each person contributes a single element). To save space, we wish to compute a small summary of the dataset (called a sketch) that allows us to estimate the number of distinct elements and which can be updated as more elements are added. Smith, Song, and Thakurta [20] proved that a variant of the Flajolet-Martin sketch is private and Pagh and Stausholm [21] analyzed a sketch over the binary finite field. Dickens, Thaler, and Ting [23] proved a general privacy result for order-invariant cardinality estimators. Hehir, Ting, and Cormode [14] provided a mergeable private sketch (i.e. two sketches can be combined to obtain a sketch of the union of the two datasets). In contrast, Desfontaines, Lochbihler, and Basin [1] proved an impossibility result for mergeable sketches, which shows that privacy or accuracy must degrade as we merge sketches.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline \multirow{2}{*}{Data Set} & \multirow{2}{*}{Vocabulary Size} & \multicolumn{3}{c}{Estimated Vocabulary Size} \\ \cline{3-5} & & 10th Percentile & Median & 90th Percentile \\ \hline Amazon Fashion & 1450 & 1220.6 & 1319.1 & 1394.2 \\ Amazon Industrial and Scientific & 36665 & 35970.5 & 36198.9 & 36326.7 \\ Reddit & 102835 & 102379.7 & 102512.6 & 102643.9 \\ IMDB & 98726 & 98555.6 & 98670.4 & 98726.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: True and estimated (using \(\mathrm{DPDistinctCount}\) with \(\varepsilon=1\), \(\beta=0.05\) and \(\ell_{\max}=100\)) counts per data set.
Counting unique elements has been considered in the pan-private streaming setting [10] (the aforementioned algorithms also work in the pan-private setting) and in the continual release streaming setting [14]. (In the continual release setting the approximate count is continually updated, while in the pan-private setting the approximate count is only revealed once, but at an unknown point in time.) Kreuter, Wright, Skvortsov, Mirisola, and Wang [13] give private algorithms for counting distinct elements in the setting of secure multiparty computation. In the local and shuffle models, the only known results are communication complexity bounds [15].
A closely related problem is that of identifying as many elements as possible (rather than just counting them); this is known as "partition selection," "set union," or "key selection" [16, 17, 18, 19, 20, 21, 22, 23]. Note that, by design, DP prevents us from identifying elements that only appear once in the dataset, or only a few times. Thus we can only output items that appear frequently.
The most closely related work to ours is that of Dong, Fang, Yi, Tao, and Machanavajjhala [11] and Fang, Dong, and Yi [11]. These papers present two different algorithms for privately approximating the distinct count (and other statistics). We discuss these below and present an experimental comparison in Table 2. We also remark that both papers prove instance optimality guarantees for their algorithms.
Figure 1: Performance of different algorithms estimating distinct count assuming that each person can contribute at most \(\ell\) elements (e.g., these algorithms are estimating \(\mathrm{DC}(D;\ell)\)). (These algorithms have bounded sensitivity, but we do not add noise for privacy yet.)
Figure 2: Performance of different algorithms estimating the distinct count in a differentially private way for different values of \(\varepsilon\); for all of them, \(\beta=0.05\) and \(\ell_{\max}=100\). The values between the 10th and 90th percentiles of each algorithm's estimates are shaded in the corresponding colors. For the shifted inverse algorithm, the first two plots contain the results for \(\beta=0.05\) and \(D\) equal to the true number of distinct elements in the dataset. The latter two datasets lack results for the shifted inverse algorithm due to computational constraints.
Most similar to our algorithm is the Race-to-the-Top (R2T) algorithm [5]; R2T is a generic framework and the original paper did not specifically consider counting distinct elements, but the approach can easily be applied to \(\mathrm{DC}(D;\ell)\). While we use the generalized exponential mechanism [11] to select the sensitivity \(\ell\), R2T computes multiple lower bounds with different sensitivities \(\ell\) and then outputs the maximum of the noisy values. This approach incurs the cost of composition across the multiple evaluations. To manage this cost, R2T only evaluates \(\ell=2,4,8,\cdots,2^{\log\ell_{\max}}\). Compared to our guarantee (8) with an error \(O\left(\frac{\ell_{*}}{\varepsilon}\log\left(\frac{\ell_{\max}}{\beta}\right)\right)\), R2T has a slightly worse theoretical error guarantee of \(O\left(\frac{\ell_{*}}{\varepsilon}\log(\ell_{\max})\log\left(\frac{\log\ell_ {\max}}{\beta}\right)\right)\)[5, Theorem 5.1].
The shifted inverse mechanism [5] takes a different approach to the problem. Rather than relying on adding Laplace noise (as we do), it applies the exponential mechanism with an ingenious loss function (see [10] for additional discussion). When applied to counting distinct elements, the shifted inverse mechanism gives an accuracy guarantee comparable to ours (8). The downside of the shifted inverse mechanism is that computing the loss function is, in general, NP-hard. Fang, Dong, and Yi [5] propose polynomial-time variants for several specific tasks, including counting distinct elements. However, the algorithm is still relatively slow.
## 3 Technical Background on Differential Privacy
For detailed background on differential privacy, see the survey by Vadhan [20] or the book by Dwork and Roth [14]. We briefly define pure DP and some basic mechanisms and results.
```
1:procedureGEM(\(D\in\mathcal{X}^{*}\); \(q_{i}\colon\mathcal{X}^{*}\to\mathbb{R}\) for \(i\in[m]\), \(\Delta_{i}>0\) for \(i\in[m]\), \(\varepsilon>0\), \(\beta>0\))
2:Require:\(q_{i}\) has sensitivity \(\sup_{x,x^{\prime}\in\mathcal{X}^{*}\,\text{neighboring}}|q_{i}(x)-q_{i}(x^{\prime})|\leq\Delta_{i}\) for all \(i\in[m]\).
3: Let \(t=\frac{2}{\varepsilon}\log\left(\frac{m}{\beta}\right)\).
4:for\(i\in[m]\)do
5:\(s_{i}\leftarrow\min_{j\in[m]}\frac{(q_{i}(D)-t\Delta_{i})-(q_{j}(D)-t\Delta_ {j})}{\Delta_{i}+\Delta_{j}}\).
6:endfor
7: Sample \(\hat{i}\in[m]\) from the Exponential Mechanism using the normalized scores \(s_{i}\); i.e., \[\forall i\in[m]\qquad\mathbb{P}\left[\hat{i}=i\right]=\frac{\exp\left(\frac{ 1}{2}\varepsilon s_{i}\right)}{\sum_{k\in[m]}\exp\left(\frac{1}{2} \varepsilon s_{k}\right)}.\]
8:return\(\hat{i}\in[m]\).
9:endprocedure
```
**Algorithm 3** Generalized Exponential Mechanism [11]
**Definition 3.1** (Differential Privacy (DP) [15]).: _A randomized algorithm \(M:\mathcal{X}^{*}\to\mathcal{Y}\) satisfies \(\varepsilon\)-DP if, for all inputs \(D,D^{\prime}\in\mathcal{X}^{*}\) differing only by the addition or removal of an element and for all measurable \(S\subset\mathcal{Y}\), we have \(\mathbb{P}\left[M(D)\in S\right]\leq e^{\varepsilon}\cdot\mathbb{P}\left[M(D^{\prime})\in S\right]\)._
We refer to pairs of inputs that differ only by the addition or removal of one person's data as _neighboring_. Note that it is common to also consider replacement of one person's data; for simplicity, we do not do this. We remark that there are also variants of DP such as approximate DP [10] and concentrated DP [16; 1], which quantitatively relax the definition, but these are not relevant in our application. A key property of DP is that it composes and is invariant under postprocessing.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{User} & \multicolumn{2}{c}{Supplier} & \multicolumn{2}{c}{Customer} \\ \cline{2-5} & PS.AQ & L.EP & O.OD & L.RD \\ \hline R2T [5] & 0.0658 & 0.1759 & 0.0061 & 0.150 \\ (Approx)ShiftedInverse [5] & 0.0553 & 0.0584 & 0.005 & 0.0061 \\ \(\mathrm{DPApproxDistinctCount}\) & 0.0140 & 0.0110 & 0.0008 & 0.0037 \\ \(\mathrm{DPDistinctCount}\) & 0.0100 & 0.0096 & 0.0008 & 0.0001 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average relative absolute error of the algorithms described in this paper and in [5] on the TPC-H dataset. We executed each algorithm 100 times, removed the 20 highest and 20 lowest values, and computed the average error of the remaining 60 values.
**Lemma 3.2** (Composition & Postprocessing).: _Let \(M_{1}:\mathcal{X}^{*}\to\mathcal{Y}\) be \(\varepsilon_{1}\)-DP. Let \(M_{2}:\mathcal{X}^{*}\times\mathcal{Y}\to\mathcal{Z}\) be such that, for all \(y\in\mathcal{Y}\), the restriction \(M_{2}(\cdot,y):\mathcal{X}^{*}\to\mathcal{Z}\) is \(\varepsilon_{2}\)-DP. Define \(M_{12}:\mathcal{X}^{*}\to\mathcal{Z}\) by \(M_{12}(D)=M_{2}(D,M_{1}(D))\). Then \(M_{12}\) is \((\varepsilon_{1}+\varepsilon_{2})\)-DP._
A basic DP tool is the Laplace mechanism [10]. Note that we could also use the _discrete_ Laplace mechanism [10; 11].
**Lemma 3.3** (Laplace Mechanism).: _Let \(q:\mathcal{X}^{*}\to\mathbb{R}\). We say \(q\) has sensitivity \(\Delta\) if \(|q(D)-q(D^{\prime})|\leq\Delta\) for all neighboring \(D,D^{\prime}\in\mathcal{X}^{*}\). Define \(M:\mathcal{X}^{*}\to\mathbb{R}\) by \(M(D)=q(D)+\operatorname{Lap}\left(\Delta/\varepsilon\right)\), where \(\operatorname{Lap}\left(b\right)\) denotes Laplace noise with mean \(0\) and variance \(2b^{2}\); i.e., \(\underset{\xi\leftarrow\operatorname{Lap}\left(b\right)}{\mathbb{P}}\left[\xi>t\right]=\underset{\xi\leftarrow\operatorname{Lap}\left(b\right)}{\mathbb{P}}\left[\xi<-t\right]=\frac{1}{2}\exp\left(-\frac{t}{b}\right)\) for all \(t>0\). Then \(M\) is \(\varepsilon\)-DP._
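In code, the mechanism is essentially a one-liner (numpy-based sketch; the naming is ours):
```python
import numpy as np

def laplace_mechanism(value, sensitivity, eps, rng=None):
    """Release value + Lap(sensitivity/eps), as in Lemma 3.3."""
    rng = rng or np.random.default_rng()
    return value + rng.laplace(loc=0.0, scale=sensitivity / eps)
```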
Another fundamental tool for DP is the exponential mechanism [14]. It selects the approximately best option from among a set of options, where each option \(i\) has a quality function \(q_{i}\) with sensitivity \(\Delta\). The following result generalizes the exponential mechanism by allowing each of the quality functions to have a different sensitivity.
**Theorem 3.4** (Generalized Exponential Mechanism [14, Theorem 1.4]).: _For each \(i\in[m]\), let \(q_{i}:\mathcal{X}^{*}\to\mathbb{R}\) be a query with sensitivity \(\Delta_{i}\). Let \(\varepsilon,\beta>0\). The generalized exponential mechanism (\(\text{GEM}(\cdot;\{q_{i}\}_{i\in[m]},\{\Delta_{i}\}_{i\in[m]},\varepsilon,\beta)\) in Algorithm 3) is \(\varepsilon\)-DP and has the following utility guarantee. For all \(D\in\mathcal{X}^{*}\), we have_
\[\underset{\hat{i}\leftarrow\text{GEM}\left(D;\{q_{i}\}_{i\in[m]},\{\Delta_{i}\}_{i\in[m]},\varepsilon,\beta\right)}{\mathbb{P}}\left[q_{\hat{i}}(D)\geq\max_{j\in[m]}\left(q_{j}(D)-\Delta_{j}\cdot\frac{4}{\varepsilon}\log\left(\frac{m}{\beta}\right)\right)\right]\geq 1-\beta.\]
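For concreteness, here is a compact numpy sketch of Algorithm 3, vectorized over the pairwise score matrix; the function and variable names are ours, not from the original paper.
```python
import numpy as np

def gem(q_values, sensitivities, eps, beta, rng=None):
    """Generalized Exponential Mechanism: select an index in [m]."""
    rng = rng or np.random.default_rng()
    q = np.asarray(q_values, dtype=float)
    d = np.asarray(sensitivities, dtype=float)
    m = len(q)
    t = (2.0 / eps) * np.log(m / beta)
    adj = q - t * d  # sensitivity-penalized scores
    # normalized scores: s_i = min_j (adj_i - adj_j) / (Delta_i + Delta_j)
    s = np.min((adj[:, None] - adj[None, :]) / (d[:, None] + d[None, :]), axis=1)
    w = np.exp(0.5 * eps * (s - s.max()))  # numerically stabilized weights
    return rng.choice(m, p=w / w.sum())
```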
## 4 Experimental Results
We empirically validate the performance of our algorithms using data sets of various sizes from different text domains. We focus on the problem of computing vocabulary size with person-level DP. Section 4.1 describes the data sets and Section 4.2 discusses the algorithms we compare.
### Datasets
We used four publicly available data sets to assess the accuracy of our algorithms compared to baselines. Two small data sets were used: Amazon Fashion 5-core [14] (reviews of fashion products on Amazon) and Amazon Industrial and Scientific 5-core [14] (reviews of industrial and scientific products on Amazon). Two large data sets were also used: Reddit [11] (a data set of posts collected from r/AskReddit) and IMDb [13; 12] (a set of movie reviews scraped from IMDb). See details of the data sets in Table 3.
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline \multirow{2}{*}{Data Set} & \multicolumn{2}{c}{Size} & \multicolumn{3}{c}{Words per Person} & \multirow{2}{*}{Vocabulary Size} \\ \cline{2-3} \cline{4-6} & People & Records & Min & Median & Max \\ \hline Amazon Fashion & 404 & 8533 & 1 & 14.0 & 139 & 1450 \\ Amazon Industrial and Scientific & 11041 & 1446031 & 0 & 86 & 2059 & 36665 \\ Reddit & 223388 & 7117494 & 0 & 18.0 & 1724 & 102835 \\ IMDB & 50000 & 6688844 & 5 & 110.0 & 925 & 98726 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Data sets details.
### Comparisons
Computing the number of distinct elements using a differentially private mechanism involves two steps: selecting a contribution bound (\(\ell\) in our algorithms) and counting the number of distinct elements in a way that restricts each person to only contribute the given number of elements.
Selection:We examine four algorithms for determining the contribution limit:
1. Choosing the true maximum person contribution (due to computational restrictions, this was only computed for the Amazon Fashion data set).
2. Choosing the 90th percentile of person contributions.
3. Choosing the person contribution that exactly maximizes the utility function \(q_{\ell}(D)=\mathrm{DC}(D;\ell)-\frac{\ell}{\varepsilon}\log(\frac{1}{2\beta})\), where \(\varepsilon=1\), and \(\beta=0.001\).
4. Choosing the person contribution that approximately maximizes the utility function using the generalized exponential mechanism with \(\varepsilon=1\).
Note that only the last option is differentially private, but we consider the other comparison points nonetheless.
Counting:We also consider three algorithms for estimating the number of distinct elements for a given sensitivity bound \(\ell\):
1. For each person, we uniformly sample \(\ell\) elements without replacement and count the number of distinct elements in the union of the samples.
2. The linear-time greedy algorithm (Algorithm 2) with \(\varepsilon=1\) and \(\beta=0.001\).
3. The matching-based algorithm (Algorithm 1) with \(\varepsilon=1\) and \(\beta=0.001\).
All of these can be converted into DP algorithms by adding Laplace noise to the result.
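As a reference point, here is a sketch of the sampling baseline (item 1 in the counting list above); a DP version then follows by adding Laplace noise calibrated to the bound \(\ell\). The function name and the choice of a fixed seed are ours, for illustration only.
```python
import random

def sampled_distinct_count(D, ell, seed=0):
    """Bound each person's contribution by sampling at most ell items
    uniformly without replacement, then count the distinct union."""
    rng = random.Random(seed)
    union = set()
    for u in D:
        items = list(u)
        union.update(rng.sample(items, min(ell, len(items))))
    return len(union)
```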
In all our datasets, the "true maximum person contribution" and "90th percentile of person contributions" rules output bounds that are much larger than necessary to obtain the true distinct count; hence, we only consider DP versions of the estimation algorithms for these selection algorithms.
### Results
Figure 1 shows the dependence of the result on the contribution bound for each of the algorithms that compute the number of distinct elements with a fixed person contribution. It is clear that the matching and greedy algorithms vastly outperform the sampling approach that is currently used in practice.
Tables 4 to 7 show the performance of algorithms for selecting optimal person contribution bounds on different data sets. For all bound selection algorithms and all data sets, the sampling approach to estimating the distinct count performs much worse than the greedy and matching-based approaches. The greedy approach performs worse than the matching-based approach, but the difference is about 10% for Amazon Fashion and is almost negligible for other data sets since they are much larger. As for the matching-based algorithm, it performs as follows on all the data sets:
1. The algorithm that uses the bound equal to the maximal person contribution overestimates the actually necessary bound. Therefore, we only consider the DP algorithms for count estimation. It is easy to see that while the median of the estimate is close to the actual distinct count, the amount of noise is somewhat large.
2. The algorithm that uses the bound equal to the 99th percentile of person contributions also overestimates the necessary bound and behaves similarly to the one just described (though the spread of the noise is a bit smaller).
3. Finally, we consider the algorithms that optimize the utility function: one non-private and one private. The non-private algorithm with non-private estimation gives an answer that is very close to the true number of distinct elements. The private algorithm with non-private estimation gives an answer that is somewhat worse, but not by much. Finally, the private algorithm with private estimation gives answers very similar to the results of the non-private estimation.
## Acknowledgments and Disclosure of Funding
We would like to thank Badih Ghazi, Andreas Terzis, and four anonymous reviewers for their constructive feedback and valuable suggestions. We thank Markus Hasenohrl for helpful discussions, which helped us identify the problem. In addition, we are grateful to Ke Yi, Wei Dong, and Juanru Fang for bringing their related work [5, 6] to our attention.
|
2308.03692 | Collective ion dynamics in Coulomb one-component plasmas within the
self-consistent relaxation theory | In this paper, we present the theoretical formalism describing the collective
ion dynamics of the nonideal Coulomb classical one-component plasmas on the
basis of the self-consistent relaxation theory. The theory is adapted to
account for correlations between the frequency relaxation parameters that
characterize the three- and four-particle dynamics and the parameters
associated with the two-particle dynamics. The dynamic structure factor spectra
and dispersion characteristics calculated for a wide range of wave numbers are
in agreement with the molecular dynamics simulation data and the results
obtained with the theory of the frequency moments. The proposed formalism
reproduces all the features inherent to the Coulomb one-component plasmas and
requires only knowledge of the coupling parameter and the information about the
structure. | Ilnaz I. Fairushin, Anatolii V. Mokshin | 2023-08-07T16:10:55Z | http://arxiv.org/abs/2308.03692v1 | # Collective ion dynamics in Coulomb one-component plasmas
###### Abstract
In this paper, we present the theoretical formalism describing the collective ion dynamics of the nonideal Coulomb classical one-component plasmas on the basis of the self-consistent relaxation theory. The theory is adapted to account for correlations between the frequency relaxation parameters that characterize the three- and four-particle dynamics and the parameters associated with the two-particle dynamics. The dynamic structure factor spectra and dispersion characteristics calculated for a wide range of wave numbers are in agreement with the molecular dynamics simulation data and the results obtained with the theory of the frequency moments. The proposed formalism reproduces all the features inherent to the Coulomb one-component plasmas and requires only knowledge of the coupling parameter and the information about the structure.
## I Introduction
The Coulomb one-component plasma (COCP) is a specific system of identical charged point particles in a uniform neutralizing background [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. Together with the hard-sphere model, the COCP occupies an important place in the theory of simple liquids and plays a key role in the physics of extreme states of matter [8; 9]. In nature, the COCP is realized in such objects as neutron star crusts and the interiors of white dwarfs and giant planets [8; 9; 10; 11; 12]. The point particles of the COCP interact with each other through the Coulomb potential
\[u(r)=\frac{(Ze)^{2}}{4\pi\varepsilon_{0}r}\,, \tag{1}\]
where \(r\) is the distance between particles, \(\varepsilon_{0}\) is the electrical constant, and \(Z\) is the particle charge in units of the electron charge \(e\). Despite the simple analytical form of this potential, the COCP is a nontrivial system. First of all, the Coulomb interaction potential is long ranged. Unlike, for example, the Lennard-Jones potential, the Coulomb potential (as well as the Yukawa potential) is purely repulsive. Therefore, laboratory and natural realizations of such systems are always accompanied by external factors that stabilize them and keep the system in equilibrium. For example, in experiments with dust particles, this is usually an external electric or magnetic field, which creates a so-called trap that holds the particles within a finite volume [9]. In molecular dynamics (MD) simulations, the stability of these systems can be achieved by using periodic boundary conditions. In this case, however, the standard method of taking into account only the interaction with the nearest neighbours can give correct results only for Yukawa systems with intermediate and strong screening. The Coulomb systems, as well as Yukawa systems with a large screening length, require taking the long-range effect of the corresponding potentials into account. As a rule, this is achieved using the modified Ewald summation method or its analogs, in which the effective interaction potential of the particles (charges) takes into account their interaction with the background of the opposite sign [1; 16]. Thus, when describing the physical properties of the Coulomb system, it is always necessary to take into account the presence of the uniform neutralizing background of opposite sign. For example, in the general case, if the particles of a system interact through some potential \(\phi(r)\) and their mutual arrangement is described by the radial distribution function \(g(r)\), then the reduced excess internal energy per particle is defined by the following relation:
\[U_{ex}=\frac{2\pi\rho}{k_{B}T}\int_{0}^{\infty}\phi(r)g(r)r^{2}dr.\]
Here, \(\rho\) is the number density of particles, \(k_{B}\) is the Boltzmann constant, and \(T\) is the absolute temperature. In the case of the Coulomb potential, \(\phi(r)=u(r)\), the last equation produces an infinite energy. To calculate \(U_{ex}\) in the case of the COCP, it is necessary to replace \(g(r)\) with \(g(r)-1\), which actually means taking into account the presence of a uniform neutralizing background. On the other hand, the interaction of the COCP charges with the oppositely charged background manifests itself in local deviations of their number density from the average value \(\rho\). This interaction is characterized by forces whose amplitudes are directly proportional to the amplitude of the displacement of the point charges relative to the background [1; 17]. This leads to the appearance of a collective oscillatory motion of the COCP particles with a certain inherent frequency \(\omega_{p}\). The frequency \(\omega_{p}\) is called the plasma frequency and is defined as follows [1; 2; 3; 4; 5; 6; 7; 8; 9]:
\[\omega_{p}=\sqrt{\frac{(Ze)^{2}\rho}{\varepsilon_{0}m}}. \tag{2}\]
Here, \(m\) is the mass of the particles. On the other hand, as we know, the frequency of natural oscillations of a spring pendulum is determined by a relation of the form:
\[\omega_{K}=\sqrt{\frac{K}{M}}, \tag{3}\]
where \(K\) is the elasticity coefficient of the spring pendulum and \(M\) is the mass of the pendulum. Comparing Eqs. (2) and (3), one can see that the quantity \((Ze)^{2}\rho/\varepsilon_{0}\) actually represents the effective elasticity coefficient of the COCP.
The characteristic frequency of collective vibrations of particles at finite spatial scales differs from \(\omega_{p}\); i.e., there is a dispersion dependence of the frequency \(\omega\) on the wave number \(k\). Full information about the vibrational processes associated with density redistribution at different spatial scales of an equilibrium multiparticle system can be obtained from the dynamic structure factor \(S(k,\omega)\)[22]. The dynamic structure factor can be interpreted as the intensity of particle number density fluctuations in the system with different frequencies \(\omega\) on various spatial scales \(L\sim 2\pi/k\). In the case of simple single-component liquids, the spectrum \(S(k,\omega)\) at fixed \(k\) is characterized by one central and two symmetric side peaks [22]. The central peak corresponds to nonpropagating isobaric entropy fluctuations, and the side peaks correspond to adiabatic pressure fluctuations, which propagate in space. The positions of these peaks on the frequency axis, as well as their widths, are associated with the thermal diffusivity, the sound attenuation coefficient, and the sound velocity of the system.
The specificity of the Coulomb system is largely due to the long-range interaction of the particles (charges) and manifests itself in the form of the \(S(k,\omega)\) spectra mainly in the long-wave limit (i.e., at \(k\to 0\)). First, in the case of the COCP, the central (Rayleigh) peak is practically absent [3; 14]. This is due to the fact that an arbitrary local redistribution of the particle number density rapidly propagates throughout the Coulomb system without appreciable transfer of thermal energy. Second, the side peaks of the \(S(k,\omega)\) spectra are located near the frequency \(\omega_{p}\) and tend to it in the low-\(k\) limit (\(k\to 0\)). This means that the \(k\) dependence of the side peak positions of the \(S(k,\omega)\) spectra, denoted as \(\omega_{c}(k)\), has a frequency (energy) gap at \(k=0\), and the width of this gap is
\[\omega_{p}=\lim_{k\to 0}\omega_{c}(k). \tag{4}\]
Note that in the case of systems with a short-range interparticle interaction potential, the dispersion dependence \(\omega_{c}(k)\) at \(k\to 0\) is linear: \(\lim_{k\to 0}\omega_{c}(k)=c_{s}k\), where \(c_{s}\) is the speed of sound. Whether the side peak shifts to higher or lower frequencies relative to \(\omega_{p}\) as \(k\) increases depends on the thermodynamic state of the system, which is determined by a single quantity, the so-called coupling parameter
\[\Gamma=\frac{(Ze)^{2}}{4\pi\varepsilon_{0}ak_{B}T}\,. \tag{5}\]
Here, \(a=(3/4\pi\rho)^{1/3}\) is the radius of the Wigner-Seitz cell. The coupling parameter \(\Gamma\) is approximately the ratio of the potential energy between two particles to the average thermal energy of a particle. The greater the value of \(\Gamma\), the more strongly coupled the system is. Thus, at \(\Gamma\lesssim 175\), the COCP is a disordered system, while, at \(\Gamma\gtrsim 175\), it is a crystal with the bcc lattice [2; 3]. When the thermal energy of the particles is much larger than their interaction energy, i.e., at \(\Gamma\ll 1\), an ideal Coulomb gas is realized. In this case, the \(k\) dependence of the characteristic frequency of collective excitations is given by the well-known Bohm-Gross dispersion relation [23; 24]:
\[\omega^{(BG)}(k)=\omega_{p}\sqrt{1+\frac{(ka)^{2}}{\Gamma}}. \tag{6}\]
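For orientation, Eqs. (2), (5), and (6) are straightforward to evaluate numerically; the following Python sketch (SI units; the function names are ours) computes \(\omega_{p}\), \(\Gamma\), and the Bohm-Gross branch:
```python
import numpy as np

e = 1.602176634e-19      # elementary charge [C]
eps0 = 8.8541878128e-12  # electrical constant [F/m]
kB = 1.380649e-23        # Boltzmann constant [J/K]

def plasma_frequency(Z, rho, m):
    """Eq. (2): omega_p for charge number Z, number density rho, mass m."""
    return np.sqrt((Z * e) ** 2 * rho / (eps0 * m))

def coupling_parameter(Z, rho, T):
    """Eq. (5): Gamma with the Wigner-Seitz radius a = (3/(4 pi rho))^(1/3)."""
    a = (3.0 / (4.0 * np.pi * rho)) ** (1.0 / 3.0)
    return (Z * e) ** 2 / (4.0 * np.pi * eps0 * a * kB * T)

def bohm_gross(ka, Gamma, wp):
    """Eq. (6): weak-coupling dispersion omega^(BG)(k)."""
    return wp * np.sqrt(1.0 + ka ** 2 / Gamma)
```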
For a strongly coupled COCP with \(\Gamma\) exceeding some critical value \(\Gamma_{c}\), the side peak of the \(S(k,\omega)\) spectra in the low-\(k\) range shifts with increasing \(k\) to frequencies below the plasma frequency \(\omega_{p}\)[25; 26; 27]. In this case, the so-called negative dispersion mode is realized, and the dispersion dependence \(\omega_{c}(k)\) satisfies the condition:
\[\frac{d\omega_{c}(k)}{dk}<0\,\,\,\mbox{at}\,\,\,k\to 0. \tag{7}\]
The results of molecular dynamics (MD) simulations of the COCP [25] reveal that the critical value of the coupling parameter is \(\Gamma_{c}\approx 9.5\) (Fig. 1). It is noteworthy that the \(k\) dependence of \(\omega^{(BG)}(k)\) is similar to the dependence of the total energy of massive particles on their momentum, where at zero momentum there is an energy gap due to the rest mass of the particles [28]. In the case of the COCP, the rest energy corresponds to the energy of natural collective vibrations at the frequency \(\omega_{p}\).
Another feature of the COCP is the presence of the so-called second high-frequency plasmon peak in the \(S(k,\omega)\) spectra, which is found in MD simulation data [15]. It is necessary to note that there is currently no consensus on what causes this effect.
Figure 1: Dispersions of side peak in \(S(k,\omega)\) spectra for the COCP (reproduced from [25]) at different coupling parameters \(\Gamma\). Solid line represents the Bohm-Gross dispersion relation (6) at \(\Gamma=0.2\).
Note that until now there has been no unified theoretical approach that describes the collective dynamics of the COCP over a wide range of spatial scales without adjustable parameters. Existing methods either contain adjustable parameters or give satisfactory results only in a limited range of wave numbers. In this paper, this gap is filled by using the self-consistent relaxation theory of collective particle dynamics, which had previously been successfully applied to describe both Yukawa liquids and liquid metals [29; 30]. The main results of this work are as follows. Based on the established correlations between the frequency relaxation parameters characterizing the two-particle, three-particle, and four-particle dynamics of the COCP for states with the coupling parameter \(\Gamma\in[5;100]\), an expression for the dynamic structure factor \(S(k,\omega)\) was derived. It is necessary to note that the obtained expression for \(S(k,\omega)\) requires only information about the thermodynamic state and the structure of the system as input. The theory correctly reproduces the MD simulation results for \(S(k,\omega)\) over a wide range of wave numbers, as well as the dispersion law of the high-frequency plasma mode, the decrement of the plasma excitations, and the frequency of the longitudinal plasma excitations.
The paper is organized as follows. In Sec. II, we describe the theoretical formalism related with the self-consistent relaxation theory of ion collective dynamics in the COCP. In Sec. III, the obtained theoretical results are compared with MD simulations data and the results of other theoretical approaches. The main findings are given in the Conclusion (Sec. IV).
## II Theoretical formalism
The dynamic structure factor \(S(k,\omega)\) is the Fourier transform (in frequency) of the density fluctuation time correlation function \(F(k,t)\), also known as the intermediate scattering function [30; 31; 32]:
\[S(k,\omega)=\frac{S(k)}{2\pi}\int_{-\infty}^{\infty}F(k,t)\exp(\mathbf{i} \omega t)dt\,. \tag{8}\]
Here, \(S(k)\) is the static structure factor and \(t\) is the time. For simple liquids in the low-\(k\) limit (hydrodynamic regime), the exact expression for the dynamic structure factor is known [22]:
\[\begin{split}& S^{H}(k,\omega)=\frac{S(k)}{2\pi}\bigg{[}\frac{ \gamma-1}{\gamma}\frac{2D_{T}k^{2}}{\omega^{2}+(D_{T}k^{2})^{2}}\\ &+\frac{1}{\gamma}\sum_{j=1}^{2}\frac{\sigma k^{2}}{(\omega+(-1) ^{j}c_{s}k)^{2}+(\sigma k^{2})^{2}}\bigg{]}.\end{split} \tag{9}\]
Here, \(\gamma\) is the ratio of the specific heat capacity at constant pressure to the specific heat capacity at constant volume, \(D_{T}\) is the thermal diffusivity coefficient, and \(\sigma\) is the sound attenuation coefficient. Equation (9) can be derived directly from the linearized Navier-Stokes equations, where the key dynamic variables (the number density, the energy density, and the current) are treated as slow variables. This equation correctly reproduces the collective dynamics of particles in simple liquids, where the effective interparticle interaction has a finite range, and it usually provides a phenomenological description of experiments on inelastic light scattering in liquids. A detailed derivation of Eq. (9) can be found in the classical monographs [22; 31]. However, the hydrodynamic theory with Eq. (9) for \(S(k,\omega)\) is not applicable to the COCP [5; 6] because of the long-range character of the particle interaction in the COCP. On the other hand, microscopic theories, which consider the system as an ensemble of interacting particles, turn out to be more efficient. For example, the theory based on the method of frequency moments (FM theory) provides an analytical expression for \(S(k,\omega)\) that has no adjustable parameters [13; 14]. Here, the expression for \(S(k,\omega)\) is obtained as a result of a fractional-linear transformation of the Nevanlinna function, which has specific mathematical properties and satisfies the sum rules. Further, models based on exponential [32] and Gaussian [42] memory functions and a model based on a modified Navier-Stokes equation [41] are also used to describe the collective dynamics of the COCP. However, these models contain various fitting parameters. This paper presents a theoretical formalism to calculate \(S(k,\omega)\) of the COCP, which is based on the self-consistent relaxation theory of collective dynamics in multiparticle systems [29; 30; 37; 38; 39; 40].
From Eq. (8), the following series can be obtained for the function \(F(k,t)\):
\[\begin{split} F(k,t)&=1-\langle\omega^{(2)}(k) \rangle\frac{t^{2}}{2!}+\langle\omega^{(4)}(k)\rangle\frac{t^{4}}{4!}+\ldots\\ &\quad+(-\mathbf{i})^{l}\langle\omega^{(l)}(k)\rangle\frac{t^{l }}{l!}+\ldots\,,\end{split} \tag{10}\]
where \(\langle\omega^{(l)}(k)\rangle\) is the \(l\)-th order normalized frequency moment of \(S(k,\omega)\):
\[\langle\omega^{(l)}(k)\rangle=(-\mathbf{i})^{l}\frac{d^{l}}{dt^{l}}F(k,t) \bigg{|}_{t=0}=\frac{\int_{-\infty}^{\infty}\omega^{l}S(k,\omega)d\omega}{S(k )}\,. \tag{11}\]
From Eq. (10), one obtains the following expression for the Laplace-transform of the function \(F(k,t)\):
\[\begin{split}\widetilde{F}(k,s)&=\frac{1}{s}- \frac{\langle\omega^{(2)}(k)\rangle}{s^{3}}+\frac{\langle\omega^{(4)}(k) \rangle}{s^{5}}+\ldots\\ &\quad+(-\mathbf{i})^{l}\frac{\langle\omega^{(l)}(k)\rangle}{s^{ l+1}}+\ldots\,.\end{split} \tag{12}\]
On the other hand, the last expression can be rewritten as the continued fraction:
\[\widetilde{F}(k,s)=\frac{1}{s+\frac{\Delta_{1}(k)}{s+\frac{\Delta_{2}(k)}{s+\frac{ \Delta_{3}(k)}{s+\ddots}}}}. \tag{13}\]
Here, \(\Delta_{n}(k)\) (\(n=1,2,3,...\)) are the frequency relaxation parameters, which have the dimension of a squared frequency [29; 30; 37; 38; 39; 40]. Each of these parameters is related to the corresponding frequency moments of \(S(k,\omega)\) via the sum rules [30]:
\[\Delta_{1}(k) = \frac{\langle\omega^{(2)}(k)\rangle}{\langle\omega^{(0)}(k) \rangle}, \tag{14}\] \[\Delta_{2}(k) = \frac{\langle\omega^{(4)}(k)\rangle}{\langle\omega^{(2)}(k) \rangle}-\frac{\langle\omega^{(2)}(k)\rangle}{\langle\omega^{(0)}(k)\rangle},\] \[\Delta_{3}(k) = \frac{\left[\langle\omega^{(6)}(k)\rangle\langle\omega^{(2)}(k) \rangle-\left(\langle\omega^{(4)}(k)\rangle\right)^{2}\right]\langle\omega^{ (0)}(k)\rangle}{\langle\omega^{(4)}(k)\rangle\langle\omega^{(2)}(k)\rangle \langle\omega^{(0)}(k)\rangle-\left(\langle\omega^{(2)}(k)\rangle\right)^{3}},\] \[\Delta_{n}(k) = \mathcal{F}\left[\langle\omega^{(0)}(k)\rangle,\langle\omega^{(2 )}(k)\rangle,\ldots,\langle\omega^{(2n)}(k)\rangle\right],\]
where \(\mathcal{F}\) means an algebraic expression.
The following microscopic expressions are known for the frequency relaxation parameters of the first, second, and third orders [40]:
\[\Delta_{1}(k) = \frac{k_{B}T}{m}\frac{k^{2}}{S(k)}, \tag{15a}\] \[\Delta_{2}(k) = 3\left(\omega_{E}^{2}+\frac{k_{B}T}{m}k^{2}\right)-\Delta_{1}(k)-\frac{\rho}{m}\int\nabla_{l}^{2}\phi(\mathbf{r})\exp(\mathbf{i}\mathbf{k}\cdot\mathbf{r})g(r)d^{3}\mathbf{r}, \tag{15b}\] \[\Delta_{3}(k) = \frac{\omega_{3}^{4}}{\Delta_{2}(k)}+\Omega_{3}\left(k\right). \tag{15c}\]
Here,
\[\omega_{E}^{2}=\frac{\rho}{3m}\int\nabla_{l}^{2}\phi(\mathbf{r})g(r)d^{3} \mathbf{r},\]
is known as the Einstein frequency,
\[\omega_{3}^{4}=\frac{\rho^{2}}{m^{2}}\int d^{3}\mathbf{r}\int d^{3}\mathbf{r}_{1}\frac{g_{3}(\mathbf{r},\mathbf{r}_{1})}{rr_{1}}\frac{d\phi(r)}{dr}\frac{d\phi(r_{1})}{dr_{1}},\]
is an analog of the Einstein frequency, which characterizes the frequency of the vibrational dynamics of particle triplets, and \(\Omega_{3}(k)\) is a combination of integral expressions containing the interaction potential \(\phi(r)\), the pair distribution function \(g(r)\), and the three-particle distribution function \(g_{3}(\mathbf{r},\mathbf{r}_{1})\)[40]. In the general case, for \(\Delta_{n}(k)\), we have the following expression:
\[\Delta_{n}(k)=W\{\omega_{n},\Omega_{n}\left(k\right)\}. \tag{16}\]
Here, \(W\) means an algebraic expression, \(\omega_{n}\) is an analog of the Einstein frequency, which characterizes the frequency of the vibrational dynamics of groups of \(n\) particles (see Fig. 2), and \(\Omega_{n}(k)\) is a combination of integral expressions containing the interaction potential \(\phi(r)\) and the distribution functions from the pair function \(g(r)\) up to the \(n\)-particle function \(g_{n}(\mathbf{r},\mathbf{r}_{1},...\mathbf{r}_{n-2})\) inclusive. Thus, the \(n\)-th order frequency relaxation parameter \(\Delta_{n}(k)\) is related to the corresponding \(n\)-particle distribution function of the system and characterizes the vibrational process for various groups of \(n\) particles [30; 37; 38; 39; 40; 29].
On the other hand, the quantities \(\tau_{n}(k)=1/\sqrt{\Delta_{n}(k)}\), where \(n=1,2,3,...\), determine the time scales of the corresponding relaxation processes. The first four quantities in this set, \(\tau_{1}(k)\), \(\tau_{2}(k)\), \(\tau_{3}(k)\), and \(\tau_{4}(k)\), correlate with the time scales of the processes in which the hydrodynamic variables manifest themselves. In turn, these dynamic variables form an orthogonal basis, the first element of which is the density fluctuations [37; 38; 39; 40; 29]. The generalization of the hydrodynamic theory realized within the framework of the self-consistent relaxation theory implies restricting the set of frequency relaxation parameters to those up to and including the fourth order. The time scales of the dynamic variables above the fourth order lie outside the processes associated with structural relaxation.
The key idea of the self-consistent relaxation theory is as follows: beginning from the fourth order, the characteristic frequencies of the fluctuations of the dynamic variables coincide [37; 30; 29; 40], i.e.,
\[\Delta_{4}(k)=\Delta_{5}(k)=\Delta_{6}(k)=.... \tag{17}\]
Using this condition, from Eq. (13) one obtains the following analytical expression for \(S(k,\omega)\)[37; 30; 29; 40]:
\[S(k,\omega)=\frac{S(k)}{\pi}\frac{\Delta_{1}(k)\Delta_{2}(k)\Delta_{3}(k)}{\Delta_{4}(k)-\Delta_{3}(k)}\times\frac{\sqrt{\Delta_{4}(k)}}{\omega^{6}+\mathcal{A}_{1}(k)\omega^{4}+\mathcal{A}_{2}(k)\omega^{2}+\mathcal{A}_{3}(k)}\,, \tag{18}\]
where
\[\mathcal{A}_{1}(k) = \frac{\Delta_{3}^{2}(k)-\Delta_{2}(k)[2\Delta_{4}(k)-\Delta_{3}(k )]}{\Delta_{4}(k)-\Delta_{3}(k)}-2\Delta_{1}(k)\,,\]
\[\mathcal{A}_{2}(k)=\frac{\Delta_{2}^{2}(k)\Delta_{4}(k)-2\Delta_{ 1}(k)\Delta_{3}^{2}(k)}{\Delta_{4}(k)-\Delta_{3}(k)}\] \[+\frac{\Delta_{1}(k)\Delta_{2}(k)[2\Delta_{4}(k)-\Delta_{3}(k)]}{ \Delta_{4}(k)-\Delta_{3}(k)}+\Delta_{1}^{2}(k)\,,\]
\[\mathcal{A}_{3}(k) = \frac{\Delta_{1}^{2}(k)\Delta_{3}^{2}(k)}{\Delta_{4}(k)-\Delta_{ 3}(k)}\,.\]
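Given \(S(k)\) and \(\Delta_{1}(k),\ldots,\Delta_{4}(k)\), Eq. (18) is fully explicit; a direct Python transcription (the naming is ours) reads:
```python
import numpy as np

def dsf(omega, Sk, d1, d2, d3, d4):
    """Evaluate Eq. (18): S(k, omega) from S(k) and Delta_1..Delta_4(k)."""
    A1 = (d3**2 - d2 * (2.0 * d4 - d3)) / (d4 - d3) - 2.0 * d1
    A2 = ((d2**2 * d4 - 2.0 * d1 * d3**2) / (d4 - d3)
          + d1 * d2 * (2.0 * d4 - d3) / (d4 - d3) + d1**2)
    A3 = d1**2 * d3**2 / (d4 - d3)
    prefactor = Sk / np.pi * d1 * d2 * d3 / (d4 - d3) * np.sqrt(d4)
    return prefactor / (omega**6 + A1 * omega**4 + A2 * omega**2 + A3)
```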
Thus, to calculate \(S(k,\omega)\) within the self-consistent relaxation theory, one needs to know the first four frequency relaxation parameters. For the case of the COCP, the following exact microscopic expressions for the first- and second-order parameters are known [33; 34; 35; 36; 30]:
\[\Delta_{1}(k)=\frac{\omega_{p}^{2}(ka)^{2}}{3\Gamma S(k)}\,, \tag{19}\]
\[\Delta_{2}(k)=\omega_{p}^{2}\left(1+\frac{(ka)^{2}}{\Gamma}+2\int_{0}^{\infty}\frac{j_{2}(kax)}{x}h(x)dx\right)-\Delta_{1}(k), \tag{20}\]
where \(x=r/a\) is the dimensionless spatial variable, \(j_{2}(x)\) is the second-order spherical Bessel function, and \(h(x)=g(x)-1\). The frequency relaxation parameters \(\Delta_{3}(k)\) and \(\Delta_{4}(k)\) can be determined from MD simulation data (details are provided in the Appendix) through their basic definitions [30]. Mathematical analysis of these parameters derived from MD simulations reveals the following correlations:
\[\Delta_{3}(k)\approx\frac{3}{2}\Delta_{2}(k)+\omega_{0}^{2}, \tag{21a}\] \[\Delta_{4}(k)\approx\frac{4}{3}\Delta_{3}(k)+\omega_{1}^{2}(k)\approx 2 \Delta_{2}(k)+\frac{4}{3}\omega_{0}^{2}+\omega_{1}^{2}(k), \tag{21b}\]
where
\[\omega_{0}^{2}=\frac{3\,\omega_{p}^{2}}{\sqrt{\Gamma}};\ \omega_{1}^{2}(k)= \frac{\omega_{p}^{2}\sqrt{\Gamma}}{7ka}.\]
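In reduced units (frequencies in \(\omega_{p}\)), the correlation relations (21a) and (21b) amount to a few lines of code (a sketch; the naming is ours):
```python
import numpy as np

def delta3(delta2, Gamma, wp=1.0):
    """Eq. (21a): Delta_3(k) ~ (3/2) Delta_2(k) + omega_0^2."""
    return 1.5 * delta2 + 3.0 * wp**2 / np.sqrt(Gamma)

def delta4(delta3_val, ka, Gamma, wp=1.0):
    """Eq. (21b): Delta_4(k) ~ (4/3) Delta_3(k) + omega_1^2(k)."""
    return (4.0 / 3.0) * delta3_val + wp**2 * np.sqrt(Gamma) / (7.0 * ka)
```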
It is necessary to note that similar correlations were obtained for the one-component Yukawa plasmas (YOCP) [29]. In contrast to that case, Eq. (21b) for \(\Delta_{4}(k)\) contains the \(k\)-dependent term \(\omega_{1}^{2}(k)\), which occurs due to the long-range nature of the Coulomb interaction potential (1). Figure 3 shows the \(k\) dependences of the reduced frequency relaxation parameters for the COCP at various \(\Gamma\). As can be seen, Eqs. (21a) and (21b) reproduce the MD simulation results for \(\Delta_{3}(k)\) and \(\Delta_{4}(k)\) well. In fact, relations (21a) and (21b) indicate a correspondence between the two-particle correlations and the three- and four-particle correlations. The theoretical model presented in this paper is obtained within the framework of the self-consistent relaxation theory [37; 38; 39; 40; 29], modified for the case of the COCP, where the frequency relaxation parameters are related to each other according to the correlation relations (21a) and (21b). These relations represent an empirical result: as follows from molecular dynamics simulation data, they are satisfied for the thermodynamic states in which the COCP is a fluid-like system (i.e., at \(\Gamma\in[5;100]\)). The presented theoretical model applies directly to this region of states.
Figure 3: Dispersion dependences of the frequency relaxation parameters reduced to \(\omega_{p}^{2}\) [symbols – calculations based on simulation data; solid and dashed lines – calculations using the approximate correlation relations (21a) and (21b), respectively].
Figure 2: Schemes showing arbitrary vibrational \(n\)-particle groups. The quantity \(\omega_{n}\), where \(n=1,2,3,...\), is the average frequency of the corresponding oscillatory circuit, which characterizes the oscillatory dynamics of various \(n\)-particle groups. Note that in the case of \(\omega_{2}\equiv\omega_{E}\) we have a linear oscillatory circuit (a), in the case of \(\omega_{3}\) we have a flat oscillatory circuit (b), and in the case of \(\omega_{4}\) we have a three-dimensional oscillatory circuit (c). Particles can oscillate in any direction; however, regardless of this, the dimension of the oscillatory circuit \(d\) is preserved.
Analysis of expression (18) yields the following dispersion equation for the high-frequency plasma mode:
\[s^{3}+\mathcal{B}_{1}(k)s^{2}+\mathcal{B}_{2}(k)s+\mathcal{B}_{1}(k)\Delta_{1}(k )=0, \tag{22}\]
where
\[\mathcal{B}_{1}(k)=\frac{2\Delta_{1}(k)\sqrt{\Delta_{4}(k)}}{2\Delta_{4}(k)- \Delta_{3}(k)},\]
\[\mathcal{B}_{2}(k)=\Delta_{1}(k)+\mathcal{B}_{1}(k)\sqrt{\Delta_{4}(k)}.\]
Solving this equation yields \(s(k)=\pm i\omega_{c}(k)-\delta(k)\), with the dispersion of the side peak of \(S(k,\omega)\):
\[\omega_{c}(k)=\sqrt{3}\left(\sqrt[3]{Z(k)-q(k)}+\sqrt[3]{Z(k)+q(k)}\right), \tag{23}\]
and the decrement dispersion of plasma excitations,
\[\delta(k)=\sqrt[3]{Z(k)+q(k)}-\sqrt[3]{Z(k)-q(k)}-\frac{\mathcal{B}_{2}(k)}{3}, \tag{24}\]
where
\[Z(k) = \sqrt{p^{3}(k)+q^{2}(k)},\] \[p(k) = \frac{\mathcal{B}_{2}(k)}{3}+\left(\frac{\mathcal{B}_{1}(k)}{3} \right)^{2},\] \[q(k) = \frac{\mathcal{B}_{1}(k)}{54}\left(2\mathcal{B}_{1}(k)^{2}-9 \mathcal{B}_{1}(k)\sqrt{\Delta_{4}(k)}+18\Delta_{1}(k)\right).\]
Obviously, by analogy with the hydrodynamic expression (9), the values \(\omega_{c}(k)\) and \(\delta(k)\) defined by formulas (23) and (24), respectively, will characterize the position and width of the side peak in \(S(k,\omega)\) spectra.
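In practice, instead of transcribing the closed-form roots (23) and (24), one can solve the cubic (22) numerically; the following Python sketch (our naming, assuming the underdamped regime with a complex-conjugate root pair) returns \(\omega_{c}\) and \(\delta\):
```python
import numpy as np

def plasma_mode(d1, d3, d4):
    """Solve Eq. (22) for s = +/- i*omega_c - delta; return (omega_c, delta)."""
    B1 = 2.0 * d1 * np.sqrt(d4) / (2.0 * d4 - d3)
    B2 = d1 + B1 * np.sqrt(d4)
    roots = np.roots([1.0, B1, B2, B1 * d1])   # cubic in s
    s = roots[np.argmax(np.abs(roots.imag))]   # pick a complex root
    return abs(s.imag), -s.real
```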
## III Results and discussion
The theoretical results are compared with the data of MD simulations and with the results of other models, in particular, with the FM theory. In Ref. [13], based on physical considerations, an expression for \(S(k,\omega)\) was found that is similar to Eq. (18). A detailed discussion of the correspondences between the self-consistent relaxation theory and the method of frequency moments was given in Ref. [29].
Figure 4: Top panels: spectra of \(S(k,\omega)\) multiplied by the plasma frequency at different values of the coupling parameter \(\Gamma\). Here, theoretical results from Eq. (18), shown by black solid lines, are compared with MD simulation data given by green circles, with results of the FM theory [13] given by red dashed lines, with results of the exponential memory function model (EMF) [32] given by brown dashed lines, with results of the Gaussian memory function model (GMF) [42] given by blue dashed lines, and with results of the model based on a modified Navier-Stokes equation (MNS) [41] given by a thin black line. Bottom panels: differences between the simulation data and the corresponding theoretical values.
Figure 4 shows the \(S(k,\omega)\) spectra of the COCP for different dimensionless wave numbers \(ka\) and coupling parameters \(\Gamma=5,20,50\), and \(100\). These \(\Gamma\) values correspond to the liquid phase of the COCP. It can be seen that, for the considered values of the coupling parameter \(\Gamma\) and wave number \(k\), the self-consistent relaxation theory reproduces the results of the MD simulations quite accurately and describes all the features of these spectra. At small wave numbers \(k<k_{m}/2\), where \(k_{m}\) is the wave number corresponding to the first maximum of the static structure factor \(S(k)\), the \(S(k,\omega)\) spectra of the COCP, as expected, contain only the high-frequency components at frequencies near \(\omega_{p}\). As the dimensionless wave number \(ka\) increases, beginning from values comparable to \(k_{m}/2\), the zero-frequency component appears and the high-frequency component disappears. This feature is characteristic of all classical simple liquids with a short-range interparticle interaction potential. This means that, beginning from wave numbers \(k=k_{m}/2\) and higher, i.e., on spatial scales that correspond to several mean interparticle distances, the long-range character of the Coulomb interaction ceases to play an appreciable role in the particle dynamics. Note that Eq. (18) in some cases gives somewhat better agreement with the results of the MD simulations than the FM theory [13], as well as the models based on the exponential memory function [32] and the modified Navier-Stokes equation [41]. The model based on the Gaussian memory function [42] gives good agreement with the MD simulation data, but it contains a fitting parameter, the so-called relaxation time.
To obtain the \(k\) dependence of the longitudinal collective excitations frequency \(\omega_{L}\), we consider the spectral density of the longitudinal current correlation function \(C_{L}(k,\omega)\), which is directly related to \(S(k,\omega)\) as
\[C_{L}(k,\omega)=\frac{3\Gamma\omega^{2}}{(\omega_{p}ka)^{2}}S(k,\omega). \tag{25}\]
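In reduced units, Eq. (25) is a direct rescaling (a sketch; the naming is ours):
```python
def longitudinal_current_sd(omega, ka, Gamma, Skw, wp=1.0):
    """Eq. (25): C_L(k, omega) = 3*Gamma*omega^2 / (wp*ka)^2 * S(k, omega)."""
    return 3.0 * Gamma * omega**2 / (wp * ka) ** 2 * Skw
```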
Using this relation, from Eq. (18) one can obtain the analytical expression for the dispersion law \(\omega_{L}(k)\) of longitudinal plasma excitations:
\[\omega_{L}(k)=\sqrt{C_{+}(k)+C_{-}(k)-\frac{\mathcal{A}_{1}(k)}{6}}, \tag{26}\]
where
\[C_{\pm}(k)=\sqrt[3]{\frac{\mathcal{A}_{3}(k)}{4}-\frac{\mathcal{A}_{1}^{3}(k) }{216}\pm\sqrt{\frac{\mathcal{A}_{3}^{2}(k)}{16}-\frac{\mathcal{A}_{3}(k) \mathcal{A}_{1}^{3}(k)}{432}}}.\]
Figure 5 presents the dispersion characteristics of the COCP. It can be seen (top and middle rows) that Eqs. (23) and (24) reproduce the MD simulation results for the dispersion characteristics \(\omega_{c}(k)\) and \(\delta(k)\) very well. From the bottom row, it is clear that Eq. (26) enables one to correctly calculate \(\omega_{L}(k)\) over a wide range of COCP parameters. The proposed theoretical formalism correctly reproduces the asymptotes of the dispersion dependences at low wave numbers and the so-called roton minima [13] for the COCP states with \(\Gamma=5,20,50\), and \(100\).
A remarkable fact is that both the self-consistent relaxation theory and the FM theory produce expressions for the characteristics of the collective particle dynamics in terms of frequency moments and/or frequency relaxation parameters. In addition, both theories are consistent with each other: one can formulate a condition on the high-order frequency relaxation parameters under which the self-consistent relaxation theory reproduces the FM theory results. This point is discussed in detail in Ref. [29] (see Supplemental Material). A characteristic feature of the FM theory is that it is based on the Nevanlinna parameter function. The most important advantage of this theory is that the obtained analytical expressions for the dynamic structure factor and other quantities of the collective particle dynamics do not contain any fitting parameters [13; 14]; moreover, the theory does not rely directly on any empirical results leading to expressions similar to the correlation relations (21a) and (21b). On the other hand, a feature of the FM theory is that it does not take into account the manifestation of an independent central (Rayleigh) component in the spectra of the dynamic structure factor at wave numbers comparable to \(ka=4.29\) and higher (see Fig. 4).
As mentioned above, states close to an ideal Coulomb gas with \(\Gamma\ll 1\) have a positive dispersion of the high-frequency collective excitations [see Eq. (6)], while a Coulomb (Wigner) crystal with \(\Gamma\gtrsim 175\) is characterized by a negative dispersion of these excitations [3]. Thus, one can expect that the state with \(\Gamma_{c}\approx 9.5\), which is the boundary between the regimes with positive and negative dispersion, corresponds to a crossover between regimes with _gas like_ and _solid like_ collective ion dynamics. On the other hand, the disappearance of the so-called roton minima observed at \(\Gamma_{c}\) represents one of the conditions of the Frenkel line [44], which in the phase diagram of an arbitrary system separates the thermodynamic states with _gas like_ and _solid like_ particle dynamics [45; 46; 47; 44]. Consequently, in the case of the COCP, there is a direct correspondence between the known value of \(\Gamma_{c}\) and the Frenkel line, which is located at \(\Gamma_{c}\approx 9.5\) in the phase diagram.
To determine \(\Gamma_{c}\) in the framework of the proposed formalism, it is necessary to find the approximation of Eq. (26) at small \(k\) as a quadratic polynomial of the form
\[\omega_{L}^{(\rm lk)}(k)\approx\omega_{p}\left(1+\alpha(ka)^{2}\right). \tag{27}\]
The positive values of the coefficient \(\alpha\) correspond to positive dispersion, whereas the negative values correspond to negative dispersion. Table 1 shows the values of this coefficient for various \(\Gamma\). Using the three \(\alpha\) values corresponding to \(\Gamma=5\), 20, and 50, one can construct an approximation of the dependence \(\alpha(\Gamma)\) in the following form:
\[\alpha(\Gamma)=2.237\cdot 10^{-4}\Gamma^{2}-0.017\Gamma+0.165. \tag{28}\]
\(\Gamma_{c}\) corresponds to the condition \(\alpha(\Gamma)=0\). Solving it, we find \(\Gamma_{c}\approx 11.42\). As can be seen, this result is close to \(\Gamma_{c}\approx 9.5\) obtained from the large-scale MD simulations in Ref. [25].
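A quick numerical cross-check of this root of Eq. (28) (a sketch):
```python
import numpy as np

# Roots of Eq. (28): 2.237e-4 * Gamma^2 - 0.017 * Gamma + 0.165 = 0.
roots = np.real(np.roots([2.237e-4, -0.017, 0.165]))
gamma_c = roots.min()  # the smaller root is the dispersion crossover
print(round(gamma_c, 2))  # ~= 11.42
```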
## IV Conclusion
Thus, in this paper, correlation relations between the frequency relaxation parameters characterizing the three- and four-particle dynamics and the parameters characterizing the two-particle dynamics were obtained for the COCP at the values of the coupling parameter \(\Gamma=5\), 20, 50, and 100. The application of the obtained correlations enables one to describe all the features of this nontrivial multiparticle system within the self-consistent relaxation theory without any fitting parameters. In spite of the fact that in the realized approach all correlations are reduced to pair correlations, it turns out to be sufficient for describing a system of particles with long-range Coulomb interactions. The calculated dynamic structure factor and dispersion characteristics are consistent with the molecular dynamics simulation data. The discrepancies between the theoretical results and the MD simulation data are comparable with those given by the theory based on the method of frequency moments.
## V Acknowledgements
This work was supported by the Russian Science Foundation (Project No. 19-12-00022). The authors are grateful to I. M. Tkachenko and S. A. Khrapak for helpful discussions.
Figure 5: Wave number dependencies of the frequency \(\omega_{c}(k)\) (top row), decrement of plasma excitations \(\delta(k)\) (middle row) and longitudinal plasma excitations \(\omega_{L}(k)\) (bottom row) plotted at different values of the coupling parameter \(\Gamma\). The black solid lines represent theoretical results obtained using expressions (23), (24), and (26), the red dashed lines show the results of the moment theory [13], and the green circles show the MD simulations data.
## Appendix: Molecular dynamics simulation details
MD simulations of the COCP were performed in the LAMMPS package [43] for the equilibrium configuration of the COCP at \(\Gamma=5,20,50\) and \(100\) in the NVT ensemble. The simulation cell contained 64,000 particles interacting through the Coulomb potential. Periodic boundary conditions were applied in all directions, and the PPPM fast summation method was used. The equations of motion of the particles were integrated using the velocity Verlet algorithm with a time integration step \(\tau=0.01/\omega_{p}\).
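For reference, a single step of the velocity Verlet scheme mentioned above can be sketched as follows; this is a generic illustration rather than the LAMMPS implementation, with `accel` standing in for the PPPM-evaluated Coulomb forces divided by the particle mass:

```python
import numpy as np

def velocity_verlet_step(x, v, accel, dt):
    """One velocity Verlet step for positions x and velocities v."""
    a_old = accel(x)
    x_new = x + v * dt + 0.5 * a_old * dt**2  # position update
    a_new = accel(x_new)                      # forces at the new positions
    v_new = v + 0.5 * (a_old + a_new) * dt    # velocities with averaged acceleration
    return x_new, v_new

# In the units used above, the integration step is dt = 0.01 / omega_p.
```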
|
2307.04778 | Formulating A Strategic Plan Based On Statistical Analyses And
Applications For Financial Companies Through A Real-World Use Case | Business statistics play a crucial role in implementing a data-driven
strategic plan at the enterprise level to employ various analytics where the
outcomes of such a plan enable an enterprise to enhance the decision-making
process or to mitigate risks to the organization. In this work, a strategic
plan informed by the statistical analysis is introduced for a financial company
called LendingClub, where the plan is comprised of exploring the possibility of
onboarding a big data platform along with advanced feature selection
capacities. The main objectives of such a plan are to increase the company's
revenue while reducing the risks of granting loans to borrowers who cannot
return their loans. In this study, different hypotheses formulated to address
the company's concerns are studied, where the results reveal that the amount of
loans profoundly impacts the number of borrowers charging off their loans.
Also, the proposed strategic plan includes onboarding advanced analytics such
as machine learning technologies that allow the company to build better
generalized data-driven predictive models. | Saman Sarraf | 2023-07-10T05:43:31Z | http://arxiv.org/abs/2307.04778v2 | Formulating A Strategic Plan Based On Statistical Analyses And Applications For Financial Companies Through A Real-World Use Case
###### Abstract
Business statistics play a crucial role in implementing a data-driven strategic plan at the enterprise level to employ various analytics where the outcomes of such a plan enable an enterprise to enhance the decision-making process or to mitigate risks to the organization. In this work, a strategic plan informed by the statistical analysis is introduced for a financial company called LendingClub, where the plan is comprised of exploring the possibility of onboarding a big data platform along with advanced feature selection capacities. The main objectives of such a plan are to increase the company's revenue while reducing the risks of granting loans to borrowers who cannot return their loans. In this study, different hypotheses formulated to address the company's concerns are studied, where the results reveal that the amount of loans profoundly impacts the number of borrowers charging off their loans. Also, the proposed strategic plan includes onboarding advanced analytics such as machine learning technologies that allow the company to build better generalized data-driven predictive models.
Strategic Plan, Statistical Analysis, Financial Companies, Cloud Computing
## 1 Introduction
Formulating a strategic plan aligned with a company's business scope allows the company to explore data-driven ways of improving the business and mitigating risk quantitatively while utilizing collected data to perform statistical applications. The company's business leadership generally organizes joint meetings with internal or external data analysis teams to design a plan for executing business-related statistical analysis. Such projects demonstrate in which areas the company should invest and how to adjust the budget for business verticals with low revenue. Furthermore, statistical applications can inform how to improve staff performance in the workplace.
LendingClub, as a peer-to-peer lending company, offers loans and investment products in different sectors, including personal and business loans, automobile loans, and health-related financing loans. LendingClub's business model comprises three primary players: borrowers, investors, and portfolios for issued loans. LendingClub aims to expand its statistical analytics, consisting of infrastructure and software algorithm applications, to ultimately develop two solutions: a) estimating durations in which clients will pay off loans; and b) 30-minute loan approval decision-making. To implement these two capabilities, the company has collected data on loans that were granted or rejected over 12 years, including 145 attributes and more than 2 million observations, where 32 features have no missing values across the dataset.
To achieve its ultimate targets, LendingClub performs a statistical analysis in numerous steps to determine whether to accept or reject hypotheses, which enables data scientists and statisticians to select attributes for predictive modeling. LendingClub seeks patterns in the loan data to discover relationships between loan amounts and borrowers who have charged off, as reported by LendingClub Emekter et al. (2015). The company assumes a potential correlation between the two features, which establishes specific loan criteria for the group of loan applicants who might encounter such an issue. Discovering the correlation enables LendingClub to enhance its risk management portfolio and minimize the risk of losing financial resources, aiming to mitigate the negative impacts of issuing loans to borrowers of this category. Using business statistics, the company seeks proof of concept for the mentioned ideas before recruiting a third-party software developer to implement a standalone product; therefore, the internal data scientists explore various aspects of such data, not limited to the questions listed above Sarraf et al. (2016); Grady et al. (2016).
In the first phase, demographic information is extracted from the datasets, and data preprocessing steps, such as data cleaning, are performed to remove any broken data from the database. Next, further investigation of specific data (e.g., type of loans issued, loans issued by region, and a more in-depth analysis of bad loans) is performed Sarraf and Tofighi (2016, 2016). In the second phase, which oversees the business perspective, the company's experts explore the operative side of the business (operational business aspects) and analyze applicants' income category. The third phase refers to the risk assessment of issuing loans, which consists of four steps: a) identifying existing risks in the business; b) the importance and role of credit scores in the loan approval or denial; c) defining bad loans and risky borrowers; d) loans by default (pre-approved); and e) exploring risks by targeted criteria Sarraf (2019). The ultimate goals of such extensive analysis are to lead LendingClub's data scientists to explore the feasibility of answering the two questions above based on current data, provide recommendations for data collection, or modify the business scope Saverino et al. (2016); Sarraf and Ostadhashem (2016); Sarraf et al. (2016).
## 2 Problem Statement and Hypothesis
The problem addressed in this work concerns statistical applications at LendingClub, which has established three hypotheses regarding the relationship between the "Loan Amount" and "Charge OFF Flag" features, where various statistical analyses, including hypothesis testing Bunting (2019) and correlation analysis Mondal (2016), are employed. The hypotheses are as follows:
1. Accepting or rejecting the hypothesis that any relationship exists between the loan amounts and charge-offs
2. Accepting or rejecting the hypothesis that any relationship exists between the higher loan amounts and charge-offs
3. Accepting or rejecting the hypothesis that any relationship exists between the lower loan amounts and charge-offs
## 3 Statistical Analysis Pipeline Design
The problem statement consists of three main components: a) data exploration, b) descriptive analysis of loan duration, and c) real-time (fast) loan approval (or denial). Data exploration includes preprocessing, data cleaning, and feature engineering and selection, leading to a meaningful descriptive analysis and an accurate loan-duration prediction. In the real-time step, various statistical techniques are explored, including hypothesis testing, Student's T-Test, and ANOVA testing, as well as statistical models such as linear regression, logistic regression, cluster analysis, ANOVA tests, and correlation analysis Anderson et al. (2017); Yang et al. (2018); Strother et al. (2014).
### Data Exploration
Missing values are removed from the loan data, and two attributes are extracted from the preprocessed data shown in Figure 1: "loanAmnt", which refers to "the listed amount of the loan applied for by the borrower; if, at some point in time, the credit department reduces the loan amount, then it will be reflected in this value", and "debt_settlement_flag", which flags whether or not a borrower who has charged off is working with a debt-settlement company. The "debt_settlement_flag" - a binary feature - is considered a categorical attribute requiring conversion to numerical equivalents for statistical analysis Vafaei et al. (2019); Sarraf (2020). Also, the histogram of loan amounts shows how borrowers are distributed regarding loan amounts.
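A minimal preprocessing sketch of these steps is shown below; the file name is hypothetical, the column names follow the data dictionary quoted above, and the flag values are assumed to be "Y"/"N":

```python
import pandas as pd

# Hypothetical file; columns named as in the data dictionary above.
df = pd.read_csv("lendingclub_loans.csv", usecols=["loanAmnt", "debt_settlement_flag"])
df = df.dropna()  # drop broken records with missing values

# Convert the binary categorical flag to a numeric label.
df["charged_off"] = (df["debt_settlement_flag"] == "Y").astype(int)

df["loanAmnt"].hist(bins=10)  # distribution of loan amounts across borrowers
```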
### Hypothesis Testing
In this experiment, the T-Test is the primary method for deciding whether to accept or reject a hypothesis. A T-Test is a hypothesis-testing method with broad applications in the industry due to its simplicity and convergence capability with a small sample of data Staniewski (2016); Sarraf et al. (2020). Because a T-Test requires only a relatively small subset of data, the loan dataset is shuffled, and a subsample of 1000 observations is randomly selected from the charged-off samples, along with 1000 samples randomly selected from the on-time borrowers' observations, for further analysis Sarraf (2018). To explore the consistency of the T-Test results, analysis of variance (ANOVA) tests are applied to the same subsets as those used in the previous method. ANOVA tests demonstrate whether such groups offer statistically significant differences Quirk (2016); Sarraf et al. (2021).
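A sketch of this sampling and testing procedure, assuming the preprocessed `df` from the sketch above (note that SciPy's `ttest_ind` is two-sided by default, whereas the study ran one-tailed tests in G*Power):

```python
from scipy import stats

charged = df.loc[df["charged_off"] == 1, "loanAmnt"].sample(1000, random_state=42)
on_time = df.loc[df["charged_off"] == 0, "loanAmnt"].sample(1000, random_state=42)

# Two-sample T-Test on the balanced 2000-observation subset.
t_stat, p_t = stats.ttest_ind(charged, on_time)

# One-way ANOVA on the same two groups (equivalent to the T-Test for two groups).
f_stat, p_f = stats.f_oneway(charged, on_time)

reject_null = p_t < 0.05  # alpha = 0.05, as in the G*Power setup
```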
### Correlation Analysis
Correlation analysis is applied to the subsets to show the dependency between two features Schober et al. (2018). This analysis can indicate whether the loan amount impacts the number of borrowers who charged off. Correlation analysis provides additional exposure to the data, which might strengthen the acceptance or rejection of the three hypotheses Sarraf et al. (2016); Sarraf (2017).
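A sketch of the correlation step, again assuming the preprocessed `df` from above; with a binary charged-off label, the Pearson coefficient reduces to the point-biserial correlation:

```python
import pandas as pd
from scipy import stats

# Three balanced subsets, mirroring the repeated random draws described above.
for seed in (0, 1, 2):
    subset = pd.concat([
        df[df["charged_off"] == 1].sample(1000, random_state=seed),
        df[df["charged_off"] == 0].sample(1000, random_state=seed),
    ])
    r, p = stats.pearsonr(subset["loanAmnt"], subset["charged_off"])
    print(f"subset {seed}: r = {r:.6f}")
```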
### Results Visualization and Interpretation
The results of statistical analysis methods are visualized and interpreted to verify whether the hypotheses are accepted. Also, the visualization of results allows the company's data scientists to explore whether such outcomes from various techniques converge for decision-making and conclusion purposes.
## 4 Summary of Results
To perform an accurate T-Test, several data requirements must be met: a) test variables are continuous; b) test variables (observations) are independent; c) subsets are randomly selected; d) data distribution is approximately normal; e) variance scores of subsets and population are approximately consistent; and f) no outliers Sarraf et al. (2014, 2023); Sarraf and Noori (2021). In addition to these criteria, a balanced dataset design is required to conduct a meaningful ANOVA test, where the number of subjects in each group needs to be equal Harmonicioglu et al. (1999). Also, an ideal correlation analysis requires data to be independently collected as paired samples, preferably continuous numeric values Nerenberg and Essex (1990).
### Data Analysis
The first step of data analysis is exploring the distribution of observations regarding the number of on-time borrowers versus those who have charged off. The next step is to downsample the charged-off samples into subsets of 1000 observations. The same procedure was applied to on-time borrowers' observations (non-charged-off), and 1000 samples were randomly selected; thus, each subset included 2000 samples with the two classes equally represented Krishnan et al. (2016). The mean, standard deviation, and variance of each subset were calculated. The statistical measures of the subsets are highly similar, which suggests the need for statistical testing to produce interpretable results. Figure 1 shows a histogram of each subset where the number of bins is automatically calculated from the data (bin=10). The histogram results indicate that most of the issued loan amounts are in the range of [$5000, $20000].
Figure 1: Left: Distribution of borrowers. Right: Distribution of loan amounts charged off.
#### 4.1.1 Hypothesis 1
The G*Power statistical software application Faul et al. (2007) was used to perform a T-Test against each subset, including 2000 samples of charged-off and on-time borrowers' observations equally distributed. One-tailed T-Tests were conducted using an alpha error probability of 0.05 and a power of 0.95 (1 - beta error probability) to produce an actual power (decision-making criterion) for each subset. The results demonstrated that the actual power values were greater than 0.95, suggesting that the null hypothesis can be rejected, meaning that the "Loan Amount" affects whether a borrower charges off. An ANOVA test was conducted against each subset using G*Power, where the outcomes demonstrate that the actual power values are higher than 0.95, suggesting that the null hypothesis can be rejected, which means the two groups offer variance differences, so that the "Loan Amount" affects whether a borrower charges off. The correlation analysis was performed against each subset and produced scores of -0.005255, 0.061228, and 0.007396 per subset, where the results indicate no strong correlation between the loan amount and the status of charged-off borrowers. The correlation results are not aligned with the T-Tests, suggesting that further analysis is needed.
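The achieved power reported by G*Power can be approximated in code; a hedged sketch using `statsmodels`, where the effect size is an illustrative placeholder rather than a value derived from the study's samples:

```python
from statsmodels.stats.power import TTestIndPower

# Achieved power for two groups of n = 1000 at alpha = 0.05 (one-tailed).
power = TTestIndPower().power(effect_size=0.2, nobs1=1000, alpha=0.05,
                              ratio=1.0, alternative="larger")
# Per the decision criterion above, power > 0.95 leads to rejecting the null.
```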
#### 4.1.2 Hypothesis 2
To explore the second hypothesis regarding a relationship between higher "Loan Amount" and "Charged-off," each subset was sorted in descending order by loan amount, and the top 25% of observations were selected for analysis. The results revealed that all actual power values were higher than 0.95, suggesting that the null hypothesis should be rejected and indicating a strong relationship between the loan amount and charged-off borrowers.
#### 4.1.3 Hypothesis 3
The third hypothesis is that the bottom 25% of loan amounts would also show a statistical relationship with the charged-off borrowers. Each subset was sorted in descending order by the loan amount attribute, and the bottom 25% of observations were selected. The two-tailed T-Test (conducted in G*Power) revealed a strong relationship between the loan amount and charged-off accounts.
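A sketch of the quantile-based subsetting behind Hypotheses 2 and 3, reusing a balanced `subset` from the earlier sketches:

```python
from scipy import stats

q75 = subset["loanAmnt"].quantile(0.75)
q25 = subset["loanAmnt"].quantile(0.25)

top = subset[subset["loanAmnt"] >= q75]     # highest 25% of loan amounts (Hypothesis 2)
bottom = subset[subset["loanAmnt"] <= q25]  # lowest 25% of loan amounts (Hypothesis 3)

for part in (top, bottom):
    t, p = stats.ttest_ind(part.loc[part["charged_off"] == 1, "loanAmnt"],
                           part.loc[part["charged_off"] == 0, "loanAmnt"])
    print(p < 0.05)  # True: reject the null within this slice
```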
## 5 Discussion
The company formulated a hypothesis to explore the impact of "Loan Amount" as a dependent variable on an independent attribute referring to "Charge OFF Flag," showing whether a borrower has repaid the loan or charged it off. To do so, LendingClub decided to conduct T-Test and ANOVA hypothesis testing and correlation analysis. The hypothesis testing revealed a statistically significant difference at p-values less than.05, which is interpreted as an indication of the impact of the loan amount on loan repayment. However, the correlation analysis produced a low score, which disagreed with the results of hypothesis testing, and the company decided to perform a more in-depth analysis to locate the source of such divergence.
### Steps in Statistical Analysis
Statistical analysis includes various steps, such as data exploration, hypothesis testing, and visualization, where the interpretation of results is the last step that aims to explain the results of each step (or most steps) of the analysis De Vaus (2002), Sarraf (2022, 2019b). In general, an explanation of statistical results often covers four main areas: a) sample size, b) metrics of central tendency, c) distribution of data, and d) hypothesis testing Morgan et al. (2004).
#### 5.1.1 Dataset or Sample Size
The number of observations available for statistical analysis plays a crucial role in interpreting results. This number demonstrates whether the samples (observations) can be considered representative of analyzed data Goodhue et al. (2006). A significant difference between statistics and machine learning exists in terms of the number of samples required for experiments, where, for example, 50 observations can represent a population for statistical analysis. A significantly larger dataset is often required for developing a machine learning model.
#### 5.1.2 Measures of Central Tendency
The mean, median, and mode of the observations used for statistical analysis, along with the variance and standard deviation (measures of central tendency and dispersion), reveal the central gravity and spread of the observations Wilcox and Keselman (2003). Interpreting those metrics enables practitioners to discover outliers in the observations and explore the possibility of removing them from the analysis. Unlike machine learning model development, where outliers might not impact results significantly, outliers here can bias statistical results towards those extremes.
#### 5.1.3 Data Distribution
The spread of the data, quantified by the variance of the observations, shows how samples are distributed within a population Mardia (1975). Also, exploring the data distribution by computing a histogram can reveal the type of distribution (e.g., normal) and indicate whether the data are skewed towards the left or right of the histogram Mardia (1975). Interpreting the data distribution also reveals whether the data are multimodal, i.e., whether observations come from two or more distributions. Moreover, such interpretation can be used for accurate data normalization, removing outliers, and properly formulating hypotheses for future analyses or reiterations of the current analysis Silverman (1986).
#### 5.1.4 Hypothesis Testing
Interpretation of hypothesis testing comprises two steps: a) exploring the logic of formulating such a hypothesis and b) exploring the results of hypothesis testing Mullins (2002). In the first step, statisticians review the reasons for forming such a hypothesis by studying documents related to the business aspects of an organization. For example, statisticians may formulate a hypothesis for analysis because they have considered the types/amounts of loans granted as dependent variables (inputs) when predicting whether borrowers could repay Emekter et al. (2015); Mondal (2016). The logic behind such a hypothesis is explored and interpreted once the data are analyzed and the results produced. The second step is to interpret the hypothesis testing results, determine whether the hypothesis is accepted or rejected, and explore the confidence interval of such interpretations Berger and Mortera (1991). For example, the interpretation of hypothesis testing results for types of loans and successful repayment could potentially reveal a) whether types/amounts of loans are adequate metrics for predicting risks associated with a borrower; and b) how an organization can mitigate potential risks and update its criteria for granting loans Emekter et al. (2015).
### Limitations in Statistical Analysis
Statistical analysis encounters various limitations that make the interpretation of results challenging. As discussed earlier, the primary challenge of statistical analysis, relative to machine learning techniques, is the number of observations required to perform analysis Young (2018). A standard practice in statistical analysis is to sample a population randomly and test hypotheses against the subset of data that can raise concerns about whether the generated subset is a true representative of data Inohara and Kusumi (2011). By contrast, training machine learning algorithms require a significant amount of data, so practitioners assume that the number of samples or observations used to train the algorithms would represent the entire population Inohara and Kusumi (2011). Another limitation in interpreting the analysis results is how to relate findings to business problems and interpret the outcomes of hypothesis testing to address business problem statements Young (2018).
#### 5.2.1 Small Dataset
The size of the dataset or sample used for statistical analysis plays a crucial role in determining the extent to which the results can be generalized Pasini (2015). A small sample size imposes significant limitations on statistical analysis, where a small dataset serves as a somewhat unrepresentative sample of the entire population, causing different types of bias in the analysis results Fong et al. (2020). Also, a small dataset increases the risk that outliers in each population will negatively impact measures of central tendency that have been calculated based on samples out of distribution. In addition to the problem of outliers discussed earlier, a small dataset makes splitting data into training and testing highly challenging. Although statistical analysis methods employ all samples provided to implement models based on hypothesis testing, practitioners in the field often use unseen data to validate hypothesis testing results Fong et al. (2020); Pasini (2015). Another issue caused by a small sample size is an unpredicted increase in measurement errors where the error metrics used to evaluate the models produce highly varying results. To overcome the limitations imposed by a small dataset, the primary practice is to randomly shuffle the dataset and generate several subsets of data, repeating statistical analysis to ensure the results converge Kvesic and Dukic (2012).
#### 5.2.2 Cause and Effect
One of the challenges in interpreting statistical results relates to inconsistency between the hypotheses formulated and the outcomes of testing methods. Practitioners interpreting the statistical results might notice that the results are misaligned with the logic of hypothesis tests Doggett (2004). In such ambiguous circumstances, discovering the cause and effect in statistical analysis results conducted on specific business use cases is challenging since the interpretation disagrees with the predefined scenario Doggett (2004). This issue can arise when the hypothesis testing design does not cover the useful parameters in testing or when less powerful features and attributes in data are used for hypothesis testing Laland et al. (2011). It sometimes happens that practitioners or business teams helping design such statistical analysis misinterpret the results or overlook some findings and/or implications Doggett (2004); Laland et al. (2011).
Another source of issues includes a low confidence interval level and results lacking statistical significance Doggett (2004).
#### 5.2.3 Divergence of Results Obtained from Various Methods
A common challenge in interpreting statistical analysis results occurs when the results obtained from various techniques diverge Read and Cressie (2012). It is a widespread practice that statisticians design a statistical analysis using multiple techniques, such as T-Test, ANOVA, or regression, to explore whether the results produced by these techniques align. An agreement between the results from different methods enables an organization to interpret analytical results clearly and make firm recommendations. However, the research shows that hypothesis testing and other methods, such as correlation analysis or machine learning, sometimes produce different results, contrasting with other methods Read and Cressie (2012). Such an issue indicates that a systematic problem might exist in preparing samples or conducting hypothesis testing. The solution for this type of problem is offered case by case, where practitioners more familiar with the organization's business scope can suggest methods that produce results closer to the problem statement.
### Business Statistical Analysis and Interpretation
Business statistics, which include various types of analysis, focus on statistical methodologies aligned with an organization's business scope to improve the decision-making process, mitigate risks to the organization, and increase revenue Sun and Wang (2022). Interpretation of such analysis is crucial to the organization, and the process is expected to go beyond that of a simple report or presentation. The areas covered by business statistics include a) customer behavior prediction and trend extraction; b) data exploration, hypothesis testing, and interpretation, such as extensive visualization; c) enhancing business performance from various angles; and d) improving decision-making processes Sun and Wang (2022). To achieve such targets, business data analysts understand their organization's business objectives and explore data and results. Also, the root cause analysis is performed to extract in-depth technical insights regarding the organization's vulnerabilities, enabling the organization to inform its decision-making process Sun and Wang (2022).
### Reflection on the Statistical Analysis Process
The findings from the initial statistical application enable the company to redesign the statistical analysis processes to concentrate on those attributes that more substantially impact their business. Feature engineering--a systematic methodology--is necessary to reveal the relationships between dependent attributes and target variables Nargesian et al. (2017). Also, the company aims to explore other features highly correlated with potential target variables from the business perspective but uncorrelated with other dependent attributes Kotusev (2019).
#### 5.4.1 Potential Improvement
The process of statistical analysis at LendingClub requires several changes to better serve the company's business needs. The primary targets are to enhance the process of issuing loans, such as the duration of the loan approval process, and to mitigate financial risks to the company by offering borrowers a data-driven loan amount. LendingClub is to apply such changes to the statistical analysis and decision-making process by employing big data infrastructure for advanced multi-model data collection and analytics. In the first step, the company needs a plan demonstrating how to onboard new technology and its costs. The second step includes a broader statistical analysis, such as hypothesis testing, and uses the current data to assess whether specific statistical applications could broadly improve the company's performance. In the third step, LendingClub conducts research and recruits a third party to develop the required infrastructure.
#### 5.4.2 Required Infrastructure
Onboarding a large-scale system, such as an enabled big data analytics platform, is a significant change to LendingClub, where modifications have been performed to everything from databases to reporting systems. The first stage is to decide whether LendingClub would adopt a big data platform to the current system or entirely migrate to the new model. This decision allows the stakeholder to estimate the cost of a big data platform and start planning. Although the cost of system adaptation or migration to the big data platform requires detailed information, the migration to a cloud environment, for example, offering various big data services, would be a potential expansion of LendingClub's analytics in the future. Figure 2 illustrates the proposed steps for migrating the LendingClub data collection and analytics pipeline to a cloud-based environment that offers big data services such as Amazon Web Services (AWS) Al-Maawali et al. (2019); Mullins (2002). These steps consist of a) cloud assessment, b) proof of concept, c) data migration, d) application migration, e) leverage of the cloud, and f) optimization.
### Proposed Large-Scale Plan
The large-scale plan to enhance the current statistical analysis pipeline consists of two primary phases: a) designing and implementing an end-to-end data collection and processing pipeline that offers big data analytics, and b) increasing the number and quality of features Lee and Wang (2019). The current data collection pipeline collects data from various sources, and no broadly systematic methodology is employed to acquire such data. Gathering data from different providers (in-house or third-party) involves an extensive preprocessing pipeline, which might remove many observations to prepare a consistent dataset.
The proposed pipeline illustrated in Figure 3 offers various capabilities, including big data collection and data stream processing. The first component of the architecture is a user interface that enables it to receive data from external sources where the data could either be stored in a multi-model database or be in the form of real-time messaging input into an allocated database. The collected data can be transferred between data storage and real-time messaging place holders, which offers big data capabilities to host structured and unstructured data. The next architecture layer includes enabled big data processing components for batch processing, which oversees data preparation and preprocessing for further analysis Sarraf et al. (2019).
A similar component--the stream processing unit--prepares and preprocesses data streams for real-time analysis and applications. The preprocessed data are sent to the next component of the architecture, which encompasses the statistical analysis and machine learning methods, where such a block is considered the brain that orchestrates the data analytics. Statistical analysis or machine learning outcomes are stored in a "results database." The last layer of this orchestration is the user interface block, which enables practitioners in the organization to generate reports with visualizations that can be provided to leadership for decision-making purposes. An extra capability in the new architecture is scheduling automatic training machine learning models or performing statistical analysis.
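The component layout described above can be summarized declaratively; a hypothetical, illustrative sketch (the names are placeholders, not an actual deployment configuration):

```python
# Illustrative summary of the proposed pipeline layers, from ingestion to serving.
pipeline = {
    "ingestion": ["user_interface", "batch_sources", "real_time_messaging"],
    "storage": ["multi_model_database", "stream_buffer"],
    "processing": ["batch_processing", "stream_processing"],
    "analytics": ["statistical_analysis", "machine_learning"],
    "serving": ["results_database", "reporting_ui", "scheduled_retraining"],
}
```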
The second phase of the new data analytics platform aims to enhance the quality of feature selection, which concentrates on those attributes that contribute most to target variables. Quarter-based statistical analysis and feature engineering demonstrate what features should be collected with higher resolution. The advantage of using targeted data collection through particular data attributes is to reduce the cost of on-demand infrastructure by reducing the load on the architecture servers and analytical blocks. However, the main disadvantage of employing such a step is that it decreases the amount of data that can be collected, which might harm statistical analysis or predictive model development. Therefore, the organization must weigh the cost of massive data streaming and collection against the impact of selective data collection.
## 6 Conclusions
Statistical applications enable enterprises to establish a data-driven business plan that provides clear objectives to enhance the enterprise's performance, revenue, and risk management. This work summarized a strategic plan informed
Figure 2: Steps for migrating data pipeline to a cloud environment
by an already performed analysis for LendingClub - a financial company - that grants various forms of loans. The statistical results showed that different logic could be extracted from the currently collected data. Such results enabled LendingClub to improve its business scope and encouraged the company to onboard a big data platform. The plan recommended exploring enhanced feature engineering capabilities to acquire enormous amounts of data per year and developing predictive models to increase the company's revenue and lessen potential risks. LendingClub's plan also seeks to utilize artificial intelligence and machine learning technologies to implement robust models aligned with the company's business scope.
|
2301.11945 | Uranium Abundances and Ages of $R$-process Enhanced Stars with Novel U
II Lines | The ages of the oldest stars shed light on the birth, chemical enrichment,
and chemical evolution of the Universe. Nucleocosmochronometry provides an
avenue to determining the ages of these stars independent from stellar
evolution models. The uranium abundance, which can be determined for metal-poor
$r$-process enhanced (RPE) stars, has been known to constitute one of the most
robust chronometers known. So far, U abundance determination has used a
$single$ U II line at $\lambda3859$ \r{A}. Consequently, U abundance has been
reliably determined for only five RPE stars. Here, we present the first
homogeneous U abundance analysis of four RPE stars using two novel U II lines
at $\lambda4050$ \r{A} and $\lambda4090$ \r{A}, in addition to the canonical
$\lambda3859$ \r{A} line. We find that the U II lines at $\lambda4050$ \r{A}
and $\lambda4090$ \r{A} are reliable and render U abundances in agreement with
the $\lambda3859$ U abundance, for all the stars. We, thus, determine revised U
abundances for RPE stars, 2MASS J09544277+5246414, RAVE J203843.2-002333, HE
1523-0901, and CS 31082-001, using multiple U II lines. We also provide
nucleocosmochronometric ages of these stars based on the newly derived U, Th,
and Eu abundances. The results of this study open up a new avenue to reliably
and homogeneously determine U abundance for a significantly larger number of
RPE stars. This will, in turn, enable robust constraints on the
nucleocosmochronometric ages of RPE stars, which can be applied to understand
the chemical enrichment and evolution in the early Universe, especially of
$r$-process elements. | Shivani P. Shah, Rana Ezzeddine, Alexander P. Ji, Terese Hansen, Ian U. Roederer, Márcio Catelan, Zoe Hackshaw, Erika M. Holmbeck, Timothy C. Beers, Rebecca Surman | 2023-01-27T19:00:07Z | http://arxiv.org/abs/2301.11945v1 | # Uranium Abundances and Ages of \(R\)-process Enhanced Stars with Novel U II Lines1
###### Abstract
The ages of the oldest stars shed light on the birth, chemical enrichment, and chemical evolution of the Universe. Nucleocosmochronometry provides an avenue to determining the ages of these stars independent from stellar evolution models. The uranium abundance, which can be determined for metal-poor \(r\)-process enhanced (RPE) stars, has been known to constitute one of the most robust chronometers known. So far, U abundance determination has used a _single_ U ii line at \(\lambda 3859\) A. Consequently, U abundance has been reliably determined for only five RPE stars. Here, we present the first homogeneous U abundance analysis of four RPE stars using two novel U ii lines at \(\lambda 4050\) A and \(\lambda 4090\) A, in addition to the canonical \(\lambda 3859\) A line. We find that the U ii lines at \(\lambda 4050\) A and \(\lambda 4090\) A are reliable and render U abundances in agreement with the \(\lambda 3859\) U abundance, for all the stars. We, thus, determine revised U abundances for RPE stars, 2MASS J09544277+5246414, RAVE J203843.2-00233, HE 1523-0901, and CS 31082-001, using multiple U ii lines. We also provide nucleocosmochronometric ages of these stars based on the newly derived U, Th, and Eu abundances. The results of this study open up a new avenue to reliably and homogeneously determine U abundance for a significantly larger number of RPE stars. This will, in turn, enable robust constraints on the nucleocosmochronometric ages of RPE stars, which can be applied to understand the chemical enrichment and evolution in the early Universe, especially of \(r\)-process elements.
## 1 Introduction
Ages of the oldest stars aid our understanding of chemical enrichment and evolution in the early universe, the assembly history of our Galaxy (e.g., Marin-Franch et al., 2009; Bonaca et al., 2020; Xiang & Rix, 2022; Rix et al., 2022; Buder et al., 2022), and the age of the Universe (e.g., Bond et al., 2013; VandenBerg et al., 2014; Jimenez et al., 2019; Valcin et al., 2020; Abdalla et al., 2022). Most techniques used to infer stellar ages, including isochrone-placement and asteroseismology, depend on a detailed understanding of low-metallicity stellar-evolution models, which is challenging and still evolving (e.g., Miglio et al., 2013; Epstein et al., 2014; Joyce & Chaboyer, 2015; Tayar et al., 2017; Catelan, 2018; Valentini et al., 2019). On the other hand, nucleocosmochronometry, a radioactive-dating technique, offers an independent avenue to determining the ages of some of the oldest stars (Soderblom, 2010; Catelan, 2018).
Nucleocosmochronometry uses the decay of long-lived actinides, uranium (\({}^{238}\)U; \(\tau_{1/2}=4.47\) Gyr) and thorium (\({}^{232}\)Th; \(\tau_{1/2}=14.05\) Gyr), to estimate the ages of metal-poor stars (Francois et al., 1993; Cowan et al., 1991; Cayrel et al., 2001; Frebel & Kratz, 2009). U and Th are created solely via the rapid neutron-capture process (\(r\)-process) (Burbidge et al., 1957; Cameron, 1957). Therefore, \(r\)-process-enhanced (RPE) stars, classified as having [Eu/Fe] \(>+0.3\)1 (Beers & Christlieb, 2005; Holmbeck et al., 2020), have been some of the best candidates for employing nucleocosmochronometry (Frebel, 2018). RPE stars are typically metal-poor ([Fe/H] \(\lesssim-1.5\); Frebel (2018)), offering the ability to detect the weak absorption lines of U and Th in their spectra. Moreover, the \(r\)-process enrichment of RPE stars is the result of only a few \(r\)-process nucleosynthetic events, removing the need for galactic chemical enrichment models in nucleocosmochronometry (Frebel, 2018; Arnould & Goriely, 2020). In practice, the absolute ages of the stars are determined by using the observed present-day abundance ratios and theoretical production ratios (PRs) of U and/or Th to coproduced \(r\)-process elements, e.g., U/Th, U/X, and Th/X, where X refers to a lighter stable \(r\)-process element, such as Eu, Os, and Ir (e.g., Cowan et al., 1997, 1999; Cayrel et al., 2001; Hill et al., 2002; Frebel et al., 2007; Placco et al., 2017).
Footnote 1: [A/B] = \(\log(N_{A}/N_{B})_{\rm Star}-\log(N_{A}/N_{B})_{\rm Solar}\), where \(N\) is the number density of the element.
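The age estimate itself follows from the first-order decay of an abundance ratio; a minimal sketch using the half-lives quoted above (the production-ratio argument is a placeholder, since published PR values depend on the \(r\)-process model):

```python
import numpy as np

def chronometric_age(log_ratio_pr, log_ratio_obs, t_half_a=4.47, t_half_b=14.05):
    """Age in Gyr from the decayed ratio N_a/N_b of two radioactive species.

    log(N_a/N_b)_obs = log(N_a/N_b)_PR - (lam_a - lam_b) * t / ln(10);
    for U/Th the prefactor ln(10)/(lam_U - lam_Th) is ~21.8 Gyr per dex.
    """
    lam_a = np.log(2.0) / t_half_a  # 238U decay constant (Gyr^-1)
    lam_b = np.log(2.0) / t_half_b  # 232Th decay constant (Gyr^-1)
    return np.log(10.0) * (log_ratio_pr - log_ratio_obs) / (lam_a - lam_b)

# Placeholder example (values are not from this work):
# chronometric_age(log_ratio_pr=-0.25, log_ratio_obs=-0.85)  # ~13 Gyr
```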
One of the major systematic uncertainties in nucleocosmochronometry is the PRs of Th and U to lighter \(r\)-process elements like Eu (Goriely & Arnould, 2001; Schatz et al., 2002). This issue has been prominently highlighted by the negative Th/Eu stellar ages obtained for 30% of RPE stars, termed as actinide-boost stars (Roederer et al., 2009; Mashonkina et al., 2014; Holmbeck et al., 2018). The negative stellar ages are a result of the observed Th/Eu abundance ratios being higher than the Th/Eu PRs predicted by current \(r\)-process models. More generally, the application of the Th/Eu chronometer to RPE stars has led to a large range in ages from 21 Gyr to -9 Gyr, even though these stars are metal-poor (Ji & Frebel, 2018; Holmbeck et al., 2018). These anomalies have indicated that the astrophysical conditions of \(r\)-process nucleosynthetic events may be varying event-to-event, with the PRs of actinides to lighter \(r\)-process elements sensitive to these changes. Consequently, no one set of Th/X and U/X PR may be applicable to all RPE stars (Holmbeck et al., 2019).
On the other hand, the U/Th chronometer results in high-fidelity stellar-age estimates even for the actinide-boost stars (Cayrel et al., 2001; Hill et al., 2002; Frebel et al., 2007; Placco et al., 2017; Holmbeck et al., 2018). Since U and Th have similar nuclear masses and are synthesized along similar reaction channels during the \(r\)-process, their PR is robust to major shortcomings of \(r\)-process models (Arnould & Takahashi, 1999; Goriely & Clerbaux, 1999; Schatz et al., 2002). Additionally, any variations in the \(r\)-process astrophysical conditions are expected to impact the actinides, U and Th, equally, so that their PR is generally constant across all \(r\)-process nucleosynthetic events (e.g., Holmbeck et al., 2019), with the uncertainty in the predicted value of the PR largely due to the unknown nuclear data of the neutron-rich actinides (Holmbeck et al., 2019; Lund et al., 2022).
However, it is particularly challenging to reliably determine U abundance. So far, in the context of nucleocosmochronometry, U abundance has been confidently determined for only five highly \(r\)-process enhanced ([Eu/Fe] \(>+0.7\)) stars: namely CS 31082-001 (Cayrel et al., 2001; Hill et al., 2002), HE 1523-0901 (Frebel et al., 2007), 2MASS J09544277+5246414 (Holmbeck et al., 2018), RAVE J203843.2-002333 (Placco et al., 2017), and CS 29497-004 (Hill et al., 2017). Canonically, and also in the case of these five stars, a _single_ U ii line at \(\lambda\)3859 A has been used to determine the U abundance of RPE stars. This line is blended with the wing of a strong Fe i line and a poorly-constrained CN feature, rendering a reliable U abundance determination challenging. Moreover, the proximity to the CN feature limits the study of U in stars with strong C enhancements. Interestingly, the U abundance of the Przybylski star (HD 101065), a chemically peculiar star, has been determined using 17 U ii transitions (Shulyak et al., 2010).
In this study, we have homogeneously determined the U abundance of four highly RPE stars using two novel U ii lines at \(\lambda\)4050 and \(\lambda\)4090 Å, in addition to the canonical \(\lambda\)3859 Å line. We revisit the U abundances of 2MASS J09544277+5246414 (hereafter J0954+5246), RAVE J203843.2-002333 (hereafter J2038-0023), HE 1523-0901, and CS 31082-001 to investigate the utility of the two new U ii lines. This work is aimed at serving as a benchmark test for the reliable determination of U abundances and subsequently stellar ages of RPE stars using multiple U ii lines, in an effort to advance the field of nucleocosmochronometry.
Hereafter, this paper is organized as follows: Section 2 describes the observations and data reduction of the stars. Section 3 discusses their atmospheric stellar-parameter estimates. The chemical-abundance analysis of the pertinent elements, including U, Th, and Eu, is described in section 4. In Section 5, we present the nucleocosmochronometric ages of the stars using the newly derived U, Th, and Eu abundances. Finally, in section 6, we discuss our results and in section 7, we present the main conclusions of this work.
## 2 Data acquisition and reduction
A robust abundance analysis of U requires high signal-to-noise (S/N) and high-resolution spectral data, since U ii transition lines are weak and surrounded by blends. We obtained new spectroscopic data for J0954+5246 and J2038-0023 using high-resolution spectrographs, Keck/HIRES and Magellan/MIKE, respectively. For HE 1523-0901 and CS 31082-001, we utilized VLT/UVES archival data.
Following the data reduction and radial-velocity correction, orders of each exposure were normalized using a natural cubic spline function with sigma clipping and the strong lines masked. The normalized orders were co-added and then stitched2 to furnish the final spectrum of each star. We summarize the spectral data properties of all the stars in Table 1, including the wavelength range, resolving power, and S/N per pixel of the final spectra.
Footnote 2: [https://github.com/alexji/alexmods/blob/master/alexmods/specutils/continuum.py](https://github.com/alexji/alexmods/blob/master/alexmods/specutils/continuum.py)
### 2MASS J09544277+5246414
We observed J0954+5246 with Keck/HIRESr (Vogt et al., 1994) on 2021 March 26 for a total of 6.6h. The observations were broken down as 8 exposures of 1800s, 1 exposure of 1400s, and 1 exposure of 1350s. The observations were taken with the red cross-disperser, using the \(5.0\times 0\farcs 40\) slit and with \(1\times 1\) binning, which yielded a resolving power of \(R\sim 86\),600. We used the blue and the green CCD chip data, which we reduced with MAKEE3 using standard settings. The full wavelength range of the spectra was 3600-6800 A. We corrected each spectrum for radial velocity by cross-correlating against a high-resolution spectrum of HD 122563. We normalized, co-added, and stitched the spectra to furnish the final spectrum with S/N per pixel of 185 at 4050 A.
Footnote 3: [https://sites.astro.caltech.edu/~tb/makee/](https://sites.astro.caltech.edu/~tb/makee/)
### RAVE J203843.2-002333
We observed J2038-0023 with Magellan/MIKE (Bernstein et al., 2003) on 2018 July 8, 2018 July 24, 2018 September 26, and 2018 November 11, for a total of 15.6h. We used the \(0\farcs 35\) slit and \(2\times 1\) binning, which yielded a resolving power of \(R\sim 83\),000. Data for this star already existed (Placco et al., 2017), but with \(2\times 2\) binning, which could have undersampled the profiles of the U ii lines, so we reobserved the star. We reduced the spectra from each night, together, using CarPy (Kelson et al., 2000; Kelson, 2003) and corrected the radial velocity by cross-correlating against a high-resolution spectrum of HD 122563. We normalized, coadded, and stitched the spectra to furnish the final spectrum with S/N per pixel of 175 at 4050A.
### HE 1523-0901
We used the data from Frebel et al. (2007), who observed the star with VLT/UVES (Dekker et al., 2000) in 2005 and 2006, using image slicer No. 2 and \(0\farcs 45\) slit width to achieve a resolving power of \(R\sim 75\),000. The data are publicly available on the ESO raw data archive4. We used the BLUE 437 nm setting observations from 2006 April 22, 2006 April 23, and 2006 May 19, which amounted to a total exposure time of 3.0h. The wavelength range of the final spectrum was 3758-4990 A, which included all the lines of interest. We reduced the data using ESOReflex(Freudling et al., 2013), with order extraction set to the recommended linear method for data collected with an image slicer. We corrected the radial velocity of each exposure by cross-correlating with a high quality MIKE/Magellan spectrum of HE 1523-0901. We normalized, co-added, and stitched the spectra to furnish the final spectrum with S/N per pixel of 200 at 4050 A.
Footnote 4: [http://archive.eso.org/eso/eso_archive_main.html](http://archive.eso.org/eso/eso_archive_main.html)
### CS 31082-001
We used data from Hill et al. (2002), who observed the star with VLT/UVES (Dekker et al., 2000) in 2000 using \(0\farcs 45\) slit-width. The data are publicly available on the ESO raw data archive. We used the BLUE arm 380-510 nm setting observations from 2000 October 17 and 2000 October 19, which totaled 2h of exposure. The resulting resolving power was \(R\sim 75\),000. We reduced the data using ESOReflex (Freudling et al., 2013), with order extraction set to the recommended optimal method. We corrected the radial velocity of each exposure by cross-correlating to a high-quality MIKE/Magellan spectrum of HE 1523-0901. We normalized, co-added, and stitched the spectra to furnish the final spectrum with S/N of 125 per pixel at 4050 Å.
## 3 Stellar Parameters
We derived the stellar parameters of all the stars spectroscopically. For this purpose, we used SMHr5, the next generation spectroscopic analysis software of SMH (Casey, 2014). SMHr wraps the radiative transfer code, MOOG(Sneden, 1973) and allows the employment of various grid model atmospheres. We used MOOG with the proper treatment of scattering included6(Sobeck et al., 2011) and employed the ATLAS9 grid of 1D plane-parallel model atmospheres computed under the assumption of local thermodynamic equilibrium (LTE) (Castelli & Kurucz, 2003).
Footnote 5: We used [https://github.com/eholmbeck/smhr-rpa/tree/refactor-scatterplot](https://github.com/eholmbeck/smhr-rpa/tree/refactor-scatterplot) forged from [https://github.com/andycasey/smhr/tree/refactor-scatterplot](https://github.com/andycasey/smhr/tree/refactor-scatterplot)
Footnote 6: [https://github.com/alexji/moog17scat](https://github.com/alexji/moog17scat)
We used equivalent widths (EWs) of Fe i and Fe ii lines for stellar parameter determination of all the stars. We measured the EWs within SMHr by fitting Gaussian or Voigt profiles. We obtained the effective temperature (\(T_{\rm eff}\)) by inducing equilibrium in the Fe i line-abundances with respect to the excitation potential of the lines, the surface gravity (\(\log~{}g\)) by minimizing the difference between the mean Fe i and Fe ii abundances, and the microturbulent velocity (\(\xi\)) by inducing an equilibrium in Fe i line-abundances with respect to the reduced EW of the lines. We solved for the stellar parameters, \(T_{\rm eff}\), \(\log~{}g\), and \(\xi\), simultaneously with multiple iterations and corrected the resulting best-fit \(T_{\rm eff}\) to the photometric scale described in Frebel et al. (2013). Subsequently, we re-derived \(\log~{}g\) and \(\xi\) as described above for \(T_{\rm eff}\) fixed to this corrected value. We list the resulting stellar parameters of J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001 in Table 2.
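Conceptually, the excitation-equilibrium step can be sketched as follows; `fe_line_abundances` is a hypothetical wrapper around the radiative-transfer call (MOOG with an ATLAS9 model), not part of SMHr's public interface:

```python
import numpy as np

def excitation_slope(teff, logg, xi, lines, fe_line_abundances):
    """Slope of Fe I line abundances vs. excitation potential at trial parameters."""
    ab = fe_line_abundances(teff, logg, xi, lines)   # one abundance per Fe I line
    chi = np.array([line["chi"] for line in lines])  # excitation potentials (eV)
    return np.polyfit(chi, ab, 1)[0]

# Teff is tuned until this slope vanishes (excitation equilibrium); log g until the
# mean Fe I and Fe II abundances agree (ionization balance); and xi until the slope
# of Fe I abundance against reduced EW vanishes; the three are iterated jointly to
# convergence.
```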
The stellar parameters derived in this work agree with those determined previously in the literature within uncertainties for all the stars in our sample, except for J2038-0023. The primary disagreement for J2038-0023 is in \(\log~{}g\), where we derived \(\log~{}g=0.57\), whereas Placco et al. (2017) derived \(\log~{}g=1.20\) using the EW technique. Upon further investigation into this discrepancy, we suspect that it mostly originates from different implementations of scattering in MOOG. For homogeneity with the other stars in our sample, we adopt our derived stellar parameters for J2038-0023.
## 4 Chemical-Abundance Analysis
We derived chemical abundances for all the stars using EWs and spectral synthesis in SMHr. Though abundances for these stars have been previously reported in the literature, we re-derived abundances of the relevant elements for a homogeneous and consistent analysis. This enabled us to robustly constrain the transitions directly blended with the U ii lines, as well as those neighboring the U ii lines, which could affect the local continuum placement. We derived abundances of most light elements, including Na, Mg, Al, Si, Ca, Ti, Cr, Fe, and Zn, using the EW method. We derived abundances of the remaining light elements and the neutron-capture (n-cap) elements, including C, N, V, Mn, Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Dy, Tm, Er, Th, and U, with spectral synthesis of \(\pm 5\) A regions around the transition lines. For the abundance determination of the light and n-cap elements, we used a subset of the transition list compiled by Roederer et al. (2018). For abundance analysis with spectral synthesis, we generated the atomic-parameters linelists with linemake7(Placco et al., 2021), which included the transition wavelengths (\(\lambda\)), excitation potentials (\(\chi\)), oscillator strengths (\(\log~{}gf\)), and hyperfine structure of the
| Star Name | Telescope/Instrument | Wavelength Range (Å) | Slit Width | Resolving Power (\(\Delta\lambda/\lambda\)) | Total Exposure (h) | S/N pix\({}^{-1}\) at 4050 Å |
| --- | --- | --- | --- | --- | --- | --- |
| 2MASS J09544277+5246414 | Keck/HIRESb | 3600-6800 | 0.40″ | 86,600 | 6.6 | 200 |
| RAVE J203843.2-002333 | Magellan/MIKE | 3200-9900 | 0.35″ | 83,000 | 15.6 | 175 |
| HE 1523-0901 | VLT/UVES | 3758-4990 | 0.45″ | 75,000 | 3.0 | 200 |
| CS 31082-001 | VLT/UVES | 3800-5100 | 0.45″ | 70,000 | 2.0 | 125 |

Table 1: Spectral Data Properties
transition lines. We used the updated atomic parameters of CH transitions from Masseron et al. (2014)8 and \(r\)-process isotopic ratios from Sneden et al. (2008) for the spectral synthesis of Ba, Eu, Nd, Sm, Yb, and Pb. We further detail the abundance determination of the U ii line blends, U, Th, and Eu in sections 4.1, 4.2, 4.3, and 4.4, respectively. We describe uncertainty analysis of the derived abundances in section 4.5.
Footnote 8: [https://github.com/alexji/linemake](https://github.com/alexji/linemake)
| Star | Source | \(T_{\rm eff}\) (K) | \(\log g\) (cgs) | [Fe/H] | \(\xi\) (km s\({}^{-1}\)) |
| --- | --- | --- | --- | --- | --- |
| J0954+5246 | This work | 4410 ± 150 | 0.61 ± 0.30 | −2.96 ± 0.14 | 2.74 ± 0.20 |
| | Holmbeck et al. (2018) | 4340 ± 125 | 0.41 ± 0.20 | −2.99 ± 0.10 | 2.28 ± 0.20 |
| J2038-0023 | This work | 4519 ± 150 | 0.57 ± 0.30 | −3.12 ± 0.12 | 2.26 ± 0.20 |
| | Placco et al. (2017) | 4630 ± 100 | 1.20 ± 0.20 | −2.91 ± 0.10 | 2.15 ± 0.20 |
| HE 1523-0901 | This work | 4607 ± 150 | 0.94 ± 0.30 | −2.98 ± 0.14 | 2.65 ± 0.20 |
| | Frebel et al. (2007) | 4630 ± 40 | 1.00 ± 0.30 | −2.95 ± 0.2 | 2.60 ± 0.30 |
| CS 31082-001 | This work | 4793 ± 150 | 1.36 ± 0.30 | −2.94 ± 0.11 | 1.68 ± 0.20 |
| | Hill et al. (2002) | 4825 ± 120 | 1.50 ± 0.30 | −2.9 ± 0.13 | 1.80 ± 0.20 |

Table 2: Stellar Parameters
| | Source | J0954+5246 (a) | J2038-0023 (b) | HE 1523-0901 (c) | CS 31082-001 (d) |
| --- | --- | --- | --- | --- | --- |
| \(\log\epsilon\)(Fe) | This Work | 4.55 ± 0.15 | 4.41 ± 0.12 | 4.53 ± 0.14 | 4.58 ± 0.11 |
| | Other | 4.51 ± 0.12 | 4.59 ± 0.12 | 4.50 ± 0.20 | 4.60 ± 0.13 |
| \(\log\epsilon\)(C) | This Work | 4.97 ± 0.20 | 5.11 ± 0.20 | 5.17 ± 0.20 | 5.75 ± 0.20 |
| | Other | 4.94 ± 0.20 | 5.08 ± 0.20 | 5.14 | 5.82 ± 0.05 |
| \({}^{12}\)C/\({}^{13}\)C | This Work | 4.0 | 4.6 | 3.5 | 19.0 |
| | Other | ⋯ | ⋯ | ~3-4 | >20.0 |
| \(\log\epsilon\)(N) | This Work | 5.82 ± 0.20 | 5.56 ± 0.20 | 5.88 ± 0.20 | ⋯ |
| | Other | ⋯ | ⋯ | 5.43 | <5.22 |
| \(\log\epsilon\)(La) | This Work | −1.06 ± 0.09 | −1.06 ± 0.05 | −0.47 ± 0.11 | −0.65 ± 0.07 |
| | Other | −1.15 ± 0.10 | −0.76 ± 0.07 | −0.63 | −0.60 ± 0.04 |

Note. – Source of other work: (a) Holmbeck et al. (2018), (b) Placco et al. (2017), (c) Frebel et al. (2007), (d) Hill et al. (2002).

Table 3: Abundances and Isotopic Ratios of U ii Line Blends
### Blends: Fe, C, N, and La
We took special care to constrain the abundances of elements that have transitions blended with the weak U ii lines. We identified that transitions of Fe i, CH, CN, and La ii are blended with the U ii lines investigated in this work. We obtained the Fe abundance using EW measurements of a subset of acceptable Fe i lines listed in Roederer et al. (2018), for each sample star. We estimated the uncertainty on the mean Fe abundance as the standard deviation in the abundances of the chosen Fe i lines. We determined the C abundance by fitting the \(\lambda 4313\) A \(G\)-band of CH. Based on the quality of the data and the synthetic spectrum fits, we set a fiducial uncertainty estimate of \(\pm 0.2\) dex on the C abundance of all the sample stars. With the C abundance fixed, we determined the isotopic ratio of \({}^{12}\)C/\({}^{13}\)C by fitting the \({}^{13}\)CH feature at \(\lambda\)4217 A. We determined the N abundance by fitting the \(\lambda 3876\) A CN molecular band for all our sample stars, except for CS 31082-001, for which we could not derive a reliable N abundance. For the
Figure 1: Spectral synthesis of the U ii line at \(\lambda\)3859.57 Å for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001. The red-solid line traces the best-fit synthetic model to the observed data in black points. The red-shaded region depicts abundance variation within \(\pm\) 0.2 dex of the best-fit U abundance. The blue-dashed line traces the synthetic model with no U. Important neighboring transition lines are labeled. The corresponding residuals between the observed data and the synthetic models are also shown.
N abundance of J0954+5246, RAVE J203843.2-002333, and HE 1523-0901, we estimated an uncertainty of \(\pm 0.2\) dex, based on the spectral synthesis fits. For each sample star, we determined the La abundance with the spectral synthesis of a subset of blend-free and acceptable La ii transitions listed in Roederer et al. (2018). We estimated the uncertainty on the mean La abundance as the standard deviation in the La abundances of the chosen La ii lines. We list the Fe, C, N, and La abundances and the \({}^{12}\)C/\({}^{13}\)C isotopic ratio, determined for each star, in Table 3, along with their corresponding values from previous literature studies. The abundances determined in this work are in agreement with the values from the literature, within uncertainties. The exception to this case is our derived La abundance for J2038-0023, which disagrees with the abundance derived by Placco et al. (2017). However, this discrepancy is simply attributed to the difference in our adopted stellar parameters (see section 3).
Figure 2: Spectral synthesis of the U ii line at \(\lambda\)4050.04 Å for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001. The red-solid line traces the best-fit synthetic model to the observed data in black points. The red-shaded region depicts the abundance variation within \(\pm\) 0.2 dex of the best-fit U abundance. The blue-dashed line traces the synthetic model with no U, and the black-dotted line traces the synthetic model with no U and no La. Important neighboring transition lines are labeled. The corresponding residuals between the observed data and synthetic models are also shown.
### Uranium
So far in the literature, U abundances of RPE stars have been primarily determined using a single U ii line at \(\lambda\)3859 Å\({}^{9}\). Moreover, U abundance analyses have been carried out by different studies for individual stars, with each study varying in the employed method and atomic data.
Footnote 9: An exception to this case is Roederer et al. (2018), who also used the U ii line at \(\lambda\)4241 Å to place an upper limit on the U abundance of an RPE star, HD 222925.
In this study, we performed, for the first time, a homogeneous analysis to determine U abundances of four highly RPE stars using three U ii lines at \(\lambda\)3859 Å, \(\lambda\)4050 Å, and \(\lambda\)4090 Å. We generated the linelist for the spectral synthesis of the U ii line-regions with linemake. We used the \(\log\,gf\) measurements of the U ii lines from Nilsson et al. (2002a), who measured them with high accuracy by combining their branching fraction calculations
Figure 3: Spectral synthesis of the U ii line at \(\lambda\)4090.13 Å for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001. The red-solid line traces the best-fit synthetic model to the observed data in black points. The red-shaded region depicts the abundance variation within \(\pm\) 0.2 dex of the best-fit U abundance. The blue-dashed line traces the synthetic model with no U, and the black-dotted line traces the synthetic model with no U and no Fe. Important neighboring transition lines are labeled. The corresponding residuals between the synthetic models and the observed data are also shown.
with the radiative lifetime measurements of 6 U ii levels from Lundberg et al. (2001). We list the atomic parameters employed for the three U ii transitions in Table 4.
We determined the final U abundance of each star as the weighted average of the U abundances from the three U ii lines, i.e., \(\log\epsilon(\rm U)=\sum_{i}(w_{i}\log\epsilon_{i})/\sum_{i}w_{i}\), where \(\log\epsilon_{i}\) is the U abundance from line \(i\) and \(w_{i}=1/\Delta(\rm stat)^{2}_{i}\) for line \(i\) (Ji et al., 2019; McWilliam et al., 1995). We detail the method we used to estimate \(\Delta(\rm stat)_{i}\), the statistical uncertainty on \(\log\epsilon_{i}\), in Section 4.5. For the total uncertainty on the weighted-average U abundances, we accounted for systematic uncertainties (from stellar parameters and blends) and statistical uncertainties (from \(\log~{}gf\) measurements and continuum placement). We discuss this further in Section 4.5. We list the final weighted-average U abundance with the associated total uncertainty for all the sample stars in Table 5.
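To make the weighting concrete, the following minimal Python sketch (our illustration, not the authors' code) reproduces the weighted average for J0954+5246, using the per-line U abundances from Table 5 and the per-line \(\Delta(\rm stat)\) values from Table 6:

```python
import numpy as np

# Per-line U abundances for J0954+5246 (Table 5) and their statistical
# uncertainties (Table 6), for the lambda 3859, 4050, and 4090 lines.
log_eps = np.array([-2.45, -2.50, -2.60])
d_stat = np.array([0.11, 0.10, 0.15])

w = 1.0 / d_stat**2                          # weights w_i = 1 / Delta(stat)_i^2
log_eps_u = np.sum(w * log_eps) / np.sum(w)  # weighted-average abundance
d_stat_avg = 1.0 / np.sqrt(np.sum(w))        # propagated statistical uncertainty

print(f"log eps(U) = {log_eps_u:.2f}, Delta(stat) = {d_stat_avg:.2f}")
# -> log eps(U) = -2.50, Delta(stat) = 0.07, matching Tables 5 and 6
```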
#### 4.2.1 The \(\lambda\)3859 Å U ii Line
While the \(\lambda\)3859.57 Å line is the strongest U ii line discernible in the spectra of stars, blends from other transitions make its spectral synthesis quite difficult. This line is situated in the wing of a strong Fe i line at \(\lambda\)3859.91 Å and is further blended with a CN feature at \(\lambda\)3859.65 Å, which also resides in the wing of the Fe i line. Therefore, it is essential to constrain the wing of the Fe i line as well as the CN feature for a reliable U abundance determination.
To fit the wings of the strong Fe i line, we used the Unsöld approximation (Unsöld, 1955) multiplied by a factor of \(-8.77\) for the Van der Waals hydrogen collision-damping coefficient of the Fe i line, for all the sample stars. Furthermore, we adjusted the derived Fe i abundances of the stars by \(-0.15\), \(-0.05\), \(+0.02\), and \(+0.03\) dex for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively. For most of the stars, the adjustment made to the derived Fe i abundance is small and lies within the uncertainty on the Fe i abundance.
To fit the CN feature, we adjusted the derived N abundance of the stars by \(+0.0\), \(+0.04\), and \(+0.12\) dex for J0954+5246, J2038-0023, and HE 1523-0901, respectively. We find these adjustments acceptable, since they are within the \(\pm 0.2\) dex uncertainty on the derived N abundances. For CS 31082-001, we used \(\log\epsilon(\rm N)=5.22\), which is the upper limit placed on the N abundance by Hill et al. (2002), as we could not derive a reliable N abundance for the star. We used the derived C abundance of the stars without any adjustments.
To obtain a better fit to the neighboring line features, we blue-shifted the transition wavelengths of the Nd ii line at 3859.42 Å and the Fe i line at 3859.21 Å by 0.07 Å. To fit the Nd ii line, we adjusted the derived Nd abundance of the stars by \(-0.12\), \(+0.0\), \(+0.04\)
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Species & \(\lambda\) (Å) & \(\chi\) (eV) & \(\log~{}gf\) & \% Uncertainty in \(gf\) \\ \hline U ii & 3859.57 & 0.036 & \(-0.067\) & 12 \\ U ii & 4050.04 & 0.000 & \(-0.706\) & 7 \\ U ii & 4090.13 & 0.217 & \(-0.184\) & 13 \\ \hline \end{tabular} Note. – \(\log~{}gf\) values and \% uncertainty on \(gf\) values taken from Table 2 of Nilsson et al. (2002a). Excitation potential taken from linemake.
\end{table}
Table 4: Atomic Parameters of U ii Transition Lines.
Figure 4: Spectral synthesis around the \(\lambda\)4090.13 Å U ii line region for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001. The normalized fluxes of the stars are scaled for illustration. The red-solid line traces the best-fit synthetic model to the observed data in black points. The blue-solid line traces the synthetic model with no U and no Fe, depicting the continuum at the U ii transition. Important neighboring transition lines are labeled. This zoomed-out plot depicts the best placement of the local continuum of the spectral synthesis models for our sample stars.
and \(-0.05\) dex for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively. The adjustment is small for most stars and within the uncertainty of the Nd abundance.
We determined \(\log\epsilon\)(U)\({}_{3859}=-2.45\), \(-2.50\), \(-1.93\), and \(-2.00\) for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively. In Figure 1, we show the resulting best-fit spectral synthesis model for the observed data of each star, along with the residuals between the model and the data. We also depict the \(\pm 0.2\) dex abundance variation from the derived \(\lambda 3859\) U abundance with the red-shaded region, for all the sample stars.
#### 4.2.2 The \(\lambda\)4050 Å U ii Line
The \(\lambda 4050.04\) Å U ii line is blended with a La ii line at \(\lambda 4050.07\) Å, which needs to be constrained well for U abundance determination. linemake obtains \(\log~{}gf\) = 0.11 for the La ii line from Corliss & Bozman (1962), but we used an updated \(\log~{}gf\) = 0.428, as measured by Bord et al. (1996). We substantiated this choice with spectral synthesis of the La ii line in RPE stars with minimal U contamination, which showed that the Bord et al. (1996) value provides a better fit to the observed spectra of these stars. Additionally, we blue-shifted the transition wavelength of the La ii line by 0.02 Å to enable a better fit to the observed data. We also accounted for the hyperfine splitting (HFS) structure of this La ii line, as described in Appendix A. We applied the described prescription for the La ii line uniformly across all the sample stars to determine their U abundances. We employed the derived La abundance of each star in the spectral synthesis without any adjustment.
We determined \(\log\epsilon\)(U)\({}_{4050}=-2.50\), \(-2.34\), \(-2.00\), and \(-1.60\) for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively. We show the corresponding best-fit spectral synthesis models for all the stars in Figure 2, along with the resulting residuals between the model and the observed data. We also depict \(\pm 0.2\) dex variations in the \(\lambda 4050\) U abundance with a red-shaded region for all the stars. We generally find a good fit to the \(\lambda 4050\) Å spectral region, as seen in Figure 2. We note an over-estimation of the synthetic model flux around \(\lambda 4049.95\) Å for J0954+5246 and CS 31082-001. This could possibly indicate an unidentified line between the Gd ii and U ii lines that manifests more strongly in J0954+5246 and CS 31082-001 than in J2038-0023 and HE 1523-0901. Alternatively, the abundance of the La ii HFS structure may not be well represented by the mean La abundances determined for the stars.
#### 4.2.3 The \(\lambda\)4090 Å U ii Line
The \(\lambda 4090.13\) Å U ii line is blended with one weak Fe i line at \(\lambda 4090.07\) Å. We derived the \(\lambda 4090\) U abundance of all the sample stars with spectral synthesis, specifically, by fitting the U ii line to the red-ward wing of the Fe-U absorption feature. We determined \(\log\epsilon\)(U)\({}_{4090}=-2.60\), \(-2.50\), \(-2.15\), and \(-2.00\) for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively. We show the corresponding best-fit spectral synthesis models for all the stars in Figure 3, along with residuals between the models and the observed data. We also depict the \(\pm 0.2\) dex variation in the best-fit \(\lambda 4090\) U abundance with a red-shaded region for all the sample stars. We note that the two absorption features blue-ward and red-ward of the U ii line are currently unidentified in the linemake linelist. We tested the effect of these unidentified features by adding "fabricated" lines to mimic them and found that they have minimal-to-no effect on the U abundance determination. In Figure 4, we also show the best-fit spectral synthesis for a wider wavelength window of this line region, for all the stars. This figure depicts that even though the immediately neighboring lines of the \(\lambda 4090.13\) Å U ii line are unidentified, we found an optimal continuum placement for all the stars using other spectral regions.
### Thorium
We determined Th abundances for all of the sample stars using Th ii lines at \(\lambda 4019.13\) Å, \(\lambda 4086.52\) Å, and \(\lambda 4094.75\) Å. We generated the linelists for spectral synthesis with linemake, using \(\log~{}gf\) values from Nilsson et al. (2002b).
The Th ii line at \(\lambda 4019.13\) Å is the strongest Th line detectable in the optical spectra of stars. It is blended with a Ce ii line at \(\lambda 4019.06\) Å, a Fe i line at \(\lambda 4019.04\) Å, and \({}^{13}\)CH lines at \(\lambda 4018.98\) Å and \(\lambda 4019.15\) Å (see Figure 5). For the spectral synthesis of this region, we employed the abundances of the blends without any adjustments. We determined \(\log\epsilon\)(Th)\({}_{4019}=-1.92\), \(-1.70\), \(-1.22\), and \(-1.18\) for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively. The corresponding best-fit spectral synthesis model is shown in Figure 5 for each star, along with the residual between the model and the observed data. We note that the synthetic spectrum is overestimated around the \(\lambda 4018.98\) Å and \(\lambda 4019.25\) Å regions for all the sample stars. This suggests a need to revisit the atomic parameters of the lines in this spectral region and perhaps identify unknown transitions. Nevertheless, we expect minimal effect of these wing-features on the Th abundance, which was robustly determined by constraining the fit of the synthetic spectrum to the core of the absorption feature.
The Th ii line at \(\lambda 4086.52\) Å is situated next to a La ii line and partly blended with a Ce ii line. With spectral synthesis of this line-region, we determined \(\log\epsilon(\mathrm{Th})_{4086}=-1.70\), \(-1.63\), \(-0.95\), and \(-1.09\) for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively. For the spectral synthesis of CS 31082-001, we adjusted the derived Ce abundance of the star by \(-0.05\) dex.
The Th ii line at \(\lambda 4094.75\) Å is blended with a CH line and partly blended with an Er ii line. For the purpose of a good spectral synthesis fit to the region, we allowed an adjustment of the Er abundance within \(\pm 0.2\) dex of the derived Er stellar abundance for all of the sample stars. Since the Er ii line is blended with only a section of the blue-ward wing of the Th ii line, any adjustment of the Er abundance had minimal effect
Figure 5: Spectral synthesis of the Th ii line at \(\lambda 4019.13\) Å for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001. The red-solid line traces the best-fit synthetic model to the observed data in black points. The blue-dashed line traces the synthetic model with no Th, and the black-dotted line traces the synthetic model with no Th, Fe, C, and Ce. The red-shaded region depicts abundance variations within \(\pm 0.2\) dex. Important neighboring transition lines are labeled. The corresponding residuals between the synthetic models and the observed data are also shown.
on the synthetic-spectrum fit to the core of the Th ii line. Subsequently, we determined \(\log\epsilon(\rm{Th})_{4095}=-1.76\), \(-1.57\), \(-1.01\), and \(-1.10\) for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively.
The final Th abundance of each sample star was obtained as the mean of the \(\lambda\)4019.13, \(\lambda\)4086.52, and \(\lambda\)4094.75 Th abundances. We determined the mean Th abundances to be \(\log\epsilon(\rm{Th})=-1.79\), \(-1.63\), \(-1.06\), and \(-1.12\) for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively. We list the Th abundances obtained for each line and the corresponding mean Th abundance for each star in Table 5, along with the uncertainty estimates. Table 5 also lists the Th abundances determined in previous literature studies for comparison. For J0954+5246, HE 1523-0901, and CS 31082-001, we obtain good agreement with the Th abundances published in the literature. For J2038-0023, we note some discrepancy, which we attribute to the difference in the adopted stellar parameters (see section 3).
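As a quick consistency check (our arithmetic, using the per-line Th abundances quoted above for J2038-0023),

\[\log\epsilon(\mathrm{Th})=\tfrac{1}{3}\left(-1.70-1.63-1.57\right)\approx-1.63,\]

in agreement with the value listed in Table 5.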
### Europium
We determined the Eu abundance of each sample star using Eu ii lines at \(\lambda\)4219 Å, \(\lambda\)4205 Å, and \(\lambda\)4435 Å. We determined the mean \(\log\epsilon(\rm{Eu})=-1.16\), \(-1.16\), \(-0.53\), and \(-0.81\) for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, respectively. We list the mean Eu abundance, along with the uncertainty estimates and Eu abundance estimates from previous literature studies, in Table 5. For J0954+5246, HE 1523-0901, and CS 31082-001, we find our derived Eu abundances to be in good agreement with the literature estimates within uncertainties. For J2038-0023, we note a discrepancy in the abundances, which we attribute to the difference in the adopted stellar parameters.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Source & \(\log\epsilon(\rm{X})\) & J0954\(+\)5246 & \multicolumn{1}{c}{J2038-0023 } & \multicolumn{1}{c}{HE 1523-0901 } & \multicolumn{1}{c}{CS 31082-001 } \\ \hline Other Work & \(\log\epsilon(\rm{U})_{3859}\) & \(-2.13\pm\ 0.20\) & \(-2.14\pm\ 0.20\) & \(-2.06\pm\ 0.12\) & \(-1.92\pm\ 0.17\) \\ This Work & \(\log\epsilon(\rm{U})_{3859}\) & \(-2.45\pm\ 0.30\) & \(-2.50\pm\ 0.26\) & \(-1.93\pm\ 0.18\) & \(-2.00\pm\ 0.22\) \\ & \(\log\epsilon(\rm{U})_{4050}\) & \(-2.50\pm\ 0.33\) & \(-2.34\pm\ 0.30\) & \(-2.00\pm\ 0.48\) & \(-1.60\pm\ 0.21\) \\ & \(\log\epsilon(\rm{U})_{4090}\) & \(-2.60\pm\ 0.30\) & \(-2.50\pm\ 0.24\) & \(-2.15\pm\ 0.28\) & \(-2.00\pm\ 0.25\) \\ & \(\log\epsilon(\rm{U})\) & \(-2.50\pm\ 0.29\) & \(-2.47\pm 0.21\) & \(-1.96\pm\ 0.25\) & \(-1.87\pm\ 0.19\) \\ Other Work & \(\log\epsilon(\rm{Th})\) & \(-1.13\pm 0.10\) & \(-1.24\pm 0.10\) & \(-1.2\pm 0.05\) & \(-0.98\pm 0.13\) \\ This Work & \(\log\epsilon(\rm{Th})_{4019}\) & \(-1.92\pm 0.09\) & \(-1.70\pm 0.05\) & \(-1.22\pm 0.11\) & \(-1.18\pm 0.04\) \\ & \(\log\epsilon(\rm{Th})_{4086}\) & \(-1.70\pm 0.09\) & \(-1.63\pm 0.05\) & \(-0.95\pm 0.11\) & \(-1.09\pm 0.04\) \\ & \(\log\epsilon(\rm{Th})_{4095}\) & \(-1.76\pm 0.09\) & \(-1.57\pm 0.05\) & \(-1.01\pm 0.11\) & \(-1.10\pm 0.04\) \\ & \(\log\epsilon(\rm{Th})\) & \(-1.79\pm 0.18\) & \(-1.63\pm 0.21\) & \(-1.06\pm 0.19\) & \(-1.12\pm 0.16\) \\ Other Work & \(\log\epsilon(\rm{Eu})\) & \(-1.19\pm 0.10\) & \(-0.75\pm 0.10\) & \(-0.62\pm 0.05\) & \(-0.76\pm 0.13\) \\ This Work & \(\log\epsilon(\rm{Eu})\) & \(-1.16\pm 0.12\) & \(-1.16\pm 0.13\) & \(-0.53\pm 0.08\) & \(-0.81\pm 0.12\) \\ \hline & Chronometer & J0954\(+\)5246 & J2038-0023 & HE 1523-0901 & CS 31082-001 \\ \hline Age (Gyr) & U/Th & \(11.1\pm 6.4\) & \(13.5\pm 4.8\) & \(16.6\pm 5.1\) & \(11.1\pm 4.0\) \\ \((\pm{\rm sys}\pm{\rm stat}\pm{\rm PR})\) & & \((\pm 5.7\pm 1.9\pm 2.2)\) & \((\pm 3.8\pm 2.1\pm 2.2)\) & \((\pm 4.2\pm 2.0\pm 2.2)\) & \((\pm 2.8\pm 1.8\pm 2.2)\) \\ Age (Gyr) & U/Eu & \(12.0\pm 4.3\) & \(11.3\pm 3.1\) & \(14.2\pm 3.8\) & \(7.3\pm 2.6\) \\ \((\pm{\rm sys}\pm{\rm stat}\pm{\rm PR})\) & & \((\pm 3.9\pm 1.0\pm 1.6)\) & \((\pm 2.2\pm 1.4\pm 1.6)\) & \((\pm 3.3\pm 1.0\pm 1.6)\) & \((\pm 1.6\pm 1.3\pm 1.6)\) \\ \hline \end{tabular} Note. – *Nucleocosmochronometric ages listed are obtained in this work. See text for details on uncertainty estimation for the elemental abundances and stellar ages. Source of other work: \({}^{\rm a}\)Holmbeck et al. (2018),\({}^{\rm b}\)Placco et al. (2017),\({}^{\rm c}\)Frebel et al. (2007),\({}^{\rm d}\)Hill et al. (2002).
\end{table}
Table 5: U, Th, and Eu Abundances and Nucleocosmochronometric Ages.*
### Uncertainty Analysis
For the U abundances of the sample stars, we homogeneously accounted for various sources of systematic (\(\Delta\)(sys)) and statistical (\(\Delta\)(stat)) uncertainties. For \(\Delta\)(sys), we considered the uncertainties on the stellar parameters (\(T_{\rm eff}\), \(\log~{}g\), and \(\xi\)) and the abundances of the blending elements. We list the individual systematic uncertainty components, \(\Delta T_{\rm eff}\), \(\Delta\)\(\log~{}g\), \(\Delta\xi\), and \(\Delta\)(blend), in Table 6. For \(\Delta\)(stat), we considered the uncertainties on the \(\log~{}gf\) values (\(\Delta\)(loggf)) and the continuum placement of the synthetic spectra (\(\Delta\)(cont)), which we also list in Table 6. For each U ii line, we estimated the individual uncertainty components, \(\Delta T_{\rm eff}\), \(\Delta\)\(\log~{}g\), \(\Delta\xi\), \(\Delta\)(blend), \(\Delta\)(loggf), and \(\Delta\)(cont).
To estimate the U abundance uncertainties from stellar parameters, we independently changed each stellar parameter by its uncertainty. We thus changed \(T_{\rm eff}\) by \(+150\) K, \(\log~{}g\) by \(+0.3\) dex, and \(\xi\) by \(+0.2\) km/s, for all the stars. For every stellar parameter, we re-derived the abundances of key elements and re-synthesized the U ii lines. We report the resulting change in the U abundances as \(\Delta T_{\rm eff}\), \(\Delta\)\(\log~{}g\), and \(\Delta\xi\), for the respective stellar parameter.
We estimated \(\Delta\)(blend) by changing the abundance of the blending element (e.g., Fe and La) by \(\pm 1\sigma\) and re-deriving the U abundance. Here \(\sigma\) is the standard deviation in the abundance of the blending element, which we have also adopted as the uncertainty on the mean abundance of the respective blending element (see section 4.1). We further limited the change in the abundance of the blending element to ensure that the new synthetic spectrum flux was within the S/N of the observed spectrum. We considered the following blending elements: Fe for the \(\lambda\)3859 Å and \(\lambda\)4090 Å U ii lines and La for the \(\lambda\)4050 Å U ii line. While the \(\lambda\)3859 Å U ii line is also blended with a CN feature, we find that the U abundance determination is most sensitive to the Fe abundance.
We estimated \(\Delta\)(\(\log~{}gf\)) by varying the \(\log~{}gf\) values by the measurement uncertainties listed in Nilsson et al. (2002a) and re-deriving the U abundances. We estimated \(\Delta\)(cont) for each U ii line by changing the local continuum placement in the spectral synthesis of the U ii line by \(\pm 0.5\%\) and then re-deriving the U abundance.
We determined the total uncertainty (\(\Delta\)(total)) on the U abundance of each U ii line as the quadrature sum of the U ii line's systematic and statistical uncertainties. In turn, we determined \(\Delta\)(sys) for each U ii line as the quadrature sum of \(\Delta\)(\(T_{\rm eff}\)), \(\Delta\)(\(\log~{}g\)), \(\Delta\)(\(\xi\)), and \(\Delta\)(blend). Similarly, we determined \(\Delta\)(stat) for each U ii line as the quadrature sum of the U ii line's \(\Delta\)(\(\log~{}gf\)) and \(\Delta\)(cont). For all the stars, we list the resulting \(\Delta\)(sys), \(\Delta\)(stat), and \(\Delta\)(total) for each U ii line in Table 6.
For the final U abundance of each sample star, we took the weighted average of the U abundances from the three U ii lines. Therefore, \(\log\epsilon({\rm U})=\sum_{i}(w_{i}\log\epsilon_{i})/\sum_{i}w_{i}\), where \(\log\epsilon_{i}\) is the U abundance from line \(i\) and \(w_{i}=1/\Delta\)(stat)\({}^{2}_{i}\). Here \(\Delta\)(stat)\({}_{i}\) is the \(\Delta\)(stat) uncertainty estimated above for the U ii line \(i\). We determined \(\Delta\)(total) on the weighted-average U abundance of each star as the quadrature sum of the corresponding \(\Delta\)(sys) and \(\Delta\)(stat). For the weighted-average U abundance, the systematic uncertainty components, \(\Delta\)(\(T_{\rm eff}\)), \(\Delta\)(\(\log~{}g\)), \(\Delta\)(\(\xi\)), and \(\Delta\)(blend), are determined by taking the average of these components estimated for the three U ii lines. We then determined \(\Delta\)(sys) as the quadrature sum of the averaged \(\Delta\)(\(T_{\rm eff}\)), \(\Delta\)(\(\log~{}g\)), \(\Delta\)(\(\xi\)), and \(\Delta\)(blend). For the weighted-average U abundance of each sample star, we determined \(\Delta\)(stat) by propagating the \(\Delta\)(stat) estimates of each U ii line through the weighted-average formula, i.e., \(1/\Delta\)(stat)\({}^{2}=\sum_{i}w_{i}\), where \(w_{i}=1/\Delta\)(stat)\({}^{2}_{i}\). We list the final \(\Delta\)(sys), \(\Delta\)(stat), and \(\Delta\)(total) for the weighted-average U abundance of each star in Table 6.
We also determined systematic and statistical uncertainties for the Th and Eu abundances of all the stars. We determined \(\Delta\)(sys) as the quadrature sum of \(\Delta T_{\rm eff}\), \(\Delta\)\(\log~{}g\), and \(\Delta\)\(\xi\). We determined \(\Delta\)(stat) as the standard error of the mean Th and Eu abundances, i.e., \(\Delta\)(stat) = \(\sigma/\sqrt{n}\), where \(\sigma\) is the standard deviation of the abundances determined with different lines and \(n\) is the total number of lines used. We then computed \(\Delta\)(total) for the mean Th and Eu abundances of all the sample stars as the quadrature sum of the corresponding \(\Delta\)(sys) and \(\Delta\)(stat).
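The quadrature-sum bookkeeping above is straightforward to verify numerically. The short Python sketch below (our illustration, not the authors' code) recombines the published uncertainty components of the \(\lambda\)3859 U ii line of J0954+5246 from Table 6:

```python
import numpy as np

def quad_sum(*components):
    """Combine independent uncertainty components in quadrature."""
    return float(np.sqrt(np.sum(np.square(components))))

# Component uncertainties for the lambda 3859 U ii line of J0954+5246 (Table 6).
d_sys = quad_sum(0.10, 0.10, 0.05, 0.23)  # dTeff, dlogg, dxi, d(blend)
d_stat = quad_sum(0.05, 0.10)             # d(log gf), d(cont)
d_total = quad_sum(d_sys, d_stat)

print(f"sys = {d_sys:.2f}, stat = {d_stat:.2f}, total = {d_total:.2f}")
# -> sys = 0.27, stat = 0.11, total = 0.30, matching Table 6
```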
## 5 Ages with Novel U ii Lines
The U abundance of a star can be used to determine the star's age using nucleocosmochronometry. We homogeneously determined ages for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001, for the first time with U abundances derived using three U ii lines. We determined the ages using equations 1 and 2 for the U/Th and U/Eu chronometers, respectively (Cayrel et al., 2001).
\[t=21.8[\log~{}\epsilon({\rm U/Th})_{0}-\log\epsilon({\rm U/Th})_{\rm obs}]~{}{ \rm Gyr} \tag{1}\]
\[t=14.8[\log~{}\epsilon({\rm U/Eu})_{0}-\log\epsilon({\rm U/Eu})_{\rm obs}]~{}{ \rm Gyr} \tag{2}\]
Here \(\log\epsilon\)(U/X)\({}_{0}\) is the production ratio (PR) of the chronometer and \(\log\epsilon\)(U/X)\({}_{\rm obs}\) is the present-day abundance ratio of the chronometer as determined in this work. We took PRs from Schatz et al. (2002), who used waiting-point calculations to estimate site-independent PRs of \(r\)-process elements. We list the final ages from the two chronometers and the associated uncertainties in Table 5. We find that the U/Th and U/Eu ages agree within uncertainties for all the stars. There is a relatively large discrepancy between the U/Th and U/Eu ages of CS 31082-001, which can be attributed to its actinide-boost nature.
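In code form, the two chronometers reduce to one-line functions. The sketch below (our illustration; the PR values of Schatz et al. (2002) are deliberately left as a placeholder rather than restated here) shows how the present-day abundance ratios from Table 5 enter equations 1 and 2:

```python
def age_u_th(log_eps_u, log_eps_th, log_pr_u_th):
    """U/Th nucleocosmochronometric age in Gyr (equation 1)."""
    return 21.8 * (log_pr_u_th - (log_eps_u - log_eps_th))

def age_u_eu(log_eps_u, log_eps_eu, log_pr_u_eu):
    """U/Eu nucleocosmochronometric age in Gyr (equation 2)."""
    return 14.8 * (log_pr_u_eu - (log_eps_u - log_eps_eu))

# Example call with the CS 31082-001 abundances from Table 5; LOG_PR_U_TH
# stands in for the Schatz et al. (2002) U/Th production ratio.
# age = age_u_th(log_eps_u=-1.87, log_eps_th=-1.12, log_pr_u_th=LOG_PR_U_TH)
```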
We estimated the uncertainty on the ages by propagating the uncertainties of the PRs and the present-day observed abundance ratios. As a result, our age uncertainties consist of statistical, systematic, and PR components. We obtained the uncertainties on the PRs from Schatz et al. (2002), which are 0.10 dex for the U/Th
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \(\Delta T_{\rm eff}\) (K) & \(\Delta\) log \(g\) (cgs) & \(\Delta\)\(\xi\) (km/s) & \(\Delta\)(blend) & \(\Delta\)(sys) & \(\Delta\)log \(gf\) & \(\Delta\)(cont) & \(\Delta\)(stat) & \(\Delta\)(total) \\ \hline J0954\(+\)5246 & \(+\)150 & \(+\)0.30 & \(+\)0.20 & \(\pm\)1\(\sigma\) & & & \(\pm\)0.5\% & & \\ \hline \(\log\epsilon\)(U)\({}_{3859}\) & \(+\)0.10 & \(+\)0.10 & \(+\)0.05 & \(\pm\)0.23 & \(\pm\)0.27 & \(\pm\)0.05 & \(\pm\)0.10 & \(\pm\)0.11 & \(\pm\)0.30 \\ \(\log\epsilon\)(U)\({}_{4050}\) & \(+\)0.05 & \(+\)0.05 & \(+\)0.02 & \(\pm\)0.30 & \(\pm\)0.31 & \(\pm\)0.03 & \(\pm\)0.10 & \(\pm\)0.10 & \(\pm\)0.33 \\ \(\log\epsilon\)(U)\({}_{4090}\) & \(+\)0.08 & \(+\)0.09 & \(+\)0.04 & \(\pm\)0.23 & \(\pm\)0.26 & \(\pm\)0.06 & \(\pm\)0.15 & \(\pm\)0.15 & \(\pm\)0.30 \\ \(\log\epsilon\)(U) & \(+\)0.08 & \(+\)0.08 & \(+\)0.04 & \(\pm\)0.25 & \(\pm\)0.28 & \(\cdots\) & \(\pm\)0.07 & \(\pm\)0.29 \\ \(\log\epsilon\)(Th) & \(+\)0.15 & \(+\)0.08 & \(+\)0.00 & \(\cdots\) & \(\pm\)0.17 & \(\cdots\) & \(\pm\)0.05 & \(\pm\)0.18 \\ \(\log\epsilon\)(Eu) & \(+\)0.09 & \(+\)0.07 & \(-\)0.02 & \(\cdots\) & \(\pm\)0.12 & \(\cdots\) & \(\pm\)0.01 & \(\pm\)0.12 \\ \(\log\epsilon\)(U/Th) & \(-\)0.07 & \(+\)0.00 & \(-\)0.00 & \(\pm\)0.25 & \(\pm\)0.26 & \(\cdots\) & \(\pm\)0.09 & \(\pm\)0.28 \\ \(\log\epsilon\)(U/Eu) & \(-\)0.01 & \(+\)0.01 & \(+\)0.06 & \(\pm\)0.25 & \(\pm\)0.26 & \(\cdots\) & \(\pm\)0.07 & \(\pm\)0.27 \\ \hline J2038-0023 & \(+\)150 & \(+\)0.30 & \(+\)0.20 & \(\pm\)1\(\sigma\) & & & \(\pm\)0.5\% & & \\ \hline \(\log\epsilon\)(U)\({}_{3859}\) & \(+\)0.10 & \(+\)0.05 & \(+\)0.00 & \(\pm\)0.20 & \(\pm\)0.23 & \(\pm\)0.06 & \(\pm\)0.10 & \(\pm\)0.12 & \(\pm\)0.26 \\ \(\log\epsilon\)(U)\({}_{4050}\) & \(+\)0.04 & \(+\)0.17 & \(+\)0.04 & \(\pm\)0.14 & \(\pm\)0.23 & \(\pm\)0.04 & \(\pm\)0.20 & \(\pm\)0.20 & \(\pm\)0.31 \\ \(\log\epsilon\)(U)\({}_{4090}\) & \(+\)0.10 & \(+\)0.08 & \(+\)0.02 & \(\pm\)0.08 & \(\pm\)0.15 & \(\pm\)0.05 & \(\pm\)0.20 & \(\pm\)0.21 & \(\pm\)0.26 \\ \(\log\epsilon\)(U) & \(+\)0.08 & \(+\)0.10 & \(+\)0.02 & \(\pm\)0.14 & \(\pm\)0.19 & \(\cdots\) & \(\pm\)0.09 & \(\pm\)0.21 \\ \(\log\epsilon\)(Th) & \(+\)0.18 & \(+\)0.11 & \(+\)0.01 & \(\cdots\) & \(\pm\)0.21 & \(\cdots\) & \(\pm\)0.03 & \(\pm\)0.21 \\ \(\log\epsilon\)(Eu) & \(+\)0.11 & \(+\)0.07 & \(+\)0.00 & \(\cdots\) & \(\pm\)0.13 & \(\cdots\) & \(\pm\)0.02 & \(\pm\)0.13 \\ \(\log\epsilon\)(U/Th) & \(-\)0.10 & \(-\)0.01 & \(+\)0.01 & \(\pm\)0.14 & \(\pm\)0.17 & \(\cdots\) & \(\pm\)0.10 & \(\pm\)0.20 \\ \(\log\epsilon\)(U/Eu) & \(-\)0.03 & \(+\)0.03 & \(+\)0.02 & \(\pm\)0.14 & \(\pm\)0.15 & \(\cdots\) & \(\pm\)0.09 & \(\pm\)0.17 \\ \hline HE 1523-0901 & \(+\)150 & \(+\)0.30 & \(+\)0.20 & \(\pm\)1\(\sigma\) & & & \(\pm\)0.5\% & & \\ \hline \(\log\epsilon\)(U)\({}_{3859}\) & \(+\)0.10 & \(+\)0.10 & \(+\)0.03 & \(\pm\)0.08 & \(\pm\)0.17 & \(\pm\)0.04 & \(\pm\)0.06 & \(\pm\)0.07 & \(\pm\)0.18 \\ \(\log\epsilon\)(U)\({}_{4050}\) & \(+\)0.01 & \(+\)0.25 & \(+\)0.20 & \(\pm\)0.25 & \(\pm\)0.41 & \(\pm\)0.03 & \(\pm\)0.25 & \(\pm\)0.25 & \(\pm\)0.48 \\ \(\log\epsilon\)(U)\({}_{4090}\) & \(+\)0.10 & \(+\)0.10 & \(+\)0.05 & \(\pm\)0.11 & \(\pm\)0.19 & \(\pm\)0.05 & \(\pm\)0.21 & \(\pm\)0.22 & \(\pm\)0.28 \\ \(\log\epsilon\)(U)\({}_{4090}\) & \(+\)0.07 & \(+\)0.15 & \(+\)0.09 & \(\pm\)0.15 & \(\pm\)0.24 & \(\cdots\) & \(\pm\)0.07 & \(\pm\)0.25 \\ \(\log\epsilon\)(Th) & \(+\)0.14 & \(+\)0.11 & \(+\)0.0 & \(\cdots\) & \(\pm\)0.18 & \(\cdots\) & \(\pm\)0.06 & \(\pm\)0.19 \\ 
\(\log\epsilon\)(Eu) & \(+\)0.06 & \(+\)0.04 & \(-\)0.03 & \(\cdots\) & \(\pm\)0.08 & \(\cdots\) & \(\pm\)0.02 & \(\pm\)0.08 \\ \(\log\epsilon\)(U/Th) & \(-\)0.07 & \(+\)0.04 & \(+\)0.09 & \(\pm\)0.15 & \(\pm\)0.19 & \(\cdots\) & \(\pm\)0.09 & \(\pm\)0.21 \\ \(\log\epsilon\)(U/Eu) & \(+\)0.01 &
chronometer and 0.11 dex for the U/Eu chronometer. We determined \(\Delta\)(sys) for the present-day abundance ratios as the quadrature sum of the \(\Delta T_{\rm eff}\), \(\Delta\)log \(\,g\), \(\Delta\xi\), and \(\Delta\)(blend) uncertainties of the ratios. We determined \(\Delta\)(stat) for the present-day abundance ratios as the quadrature sum of the \(\Delta\)(stat) determined for the elements in the ratio. We list the resulting \(\Delta\)(sys) and \(\Delta\)(stat) for the present-day U/Th and U/Eu abundance ratios in Table 6. We list the individual systematic, statistical, and production-ratio components of the age uncertainties in Table 5. We determined the total uncertainty on the age as the quadrature sum of the systematic, statistical, and production-ratio uncertainties. We further discuss the ages and the respective uncertainties from the various components in Section 6.4.
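As a sanity check on the PR component (our arithmetic), propagating the PR uncertainties through equations 1 and 2 gives

\[\Delta t_{\rm PR}({\rm U/Th})=21.8\times 0.10\approx 2.2\ {\rm Gyr},\qquad\Delta t_{\rm PR}({\rm U/Eu})=14.8\times 0.11\approx 1.6\ {\rm Gyr},\]

which matches the PR components listed for every star in Table 5.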
## 6 Discussion
To test the reliability of the two new U ii lines at \(\lambda\)4050 and \(\lambda\)4090 Å, we performed the first homogeneous U abundance analysis of four highly RPE stars ([Eu/Fe] \(>+0.7\)) with these new lines, in addition to the canonical \(\lambda\)3859 Å U ii line. The stars chosen for the analysis are four of the five stars with U abundances previously determined in the literature. While a U abundance was determined for CS 29497-004 (Hill et al., 2017), our analysis of the star's UVES/VLT spectra indicated an almost non-existent signature of U at all three U ii lines. Therefore, we left CS 29497-004 out of further analysis. We now discuss and establish the reliability of the two new U ii lines in the other four stars.
### Reliability of the New U ii Lines: Line Abundances and Uncertainties
Table 5 lists the final U abundance determined at each U ii line, along with its estimated uncertainty, for all the sample stars. We note that the U abundances from the three U ii lines agree well, within uncertainties, for all the sample stars. We also find that the \(\lambda\)3859 U abundance determined in this work is consistent, within uncertainties, with the previous literature estimates for all the stars; the \(\lambda\)3859 U abundances from the literature are also listed in Table 5.
Moreover, we find that, for all the sample stars, the uncertainties on the U abundances of all three U ii lines are of the same order. This indicates that the new U ii lines provide similar precision for U abundance determination as the canonical \(\lambda\)3859 Å U ii line. For all three U ii lines, we have homogeneously taken into account various sources of systematic and statistical uncertainties (see section 4.5). Generally, we find the systematic uncertainties, specifically from the blending elements, to be the dominant source of the total uncertainty on the U ii line abundances. We also find that U abundance determination with all three U ii lines is sensitive to the continuum placement of the synthetic model. The sensitivity of the individual U ii lines to the various sources of uncertainties and the similar precision offered by the U ii lines underscore the advantage of using three U ii lines for U abundance determination, instead of just one.
We note, however, an exceptionally high uncertainty estimated for the \(\lambda\)4050 U abundance of HE 1523-0901, on the order of \(\sim 0.5\) dex. This high estimate is driven by the La ii blend, since the star has a relatively strong La feature compared to the other stars (see Figure 2). This highlights the fact that blends can have a significant impact on U abundance determination. However, we find that the spectral synthesis fit of the region is very good for the derived La abundance of this star.
As noted in section 4.2, even though the spectral synthesis fits to the U ii lines are good -- the goodness of the fits is further corroborated by the agreement between U abundances from the different U ii lines -- there is an indication of unidentified features in the \(\lambda\)4090 Å and possibly \(\lambda\)4050 Å spectral regions. Given the immense potential of these U ii lines for advancing nucleocosmochronometry and \(r\)-process studies, we recommend a detailed investigation into the atomic data, especially laboratory measurements of the transition lines in these spectral regions.
### Reliability of the New U ii Lines: Residuals between Line Abundances
We further demonstrate the reliability of the new U ii lines with Figure 6, which shows (i) the residuals between the \(\lambda\)4050 and \(\lambda\)3859 U abundances in circle data points and (ii) the residuals between the \(\lambda\)4090 and \(\lambda\)3859 U abundances in square data points, for all the sample stars. The residuals are plotted against the respective star's \(T_{\rm eff}\), log \(\,g\), [Fe/H], [C/Fe], [Eu/Fe], and spectrum S/N in different panels. The uncertainties on the residual data points are calculated by propagating the total uncertainties of the individual line abundances.
First, we note that the residuals of the U abundances lie within \(\pm 0.2\) dex (shown by the shaded grey region) for all the stars, with one exception. For CS 31082-001, while the residual between the \(\lambda\)4090 and \(\lambda\)3859 U abundances is 0.0 dex, the residual between the \(\lambda\)4050 and \(\lambda\)3859 U abundances is +0.40 dex. This indicates that there might be an unidentified transition line in the \(\lambda\)4050 Å U ii line region that is more prominent for CS 31082-001 than for the other stars. Alternatively, the abundance of the La ii HFS structure in this region may not be well represented by the mean La abundance determined for the star. While this relatively large residual
may signify the need to better constrain the atomic data of this spectral region and/or the abundance of the La ii HFS structure, the uncertainty on the residual overlaps with the \(\pm 0.2\) dex shaded region, allaying any serious concern.
Second, we note that the residuals of the U abundances show no discernible trend with respect to \(T_{\rm eff}\), \(\log~{}g\), [Fe/H], [C/Fe], [Eu/Fe], and spectrum S/N. This indicates that our current spectral synthesis models are of high fidelity and no identifiable systematic biases are percolating into the U abundance determinations. Together, the \(\pm 0.2\) dex range of the residuals between the U ii line abundances and the absence of any significant trend in the residuals with respect to key atmospheric and chemical properties, as well as the data quality of the sample stars, establish the reliability of the two new U ii lines at \(\lambda 4050\) Å and \(\lambda 4090\) Å.
### Mean U Abundance with Multiple U ii Lines
We provide revised U abundances for the RPE stars J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001 via a homogeneous analysis of the three U ii lines at \(\lambda 3859\) Å, \(\lambda 4050\) Å, and \(\lambda 4090\) Å. The resulting mean U abundance is computed as a weighted average of the U abundances from the three U ii lines, with the weights assigned based on the statistical uncertainty on the U abundance of each line (see section 4.2 for more details). We find that the total uncertainty of the weighted-average U abundances is on the order of \(\sim 0.2\) dex for all sample stars. On the other hand, the total uncertainty of individual U ii line abundances is higher and typically in the range of \(\sim 0.2-0.3\) dex. This reiterates the advantage of using multiple U ii lines, which can potentially yield more precise U abundances than a single U ii line.
We compare the final weighted-average U abundances estimated in this work to previous literature estimates of \(\lambda 3859\) U abundances in Figure 7, for all the stars. We find that our U abundances agree well with the literature values within uncertainties, for all the stars. In fact, the final U abundances agree with previous literature estimates within 0.05 dex for HE 1523-0901 and CS 31082-001, which we consider excellent agreement. This further establishes the reliability of the new U ii lines. For J0954+5246, we find that our U abundance is \(\sim 0.4\) dex lower than that determined by Holmbeck et al.
Figure 6: For all the sample stars, residuals between the \(\lambda 4050\) Å and \(\lambda 3859\) Å U ii line abundances are shown with round data points and the residuals between the \(\lambda 4090\) Å and \(\lambda 3859\) Å U ii line abundances are shown with square data points. The residuals are plotted against the \(T_{\rm eff}\), \(\log~{}g\), [Fe/H], [C/Fe], and S/N at 4050 Å of the respective stars in different panels. A grey-shaded region for residuals within \(\pm\) 0.2 dex is also shown.
(2018). We investigated the source of this discrepancy and suspect the cause to be differences in the adopted atomic data of the \(\lambda\)3859.91 Å Fe i line. We also find that our U abundance for J2038-0023 is \(\sim 0.3\) dex lower than that determined by Placco et al. (2017). This discrepancy is attributed to the difference in the adopted stellar parameters, especially \(\log\,g\) (see section 3). Nevertheless, our U/Th and U/Eu nucleocosmochronometric age estimates compare well with those reported in Placco et al. (2017) (see section 5).
We also discuss the uncertainty estimates on the final U abundances as determined in this work compared to previous literature estimates. In the case of J0954+5246 and HE 1523-0901, the total uncertainty estimated for the final weighted-average U abundance in this work is larger than the uncertainty quoted by the respective previous literature studies for their final \(\lambda\)3859 U abundance (see Figure 7 and/or Table 5). For J0954+5246, Holmbeck et al. (2018) quoted a fiducial uncertainty of \(\pm 0.20\) dex on their final U abundance, while we obtained an uncertainty of \(\pm 0.29\) dex. For HE 1523-0901, Frebel et al. (2007) assigned an uncertainty of \(\pm 0.11\) dex, solely arising from the effect of changing the Fe abundance by \(\pm 0.10\) dex (although they considered additional sources of uncertainties in the age determinations). Having accounted for additional sources of uncertainties, we obtained a larger uncertainty of \(\pm 0.25\) dex for the U abundance of HE 1523-0901. While Placco et al. (2017) also set a fiducial uncertainty of \(\pm 0.20\) dex for J2038-0023, we find our detailed analysis renders a similar uncertainty of \(\pm 0.21\) dex. Similarly for CS 31082-001, we find that our U abundance uncertainty of \(\pm 0.19\) dex agrees with that of Hill et al. (2002), who also accounted for stellar parameters, oscillator strength, and observational fitting uncertainties and obtained an uncertainty of \(\pm 0.19\) dex.
Figure 8: Nucleocosmochronometric ages from this work (colored data points) and previous literature work (white data points) using the U/Th (top panel) and U/Eu (bottom panel) chronometers. The age of the universe is shown in dashed-black line (Planck Collaboration et al., 2016). For the ages of this work, the weighted-average U, Th, and Eu abundances from multiple transition lines were used. Literature ages were taken from Holmbeck et al. (2018) for J0954+5246, Placco et al. (2017) for RAVE J203843.2–002333, Frebel et al. (2007) for HE 1523-0901, and Hill et al. (2002) for CS 31082-001.
Figure 7: Weighted-average U abundances derived in this work with U ii lines at \(\lambda\)3859, \(\lambda\)4050, and \(\lambda\)4090 Å for J0954+5246, J2038-0023, HE 1523-0901, and CS 31082-001. U abundances determined in previous literature studies using the \(\lambda\)3859 Å line are also shown. Literature U abundance was taken from Holmbeck et al. (2018) for J0954+5246, Placco et al. (2017) for J2038-0023, Frebel et al. (2007) for HE 1523-0901, and Hill et al. (2002) for CS 31082-001.
### Ages with Mean U Abundances
For every sample star, we determined two stellar ages, one from the U/Th chronometer and another from the U/Eu chronometer. We list the resulting ages in Table 5. There is good agreement between the U/Th and U/Eu ages of J0954+5246, J2038-0023, and HE 1523-0901. For CS 31082-001, the U/Eu age is \(\sim 4.0\) Gyr lower than the U/Th age due to its actinide-boost nature (Cayrel et al., 2001; Hill et al., 2002; Schatz et al., 2002). Even then, the U/Th and U/Eu ages of CS 31082-001 agree within uncertainties.
For stellar age uncertainties, we took into account systematic, statistical, and PR uncertainties, as described in section 5. These individual age uncertainty components are also listed in Table 5, along with the total age uncertainties. For the U/Th and U/Eu stellar ages of all the sample stars, the systematic uncertainties are the largest (and, in many cases, the dominant) components, followed by the PR uncertainties and then the statistical uncertainties. Specifically, the systematic uncertainties of the stellar ages are driven by the large U abundance uncertainties from the blending elements.
Additionally, the uncertainties on the U/Th stellar ages are larger than their U/Eu counterparts because of the longer half-life of Th relative to U. We note that we have not taken into account the systematic uncertainties associated with using just one set of theoretical PRs (from Schatz et al., 2002). Other \(r\)-process models have predicted slightly varying zero-age abundance ratios for U/Th and U/Eu (e.g., Goriely & Arnould, 2001; Farouqi et al., 2010). However, since the aim of this study was to investigate stellar ages estimated with revised U abundances from multiple U ii lines, we find using one set of PRs sufficient for the analysis.
We compare all the stellar ages determined in this work to previous literature results in Figure 8, which shows the U/Th and U/Eu ages in the top and bottom panels, respectively. The dashed black line in Figure 8 indicates the age of the universe, as determined by the _Planck_ mission (Planck Collaboration et al., 2016). For the literature ages of J0954+5246 (Holmbeck et al., 2018), J2038-0023 (Placco et al., 2017), and HE 1523-0901 (Frebel et al., 2007), we display the ages determined by the respective studies using, specifically, the zero-age abundance ratios of Schatz et al. (2002) to facilitate a consistent comparison to our age estimates. For CS 31082-001, Hill et al. (2002) determined the U/Th age using the zero-age abundance ratio from Goriely & Arnould (2001), which we display. Also, Hill et al. (2002) did not determine the U/Eu stellar age of CS 31082-001.
We find that our stellar ages mostly agree well with previous literature results within uncertainties. The exception is the U/Eu age of J0954+5246, which is much higher than previously determined in the literature. We attribute this discrepancy to the lower U abundance determined in this work relative to that determined by Holmbeck et al. (2018) (see section 6.3 for more details). We also determined a lower Th abundance relative to Holmbeck et al. (2018), such that the offsets in the abundances cancel in the U/Th ratio.
## 7 Conclusions
Uranium abundances of metal-poor RPE stars enable stellar age determination independent of stellar evolution models, using nucleocosmochronometry. U abundances of a large sample of RPE stars may also enable important constraints on the astrophysical conditions and nuclear physics of \(r\)-process enrichment events, by probing the production of the actinides. However, U abundance determination has been limited to using a single U ii line at \(\lambda 3859\) Å. In this study, we have performed the first homogeneous U abundance analysis of four highly RPE stars with two new U ii lines at \(\lambda 4050\) Å and \(\lambda 4090\) Å, along with the canonical \(\lambda 3859\) Å U ii line.
We test the utility of the \(\lambda 4050\) Å and \(\lambda 4090\) Å U ii lines and find them generally reliable for U abundance determination. In Figures 2 and 3, we show the spectral synthesis fits to the new U ii lines. The resulting \(\lambda 3859\), \(\lambda 4050\), and \(\lambda 4090\) U abundances agree with each other within \(\pm 0.2\) dex for most stars, and within uncertainties for all the stars, as seen in Figure 6. In fact, we find it particularly advantageous to use the \(\lambda 4050\) Å and \(\lambda 4090\) Å U ii lines, since they are not blended with strong lines or C features. In that regard, they may find a special utility for determining U abundances of C-enhanced metal-poor stars. As seen in Figure 7, the final weighted-average U abundances of all the sample stars agree with previous literature estimates. This substantiates our U abundance analysis framework and the use of multiple U ii lines.
We performed a detailed uncertainty analysis of the U abundances by taking into account systematic uncertainties from stellar parameters and blends, as well as statistical uncertainties from continuum placement and \(\log~{}gf\) measurements. We find that all three U ii lines provide similar precision of \(\sim 0.2-0.3\) dex. On the other hand, for the weighted-average U abundance, the uncertainties are on the order of \(\sim 0.2\) dex. This underscores the advantage of using multiple U ii lines. Moreover, any unconstrained systematic biases associated with a particular U ii line are mitigated in the average abundance from multiple U ii lines.
We also obtained homogeneous ages for the stars with the U/Th and U/Eu chronometers and using the U, Th,
and Eu abundances as derived in this work from multiple transition lines of each element. As seen in Figure 8, we find that the newly obtained ages are reasonable and in agreement with previous literature estimates within uncertainties. For the uncertainties on the ages, we estimated the systematic, statistical, and PR uncertainty components for all the stars. The resulting total uncertainties on the ages of the sample stars are on the order of \(\sim 4-6\) Gyr for the U/Th ages and \(\sim 3-4\) Gyr for the U/Eu ages.
To improve the uncertainties on the U abundances, and subsequently on the stellar ages, it will be necessary to address the systematic uncertainties from the blends and stellar parameters. Additionally, a substantial component of the U abundance uncertainties is contributed by fitting uncertainties such as continuum placement. As a result, studies of these new U ii-line spectral regions are recommended to better constrain the atomic parameters of the blends and to identify unknown neighboring transitions, which will improve the confidence in the continuum placement.
Another source of uncertainty on the stellar ages is the poorly known nuclear physics that enters into the predicted U/Th and U/Eu PRs. Upcoming studies at facilities such as the N=126 Factory at Argonne National Laboratory (Savard et al., 2020) and the Facility for Rare Isotope Beams (Castelvecchi, 2022) will reach many heavy, neutron-rich species whose properties are crucial for understanding actinide production. These anticipated advances in atomic and nuclear physics will contribute to the overarching goal of improving the precision of nucleocosmochronometry.
Through the rest of the decade, we are expecting an influx of new spectroscopic data, specifically for RPE stars, from surveys such as that by the \(R\)-Process Alliance (Hansen et al., 2018; Sakari et al., 2018; Ezzeddine et al., 2020; Holmbeck et al., 2020), 4MOST (de Jong et al., 2019), and WEAVE (Dalton et al., 2012). Reliable U abundance determination of RPE stars will be critical in obtaining precise and robust nucleocosmochronometric ages of some of the oldest stars. Precise nucleocosmochronometric ages, combined with chemo-dynamical information, can aid our understanding of the chemical enrichment and evolution in the early Universe, especially of the \(r\)-process elements, as well as the assembly history of our galaxy. Additionally, reliable U abundances for a large sample of RPE stars can shed light on the extent of actinide-variation in RPE stars and its origin. To that end, the results of this work open up a new avenue to reliably determine U abundances and nucleocosmochronometric ages for a large sample of RPE stars with multiple U ii lines.
SPS acknowledges Jamie Tayar for useful conversations on stellar ages and other comments. RE acknowledges support from NSF grant AST-2206263. APJ was supported by NASA through Hubble Fellowship grant HST-HF2-51393.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. APJ also acknowledges support from a Carnegie Fellowship and the Thacher Research Award in Astronomy. T.T.H acknowledges support from the Swedish Research Council (VR 2021-05556). This project initiated during MC's 2018 sabbatical stay at Carnegie Observatories, and he is very grateful to the faculty and staff there for their hospitality and generous support. Additional support for MC is provided by the Ministry for the Economy, Development, and Tourism's Millennium Science Initiative through grant ICN12_12009, awarded to the Millennium Institute of Astrophysics (MAS), and by Proyecto Basal CATA ACE210002 and FB210003. I.U.R. acknowledges support from the U.S. National Science Foundation (NSF) (grants PHY 14-30152--Physics Frontier Center/JINA-CEE, AST 1815403/1815767, and AST 2205847), and the NASA Astrophysics Data Analysis Program, grant 80NSSC21K0627. EMH acknowledges support for this work provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51481.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. T.C.B. acknowledges partial support for this work from grant PHY 14-30152; Physics Frontier Center/JINA Center for the Evolution of the Elements (JINA-CEE), awarded by the US National Science Foundation. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
Keck/HIRES, Magellan/MIKE, VLT/UVES
_Software:_ astropy (Astropy Collaboration et al., 2013), MAKEE ([https://sites.astro.caltech.edu/~tb/makee/](https://sites.astro.caltech.edu/~tb/makee/)), CarPy (Kelson et al., 2000; Kelson, 2003), ESOReflex (Freudling et al., 2013), MOOG ([https://github.com/alexji/moog17scat](https://github.com/alexji/moog17scat) and Sneden (1973)), SMHr ([https://github.com/eholmbeck/smhr-rpa/tree/refactor-scatterplot](https://github.com/eholmbeck/smhr-rpa/tree/refactor-scatterplot) and [https://github.com/andycasey/smhr/tree/refactor-scatterplot](https://github.com/andycasey/smhr/tree/refactor-scatterplot))
## Appendix A Hyperfine Splitting of the La ii \(\lambda\)4050 Line
The U ii line at 4050.04 Å is blended with a stronger La ii line at 4050.073 Å. There is one dominant naturally occurring isotope of La, \({}^{139}\)La, which has nuclear spin \(I=7/2\). This non-zero nuclear spin creates HFS structure, which desaturates the line and is thus important to account for in stellar abundance work. We adopt the HFS \(A\) and \(B\) constants for the upper and lower levels of this transition from the measurements of Furmann et al. (2008a,b). We compute the complete line component pattern for this line following the procedure described in Appendix A1 of Ivans et al. (2006). We calculate the center-of-gravity wavenumber of this line from the energy levels given in the National Institute of Standards and Technology (NIST) Atomic Spectra Database (ASD; Kramida et al., 2021). We convert to the center-of-gravity air wavelength using the standard index of air (Peck and Reeder, 1972). Table A1 lists the line component positions relative to these values. The strengths are normalized to sum to 1.0.
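To illustrate the kind of calculation involved, the sketch below (ours, in the spirit of the Ivans et al. (2006) procedure, not the authors' code; function names are our own, \(A\) and \(B\) must be supplied in consistent units such as cm\({}^{-1}\), and a dipole transition with \(|\Delta J|\leq 1\) is assumed) computes component positions from the standard Casimir expression and relative strengths from a 6-\(j\) symbol:

```python
from sympy import Rational
from sympy.physics.wigner import wigner_6j

def hfs_shift(F, J, I, A, B):
    """Casimir hyperfine energy shift of a level (J, I, F) given A and B."""
    K = F * (F + 1) - I * (I + 1) - J * (J + 1)
    shift = 0.5 * A * K
    if I > 0.5 and J > 0.5:  # the B (quadrupole) term exists only for I, J > 1/2
        shift += B * (1.5 * K * (K + 1) - 2 * I * (I + 1) * J * (J + 1)) \
                 / (2 * I * (2 * I - 1) * 2 * J * (2 * J - 1))
    return shift

def hfs_pattern(J_lo, J_up, I, A_lo, B_lo, A_up, B_up):
    """Hyperfine component positions (relative to the unsplit line) and
    normalized strengths for an electric-dipole transition J_lo -> J_up."""
    components = []
    n_lo = int(2 * min(J_lo, I)) + 1  # number of F levels in the lower term
    n_up = int(2 * min(J_up, I)) + 1  # number of F levels in the upper term
    for F_lo in [abs(J_lo - I) + k for k in range(n_lo)]:
        for F_up in [abs(J_up - I) + k for k in range(n_up)]:
            if abs(F_up - F_lo) > 1 or (F_lo == 0 and F_up == 0):
                continue  # electric-dipole selection rules on F
            pos = hfs_shift(F_up, J_up, I, A_up, B_up) \
                  - hfs_shift(F_lo, J_lo, I, A_lo, B_lo)
            # Relative strength: (2F+1)(2F'+1) * {J' F' I; F J 1}^2
            sixj = wigner_6j(Rational(J_up), Rational(F_up), Rational(I),
                             Rational(F_lo), Rational(J_lo), 1)
            strength = (2 * F_lo + 1) * (2 * F_up + 1) * float(sixj) ** 2
            components.append((F_lo, F_up, pos, strength))
    total = sum(s for *_, s in components)  # normalize strengths to sum to 1.0
    return [(f_lo, f_up, pos, s / total) for f_lo, f_up, pos, s in components]
```

For the \(\lambda\)4050 La ii line, one would insert \(I=7/2\), the \(J\) values of the two levels, and the measured Furmann et al. constants; we deliberately do not restate those numbers here.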
|
2310.18946 | Video Frame Interpolation with Many-to-many Splatting and Spatial
Selective Refinement | In this work, we first propose a fully differentiable Many-to-Many (M2M)
splatting framework to interpolate frames efficiently. Given a frame pair, we
estimate multiple bidirectional flows to directly forward warp the pixels to
the desired time step before fusing overlapping pixels. In doing so, each
source pixel renders multiple target pixels and each target pixel can be
synthesized from a larger area of visual context, establishing a many-to-many
splatting scheme with robustness to undesirable artifacts. For each input frame
pair, M2M has a minuscule computational overhead when interpolating an
arbitrary number of in-between frames, hence achieving fast multi-frame
interpolation. However, directly warping and fusing pixels in the intensity
domain is sensitive to the quality of motion estimation and may suffer from
less effective representation capacity. To improve interpolation accuracy, we
further extend an M2M++ framework by introducing a flexible Spatial Selective
Refinement (SSR) component, which allows for trading computational efficiency
for interpolation quality and vice versa. Instead of refining the entire
interpolated frame, SSR only processes difficult regions selected under the
guidance of an estimated error map, thereby avoiding redundant computation.
Evaluation on multiple benchmark datasets shows that our method is able to
improve the efficiency while maintaining competitive video interpolation
quality, and it can be adjusted to use more or less compute as needed. | Ping Hu, Simon Niklaus, Lu Zhang, Stan Sclaroff, Kate Saenko | 2023-10-29T09:09:32Z | http://arxiv.org/abs/2310.18946v1 | # Video Frame Interpolation with Many-to-many Splatting and Spatial Selective Refinement
###### Abstract
In this work, we first propose a fully differentiable Many-to-Many (M2M) splatting framework to interpolate frames efficiently. Given a frame pair, we estimate multiple bidirectional flows to directly forward warp the pixels to the desired time step before fusing any overlapping pixels. In doing so, each source pixel renders multiple target pixels and each target pixel can be synthesized from a larger area of visual context, establishing a many-to-many splatting scheme with robustness to undesirable artifacts. For each input frame pair, M2M has a minuscule computational overhead when interpolating an arbitrary number of in-between frames, hence achieving fast multi-frame interpolation. However, directly warping and fusing pixels in the intensity domain is sensitive to the quality of motion estimation and may suffer from less effective representation capacity. To improve interpolation accuracy, we further extend an M2M++ framework by introducing a flexible Spatial Selective Refinement (SSR) component, which allows for trading computational efficiency for interpolation quality and vice versa. Instead of refining the entire interpolated frame, SSR only processes difficult regions selected under the guidance of an estimated error map, thereby avoiding redundant computation. Evaluation on multiple benchmark datasets shows that our method is able to improve the efficiency while maintaining competitive video interpolation quality, and it can be adjusted to use more or less compute as needed.
Efficient Video Frame Interpolation, Many-to-Many Splatting, Arbitrary Frame Interpolation, Spatial Selective Refinement
## 1 Introduction
Video frame interpolation (VFI) aims to increase the frame rate of videos by synthesizing intermediate frames in between the original ones [1, 2, 3]. As a classic problem in video processing, VFI contributes to many practical applications, including slow-motion animation, video editing, video compression, etc. [4, 5, 6, 7, 8]. In recent years, a plethora of techniques for video frame interpolation have been proposed [9, 10, 11, 12, 13, 14, 15, 16, 17]. However, frame interpolation remains an unsolved problem due to challenges like occlusions, blur, and large motion.
The referenced research can roughly be categorized into motion-free and motion-based, depending on whether or not cues like optical flow are incorporated [18, 19, 20, 21, 22]. Motion-free models typically rely on kernel prediction [23, 24, 25, 26] or spatio-temporal decoding [27, 28, 29], which are effective but limited to interpolating frames at fixed time steps and their runtime increases linearly in the number of desired output frames. On the other end of the spectrum, motion-based approaches establish dense correspondences between frames and apply warping to render the intermediate pixels.
A common motion-based technique estimates bilateral flow for the desired time step and then synthesizes the intermediate frame via backward warping [30, 31, 32, 4, 33]. The estimation of bilateral motion is challenging though and incorrect flows can easily degrade the interpolation quality. As a result, for each time step, these methods typically apply a synthesis network to refine the bilateral flows. Another motion-based solution is to forward warp pixels to the desired time step via optical flow [2]. However, forward warping is subject to holes and ambiguities where multiple pixels map to the same location. Therefore, image refinement networks are commonly adopted to correct any remaining artifacts [34, 35, 36]. Both of these approaches require significant amounts of compute, and the refinement networks need to be executed for the entire frame at each of the desired interpolation instants. This decreases their efficiency in multi-frame interpolation tasks since their runtime increases linearly in the number of desired output frames.
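To make the forward-warping discussion concrete, below is a minimal, self-contained PyTorch sketch of average splatting: each source pixel is pushed along its flow vector scaled by the target time step and accumulated with bilinear weights, with overlaps averaged. This is an illustrative simplification under our own function name and tensor layout, not the M2M pipeline itself (which predicts multiple sub-pixel flows per pixel and fuses them with learned weights).

```python
import torch

def forward_splat(frame, flow, t):
    # frame: (B, C, H, W); flow: (B, 2, H, W) optical flow from frame 0 to frame 1.
    # Each pixel is pushed along t * flow (linear-motion assumption) and
    # accumulated with bilinear weights; overlapping pixels are averaged.
    b, c, h, w = frame.shape
    gy, gx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    tx = gx[None].to(frame) + t * flow[:, 0]   # target x-coordinates, (B, H, W)
    ty = gy[None].to(frame) + t * flow[:, 1]   # target y-coordinates
    x0, y0 = tx.floor().long(), ty.floor().long()
    ones = frame.new_ones(b, 1, h, w)
    src = torch.cat([frame, ones], dim=1).reshape(b, c + 1, -1)
    out = frame.new_zeros(b, c + 1, h * w)     # last channel accumulates weight
    for dx in (0, 1):                          # splat onto the 4 nearest pixels
        for dy in (0, 1):
            xi, yi = x0 + dx, y0 + dy
            wgt = (1 - (tx - xi).abs()) * (1 - (ty - yi).abs())
            valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
            idx = (yi.clamp(0, h - 1) * w + xi.clamp(0, w - 1)).reshape(b, 1, -1)
            contrib = src * (wgt * valid).reshape(b, 1, -1)
            out.scatter_add_(2, idx.expand(-1, c + 1, -1), contrib)
    out = out.reshape(b, c + 1, h, w)
    return out[:, :c] / out[:, c:].clamp(min=1e-6)  # normalize overlapping splats
```

Note that only the flow is rescaled per time step, so interpolating additional in-between frames re-runs just this splatting step; this is what makes the per-frame overhead of forward-warping approaches nearly constant.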
Fig. 1: An overview of the proposed M2M++ framework for efficient video frame interpolation. Given an input pair (a), we first apply our many-to-many (M2M) splatting method to efficiently predict an error map (b) and an initial interpolation (c). A refined interpolation (d) is then generated by applying the spatial selective refinement (SSR) network to post-process challenging regions guided by the error map. By setting different thresholds for the erroneous region selection, M2M++ enables trading computational efficiency for interpolation quality and vice versa. |
2301.10483 | SWING: Balancing Coverage and Faithfulness for Dialogue Summarization | Missing information is a common issue of dialogue summarization where some
information in the reference summaries is not covered in the generated
summaries. To address this issue, we propose to utilize natural language
inference (NLI) models to improve coverage while avoiding introducing factual
inconsistencies. Specifically, we use NLI to compute fine-grained training
signals to encourage the model to generate content in the reference summaries
that have not been covered, as well as to distinguish between factually
consistent and inconsistent generated sentences. Experiments on the DialogSum
and SAMSum datasets confirm the effectiveness of the proposed approach in
balancing coverage and faithfulness, validated with automatic metrics and human
evaluations. Additionally, we compute the correlation between commonly used
automatic metrics with human judgments in terms of three different dimensions
regarding coverage and factual consistency to provide insight into the most
suitable metric for evaluating dialogue summaries. | Kung-Hsiang Huang, Siffi Singh, Xiaofei Ma, Wei Xiao, Feng Nan, Nicholas Dingwall, William Yang Wang, Kathleen McKeown | 2023-01-25T09:33:11Z | http://arxiv.org/abs/2301.10483v1 | # Swing: Balancing Coverage and Faithfulness for Dialogue Summarization
###### Abstract
Missing information is a common issue of dialogue summarization where some information in the reference summaries is not covered in the generated summaries. To address this issue, we propose to utilize natural language inference (NLI) models to improve coverage while avoiding introducing factual inconsistencies. Specifically, we use NLI to compute fine-grained training signals to encourage the model to generate content in the reference summaries that have not been covered, as well as to distinguish between factually consistent and inconsistent generated sentences. Experiments on the DialogSum and SAMSum datasets confirm the effectiveness of the proposed approach in balancing coverage and faithfulness, validated with automatic metrics and human evaluations. Additionally, we compute the correlation between commonly used automatic metrics with human judgments in terms of three different dimensions regarding coverage and factual consistency to provide insight into the most suitable metric for evaluating dialogue summaries.1
Footnote 1: We release our source code for research purposes: [https://github.com/amazon-science/AWS-SWING](https://github.com/amazon-science/AWS-SWING).
## 1 Introduction
Dialogue summarization is a text generation task that aims to produce a compact summary given a piece of conversation. Conventional approaches to dialogue summarization rely on features of conversation data (Goo and Chen, 2018; Li et al., 2019; Oya et al., 2014). Recently, the rise of large pretrained language models (LMs) has enabled coherent and fluent summaries to be generated without these features. However, low coverage and factual inconsistency remain two pressing issues as studies have shown that the summaries generated from these pre-trained LMs often do not fully cover the reference (Liu and Chen, 2021; Tang et al., 2022) and that the generated summaries are often not factually consistent with the inputs (Zhang et al., 2020; Maynez et al., 2020; Cao and Wang, 2021). If an unfaithful dialogue summarization model with low coverage is deployed for public use, it could spread misinformation and generate misleading content that only covers partial facts of a conversation. Hence, we are urgently in need of a solution to improve coverage without negatively impacting faithfulness for dialogue summarization.
Relatively little work addresses coverage and factual inconsistency for dialogue summarization. Some work addresses the issue of unfaithfulness with a controllable generation framework guided by person named entities (Liu and Chen, 2021) or summary sketches (Wu et al., 2021). Tang et al. (2022) categorize factual inconsistencies for dialogue summarization into different types of errors,
Figure 1: An illustration of how NLI can help determine whether a reference sentence is covered by the generated summary. We compute the entailment probability from each reference sentence (i.e. premise) to each generated sentence (i.e. hypothesis). By taking the max value along the row dimension, the resulting vector denotes the probability that each reference sentence entails a sentence in the generated summary. In this example, the entailment probability for the second reference sentence is low, indicating that this sentence is likely not covered by the generated summary.
such as missing information and wrong reference. Their framework integrates a contrastive loss and a self-supervised loss to reduce multiple types of errors. However, a great portion (> \(40\%\)) of their outputs does not cover the full content of the reference summary. Thus, it is important to address coverage and factual consistency synergistically in dialogue summarization. The issue where the content in the reference does not occur in the generated summary is known as the missing information issue Liu and Chen (2021); Tang et al. (2022). In this work, we aim to mitigate missing information in the summary while being faithful to the dialogue.
We propose Swing, **S**ummarizing Dialogue **W**ith **N**LI **G**uidance. Our approach samples a summary from the model and utilizes natural language inference (NLI) to determine (1) the faithfulness of each generated sentence and (2) whether each reference sentence has been covered by the generated summary. An example is shown in Figure 1. Based on the results computed by NLI, two losses are proposed to encourage the model to generate missing information and distinguish between factually consistent and inconsistent generated sentences.
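The coverage check illustrated in Figure 1 can be reproduced with an off-the-shelf NLI model. The sketch below uses the HuggingFace `roberta-large-mnli` checkpoint mentioned in §2.1; the `coverage` helper and its batching are our own, and the entailment label index should be verified against the model's `id2label` config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli").eval()

def coverage(reference_sents, generated_sents):
    # Entailment matrix: premise = reference sentence, hypothesis = generated one.
    pairs = [(r, g) for r in reference_sents for g in generated_sents]
    batch = tok([p for p, _ in pairs], [h for _, h in pairs],
                return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        probs = nli(**batch).logits.softmax(-1)
    # For roberta-large-mnli, index 2 is ENTAILMENT (check nli.config.id2label).
    ent = probs[:, 2].reshape(len(reference_sents), len(generated_sents))
    # Row-max: probability that each reference sentence entails *some* generated one.
    return ent.max(dim=1).values

ref = ["Charlee didn't go to school.", "She has a fever."]
gen = ["Charlee skipped school today."]
print(coverage(ref, gen))  # a low row value flags a likely-missing sentence
```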
Our contributions can be summarized as follows:
* We propose Swing, a dialogue summarization framework that effectively addresses missing information through two losses computed using NLI. The first loss encourages the model to recover content missing from the reference summaries. The second loss instructs the model to differentiate between factually consistent and inconsistent generated sentences.
* Our approach achieves the best performance in mitigating missing information on two public dialogue summarization datasets, DialogSumChen et al. (2021) and SAMSumGliwa et al. (2019), as validated by automatic metrics and human judges.
* We measure the correlation of human judgments with conventional and recently developed automatic metrics to provide intuition for future research on evaluating the faithfulness and coverage of dialogue summaries.
## 2 Method
Upon analyzing the dialogue summaries in SAMSum, we observe that dialogues are often summarized linearly, consistent with the findings of Wu et al. (2021). Therefore, we segment the summaries into sentences and use a natural language inference (NLI) model to provide finer-grained training signals at the sentence level for two goals: (1) encourage generating sentences in the reference summaries that have not been covered by the generated sentences and (2) differentiate factually consistent generated sentences from inconsistent ones. To achieve these goals, we first determine the faithfulness of each sentence using an entailment-induced bipartite graph (§2.1). Then, we propose two new losses addressing each challenge in turn: an **Uncovered Loss** that encourages the model to recover missing information (§2.2) and a **Contrastive Loss** that brings closer the representations of the reference summary and the generated sentences that
Figure 2: Illustration of how an entailment-induced bipartite graph is built and how a MixAndMatch summary is derived. With the NLI model, we determine which sentences from each summary contain equivalent information by computing the entailment probabilities between pairs of generated sentences and reference sentences, as indicated by the purple edges. Based on the graph, we determine that the generated summary does not cover the first reference sentence and that the first generated sentence is not faithful. Hence, the MixAndMatch summary is formed by combining the first reference sentence and the second to the fourth generated sentence.
contain equivalent information to some sentences in the reference summary (§2.3). For the rest of this paper, we use _reference sentence_ and _generated sentence_ to refer to a sentence in the reference summary and the generated summary, respectively.
### Entailment-induced Bipartite Graph
To determine which reference sentence has not been covered by the generated summary and which generated sentence is not faithful to the reference summary, we construct a bipartite graph that links sentences between a reference summary and a generated summary. An edge indicates the linked sentences contain equivalent information. If no edge connects to a reference sentence, we consider this sentence not covered by the generated summary. Similarly, if a generated sentence is not linked in the bipartite graph, this sentence is likely not faithful to the reference summary. We use the entailment probabilities computed by an NLI model to determine whether a pair of sentences contain equivalent information. The procedure of constructing the bipartite graph is shown in Algorithm 1.
The NLI model takes in two sentences, a premise (\(P\)) and a hypothesis (\(H\)), and computes whether \(P\) entails, contradicts, or is neutral to \(H\). Here, we only focus on the entailment probability from the \(i\)-th reference sentence to the \(j\)-th generated sentence \(p_{\text{ent}}(s_{i}^{*},s_{j})\). We use the RoBERTa-Large model2 trained on the MNLI dataset, achieving an accuracy of around 91%, which is on par with the performance of state-of-the-art models.
Footnote 2: [https://huggingface.co/roberta-large-mnli](https://huggingface.co/roberta-large-mnli)
Let \(\phi(i,j)\) denote the mapping between the \(i\)-th reference sentence and the \(j\)-th generated sentence. \(\phi(i,j)=1\) if a link exists between \(s_{i}^{*}\) and \(s_{j}\); otherwise, \(\phi(i,j)=0\). We first consider a simplified setting by assuming each reference sentence can be mapped to at most one generated sentence, and vice versa (i.e. \(0\leq\sum_{j}\phi(i,j)\leq 1\)). In this setting, we can determine whether two sentences contain equivalent information by checking the entailment relation from both directions (lines 26-27).
\[\phi(i,j)=\begin{cases}1,&p_{\text{ent}}(s_{i}^{*},s_{j})>\tau\bigwedge p_{ \text{ent}}(s_{j},s_{i}^{*})>\tau\\ 0,&\text{otherwise}\end{cases} \tag{1}\]
Here, \(\tau\) is a hyperparameter that indicates the entailment threshold.
However, one reference sentence may contain information equivalent to multiple generated sentences (one-to-many mappings) and vice versa (many-to-one mappings). In Figure 2, for example, the second reference sentence contains information equivalent to the second and the third generated sentences combined. This relation cannot be discovered if we only check the entailment relation between pairs of individual sentences.
Therefore, we must resolve one-to-many and many-to-one mappings before checking one-to-one mappings. To find one-to-many mappings, for every reference sentence \(s_{i}^{*}\), we look for consecutive generated sentences \(\{s_{j},s_{j+1},...,s_{j+k}\}\) such that \(p_{\text{ent}}(s_{i}^{*},s_{m})>\tau\;\forall m\in\{j,...,j+k\}\) (lines 6-8). We only check for consecutive sentences based on our previous observation that dialogues are often summarized linearly. For every match, we concatenate the generated sentences \(s_{j:j+k}=\{s_{j},s_{j+1},...,s_{j+k}\}\) and
check whether \(s_{j:j+k}\) entails the reference sentence \(s^{*}_{i}\) (lines 8-9). If the entailment holds, we let \(\phi(i,m)\) = 1 \(\forall m\in\{j,...,j+k\}\) (lines 11-12). The same approach is used to address many-to-one mappings (lines 14-22). Following Algorithm 1, a bipartite graph is built between the generated summary and the reference summary. Henceforth, we denote the reference sentences that have not been covered as \(\underline{S}^{*}\) = \(\{s^{*}_{i}|\,\forall j\ \ \ \phi(i,j)=0\}\) and generated sentences that can be mapped to some of the reference sentences as \(\underline{S}\) = \(\{s_{j}|\,\exists i\ \ \ \phi(i,j)=1\}\).
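For the simplified one-to-one setting of Eq. (1), the mapping \(\phi\) and the two sets \(\underline{S}^{*}\) and \(\underline{S}\) reduce to a few lines. This sketch assumes an `ent(premise, hypothesis)` helper that returns the NLI entailment probability (it could wrap the model from the earlier sketch); the helper and variable names are our own, and the one-to-many merging of Algorithm 1 is omitted.

```python
TAU = 0.5  # entailment threshold tau (the paper fixes tau = 0.5 in Sec. 3.3)

def match_sentences(ref_sents, gen_sents, ent):
    # phi[i][j] = 1 iff the sentences entail each other in both directions, Eq. (1).
    phi = [[int(ent(r, g) > TAU and ent(g, r) > TAU) for g in gen_sents]
           for r in ref_sents]
    # Reference sentences with no edge are considered uncovered.
    uncovered_ref = [r for i, r in enumerate(ref_sents) if not any(phi[i])]
    # Generated sentences with at least one edge are considered faithful.
    matched_gen = [g for j, g in enumerate(gen_sents)
                   if any(row[j] for row in phi)]
    return phi, uncovered_ref, matched_gen
```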
### Uncovered Loss
The objective of the uncovered loss is to encourage the model to generate information from the reference summary that the generated summary has not covered. To this end, we train the model with MixAndMatch summaries, which are constructed by combining reference sentences that are not covered by the generated summary and generated sentences that contain information equivalent to some of the reference sentences. An example is shown in Figure 2.
The MixAndMatch summary \(\hat{S}\) is constructed by taking the union of \(\underline{S}\) and \(\underline{S}^{*}\) and sorting the sentences by their index,
\[\hat{S}=\textsc{Sort}(\underline{S}\cup\underline{S}^{*}). \tag{2}\]
The uncovered loss is effectively maximum likelihood estimation (MLE) with MixAndMatch summaries being the decoding targets:
\[\mathcal{L}_{\text{Uncovered}}=-\sum_{t}\log p(\hat{S}_{t}|\hat{S}_{<t},\mathcal{D}), \tag{3}\]
where \(\mathcal{D}\) is the original dialogue and \(\hat{S}_{t}\) denotes the \(t\)-th token in the MixAndMatch summary.
The main advantages of constructing MixAndMatch summaries over other positive sample construction approaches, such as back translation and paraphrasing, are the two desired properties of this formulation. First, the model already has a high probability of generating sentences in \(\underline{S}\). Therefore, the loss function (Equation (3)) does not penalize the model much for generating these sentences. Second, the penalty for generating sentences \(\underline{S}^{*}\) is larger since the model has a lower probability of generating those sentences.
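Given the sets from the bipartite graph, building the MixAndMatch target of Eq. (2) is an index-preserving merge, and the uncovered loss of Eq. (3) is then ordinary teacher forcing against this target. A minimal sketch with our own bookkeeping, reusing `phi` from the previous snippet:

```python
def mix_and_match(ref_sents, gen_sents, phi):
    # Keep uncovered reference sentences and matched generated sentences,
    # then sort by sentence index (Eq. 2). Mixing the two index spaces is
    # justified by the observation that dialogues are summarized linearly.
    kept = [(i, r) for i, r in enumerate(ref_sents) if not any(phi[i])]
    kept += [(j, g) for j, g in enumerate(gen_sents)
             if any(row[j] for row in phi)]
    return " ".join(s for _, s in sorted(kept, key=lambda t: t[0]))
```

The returned string is simply used as the decoding target of a standard cross-entropy loss, which realizes Eq. (3).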
### Contrastive Loss
In the early stage of our experiment, the original goal was to discourage the model from generating factually inconsistent sentences. We adopted unlikelihood training Welleck et al. (2020) to decrease the probability of sampling these sentences from the model. However, we found that this objective causes the model to generate nonsense sequences. This phenomenon was also observed when we experimented with ConSeq (Nan et al., 2021), which also incorporates such a loss function into its training process, as shown in §4.1. We hypothesize that it resulted from the fact that sentences in dialogue summaries share similar structures. Hence, using the unlikelihood training objective would confuse the model.
Instead, we pivoted our focus on differentiating factually consistent sentences from their inconsistent counterparts with the proposed contrastive loss. For each summary, we use the factually inconsistent sentences as negative samples (i.e. \(s_{j}\notin\underline{S}\)) and consistent sentences as positive samples (i.e. \(s_{j}\in\underline{S}\)). The contrastive learning objective takes a similar form as the InfoNCE loss Oord et al. (2018):
\[\mathcal{L}_{\text{Contrastive}}=-\sum_{s_{i}\in\underline{S}}\log\frac{\exp(\cos(h_{i},h_{S^{*}}))}{\sum_{s_{j}\in S}\exp(\cos(h_{j},h_{S^{*}}))}, \tag{4}\]
where \(h_{i}\) and \(h_{j}\) denote the representations of the generated sentences, \(h_{S^{*}}\) denotes the representation of the reference summary, and \(\cos(\cdot,\cdot)\) denotes cosine similarity. The main difference between our contrastive objective and prior work Cao and Wang (2021); Tang et al. (2022) is granularity. Equation (4) operates at the sentence level rather than the summary level; therefore, it provides finer-grained training signals.
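A PyTorch rendering of Eq. (4) could look as follows, assuming sentence representations have already been pooled from the decoder (the pooling itself is not specified here, so this is a sketch rather than the authors' exact implementation):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(h_gen, h_ref, consistent_mask):
    # h_gen: (N, d) representations of the N generated sentences
    # h_ref: (d,)  representation of the reference summary
    # consistent_mask: (N,) bool, True for sentences mapped in the bipartite graph
    sims = F.cosine_similarity(h_gen, h_ref.unsqueeze(0), dim=-1)    # (N,)
    log_probs = sims - torch.logsumexp(sims, dim=0)  # log-softmax over all sentences
    return -(log_probs[consistent_mask]).sum()       # sum over consistent sentences
```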
### Training
The final loss function that our model is optimized with is a weighted sum of the two aforementioned loss functions and MLE,
\[\mathcal{L}_{\text{Final}}=\mathcal{L}_{\text{MLE}}+\alpha\mathcal{L}_{\text{Uncovered}}+\beta\mathcal{L}_{\text{Contrastive}}, \tag{5}\]
where \(\mathcal{L}_{\text{MLE}}\) is:
\[\mathcal{L}_{\text{MLE}}=-\sum_{t}\log p(S^{*}_{t}|S^{*}_{<t},\mathcal{D}). \tag{6}\]
## 3 Experiments
### Datasets
Experiments are conducted on two English-language dialogue summarization datasets: SAMSum (Gliwa et al., 2019) and DialogSum (Chen et al., 2021). SAMSum contains 16,369 online chitchat dialogues with an average of around 94 tokens per dialogue. DialogSum is a spoken dialogue dataset that consists of 13,460 samples in total. With an average token count of about 131, the dialogues in DialogSum are under real-life scenarios with clear communication patterns and intents. Details of the dataset statistics can be found in Appendix A.
### Metrics
Our evaluation focuses on measuring the factual consistency, particularly the missing information challenge, of the summarization models. Therefore, we adopt recently developed metrics that have been shown to correlate well with human judgments in terms of faithfulness. BARTScore Yuan et al. (2021) computes the semantic overlap between the generated summary and the reference summary by calculating the logarithmic probability of generating each summary conditioned on the other one. Since our goal is to assess how well the model reduces information missing from the reference summary, we consider the _Recall (R)_ setting where we assess \(p(S^{\star}|S,\theta)\), the likelihood of generating the reference summary \(S^{\star}\) given the generated summary \(S\). FactCC Kryscinski et al. (2020) is an entailment-based metric that predicts the faithfulness probability of a claim w.r.t. the source texts. Similar to BARTScore, we use FactCC in the _Recall_ setting where the claim is a reference sentence and the source text is the generated summary. We report the mean of the average Correct probability of each sentence within a generated summary.
In addition, we report the ROUGE-L metric Lin (2004), which has also been shown to better reflect faithfulness than ROUGE-1 and ROUGE-2 Pagnoni et al. (2021). For these metrics, we also consider the _F1_ setting, where we compute each metric in the reverse direction (\(S^{\star}\to S\)) and then take the average of both directions, to validate that the model is not generating too much redundant information. Finally, two recently introduced QA-based metrics that have demonstrated close approximation to human judgements in terms of factuality, QUALS Nan et al. (2021) and QAFactEval Fabbri et al. (2022), are also used for evaluation.
### Implementation Details
We choose BART Lewis et al. (2020) as the backbone seq2seq model as it has demonstrated better dialogue summarization performance than other pre-trained language models Tang et al. (2022), such as PEGASUS Zhang et al. (2020) and T5 Raffel et al. (2020). The proposed models are optimized using AdamW Loshchilov and Hutter (2019) with learning rate 3e-5 and weight decay 1e-3. The maximum input sequence length is set to 1024. For all baseline models, we use the best hyper-parameters reported in their papers. We fix \(\tau\) to be 0.5 throughout all our experiments. \(\alpha\) and \(\beta\) are both 1.0.
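The stated optimization setup maps directly onto a few lines of code. This is our own sketch of the configuration, not the authors' released training script:

```python
from torch.optim import AdamW
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
optimizer = AdamW(model.parameters(), lr=3e-5, weight_decay=1e-3)
# tau = 0.5, alpha = beta = 1.0, and max input length 1024, as stated above.
```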
### Baselines
We compare Swing with the following competitive baseline systems. **TextRank**Mihalcea and Tarau (2004) is a graph-based ranking algorithm that performs extractive summarization. **BART**Lewis et al. (2020) is a seq2seq language model pre-trained on various denoising objectives. **Ctrl-DiaSumm**Liu and Chen (2021) and **CODS**Wu et al. (2021) are controllable generation frameworks that generate summaries guided by named entity
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{8}{c}{DialogSum} & \multicolumn{8}{c}{SAMSum} \\ \cline{2-17}
**Model** & RL\({}_{F}\) & RL\({}_{R}\) & BS\({}_{F}\) & BS\({}_{R}\) & FC\({}_{F}\) & FC\({}_{R}\) & QS & QFE & RL\({}_{F}\) & RL\({}_{R}\) & BS\({}_{F}\) & BS\({}_{R}\) & FC\({}_{F}\) & FC\({}_{R}\) & QS & QFE \\ \hline TextRank & 27.74 & 29.16 & -3.000 & -3.099 & 60.55 & 59.54 & -1.948 & 0.566 & 116.5 & -4.374 & -3.891 & 34.28 & 3.02 & -2.172 & 0.237 \\ BART-Large & 50.82 & 56.78 & -2.012 & -1.960 & 82.50 & 85.86 & -1.183 & 1.854 & 49.53 & 52.71 & -2.248 & -2.332 & 62.46 & 61.28 & -0.912 & 2.335 \\ Ctrl-DiaSumm & 48.99 & 57.25 & -2.145 & -1.985 & 82.55 & 89.56 & -1.241 & 1.817 & 47.79 & 51.17 & -2.360 & -2.414 & 61.50 & 61.76 & -0.957 & 2.727 \\ CODS & 48.51 & 48.36 & -2.379 & -2.214 & 83.35 & 86.81 & -1.246 & 1.860 & 48.39 & 47.68 & -2.643 & -2.593 & 61.21 & 62.01 & -0.867 & 2.345 \\ ConSeq & 22.82 & 19.50 & -3.480 & -3.588 & 84.32 & 73.14 & -1.474 & 0.208 & 12.04 & 76.2 & -5.908 & -7.278 & 41.23 & 13.77 & -2.058 & 0.015 \\ CLIFF & 51.87 & 56.22 & -2.012 & -1.973 & 83.86 & 36.30 & -1.106 & 2.109 & 43.70 & 45.49 & -2.485 & -2.340 & 55.47 & 56.01 & -1.063 & 1.891 \\ ConFiT & 50.44 & 55.65 & -2.049 & -2.061 & 83.64 & 86.37 & -1.179 & 1.790 & 49.29 & 52.76 & -2.188 & -2.316 & **65.03** & 63.12 & **-0.819** & 2.343 \\ \hline Swing & **51.96** & **59.04**\({}^{\star}\) & **-1.999**\({}^{\star}\) & -1.904\({}^{\star}\) & **56.48** & **89.03** & -1.082\({}^{\star}\) & 2.087 & **50.08** & 52.91 & -2.228 & -2.310\({}^{\star}\) & 64.19 & **63.52** & -0.829 & **2.407** \\ - \(\mathcal{L}_{\text{Uncovered}}\) & 50.94 & **60.06**\({}^{\star}\) & -2.044 & **-1.895** & 83.26 & 87.45 & **-1.075**\({}^{\star}\) & 2.399\({}^{\star}\) & 49.78 & 53.57 & -2.231 & -2.295\({}^{\star}\) & 63.81 & 63.11 & -0.876 & 1.989 \\ - \(\mathcal{L}_{\text{Contrastive}}\) & 51.53 & 59.27\({}^{\star}\) & -2.012 & -1.901 & 82.90 & 85.86 & -1.130 & **2.99**\({}^{\star}\) & 49.73 & **53.96** & **-1.285**\({}^{\star}\) & **-1.243**\({}^{\star}\) & 63.47 & 63.15 & -0.886 & 2.027 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparison on DialogSum and SAMSum. - \(\mathcal{L}_{\text{Uncovered}}\) and - \(\mathcal{L}_{\text{Contrastive}}\) denote variants of Swing by ablating the corresponding loss. RL denotes ROUGE-L (%), BS denotes BARTScore, FC denotes FactCC (%), QS denotes QUALS, and QFE denotes QAFactEval. The subscripts \(F\) and \(R\) denote F1 score and recall, respectively. The proposed method outperforms previous systems on both DialogSum and SAMSum in most metrics, especially on the recall measures. Statistical significance over previous best systems computed with the permutation test Fisher et al. (1937) is indicated with * (\(p<.01\)).
planning and sketches, respectively. **ConSeq** (Nan et al., 2021) learns a contrastive objective based on unlikelihood training, where positive and negative samples are selected by QUALS. **CLIFF** (Cao and Wang, 2021) and **ConFiT** (Tang et al., 2022) are trained with a similar contrastive learning loss that takes the form of the InfoNCE loss (Oord et al., 2018), except that ConFiT is optimized with an additional self-supervised loss that aims to reduce reference errors. BART-Large is used across all experiments that involve pre-trained language models for fair comparison.
## 4 Results
### Main results
Table 1 summarizes the main results on DialogSum and SAMSum. Swing outperforms previous approaches in almost all metrics, especially the recall measures. This result reflects that the proposed approach generates summaries that cover more content of the reference summaries, both lexically and semantically. One interesting observation is the deficient performance of ConSeq on both datasets. We hypothesize that the poor performance was caused by the use of the unlikelihood training objective in their loss, as mentioned in §2.3. Since sentences of dialogue summaries often share similar structures, adopting such an objective could confuse the model. We verified this hypothesis with a small experiment: training BART-Large with MLE and negative samples determined by QUALS, similar to ConSeq. The resulting model also produces significantly lower performance than training with MLE alone. This finding confirms that the poor performance of ConSeq is caused by the unlikelihood training and that such a loss function is unsuitable for dialogue summarization.
### Human Evaluation
To further validate the effectiveness of Swing, we use Amazon's Mechanical Turk (AMT) to recruit workers to conduct human evaluations on three methods: CLIFF, ConFiT and Swing. We sampled 100 dialogues from the test set of DialogSum and SAMSum, respectively. For each dialogue, human judges are presented with a pair of summaries produced by two different approaches and asked to select the better one with respect to three dimensions. **Recall** assesses the portion of information in the reference summary covered by the generated summary. **Precision** considers whether all the content in the generated summary occurs in the reference summary. **Faithfulness** examines whether the generated summary is factually consistent with the dialogue. "Tie" is selected if the judges consider the two summaries to be of equal quality. The final score of each system is calculated as the percentage of times the system is selected as the better one minus the percentage of times it is not. To evaluate the annotation quality, we compute the inter-annotator agreement. The average Cohen's Kappa [11] is 54.35%, indicating a moderate agreement. Details of the human evaluation setup can be found in Appendix B.
The human evaluation results are demonstrated in Figure 3. We have the following observations. First, Swing achieves the highest Recall scores on both datasets, indicating that our approach is the best in addressing the missing information issue for dialogue summarization. Second, while Swing does not score the highest on Precision, we achieve the highest scores on Faithfulness. This implies that even though our approach often generates summaries with extra information,
Figure 3: Human evaluation results. Swing achieves the highest Recall and Faithfulness scores on both datasets, suggesting the advantages of our approach in reducing missing information and improving the overall faithfulness of the generated dialogue summary.
the additional content is likely still faithful to the input. To measure the amount of additional information produced, we compute the average number of tokens per summary for each model. As seen in Table 3, the summaries generated by Swing are only slightly longer than those produced by CLIFF and ConFiT. This suggests that Swing achieves significantly higher faithfulness and coverage than CLIFF and ConFiT while maintaining conciseness.
### Qualitative Analysis
To provide better insight into the effectiveness of the proposed method, we conduct a qualitative analysis using the 100 dialogues randomly sampled from the SAMSum dataset. Specifically, we further categorize missing information errors into two sub-types: (1) _missing details_, where partial information of a sentence in the reference summary is missing in the generated summary, and (2) _missing sentences_, where the model fails to generate an entire sentence in the reference summary. An example of each sub-type is shown in Table 2. By comparing the test set outputs of ConFiT and Swing, we see that there are 10 improved cases with reduced _missing details_ and 6 cases where _missing sentences_ is mitigated by Swing. Meanwhile, our proposed approach only introduces the _missing details_ error and the _missing sentences_ error in 1 and 2 examples, respectively. This implies that our approach is effective in alleviating both sub-types of missing information error while being particularly advantageous in reducing _missing details_ errors.
### Correlation with Human Judgements
Although recently proposed metrics have been shown to be highly correlated with human judgments on news summarization in terms of factuality [13, 14], no previous work has studied the transferability of these metrics to dialogue summarization. We seek to answer this question by computing the correlation of the automatic metrics in Table 1 with the human annotations discussed in §4.2. Using Kendall's Tau [12] as the correlation measure, the results are summarized in Table 4. We observe that: (1) \(\textsc{BARTScore}_{R}\) is the most consistent and reliable metric across the three dimensions. It performs the best in Recall on both datasets, indicating that \(\textsc{BARTScore}_{R}\) is most suitable for measuring how well a model resolves the missing information issue in dialogue summarization. (2) Although a large number of invalid questions and answers are generated, QUALS is the best metric for assessing Precision overall. (3) \(\textsc{FactCC}_{F}\) and \(\textsc{FactCC}_{R}\) are two of the worst metrics in general. This could be explained by the fact that FactCC constructs negative samples with some semantically variant transformations. However, these transformations may not be comprehensive enough to cover all cases. This explains the poor transferability of FactCC on these two datasets.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Reference Summary** & **ConFiT** & **Swing** \\ \hline Mike took his car to garage today. Ernest is relieved as someone had just crashed into a red Honda which looks like Mike’s. & Mike took his car to the garage today. Someone crashed into his car. & Mike took his car into the garage today. Someone just crashed into a red Honda looking like Mike’s. \\ \hline Hilary has the keys to the apartment. Benjamin wants to get them and go take a nap. Hilary is having lunch with some French people who work on the history of food in colonial Mexico. They will try to avoid talking to them. They’re meeting for the drinks in the evening. & Benjamin, Elliot, Daniel and Hilary will meet at La Cantina at 2 pm to have lunch with some French people who work on the history of food in colonial Mexico. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Qualitative analysis on the outputs of Swing and ConFiT. The two rows demonstrate the _missing details_ and the _missing sentences_ issue of the summaries generated by ConFiT, respectively. The extra information in the outputs of ConFiT that also occurs in the reference summaries is highlighted in blue. In both cases, Swing is able to cover more content presented in the reference summaries.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & DialogSum & SAMSum \\ \hline ConFiT & 29.46 & 22.45 \\ CLIFF & 27.34 & 22.30 \\ BART-Large & 28.03 & 23.19 \\ \hline Swing & 31.32 & 24.23 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average token count per summary generated by different models.
### Remaining Challenges
We analyzed the remaining errors by comparing 100 generated summaries with the corresponding reference summaries on the SAMSum dataset, using the categories of factual errors defined in Tang et al. (2022). The results are shown in Figure 4. We observe that missing information still accounts for the largest portion of factual errors, even though our approach significantly exceeds prior methods in mitigating this issue. This reflects that the issue is challenging to tackle and that there is still a great opportunity to improve the reduction of missing information. As a comparison, we manually inspected outputs of BART-Large using the same 100 dialogues as input. We found 42 cases where information is missing from the dialogue summaries produced by BART-Large. This observation further confirms the effectiveness of Swing in addressing insufficient coverage. In addition, redundant information is another major source of errors. Although we have shown in §4.2 that the additional information generated by Swing is likely still faithful to the input dialogue, compactness is one of the important qualities of a summary. This can be improved by using NLI to guide the model to avoid generating extra information. Other common mistakes are wrong reference and object errors, both of which can be addressed with the self-supervised loss discussed in Tang et al. (2022).3
Footnote 3: This analysis is not comparable to results reported in Tang et al. (2022) due to differences in the sampled examples.
## 5 Related Work
**Dialogue Summarization.** Early work on dialogue summarization focused on the AMI meeting corpus (McCowan et al., 2005) due to the lack of dialogue summarization data. These studies enhance summarization performance by leveraging features of conversational data, such as dialogue acts (Goo and Chen, 2018), visual features (Li et al., 2019), and the relationships between summary and dialogue (Oya et al., 2014). Later, Gliwa et al. (2019) released the SAMSum dataset, the first large-scale dialogue summarization dataset, enabling abstractive summarization research on casual chat dialogue. With the rise of large language models (LMs), recent work focuses on improving the controllability of sequence-to-sequence models built upon large LMs. For instance, Wu et al. (2021) propose to utilize a summary sketch to control the granularity of the generated summary. Liu and Chen (2021) condition the generator on person named entities to control which people to include in the generated summary. Chan et al. (2021) improve controllability by formulating the summarization task as a constrained Markov Decision Process.
**Factual Consistency Enhancement.** While factuality has been widely explored in the fields of fact-checking and fake news detection (Thorne et al., 2018; Wadden et al., 2020; Huang et al., 2022; Shu et al., 2018; Pan et al., 2021; Huang et al., 2022), factual inconsistency remains a major challenge for abstractive summarization. One line of work attempts to improve the faithfulness of
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{**DialogSum**} & \multicolumn{3}{c}{**SAMSum**} \\ \cline{2-7} Metric & Recall & Precision & Faithfulness & Recall & Precision & Faithfulness \\ \hline ROUGE-L\({}_{F}\) & 23.50 & **24.21** & 10.29 & 6.07 & 10.24 & -0.75 \\ ROUGE-L\({}_{R}\) & 23.46 & 2.51 & 4.24 & 29.52 & 9.61 & 17.88 \\ BARTScore\({}_{F}\) & 18.35 & 25.94 & 3.17 & 15.50 & 8.00 & 10.69 \\ BARTScore\({}_{R}\) & **26.48** & 14.87 & 9.25 & **32.10** & 9.68 & **24.11** \\ FactCC\({}_{F}\) & 6.15 & 6.93 & 1.19 & -3.43 & 5.12 & -2.28 \\ FactCC\({}_{R}\) & 4.79 & 6.86 & 10.56 & 4.13 & 10.32 & -1.43 \\ QUALS & 14.23 & 23.61 & -0.83 & 1.55 & **15.35** & 4.50 \\ QAFactEval & 14.06 & 16.20 & **16.80** & 5.03 & 2.83 & 6.26 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Correlation (%) of automatic metrics with human judgements. We first convert human evaluation results and automatic metric scores into a scale of {-1, 0, 1}, which corresponds to {Lose, Tie, Win}. Then, Kendall’s Tau (Kendall, 1938) is used to compute the correlation between two sequences.
Figure 4: Remaining challenges.
the generated summary with a separate correction model that corrects the errors made by the summarization model Dong et al. (2020); Cao et al. (2020); Fabbri et al. (2022) or directly fixes factual inconsistencies in the training data Adams et al. (2022). Another line of work employs auxiliary loss functions to improve models' representations or discourage the model from generating unfaithful outputs Cao and Wang (2021); Chen et al. (2021); Nan et al. (2021); Tang et al. (2022). The main advantage of these approaches is their efficiency at inference time.
Some studies have attempted to use NLI to detect factual inconsistency in generated summaries. Early approaches relied on out-of-the-box NLI models, which did not yield satisfactory results Falke et al. (2019). Barrantes et al. (2020) improved the detection accuracy by using an NLI model fine-tuned on the Adversarial NLI dataset Nie et al. (2020). Laban et al. (2022) address the mismatch issue in input granularity between NLI datasets and inconsistency detection by passing sentence pairs as inputs instead of document-summary pairs. Kryscinski et al. (2020) and Yin et al. (2021) train document-sentence entailment models to address the granularity mismatch issue. Utama et al. (2022) introduce a controllable generation framework that generates document-level NLI training data for identifying factual inconsistency. Our work leverages an NLI model to guide the dialogue summarization model to recover missing information.
## 6 Conclusion
We have proposed Swing, a dialogue summarization framework that generates summaries with mitigated missing information and improved faithfulness. To instruct the model to generate missing content from the reference summaries and to differentiate factually consistent generated sentences from their inconsistent counterparts, we propose two losses based on NLI. Experimental results on the DialogSum and SAMSum datasets showed that our approach achieves significantly higher faithfulness and coverage, while still maintaining conciseness, compared to prior methods. In addition, we measure the correlation between the reported automatic metrics and human judgments to provide insight into the most suitable metric for evaluating the coverage and factuality of dialogue summaries for future research.
## 7 Ethical Considerations
We acknowledge that the use of large language models pre-trained on the Web could lead to biased outputs. We found that our model may sometimes generate incorrect pronouns for gender-neutral names. For example, in Figure 1, Charlee is referred to as a male in the generated summary, while Charlee is actually a female, as shown in the reference summary. Such an issue is often caused by under-specified context (e.g. Charlee's gender is not mentioned in the input dialogue). Fortunately, we found that such an error accounts for < 1% of the total outputs from our framework, and the issue can be largely alleviated when enough context is provided.
## 8 Limitations
While our proposed approach is effective in mitigating missing information, this issue is still far from resolved, as shown in Figure 4. Significant effort is needed to ensure dialogue summarization models produce completely factual content. In addition, our method works because most of the reference summaries in the two datasets we used are faithful to the corresponding dialogues. The proposed method may not work on other summarization datasets, such as XSum, which contains hallucinations in about 70% of the reference summaries Maynez et al. (2020).
## 9 Acknowledgments
We would like to extend our gratitude to the reviewers for their valuable feedback and insights, which greatly contributed to the improvement of this paper. We would also like to thank the human evaluators for their time and effort in assessing the performance of our model. Their contributions have been essential in ensuring the quality of our research.
|
2307.02309 | $D_s \to f_0$ form factors and the $D_s^+ \to \left[ ππ\right]_{\rm S} e^+ ν_e$ decay from light-cone sum rules | In this paper we revisit $D_s \to f_0$ form factors from the light-cone sum
rules with the light meson light-cone distribution amplitudes. The main
motivation of this study is the differential decay width of $D_s \to
\left[\pi\pi \right]_{\rm S} e \nu_e$ measured recently by BESIII collaboration
and the $D_s \to f_0$ form factor extracted under the intermediate resonant
model. Our result of the differential width of $D_s^+ \to f_0 (\to \left[
\pi\pi \right]_{\rm S}) e^+ \nu_e$ decay obtained under the narrow width
approximation is a little lower than the data, while the result obtained under the resonant Flatt\'e model is consistent with the data but slightly larger, indicating a sizable mixing $\sim 20\degree$ between ${\bar s}s$
and ${\bar u}u+{\bar d}d$ of $f_0$. In order to obtain a model independent
prediction, we suggest calculating $D_s \to \left[ \pi\pi \right]_{\rm S}$
form factors with the isoscalar scalar dipion light-cone distribution
amplitudes. Our calculation of $D_s \to \left[ \pi\pi \right]_{\rm S}$ form
factors is carried out at the leading twist level due to the limited knowledge of the dipion system; the result for the differential width shows a moderate evolution
in contrast to that obtained from the narrow width approximation and the
Flatt\'e model, revealing a bright prospect to study the four-body leptonic
decays of heavy mesons with the dimeson light-cone distribution amplitudes. | Shan Cheng, Shu-Lei Zhang | 2023-07-05T14:11:51Z | http://arxiv.org/abs/2307.02309v1 | # \(D_{s}\to f_{0}\) form factors and the \(D_{s}^{+}\to[\pi\pi]_{\rm S}\,e^{+}\nu_{e}\) decay from light-cone sum rules
###### Abstract
In this paper we revisit \(D_{s}\to f_{0}\) form factors from the light-cone sum rules with the light meson light-cone distribution amplitudes. The main motivation of this study is the differential decay width of \(D_{s}\to[\pi\pi]_{\rm S}\,e\nu_{e}\) measured recently by the BESIII collaboration and the \(D_{s}\to f_{0}\) form factor extracted under the intermediate resonant model. Our result for the differential width of the \(D_{s}^{+}\to f_{0}(\to[\pi\pi]_{\rm S})e^{+}\nu_{e}\) decay obtained under the narrow width approximation is a little lower than the data, while the result obtained under the resonant Flatté model is consistent with the data but slightly larger, indicating a sizable mixing \(\sim 20^{\circ}\) between \(\bar{s}s\) and \(\bar{u}u+\bar{d}d\) in \(f_{0}\). In order to obtain a model-independent prediction, we suggest calculating the \(D_{s}\to[\pi\pi]_{\rm S}\) form factors with the isoscalar scalar dipion light-cone distribution amplitudes. Our calculation of the \(D_{s}\to[\pi\pi]_{\rm S}\) form factors is carried out at the leading twist level due to the limited knowledge of the dipion system; the result for the differential width shows a moderate evolution in contrast to that obtained from the narrow width approximation and the Flatté model, revealing a bright prospect for studying the four-body leptonic decays of heavy mesons with the dimeson light-cone distribution amplitudes.
## I Introduction
Weak decays of hadrons containing at least one valence bottom or charm quark play an important role in the precise examination of the standard model (SM) and offer one of the best chances to discover new physics (NP) beyond the SM; among them, semileptonic \(D_{s}\) weak decays provide a clean environment to study the structure of light hadrons [1]. For example, the semileptonic decay with the \(D_{s}\to\eta^{(\prime)}\) transition provides an opportunity to study \(\eta\)-\(\eta^{\prime}\) mixing [2; 3], while the decays induced by the \(D_{s}\to f_{0}\) and \(D_{s}\to a_{0}\) transitions could help us understand the composition of scalar mesons [4; 5; 6] and the isospin-violating \(f_{0}\)-\(a_{0}\) mixing [7; 8]. In this work, we focus on the \(D_{s}^{+}\to f_{0}e^{+}\nu_{e}\) decay by considering the width effect in the differential width measurements.
From the spectral analysis, the underlying assignment of scalar mesons like \(f_{0}(980)\) is still not clear. Pictures like the tetraquark [9; 10], the glueball [11], the hybrid state [12] and the molecular state [13] have all been discussed, among which the tetraquark assignment is currently favored. The case is different in \(B\) meson decays, where \(f_{0}\) is energetic and the process happens at large recoil; there the conventional \(q\bar{q}\) assignment is favored because the possibility of forming a tetraquark state is power suppressed compared to that of a quark pair. In the \(D_{s}\to f_{0}\) decay, one may doubt the \(q\bar{q}\) configuration because \(f_{0}\) is not moving fast enough, so it has time to pick up another \(q\bar{q}\) pair and form a tetraquark. This is also our consideration in this paper: we take the \(q\bar{q}\) configuration in the form factor calculation to check the reliability of the energetic picture in charm meson decays. From the experimental side, one decade ago the CLEO collaboration published the first absolute branching fraction measurement of a \(D_{s}\) semileptonic decay including a scalar meson in the final state. The first published result is \(\mathcal{B}(D_{s}^{+}\to f_{0}e^{+}\nu_{e})\times\mathcal{B}(f_{0}\to\pi^{+}\pi^{-})=(1.3\pm 0.4\pm 0.1)\times 10^{-3}\)[14], subsequently updated to \((2.0\pm 0.3\pm 0.1)\times 10^{-3}\) in Ref. [15] and also in Ref. [16]. Recently, the BESIII collaboration has verified the CLEO measurement with much better accuracy; the product branching fractions are \(\mathcal{B}(D_{s}^{+}\to f_{0}e^{+}\nu_{e})\times\mathcal{B}(f_{0}\to\pi^{0}\pi^{0})=(0.79\pm 0.14\pm 0.03)\times 10^{-3}\) for the neutral channel [17] and \(\mathcal{B}(D_{s}^{+}\to f_{0}e^{+}\nu_{e})\times\mathcal{B}(f_{0}\to\pi^{+}\pi^{-})=(1.72\pm 0.13\pm 0.10)\times 10^{-3}\) for the charged channel [18]. More importantly, BESIII extracted the \(D_{s}\to f_{0}\) form factor under the Flatté resonant model from data corresponding to an integrated luminosity of \(7.33\) fb\({}^{-1}\); the result at the full recoil point is \(f_{+}(q^{2}=0)=0.518\pm 0.018\pm 0.036\)[18], with statistical and systematic errors.
In this paper, we revisit the \(D_{s}\to f_{0}(980)\) form factor under the scenario that \(f_{0}\) is \(\bar{s}s\) with a possible mixing with \(\bar{u}u+\bar{d}d\). Considering the mixing angle \(\theta=20^{\circ}\pm 10^{\circ}\), the updated LCSRs calculation of the \(D_{s}\to f_{0}\) form factor is consistent with the one extracted from the differential width of the \(D_{s}^{+}\to f_{0}([\pi\pi]_{\rm S})e^{+}\nu_{e}\) decay under the Flatté model by BESIII, while being slightly larger. Our calculation is carried out at leading order in the strong coupling constant. In order to estimate the next-to-leading-order (NLO) correction, we vary the charm quark mass by \(\bar{m}_{c}(m_{c})=1.3\pm 0.3\) GeV, which introduces an additional \(20\%-30\%\) uncertainty. We adopt the Flatté formula to discuss the width effect in the semileptonic decay \(D_{s}^{+}\to f_{0}(\to\pi^{+}\pi^{-})e^{+}\nu_{e}\) and compare the differential width to the recent measurements at BESIII [18]. In order to obtain a model-independent prediction,
we suggest calculating the \(D_{s}\rightarrow\left[\pi\pi\right]_{\rm S}\) form factors with dipion distribution amplitudes (2\(\pi\)DAs) and comparing directly to the measurement without invoking a resonance. The \(q^{2}\) dependence of the \(D_{s}^{+}\rightarrow\left[\pi^{+}\pi^{-}\right]_{\rm S}e^{+}\nu_{e}\) decay width indeed shows a different behavior compared to the result obtained with the resonant model. Our calculation is carried out at the leading twist level due to the limited knowledge of 2\(\pi\)DAs; more precise measurements are highly anticipated to help us determine the subleading twist 2\(\pi\)DAs.
The rest of this paper is organized as follows. In the next section, the decay constant and LCDAs of the scalar isoscalar meson are revisited in the framework of two-point sum rules, based on which the \(D_{s}\to f_{0}\) and \(B_{s}\to f_{0}\) transition form factors are calculated from the light-cone sum rules in section III. In Section IV, the chiral-even generalized \(\pi\pi\) distribution amplitudes are introduced and the \(D_{s}\rightarrow\left[\pi\pi\right]_{\rm S}\) form factors are obtained at the leading twist level. The phenomenology related to the recent BESIII measurement is presented in section III.2, where the differential width is discussed under the narrow width approximation, the Flatté resonant model and the direct \(D_{s}\rightarrow\left[\pi\pi\right]\) transition. The summary is given in section V.
## II Light-cone distribution amplitudes of \(f_{0}\)
An LCDA is defined by a nonlocal matrix element sandwiched between the vacuum and an on-shell hadron state. Up to the twist-three level, the light-cone expansion for a scalar meson is
\[\langle S(p_{1})|\bar{q}_{2\beta}(z)q_{1\alpha}(-z)|0\rangle=\frac{1}{4}\int_{0}^{1}du\,e^{i(2u-1)p_{1}\cdot z}\left\{\not{p}_{1}\phi(u)+m_{S}\left[\phi^{s}(u)-\frac{\sigma_{\rho\delta}p_{1}^{\rho}z^{\delta}}{6}\phi^{\sigma}(u)\right]\right\}_{\beta\alpha},\] \[\langle S(p_{1})|\bar{q}_{2\beta}(z)g_{s}G_{\mu\nu}(vz)\sigma_{\rho\delta}q_{1\alpha}(-z)|0\rangle=\left[p_{1\mu}\left(p_{1\rho}g_{\nu\delta}^{\perp}-p_{1\delta}g_{\nu\rho}^{\perp}\right)\right]\int{\cal D}\alpha_{i}\,\phi_{3S}(\alpha_{i})\,e^{ipz(-\alpha_{1}+\alpha_{2}+v\alpha_{3})}\] \[-\left[p_{1\nu}\left(p_{1\rho}g_{\mu\delta}^{\perp}-p_{1\delta}g_{\mu\rho}^{\perp}\right)\right]\int{\cal D}\alpha_{i}\,\phi_{3S}(\alpha_{i})\,e^{ipz(-\alpha_{1}+\alpha_{2}+v\alpha_{3})}. \tag{1}\]
In these definitions, \(\phi\) and \(\phi^{s,\sigma}\) are the twist-two and twist-three LCDAs of the \(\bar{q}q\) configuration, respectively, while \(\phi_{3S}\) is the twist-three LCDA of the three-particle \(\bar{q}qg\) configuration. Here \(v=(u-\alpha_{1})/\alpha_{3}\) and \(\alpha_{3}=1-\alpha_{1}-\alpha_{2}\), and the measure of the three-particle integral reads
\[\int_{0}^{u}\,{\cal D}\alpha_{i}=\int_{0}^{u}d\alpha_{1}\int_{0}^{1-u}\frac{d \alpha_{2}}{1-\alpha_{1}-\alpha_{2}}. \tag{2}\]
The leading twist and the two-particle twist-three LCDAs are normalized to the vector and the scale-dependent scalar decay constants, which are defined by the local matrix elements of the vector and scalar currents, respectively,
\[\int_{0}^{1}du\,\phi(u)=f_{S},\qquad\langle S(p_{1})|\bar{q}_{2} \gamma_{\mu}q_{1}|0\rangle=f_{S}p_{1\mu}, \tag{3}\] \[\int_{0}^{1}du\,\phi^{(s/\sigma)}(u)=\bar{f}_{S},\qquad\langle S(p _{1})|\bar{q}_{2}q_{1}|0\rangle=\bar{f}_{S}m_{S}. \tag{4}\]
For a neutral scalar meson like \(f_{0}\) or \(a_{0}\), which cannot be produced via the vector current, \(f_{S}=0\) due to charge conjugation invariance or the conservation of the vector current. This is also implied by the relation
\[\bar{f}_{S}=\mu_{S}f_{S},\qquad\mu_{S}\equiv\frac{m_{S}}{m_{q_{2}}(\mu)-m_{q_{1}}(\mu)}, \tag{5}\]
from which we can see that \(f_{S}\) vanishes in the \(\mathrm{SU}(3)\) or isospin limit.
### The scalar decay constant
To calculate the scalar coupling, we consider the following correlation function
\[\Pi(q)=i\int d^{4}xe^{iq\cdot x}\langle 0|\mathrm{T}\{\bar{q}_{2}(x)q_{1}(x), \bar{q}_{1}(0)q_{2}(0)\}|0\rangle. \tag{6}\]
The QCD sum rule (QCDSR) of the neutral scalar coupling is quoted as [19]
\[m_{S}^{2}\bar{f}_{S}^{2}(\mu)e^{-m_{S}^{2}/M^{2}} =I_{0}^{\rm pert}(s_{0},M^{2})+\langle\bar{q}q\rangle I_{0}^{(\bar{q}q)}(M^{2})+\langle\alpha_{s}G^{2}\rangle I_{0}^{(\alpha_{s}G^{2})}(M^{2},\mu)\] \[+\langle g_{s}\bar{q}\sigma TGq\rangle I_{0}^{(g_{s}\bar{q}\sigma TGq)}(M^{2})+\langle g_{s}\bar{q}q\rangle^{2}I_{0}^{(g_{s}\bar{q}q)^{2}}(M^{2})+\langle g_{s}^{2}\bar{q}q\rangle^{2}I_{0}^{(g_{s}^{2}\bar{q}q)^{2}}(M^{2},\mu) \tag{7}\]
with the weighted functions
\[I_{0}^{\rm pert}(s_{0},M^{2})=\frac{3M^{4}}{8\pi^{2}}\,f(1)\left[1+\frac{\alpha_{s}(\mu)}{\pi}\left(\frac{17}{3}+\frac{2I(1)}{f(1)}-2\ln\frac{M^{2}}{\mu^{2}}\right)\right],\] \[I_{0}^{(\bar{q}q)}(M^{2})=3m_{q},\qquad I_{0}^{(\alpha_{s}G^{2})}(M^{2},\mu)=\frac{1}{8\pi},\] \[I_{0}^{(g_{s}\bar{q}\sigma TGq)}(M^{2})=-\frac{m_{q}}{M^{2}},\qquad I_{0}^{(g_{s}\bar{q}q)^{2}}(M^{2})=\frac{2\pi\alpha_{s}}{3M^{2}},\qquad I_{0}^{(g_{s}^{2}\bar{q}q)^{2}}(M^{2},\mu)=\frac{\pi\alpha_{s}}{M^{2}}, \tag{8}\]
and the functions \(I(1)=\int_{e^{-s_{0}/M^{2}}}^{1}(\ln t)\ln(-\ln t)\,dt\) and \(f(1)=1-e^{-s_{0}/M^{2}}\left(1+s_{0}/M^{2}\right)\). The renormalization group equations of the scalar coupling and the vacuum condensates are [20; 21]
\[\tilde{f}_{S}(\mu)=\tilde{f}_{S}(\mu_{0})L(\mu,\mu_{0}),\qquad m_ {q}(\mu)=m_{q}(\mu_{0})L^{-1}(\mu,\mu_{0}),\qquad\langle\bar{q}q(\mu)\rangle= \langle\bar{q}q(\mu_{0})\rangle L(\mu,\mu_{0}),\] \[\langle g_{s}\bar{q}\sigma TGq(\mu)\rangle=\langle g_{s}\bar{q} \sigma TGq(\mu_{0})\rangle L^{-1/6}(\mu,\mu_{0}),\qquad\langle g_{s}\bar{q}q( \mu)\rangle^{2}=\langle g_{s}\bar{q}q(\mu_{0})\rangle^{2}L(\mu,\mu_{0}). \tag{9}\]
Here \(L(\mu,\mu_{0})=\left[\alpha_{s}(\mu_{0})/\alpha_{s}(\mu)\right]^{4/b}\). In the numerics, we take \(\alpha_{s}(1\,{\rm GeV})=0.47\), corresponding to the world average \(\alpha_{s}(m_{Z})=0.118\), and the following vacuum condensates [22], where the default scale is \(1\) GeV.
\[\langle\bar{q}q\rangle=-0.0156\,{\rm GeV}^{3},\qquad\langle\bar{s}s\rangle=0.8\langle\bar{q}q\rangle,\qquad\langle\alpha_{s}G^{2}\rangle=0.012\pi\,{\rm GeV}^{4},\qquad m_{0}^{2}=0.8\,{\rm GeV}^{2},\] \[\langle g_{s}\bar{q}\sigma TGq\rangle=m_{0}^{2}\langle\bar{q}q\rangle,\qquad\langle g_{s}\bar{q}q\rangle^{2}=\frac{-16}{9}\langle\bar{q}q\rangle^{2},\qquad\langle g_{s}^{2}\bar{q}q\rangle^{2}=\frac{-16}{3}\langle\bar{q}q\rangle^{2}. \tag{10}\]
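As a numerical illustration of the running in Eq. (9), the evolution factor \(L(\mu,\mu_{0})\) can be evaluated as below. The one-loop solution for \(\alpha_{s}\) and the choice \(n_{f}=4\) are our assumptions, anchored to the quoted value \(\alpha_{s}(1\,{\rm GeV})=0.47\).

```python
import math

def alpha_s(mu, mu0=1.0, a0=0.47, nf=4):
    # One-loop running with b = (11*Nc - 2*nf)/3, the same b used in the text.
    b = (11 * 3 - 2 * nf) / 3.0
    return a0 / (1.0 + a0 * b / (4.0 * math.pi) * math.log(mu**2 / mu0**2))

def L(mu, mu0=1.0, nf=4):
    # Evolution factor L(mu, mu0) = [alpha_s(mu0)/alpha_s(mu)]^(4/b) of Eq. (9).
    b = (11 * 3 - 2 * nf) / 3.0
    return (alpha_s(mu0) / alpha_s(mu)) ** (4.0 / b)

# e.g. running the scalar coupling from 1 GeV up to 2 GeV:
print(L(2.0))  # multiply fbar_S(1 GeV) by this factor
```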
The scalar coupling is also studied by sum rules within the background field theory (BFTSR) [23], where the perturbative term, the quark condensate, the dimension-4 gluon condensate and the dimension-6 quark-gluon condensate are the same as in the traditional sum rules shown in Eq. (8), while the dimension-6 four-quark condensates are different.
\[I_{0}^{(g_{s}\bar{q}q)^{2}}(M^{2})=-\frac{8}{27M^{2}},\qquad I_{0}^{(g_{s}^{2} \bar{q}q)^{2}}(M^{2},\mu)=\frac{2+\kappa^{2}}{486\pi^{2}M^{2}}\left[35+6\ln \frac{M^{2}}{\mu^{2}}\right]. \tag{11}\]
Meanwhile, the values of the non-perturbative vacuum condensates appearing in the BFTSR are also different from those in the traditional sum rules.
\[\langle\bar{q}q\rangle=-0.0242\,{\rm GeV}^{3},\qquad\langle\bar{ s}s\rangle=0.8\langle\bar{q}q\rangle,\qquad\langle\alpha_{s}G^{2}\rangle=0.038\,{ \rm GeV}^{4},\] \[\langle g_{s}\bar{q}\sigma TGq\rangle=-0.0193\,{\rm GeV}^{5}, \qquad\langle g_{s}\bar{q}q\rangle^{2}=2.082\times 10^{-3}\,{\rm GeV}^{6},\] \[\langle g_{s}^{2}\bar{q}q\rangle^{2}=7.420\times 10^{-3}\,{\rm GeV }^{6},\qquad\langle g_{s}^{2}fG^{2}\rangle=0.045\,{\rm GeV}^{6}. \tag{12}\]
The Borel mass \(M^{2}\) is usually chosen by the a priori criteria that the contribution from high-dimension condensates is no more than twenty percent and, simultaneously, the contribution from highly excited states and the continuum spectrum is smaller than thirty percent, so it is a compromise between the unitary interpolation from the hadron summation and the operator product expansion from the QCD calculation. The threshold value \(s_{0}\) is close to the onset of the first excited state with the same quantum numbers and is determined by the maximal stability of physical quantities once the Borel mass has been fixed. Under the statement \(\tilde{f}_{\sigma}^{s}=0\)[24], we consider \(f_{0}\) as the ground state with the \(\bar{s}s\) component in the sum rules Eq. (6) with \(q_{1}=q_{2}=s\). Applying \(M^{4}\partial/\partial M^{2}\) to both sides of Eq. (7), we obtain the sum rule for \(m_{f_{0}}\), which is then fixed at the PDG value \(990\pm 50\,{\rm MeV}\). With this input, we ultimately find that the optimal choices of Borel mass and threshold value are the same in the two types of sum rules for \(m_{f_{0}}\), as shown in Eq. (8) and Eq. (11). This agreement is not a coincidence but a rational result, since the only difference between the QCDSR and the BFTSR is the expression of the terms proportional to the dimension-6 four-quark condensates, which are highly power suppressed.
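To make the Borel-window discussion concrete, the QCDSR of Eqs. (7)-(8) for the \(\bar{s}s\) current can be evaluated numerically as sketched below. The strange-quark mass value is our assumption (it is not quoted above), so the output should be read only as a rough cross-check against Table 1; scanning \(M^{2}\) across the window probes the stability criterion.

```python
import math
from scipy.integrate import quad

# Rough numerical sketch of Eqs. (7)-(8), GeV units at mu = 1 GeV.
mS, ms, alps, mu2 = 0.99, 0.125, 0.47, 1.0   # m_s(1 GeV) is an assumed value
ss = 0.8 * (-0.0156)          # <ss> = 0.8 <qq>, Eq. (10)
aGG = 0.012 * math.pi         # <alpha_s G^2>
sGs = 0.8 * ss                # <g_s s sigma TG s> = m0^2 <ss>, m0^2 = 0.8 GeV^2
g4a = -16.0 / 9.0 * ss**2     # <g_s ss>^2
g4b = -16.0 / 3.0 * ss**2     # <g_s^2 ss>^2

def fbar_s(M2, s0=2.2):
    f1 = 1.0 - math.exp(-s0 / M2) * (1.0 + s0 / M2)
    I1 = quad(lambda t: math.log(t) * math.log(-math.log(t)),
              math.exp(-s0 / M2), 1.0)[0]
    pert = 3.0 * M2**2 / (8 * math.pi**2) * f1 * (
        1.0 + alps / math.pi * (17.0 / 3 + 2 * I1 / f1 - 2 * math.log(M2 / mu2)))
    rhs = (pert + 3 * ms * ss + aGG / (8 * math.pi) - ms / M2 * sGs
           + 2 * math.pi * alps / (3 * M2) * g4a + math.pi * alps / M2 * g4b)
    return math.sqrt(rhs * math.exp(mS**2 / M2)) / mS

for M2 in (1.5, 1.6, 1.7):    # scan across the Borel window of Table 1
    print(f"M^2 = {M2}: fbar_S ~ {1e3 * fbar_s(M2):.0f} MeV")
```

With these assumed inputs the scan lands in the few-hundred-MeV range of Table 1; residual differences come from the assumed \(m_{s}\) and the treatment of the NLO term.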
\begin{table}
\begin{tabular}{l|c c|c c c} \hline \hline Sum rules & QCDSR & BFTSR & QCDSR [4] & QCDSR [21] & BFTSR [25] \\ \hline \(m_{f_{0}(980)}\)(MeV) & \(990\pm 50\) (input) & \(990\pm 50\) & — & — \\ \(\tilde{f}_{f_{0}(980)}\)(MeV) & \(335^{+9}_{-12}\) & \(331^{+9}_{-12}\) & \(370\pm 20\) & — & — \\ \(m_{f_{0}(1500)}\)(MeV) & — & — & \(1500\pm 100\) & \([1640,1730]\) & \([1563,1706]\) \\ \(\tilde{f}_{f_{0}(1500)}\)(MeV) & — & — & \(-(255\pm 30)\) & \([369,391]\) & \([374,378]\) \\ \hline \(M^{2}\)(GeV\({}^{2}\)) & \(1.6\pm 0.1\) & \([1.1,1.6]\) & \(1.1\pm 0.1\) & \(2.0\pm 0.2\) \\ \(s_{0}\)(GeV\({}^{2}\)) & \(2.2\pm 0.2\) & \(5.0\pm 0.3\) & \(6.5\pm 0.3\) & \(6.5\pm 0.3\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The scalar decay constant of \(f_{0}\) obtained from QCD sum rules.
In table 1, we show the sum rules results for the scalar decay constant at the default scale \(1\) GeV. For the sake of comparison, we also present the previous sum rules result [4], where both the ground state \(f_{0}\) and the first excited state \(f_{0}(1500)\) are taken into account. The QCDSR is also studied under the assumption that \(f_{0}(1500)\) is the ground state with the conventional \(\bar{s}s\) component while \(f_{0}(980)\) is more like a tetraquark state [21], and a similar BFTSR study is carried out in Ref. [25]. In figure 1, we show the Borel mass dependence of the sum rules of the scalar decay constant in Eq. (7), in which the uncertainty band corresponds to different values of \(s_{0}\).
### Leading twist light-cone distribution amplitudes
The leading twist LCDA is usually expanded in terms of the Gegenbauer polynomials
\[\phi(u,\mu)=\bar{f}_{S}6u\bar{u}\sum_{n=0}^{\infty}a_{n}^{\rm t2}(\mu)C_{n}^{3/2 }(2u-1), \tag{13}\]
from which the multiplicatively renormalizable coefficients can be written as
\[a_{n}(\mu)=\frac{1}{\bar{f}_{S}}\frac{2(2n+3)}{3(n+1)(n+2)}\int_{0}^{1}C_{n}^{ 3/2}(2u-1)\phi(u,\mu)du. \tag{14}\]
Substituting Eq. (13) into the normalization condition in Eq. (3), we obtain \(a_{n={\rm even}}\propto 1/\mu_{S}\), and hence the even coefficients vanish for the neutral scalar mesons. Taking into account the expansion of the Gegenbauer polynomials, the coefficients can be rewritten by means of the moments \(\langle\zeta_{n}\rangle\)
\[\langle\zeta_{n}^{q}\rangle=\frac{1}{\bar{f}_{S}}\int_{0}^{1}(2u-1)^{n}\phi( u,\mu)du. \tag{15}\]
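As a numerical sanity check of the projection in Eq. (14), one can build a toy \(\phi(u)\) from Eq. (13) with assumed coefficients (\(\bar{f}_{S}\) set to unity) and recover them; the inputs below are purely illustrative.

```python
from scipy.special import eval_gegenbauer
from scipy.integrate import quad

a_in = {1: -0.9, 3: 0.1}   # hypothetical Gegenbauer coefficients
phi = lambda u: 6*u*(1-u)*sum(c*eval_gegenbauer(n, 1.5, 2*u-1)
                              for n, c in a_in.items())

def a(n):
    """Projection of Eq. (14), with fbar_S = 1."""
    norm = 2*(2*n+3) / (3*(n+1)*(n+2))
    return norm * quad(lambda u: eval_gegenbauer(n, 1.5, 2*u-1)*phi(u), 0, 1)[0]

print(a(1), a(3), a(2))   # recovers -0.9 and 0.1; the even coefficient vanishes
```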
The first two renormalization coefficients are related to the moments by
\[a_{1}(\mu)=\frac{5}{3}\langle\zeta_{1}^{q}(\mu)\rangle,\qquad a_{3}(\mu)=\frac {21}{4}\langle\zeta_{3}^{q}(\mu)\rangle-\frac{9}{4}\langle\zeta_{1}^{q}(\mu)\rangle. \tag{16}\]
Here \(q=u,d,s\) denotes the quark component of the scalar meson. The renormalization group equation of the Gegenbauer coefficients reads
\[a_{n}(\mu)=a_{n}(\mu_{0})\left[\frac{\alpha_{s}(\mu_{0})}{\alpha_{s}(\mu)} \right]^{-\left(\gamma_{n}^{(0)}\right)/b}L^{-1}(\mu,\mu_{0}) \tag{17}\]
with the one-loop anomalous dimension
\[\gamma_{n}^{(0)}=C_{F}\left[3+\frac{2}{(n+1)(n+2)}-4\psi(n+2)-4\gamma_{E} \right], \tag{18}\]
in which \(b=(11N_{c}-2n_{f})/3\) and \(C_{F}=(N_{c}^{2}-1)/(2N_{c})\).
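A short self-contained sketch of Eqs. (17,18), again with \(n_{f}=3\) assumed, illustrates the evolution; note that \(\gamma_{0}^{(0)}=0\), as required by current conservation.

```python
import numpy as np
from scipy.special import digamma

CF, B = 4.0/3.0, 9.0   # C_F and b = (11 Nc - 2 nf)/3 for nf = 3

def gamma0(n):
    """One-loop anomalous dimension of Eq. (18)."""
    return CF * (3 + 2/((n+1)*(n+2)) - 4*digamma(n+2) - 4*np.euler_gamma)

def evolve_an(an0, n, as_mu, as_mu0):
    """Eq. (17): evolve a_n, with L = (as_mu0/as_mu)^(4/b) as in Eq. (9)."""
    ratio = as_mu0 / as_mu
    return an0 * ratio**(-gamma0(n)/B) * ratio**(-4.0/B)

print(gamma0(0), gamma0(1))   # gamma0(0) = 0
```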
We consider the correlation function
\[\Pi_{n}(z,q)=i\int d^{4}xe^{iq\cdot x}\langle 0|{\rm T}\{\bar{q}_{2}(x)\not{ \varepsilon}\left(iz\cdot\overleftrightarrow{D}\right)^{n}q_{1}(x),\bar{q}_{1} (0)q_{2}(0)\}|0\rangle. \tag{19}\]
In the deep Euclidean region \(q^{2}\ll 0\), it can be evaluated directly by applying the OPE technique
\[\Pi_{n}(z,q)=i\int d^{4}xe^{iq\cdot x}\left\{-{\rm Tr}\,\langle 0|S_{0}^{q_{2}}(0,x)\not{\varepsilon}\left(iz\cdot\overleftrightarrow{D}\right)^{n}S_{0}^{q_{1}}(x,0)|0\rangle+\langle 0|\bar{q}_{2}(x)q_{2}(0)\not{\varepsilon}\left(iz\cdot\overleftrightarrow{D}\right)^{n}S_{0}^{q_{1}}(x,0)|0\rangle\right.\] \[\left.+{\rm Tr}\,\langle 0|S_{0}^{q_{2}}(0,x)\not{\varepsilon}\left(iz\cdot\overleftrightarrow{D}\right)^{n}\bar{q}_{1}(x)q_{1}(0)|0\rangle+\cdots\right\}. \tag{20}\]
When \(q^{2}\) shifts from deeply negative to positive, the \(\bar{q}q\) state begins to form hadrons and the correlation function can be expressed by the sum of contributions from all possible intermediate states with appropriate subtractions. Writing the hadron representation in the dispersion relation and isolating the ground state contribution, for \(q^{2}>0\) we have
\[\Pi_{n}(z,q)=\frac{1}{\pi}\int_{0}^{\infty}ds\frac{{\rm Im}\Pi_{n}^{\rm had}(s,q^{2})}{s-q^{2}}=\bar{f}_{S}^{2}m_{S}\langle\zeta_{n}\rangle+\frac{1}{4\pi^{ 3}}\int_{s_{0}}^{\infty}ds\frac{3m_{q}}{(n+2)(s-q^{2})}, \tag{21}\]
in which the relationships \(\langle 0|\bar{q}_{2}\not{\varepsilon}(iz\cdot\overleftrightarrow{D})^{n}q_{1}|0 \rangle=(z\cdot q)^{n+1}\bar{f}_{S}\langle\zeta_{n}\rangle\) and \(\langle 0|\bar{q}_{1}q_{2}|S\rangle=m_{S}\bar{f}_{S}\langle\zeta_{0}^{s}\rangle\) are implied. We take the convention \(\langle\zeta_{0}^{s}\rangle=1\) in the following.
After implementing the quark-hadron duality and the Borel transformation, the QCDSR of the leading twist LCDA moments (\(n={\rm odd}\)) reads
\[\langle\zeta_{n}(\mu)\rangle=I_{n}^{\rm pert}(s_{0},M^{2})+\langle\bar{q}q \rangle I_{n}^{(\bar{q}q)}(M^{2})+\langle g_{s}\bar{q}\sigma TGq\rangle I_{n}^{( g_{s}\bar{q}\sigma TGq)}(M^{2})+\langle\alpha_{s}G^{2}\rangle\langle\bar{q}q \rangle I_{n}^{(\alpha_{s}G^{2})\langle\bar{q}q\rangle}(M^{2},\mu). \tag{22}\]
The weighted functions \(I_{n}\) describe the contributions from perturbative and various condensate effects. For the neutral scalar meson \(q_{1}=q_{2}=q\), the result up to dimension six is quoted [4] as
\[I_{n}^{\rm pert}(s_{0},M^{2})=-\frac{m_{q}}{m_{S}\bar{f}_{S}^{2}}e^{m_{S}^{2}/M^{2}}\frac{3M^{2}}{8\pi^{2}(n+2)}\left(1-e^{-s_{0}/M^{2}}\right),\] \[I_{n}^{(\bar{q}q)}(M^{2})=\frac{2}{m_{S}\bar{f}_{S}^{2}}e^{m_{S}^{2}/M^{2}},\] \[I_{n}^{(g_{s}\bar{q}\sigma TGq)}(M^{2})=\frac{1}{m_{S}\bar{f}_{S}^{2}}e^{m_{S}^{2}/M^{2}}\frac{10n-3}{12M^{2}},\] \[I_{n}^{(\alpha_{s}G^{2})\langle\bar{q}q\rangle}(M^{2},\mu)=\frac{1}{m_{S}\bar{f}_{S}^{2}}e^{m_{S}^{2}/M^{2}}\frac{4\pi n(4n-5)}{18\pi M^{4}}. \tag{23}\]
The leading twist LCDA moments are also studied under the BFTSR [23] and the result is
\[\langle\zeta_{n={\rm odd}}(\mu)\rangle=I_{n}^{\rm pert}(s_{0},M^{ 2})+\langle\bar{q}q\rangle I_{n}^{(\bar{q}q)}(M^{2})+\langle g_{s}\bar{q} \sigma TGq\rangle I_{n}^{(g_{s}\bar{q}\sigma TGq)}(M^{2})+\langle\alpha_{s}G^ {2}\rangle I_{n}^{(\alpha_{s}G^{2})}(M^{2},\mu)\] \[+\langle\alpha_{s}G^{2}\rangle\langle\bar{q}q\rangle I_{n}^{( \alpha_{s}G^{2})\langle\bar{q}q\rangle}(M^{2})+\langle g_{s}\bar{q}q\rangle^{2 }I_{n}^{(g_{s}\bar{q}q)^{2}}(M^{2})+\langle g_{s}^{2}\bar{q}q\rangle^{2}I_{n} ^{(g_{s}^{2}\bar{q}q)^{2}}(M^{2},\mu)+\langle g_{s}^{3}fG^{3}\rangle I_{n}^{(g _{s}^{3}fG^{3})}(M^{2},\mu). \tag{24}\]
The weighted functions associated with the perturbative and quark condensate terms are the same as in the QCDSR. For the other nonperturbative condensates with \(n\leqslant 1\), they are
\[I_{n}^{(g_{s}\bar{q}\sigma TGq)}(M^{2})=-\frac{1}{m_{S}\bar{f}_{S} ^{2}}e^{m_{S}^{2}/M^{2}}\frac{4n}{3M^{2}},\] \[I_{n}^{(\alpha_{s}G^{2})}(M^{2},\mu)=\frac{2m_{q}}{m_{S}\bar{f}_ {S}^{2}}\frac{e^{m_{S}^{2}/M^{2}}}{48\pi M^{2}}\left\{-12n\ln\frac{M^{2}}{\mu^{2 }}-6(n+2)+\Theta(n-1)\left[-4n\ln\frac{M^{2}}{\mu^{2}}+3\tilde{\psi}(n)-\frac {6}{n}\right]\right\},\] \[I_{n}^{(g_{s}\bar{q}q)^{2}}(M^{2})=\frac{m_{q}}{m_{S}\bar{f}_{S} ^{2}}e^{m_{S}^{2}/M^{2}}\frac{4(n+3)}{81M^{4}},\] \[I_{n}^{(g_{s}^{2}\bar{q}q)^{2}}(M^{2},\mu)=-\frac{2m_{q}}{m_{S} \bar{f}_{S}^{2}}e^{m_{S}^{2}/M^{2}}\frac{2+\kappa^{2}}{388\pi^{2}M^{4}}\left\{4( n+5)+\delta^{0n}\left[24\ln\frac{M^{2}}{\mu^{2}}-148\right]+\delta^{1n}\left[-128\ln \frac{M^{2}}{\mu^{2}}-692\right]\right.\] \[\left.+\Theta(n-1)\left[-8(6n^{2}+34n)\ln\frac{M^{2}}{\mu^{2}}+4 n\tilde{\psi}(n)-2(6n^{2}+96n+212)\right]\right\},\] \[I_{n}^{(g_{s}^{3}fG^{3})}(M^{2},\mu)=-\frac{2m_{q}}{m_{S}\bar{f}_{S} ^{2}}e^{m_{S}^{2}/M^{2}}\frac{1}{384\pi^{2}M^{4}}\left\{\delta^{1n}\left[24\ln \frac{M^{2}}{\mu^{2}}+84\right]\right.\] \[\left.+\Theta(n-1)\left[4n(3n-5)\ln\frac{M^{2}}{\mu^{2}}+2(2n^{2} +5n-13)\right]\right\}. \tag{25}\]
Here \(\tilde{\psi}(n)=\psi((n+1)/2)-\psi(n/2)+(-1)^{n}\ln 4\), \(\psi(n+2)=\sum_{j=1}^{n+1}1/j-\gamma_{E}\), \(\kappa=0.74\pm 0.03\).
In figure 2 we show the Borel mass dependence of the first moment obtained from the sum rules. Because the moments are dominated by the nonperturbative contributions, we see that \(\langle\zeta_{1}\rangle\) is not sensitive to the threshold value \(s_{0}\). In table 2 we show the result for the first Gegenbauer coefficient, where the first and second errors arise from \(M^{2}\) and \(s_{0}\), respectively. For the sake of comparison, we also present the results obtained from previous sum rules1. Our result for the first Gegenbauer coefficient \(a_{1}\) obtained from the QCDSR is consistent with the previous sum rules determination [4], and it is almost the same as that obtained from the BFTSR with the same sum rules parameters.
Footnote 1: In the last column, we take the \(\mathrm{SU}(3)\) asymptotic and compare directly to the result obtained for the isovector scalar meson \(a_{0}(980)\)[25].
### Subleading twist light-cone distribution amplitudes
The twist-three light-cone distribution amplitudes associated with the two-particle configuration are also expanded in terms of Gegenbauer polynomials [26]
\[\phi^{s}(u,\mu) =\bar{f}_{S}\left[1+\sum_{n=1}^{\infty}a_{n}^{s}(\mu)C_{n}^{1/2}(2u-1)\right],\] \[\phi^{\sigma}(u,\mu) =\bar{f}_{S}6u\bar{u}\left[1+\sum_{n=1}^{\infty}a_{n}^{\sigma}(\mu)C_{n}^{3/2}(2u-1)\right]. \tag{26}\]
The scalar and tensor moments defined in
\[\langle\zeta_{n}^{s/\sigma}\rangle=\int_{0}^{1}du(2u-1)^{n}\phi^{s/\sigma}(u,\mu) \tag{27}\]
relate to the Gegenbauer coefficients \(a_{n}^{s/\sigma}\) by
\[a_{1}^{s} =3\langle\zeta_{1}^{s}\rangle,\quad a_{2}^{s}=\frac{5}{2}\left[3 \langle\zeta_{2}^{s}\rangle-1\right],\quad a_{4}^{s}=\frac{9}{8}\left[35 \langle\zeta_{4}^{s}\rangle-30\langle\zeta_{2}^{s}\rangle+3\right],\] \[a_{1}^{\sigma} =\frac{5}{3}\langle\zeta_{1}^{\sigma}\rangle,\quad a_{2}^{\sigma }=\frac{7}{12}\left[5\langle\zeta_{2}^{\sigma}\rangle-1\right],\quad a_{4}^{ \sigma}=\frac{11}{24}\left[21\langle\zeta_{4}^{\sigma}\rangle-14\langle\zeta _{2}^{\sigma}\rangle+1\right]. \tag{28}\]
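These relations can be verified numerically; the toy below checks the \(a_{2}^{s}\) relation of Eq. (28) for a test \(\phi^{s}\) built from Eq. (26) with a hypothetical input coefficient and \(\bar{f}_{S}=1\).

```python
from scipy.special import eval_gegenbauer
from scipy.integrate import quad

a2_in = 0.3                                                  # hypothetical input
phis = lambda u: 1 + a2_in * eval_gegenbauer(2, 0.5, 2*u-1)  # C_n^{1/2} = P_n
z2 = quad(lambda u: (2*u-1)**2 * phis(u), 0, 1)[0]           # moment, Eq. (27)
print(5/2 * (3*z2 - 1))                                      # recovers a2_in = 0.3
```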
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline Sum rules & QCDSR & BFTSR & QCDSR [4] & QCDSR [26] & BFTSR [23] \\ \hline \(a_{1}(f_{0}(980))\) & \(-0.891^{+0.039+0.004}_{-0.033-0.004}\) & \(-0.855^{+0.039+0.004}_{-0.033-0.005}\) & \(-0.78\pm 0.08\) & — & \(-0.51\pm 0.07\) \\ \(a_{1}(f_{0}(1500))\) & — & — & \(0.80\pm 0.40\) & \(-0.48\pm 0.11\) & — \\ \hline \(M^{2}\)(GeV\({}^{2}\)) & \(1.6\pm 0.1\) & & \([1.1,1.6]\) & \([2.5,3.5]\) \\ \(s_{0}\)(GeV\({}^{2}\)) & \(2.2\pm 0.2\) & & \(5.0\pm 0.3\) & \(7.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The first Gegenbauer coefficients obtained from QCD sum rules.
Figure 2: The Borel mass dependence of the first moment obtained from the QCDSR (left) and BFTSR (right).
We consider two correlation functions
\[\Pi_{n}^{s}(z,q) = i\int d^{4}x\,e^{iq\cdot x}\langle 0|\mathrm{T}\left\{\bar{q}_{2}(x) \left(iz\cdot\overleftrightarrow{D}\right)^{n}q_{1}(x),\bar{q}_{1}(0)q_{2}(0) \right\}|0\rangle=-\left(z\cdot q\right)^{n}I_{n}(q^{2}), \tag{29}\] \[\Pi_{n}^{\sigma}(z,q) = i\int d^{4}x\,e^{iq\cdot x}\langle 0|\mathrm{T}\left\{\bar{q}_{2}(x) \sigma_{\mu\nu}\left(iz\cdot\overleftrightarrow{D}\right)^{n}q_{1}(x),\bar{q} _{1}(0)q_{2}(0)\right\}|0\rangle=i\left(q_{\mu}z_{\nu}-q_{\nu}z_{\mu}\right) \left(z\cdot q\right)^{n}I_{n}^{\sigma}(q^{2}). \tag{30}\]
The dispersion relation representation of the correlation functions in the physical regions reads as
\[\Pi_{n}^{s/\sigma}(z,q)=\frac{1}{\pi}\int ds\frac{\mathrm{Im}\Pi_{n}^{s/ \sigma,\mathrm{had}}(s,q^{2})}{s-q^{2}}. \tag{31}\]
With the definitions of local matrix elements in terms of moments
\[\langle 0|\bar{q}_{2}(x)\left(iz\cdot \overleftrightarrow{D}\right)^{n}q_{1}(x)|S(q)\rangle=m_{S}\bar{f}_{S} \left(q\cdot z\right)^{n}\langle\zeta_{n}^{s}\rangle,\] \[\langle 0|\bar{q}_{2}(x)\sigma_{\mu\nu}\left(iz\cdot \overleftrightarrow{D}\right)^{n+1}q_{1}(x)|S(q)\rangle=-i\frac{n+1}{3}m_{S} \bar{f}_{S}\left(q_{\mu}z_{\nu}-q_{\nu}z_{\mu}\right)\left(q\cdot z\right)^{n }\langle\zeta_{n}^{\sigma}\rangle, \tag{32}\]
we obtain the imaginary parts for the neutral scalar mesons2,
Footnote 2: We here concentrate on the correlation functions with even \(n\), since the odd ones result in zero moments for the neutral meson due to the \(C\)-parity conservation.
\[\frac{1}{\pi}\mathrm{Im}I_{n=\mathrm{even}}^{s,\mathrm{had}} = -\delta(q^{2}-m_{S}^{2})m_{S}^{2}\bar{f}_{S}^{2}\langle\zeta_{n=\mathrm{even}}^{s}\rangle+\Theta(q^{2}-s_{0}^{s})\frac{3\left(-q^{2}+4m_{q}^{2}\right)}{8\pi^{2}}\frac{1}{n+1},\] \[\frac{1}{\pi}\mathrm{Im}I_{n=\mathrm{even}}^{\sigma,\mathrm{had}} = -\delta(q^{2}-m_{S}^{2})m_{S}^{2}\bar{f}_{S}^{2}\langle\zeta_{n=\mathrm{even}}^{\sigma}\rangle\frac{n+1}{3}+\Theta(q^{2}-s_{0}^{s})\frac{3\left(-q^{2}+2m_{q}^{2}\right)}{8\pi^{2}}\frac{1}{n+3}. \tag{33}\]
In the deep Euclidean region, the correlation functions can be evaluated using the operator product expansion at the quark level. After applying the quark-hadron duality to match the results for \(I_{n}^{s/\sigma}(q^{2})\) obtained from the hadron interpolation and the OPE calculation, the Borelized results of the moments for the neutral scalar mesons are [21]
\[-m_{S}^{2}\bar{f}_{S}^{2}e^{-m_{S}^{2}/M^{2}}\langle\zeta_{n= \mathrm{even}}^{s}\rangle=-\frac{3}{8\pi^{2}}\frac{1}{n+1}\int_{0}^{s_{0}^{s} }se^{-s/M^{2}}ds-\langle\alpha_{s}G^{2}\rangle\frac{1}{8\pi}-\langle\bar{q}q \rangle(n+3)m_{q}\] \[+\langle g_{s}\bar{q}\sigma TGq\rangle\frac{(4n^{2}+23n+12)m_{q}} {12}\frac{1}{M^{2}}+4\pi\alpha_{s}\langle\bar{q}q\rangle^{2}\frac{(-8n^{2}+16 n+150)}{81}\frac{1}{M^{2}}, \tag{34}\] \[-\frac{1}{3}m_{S}^{2}\bar{f}_{S}^{2}e^{-m_{S}^{2}/M^{2}}\langle \zeta_{n=\mathrm{even}}^{\sigma}\rangle=-\frac{3}{8\pi^{2}}\frac{1}{(n+1)(n+3 )}\int_{0}^{s_{0}^{s}}se^{-s/M^{2}}ds-\langle\alpha_{s}G^{2}\rangle\frac{1}{24 \pi}\frac{1}{n+1}-\langle\bar{q}q\rangle m_{q}\] \[+\frac{1}{n+1}\frac{1}{M^{2}}\left\{\langle g_{s}\bar{q}\sigma TGq \rangle\frac{(4n^{2}+7n+5)m_{q}}{12}-4\pi\alpha_{s}\langle\bar{q}q\rangle^{2 }\left[\frac{8n^{2}+9n-35}{162}+\frac{2\delta_{n0}}{9}\right]\right\}, \tag{35}\]
Besides the QCDSR result, the BFTSR is also applied to calculate the scalar and tensor moments, with an accuracy up to the linear terms in the quark mass [25].
\[-m_{S}^{2}\bar{f}_{S}^{2}e^{-m_{S}^{2}/M^{2}}\langle\zeta_{n= \mathrm{even}}^{s}\rangle=-\frac{3}{8\pi^{2}}\frac{1}{n+1}M^{4}+\frac{3}{8\pi^ {2}}\frac{e^{-s_{0}^{s}/M^{2}}}{n+1}\left[M^{4}+s_{0}^{s}M^{2}-2(n+2)m_{q}^{2 }M^{2}\right]\] \[-\langle\alpha_{s}G^{2}\rangle\frac{1}{8\pi}-\langle\bar{q}q \rangle(n+3)m_{q}+\langle g_{s}^{3}fG^{3}\rangle\frac{n}{96\pi^{2}}\frac{1}{M^{ 2}}\] \[+\langle g_{s}\bar{q}\sigma TGq\rangle\frac{(8n^{2}+13n-18)m_{q}} {18}\frac{1}{M^{2}}+4\pi\alpha_{s}\langle\bar{q}q\rangle^{2}\frac{(-4n^{2}-14n+ 168)}{81}\frac{1}{M^{2}}, \tag{36}\] \[-\frac{1}{3}m_{S}^{2}\bar{f}_{S}^{2}e^{-m_{S}^{2}/M^{2}}\langle \zeta_{n=\mathrm{even}}^{\sigma}\rangle=\frac{3}{8\pi^{2}}\frac{1}{(n+1)(n+3 )}M^{4}-\frac{3}{8\pi^{2}}\frac{e^{-s_{0}^{s}/M^{2}}}{(n+1)(n+3)}\left[M^{4}+s _{0}^{s}M^{2}-2(n+3)m_{q}^{2}M^{2}\right]\] \[-\langle\alpha_{s}G^{2}\rangle\frac{1}{24\pi}\frac{1}{n+1}-\langle \bar{q}q\rangle m_{q}+\langle g_{s}\bar{q}\sigma TGq\rangle\frac{(8n+7)m_{q}} {18}\frac{1}{M^{2}}+4\pi\alpha_{s}\langle\bar{q}q\rangle^{2}\frac{(-4n+10)}{81} \frac{1}{M^{2}}. \tag{37}\]
Figure 3 and figure 4 show the Borel mass dependence of the second scalar and tensor moments. It is easy to see the apparent difference between the results obtained from the QCDSR and the BFTSR, and hence the different predictions for the Gegenbauer coefficients shown in table 3 and table 4.
The twist-three LCDAs also receive contributions from the three-particle configuration [27; 28; 29]
\[\phi_{3S}^{q}(p_{1},\alpha_{i})=360f_{3S}\alpha_{1}\alpha_{2}\alpha_{3}^{2} \left[1+\lambda_{3S}\left(\alpha_{1}-\alpha_{2}\right)+\omega_{3S}\left(7\alpha _{3}-3\right)\right]. \tag{38}\]
where the nonperturbative parameters are defined by the local matrix elements
\[\langle S(p_{1})|\bar{q}_{2}(z)g_{s}G\cdot\sigma q_{1}(z)|0\rangle =2f_{3S}\left(p_{1}\cdot z\right)^{2},\] \[\langle S(p_{1})|\bar{q}_{2}(z)\overleftrightarrow{D}g_{s}G \cdot\sigma q_{1}(z)-\bar{q}_{2}(z)g_{s}G\cdot\sigma\overrightarrow{D}q_{1}( z)|0\rangle=-2f_{3S}\lambda_{3S}\left(p_{1}\cdot z\right)^{3}/14,\] \[\langle S(p_{1})|\bar{q}_{2}(z)\sigma^{\mu\nu}\left[iD,g_{s}G_{ \mu\nu}\right]q_{1}(z)-\frac{3}{7}\partial\bar{q}_{2}(z)g_{s}G\cdot\sigma \overrightarrow{D}q_{1}(z)|0\rangle=-f_{3S}\omega_{3S}\left(p_{1}\cdot z \right)^{3}/14. \tag{39}\]
Their renormalization group equations read
\[f_{3S}(\mu)=f_{3S}(\mu_{0})L(\mu,\mu_{0})^{-\gamma_{f_{3S}}/8},\] \[\left[f_{3S}\lambda_{3S}\right](\mu)=\left[f_{3S}\lambda_{3S}\right](\mu_{0})L(\mu,\mu_{0})^{-\gamma_{\lambda_{3S}}/8},\] \[\left[f_{3S}\omega_{3S}\right](\mu)=\left[f_{3S}\omega_{3S}\right](\mu_{0})L(\mu,\mu_{0})^{-\gamma_{\omega_{3S}}/8}. \tag{40}\]
At one-loop accuracy, the anomalous dimensions are
\[\gamma_{f_{3S}}^{(0)}=\frac{110}{9},\quad\gamma_{\lambda_{3S}}^{(0)}=\frac{13 9}{9},\quad\gamma_{\omega_{3S}}^{(0)}=\frac{208}{9}. \tag{41}\]
In the numerics we take \(\lambda_{3f_{0}}=0\) due to its \(G\)-odd definition, and \(\omega_{3f_{0}}(1\,{\rm GeV})=-1.5\pm 0.7\), the same as \(\omega_{3\pi}\). The additional scalar coupling is related to the Gegenbauer moments \(\langle\zeta_{n=0,2}^{s/\sigma}\rangle\) and \(\bar{f}_{f_{0}}\) by the equations of motion
\[\langle\zeta_{2}^{s}\rangle=\frac{1}{3}\langle\zeta_{0}^{s}\rangle+\frac{4}{m_{ f_{0}}}\frac{\bar{f}_{3f_{0}}}{\bar{f}_{f_{0}}},\qquad\langle\zeta_{2}^{\sigma} \rangle=\frac{1}{5}\langle\zeta_{0}^{\sigma}\rangle+\frac{12}{5m_{f_{0}}} \frac{\bar{f}_{3f_{0}}}{\bar{f}_{f_{0}}}-\frac{8}{5m_{f_{0}}}\frac{\bar{f}_{3f_ {0}}}{\bar{f}_{f_{0}}}\langle\langle\alpha_{3}\rangle\rangle_{f_{0}}. \tag{42}\]
The moments of the three-particle distribution amplitudes are defined as
\[\langle\langle(\alpha_{2}-\alpha_{1}+v\alpha_{3})^{n}\rangle\rangle=\int{ \cal D}\alpha_{i}\phi_{3S}(\alpha_{i})\left(\alpha_{2}-\alpha_{1}+v\alpha_{3} \right)^{n}. \tag{43}\]
## III \(D_{s}\to f_{0}\) form factors
\(D_{s}\to f_{0}\) transition form factors are defined by
\[\langle f_{0}(p_{1})|\bar{s}\gamma_{\mu}\gamma_{5}c|D_{s}^{+}(p)\rangle =-if_{1}(q^{2})\left[(p+p_{1})_{\mu}-\frac{m_{D_{s}}^{2}-m_{f_{0}} ^{2}}{q^{2}}q_{\mu}\right]-if_{0}(q^{2})\frac{m_{D_{s}}^{2}-m_{f_{0}}^{2}}{q^ {2}}q_{\mu}\] \[=-i\left[f_{+}(q^{2})\left(p+p_{1}\right)_{\mu}+f_{-}(q^{2})q_{ \mu}\right]\,.\] \[\langle f_{0}(p_{1})|\bar{s}\sigma_{\mu\nu}\gamma_{5}q^{\mu}c|D_{ s}^{+}(p)\rangle =-\frac{f_{T}(q^{2})}{m_{D_{s}}+m_{f_{0}}}\left[q^{2}\left(p+p_{1} \right)_{\nu}-\left(m_{D_{s}}^{2}-m_{f_{0}}^{2}\right)q_{\nu}\right]\,. \tag{44}\]
Here \(q=p-p_{1}\) is the transfer momentum, the relations between two definitions are
\[f_{+}(q^{2})=f_{1}(q^{2})\,,\qquad f_{-}(q^{2})=\frac{m_{D_{s}}^{2}-m_{f_{0}}^ {2}}{q^{2}}f_{0}(q^{2})-\frac{m_{D_{s}}^{2}-m_{f_{0}}^{2}}{q^{2}}f_{1}(q^{2})\,. \tag{45}\]
To evaluate the \(D_{s}\to f_{0}\) form factors, we start with the correlation functions
\[\Pi_{\mu}^{\rm S_{i}}(p_{1},q) =i\int d^{4}xe^{iqx}\langle f_{0}(p_{1})|{\rm T}\{j_{1,\mu}^{\rm S_{i}}(x),j_{2}^{\rm S_{i}}\}|0\rangle, \tag{46}\] \[\tilde{\Pi}_{\mu}^{\rm S_{i}}(p_{1},q) =i\int d^{4}xe^{iqx}\langle f_{0}(p_{1})|{\rm T}\{\tilde{j}_{1,\mu}^{\rm S_{i}}(x),j_{2}^{\rm S_{i}}\}|0\rangle, \tag{47}\]
where \(j_{1,\mu}\) and \(\tilde{j}_{1,\mu}\) are the weak transition currents and \(j_{2}\) is the hadron interpolating current. The labels \({\rm S_{i}}\) denote different scenarios of the currents, as shown in table 5. We highlight that both the leading and subleading twist LCDAs of the scalar meson contribute to the \(D_{s}\to f_{0}\) form factors if we take the conventional non-chiral currents, while only the leading or the subleading twist LCDAs contribute to the form factors if we take the chiral currents.
In the physical region, the long-distance quark-gluon interaction between the two currents in Eqs. (46,47) begins to form hadrons. In this respect, the correlation function can be understood as the sum of contributions from all possible intermediate
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline Sum rules & QCDSR & BFTSR & QCDSR [21] & BFTSR [25] \\ \hline \(a_{2}^{s}(f_{0}(980))\) & \(0.296\pm 0.044\) & \(-0.828\pm 0.065\) & — & — \\ \(a_{2}^{s}(f_{0}(1500))\) & — & — & \([-0.33,-0.18]\) & \([-0.02,0.05]\) \\ \hline \(M^{2}\)(GeV\({}^{2}\)) & \(1.8\pm 0.1\) & \(1.6\pm 0.1\) & \(1.7\pm 0.2\) & \(2.0\pm 0.2\) \\ \(s_{0}\)(GeV\({}^{2}\)) & \(2.2\pm 0.2\) & \(6.5\pm 0.3\) & \(6.5\pm 0.3\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The second Gegenbauer coefficients obtained from the scalar sum rules.
states with appropriate subtractions. We take the (axial-)vector current in the weak vertex as an example to show the dispersion relation of the invariant amplitudes in the variable \((p_{1}+q)^{2}>0\), which is written as
\[\Pi_{\mu}^{\rm S_{i},had}(p_{1},q)=\frac{\langle f_{0}(p_{1})|j_{1,\mu}^{\rm S_{i }}(x)|D_{s}(p_{1}+q)\rangle\langle D_{s}|j_{s}^{\rm S_{i}}(0)|0\rangle}{m_{D_{s }}^{2}-(p_{1}+q)^{2}}+\frac{1}{\pi}\int_{s_{0}^{\rm S}}^{\infty}ds\frac{\rho_{ \mu}^{h}(s,q^{2})}{s-(p_{1}+q)^{2}}. \tag{48}\]
The ground state \(D_{s}\) is isolated from the contributions from excited states and continuum spectra by introducing a threshold value \(s_{0}\). With the form factors defined in Eqs. (44) and the decay constant defined as \(\langle D_{s}(p_{1}+q)|j_{s}^{\rm S_{1}}(0)|0\rangle=m_{D_{s}}^{2}f_{D_{s}}/(m _{c}+m_{s})\), the hadron representation is rewritten as
\[\Pi_{\mu}^{\rm S_{i},had}(p_{1},q)=\frac{-im_{D_{s}}^{2}f_{D_{s}}\left[2f_{+}(q^{2})p_{1\mu}+\left(f_{+}(q^{2})+f_{-}(q^{2})\right)q_{\mu}\right]}{(m_{c}+m_{s})\left[m_{D_{s}}^{2}-(p_{1}+q)^{2}\right]}+\frac{1}{\pi}\int_{s_{0}^{\rm S}}^{\infty}ds\frac{\rho_{+}^{h}(s,q^{2})p_{1\mu}+\rho_{-}^{h}(s,q^{2})q_{\mu}}{s-(p_{1}+q)^{2}}. \tag{49}\]
The relations between different scenarios read as
\[\Pi_{\mu}^{\rm S_{1},had}(p_{1},q)=\Pi_{\mu}^{\rm S_{2},had}(p_{1},q)=-\Pi_{\mu}^{\rm S_{3},had}(p_{1},q). \tag{50}\]
In the Euclidean momentum space with negative \(q^{2}\), the correlation functions can be evaluated directly by QCD at the quark-gluon level. Since the operator product expansion (OPE) is valid for large energies of the final state mesons, the momentum transfer squared is restricted to be not too large, \(0\leqslant|q^{2}|\leqslant q_{\rm max}^{2}\), and hence the operator product of the \(c\)-quark fields in the correlation function can be expanded near the light cone \(x^{2}\sim 0\) due to the large virtuality,
\[S_{c}(x,0,m_{c})\equiv-i\langle 0|T\{c_{i}(x),\bar{c}_{j}(0)\}|0 \rangle=-i\frac{m_{c}^{2}}{4\pi^{2}}\left[\frac{K_{1}(m_{c}\sqrt{|x^{2}|})}{ \sqrt{|x^{2}|}}+\frac{i\not{x}K_{2}(m_{c}\sqrt{|x^{2}|})}{|x^{2}|}\right]\delta _{ij}\] \[-i\frac{gm_{c}}{16\pi^{2}}\int_{0}^{1}du\left[G\cdot\sigma K_{0}( m_{c}\sqrt{|x^{2}|})+i\frac{[\bar{u}\not{x}G\cdot\sigma+uG\cdot\sigma\not{x}]K_{1} (m_{c}\sqrt{|x^{2}|})}{\sqrt{|x^{2}|}}\right]\delta_{ij}+\cdots. \tag{51}\]
The first term corresponds to the free charm quark propagator, the second one corresponds to the quark-gluon interaction at leading power, and the ellipsis denotes the higher power corrections from the quark-gluon interaction. Here \(\bar{u}=1-u\) is implied. The correlation functions are ultimately written as a general convolution of hard functions with various LCDAs at different twists
\[\Pi_{\mu}^{\rm S_{i},OPE}(p_{1},q)=\sum_{t}\int_{0}^{1}du\,{\cal T}_{\mu}^{(t )}(u,q^{2},(p_{1}+q)^{2})\otimes\phi^{(t)}(u)+\int_{0}^{1}du\int_{0}^{u}{\cal D }\alpha_{i}\,{\cal T}_{\mu}^{\prime}(u,\alpha_{i},q^{2},(p_{1}+q)^{2})\otimes \phi_{3f_{0}}(\alpha_{i}), \tag{52}\]
where the first term comes from the two particle LCDAs, and the second term comes from the twist three LCDA with three particle configuration. The OPE amplitudes in Eq. (52) can also be written in a dispersion integral over the invariant mass of the interpolating heavy meson
\[\Pi_{\mu}^{\rm S_{i},OPE}(p_{1},q) = \frac{1}{\pi}\int_{0}^{1}du\,\sum_{n=1,2}\left[\frac{{\rm Im}\Pi_ {+,n}^{\rm S_{i},OPE}(q^{2},u)\,p_{1\mu}+{\rm Im}\Pi_{-,n}^{\rm S_{i},OPE}(q^{ 2},u)q_{\mu}}{u^{n}\left[s_{2}(u)-(p_{1}+q)^{2}\right]^{n}}\right. \tag{53}\] \[\left.+\int_{0}^{u}{\cal D}\alpha_{i}\frac{{\rm Im}\Pi_{+,n}^{\rm S _{i},OPE}(q^{2},u,\alpha_{i})\,p_{1\mu}+{\rm Im}\Pi_{-,n}^{\rm S_{i},OPE}(q^{ 2},u,\alpha_{i})\,q_{\mu}}{\left[\alpha_{2}+v\left(1-\alpha_{1}-\alpha_{2} \right)\right]^{n}\left[s_{3}(u,\alpha_{i})-(p_{1}+q)^{2}\right]^{n}}\right].\]
Here the momentum fraction dependent kinematical variables read as \(s_{2}(u)=\bar{u}m_{f_{0}}^{2}+(m_{c}^{2}-\bar{u}q^{2})/u\) and \(s_{3}(u,\alpha_{i})=(1-\alpha_{2}-v\alpha_{3})m_{f_{0}}^{2}+[m_{c}^{2}-(1- \alpha_{2}-v\alpha_{3})q^{2}]/(\alpha_{2}+v\alpha_{3})\).
We then implement the quark-hadron duality to eliminate the contributions from the excited and continuum spectra with the threshold invariant mass \(s_{0}^{i}\). In order to improve the reliability of the quark-hadron duality, we Borel transform both the hadronic
\begin{table}
\begin{tabular}{c|c c c|c|c} \hline \hline Scenarios & \(j_{1,\mu}^{\rm S_{i}}\) & \(\tilde{j}_{1,\mu}^{\rm S_{i}}\) & \(j_{2}^{\rm S_{i}}\) & \(f_{+},f_{-},f_{T}\) \\ \hline \hline \(\rm S_{1}\) & \(\bar{s}\gamma_{\mu}\gamma_{5}c\) & \(\bar{s}\sigma_{\mu\nu}\gamma_{5}q^{\nu}c\) & \(\bar{c}i\gamma_{5}s\) & \(\phi,\phi^{*},\phi^{*^{\prime}}\) \\ \hline \(\rm S_{2}\) & \(\bar{s}\gamma_{\mu}(1-\gamma_{5})c\) & \(\bar{s}\sigma_{\mu\nu}(1+\gamma_{5})q^{\nu}c\) & \(\bar{c}i(1-\gamma_{5})s\) & \(\phi\) \\ \hline \(\rm S_{3}\) & \(\bar{s}\gamma_{\mu}(1-\gamma_{5})c\) & \(\bar{s}\sigma_{\mu\nu}(1+\gamma_{5})q^{\nu}c\) & \(\bar{c}i(1+\gamma_{5})s\) & \(\phi^{*},\phi^{*^{\prime}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Scenarios of the currents to evaluate the \(D_{s}\to f_{0}\) form factors.
representation and the OPE evaluation of the correlation functions. This operation, on the one hand, singles out the ground state meson by suppressing the contributions from the excited and continuum spectra in the hadron representation, and on the other hand, helps to obtain a convergent power expansion in the OPE evaluation. The \(D_{s}\to f_{0}\) form factors obtained from the LCSRs approach with different currents are collected as follows,
\[f_{+}^{\rm S1}(q^{2}) =\frac{m_{c}+m_{s}}{2m_{D_{s}}^{2}f_{D_{s}}}\left\{\int_{u_{0}}^{1 }\frac{du}{u}\left[-m_{c}\phi(u)+um_{f_{0}}\phi^{s}(u)+\frac{m_{f_{0}}\phi^{ \sigma}(u)}{3}+\frac{m_{f_{0}}\phi^{\sigma}(u)}{6}\frac{m_{c}^{2}+q^{2}-u^{2}m_ {f_{0}}^{2}}{uM^{2}}\right]e^{\frac{-s_{2}(u)+m_{D_{s}}^{2}}{M^{2}}}\right.\] \[+\int_{u_{0}}^{1}du\int_{0}^{u}d\alpha_{1}\int_{0}^{1-u}\frac{d \alpha_{2}}{\alpha_{3}}\frac{8m_{f_{0}}^{2}f_{3f_{0}}^{s}\phi_{3f_{0}}^{s}( \alpha_{i})}{\left[\alpha_{2}+v\alpha_{3}\right]M^{2}}\,e^{\frac{-s_{3}(u, \alpha_{i})+m_{D_{s}}^{2}}{M^{2}}}+\frac{m_{f_{0}}\phi^{\sigma}(u_{0})}{6} \frac{m_{c}^{2}+q^{2}-u_{0}^{2}m_{f_{0}}^{2}}{m_{c}^{2}-q^{2}+u_{0}^{2}m_{f_{0 }}^{2}}\,e^{\frac{-s_{0}^{1}+m_{D_{s}}^{2}}{M^{2}}}\] \[+\left.\int_{0}^{u_{0}}d\alpha_{1}\int_{0}^{1-u_{0}}\frac{d \alpha_{2}}{\alpha_{3}}\frac{8m_{f_{0}}^{2}f_{3f_{0}}^{s}\phi_{3f_{0}}^{s}( \alpha_{i})}{\left[m_{c}^{2}-q^{2}+\left(\alpha_{2}+v_{0}\alpha_{3}\right)^{2} m_{f_{0}}^{2}\right]}\,e^{\frac{-s_{0}^{1}+m_{D_{s}}^{2}}{M^{2}}}\right\},\] (54) \[f_{+}^{\rm S1}(q^{2}) +f_{-}^{\rm S1}(q^{2})=\frac{m_{c}+m_{s}}{m_{D_{s}}^{2}f_{D_{s}}} \left\{\int_{u_{0}}^{1}\frac{du}{u}\left[m_{f_{0}}\phi^{s}(u)+\frac{m_{f_{0}} \phi^{\sigma}(u)}{6u}-\frac{m_{f_{0}}\phi^{\sigma}(u)}{6}\frac{m_{c}^{2}-q^{2} +u^{2}m_{f_{0}}^{2}}{u^{2}M^{2}}\right]e^{\frac{-s_{0}(u)+m_{D_{s}}^{2}}{M^{2}}}\right.\] \[+\left.\int_{u_{0}}^{1}du\int_{0}^{u}d\alpha_{1}\int_{0}^{1-u} \frac{d\alpha_{2}}{\alpha_{3}}\frac{8m_{f_{0}}^{2}f_{3f_{0}}^{s}\phi_{3f_{0}} ^{s}(\alpha_{i})}{\left[\alpha_{2}+v\alpha_{3}\right]^{2}M^{2}}\,e^{\frac{-s_ {3}(u,\alpha_{i})+m_{D_{s}}^{2}}{M^{2}}}-\frac{m_{f_{0}}\phi^{\sigma}(u_{0})}{ 6u_{0}}\,e^{\frac{-s_{0}^{1}+m_{D_{s}}^{2}}{M^{2}}}\] \[+\int_{0}^{u_{0}}d\alpha_{1}\int_{0}^{1-u_{0}}\frac{d\alpha_{2}}{ \alpha_{3}}\frac{8m_{f_{0}}^{2}f_{3f_{0}}^{s}\phi_{3f_{0}}^{s}(\alpha_{i})}{ \left[m_{c}^{2}-q^{2}+\left(\alpha_{2}+v_{0}\alpha_{3}\right)^{2}m_{f_{0}}^{2} \right]}\,e^{\frac{-s_{0}^{1}+m_{D_{s}}^{2}}{M^{2}}}\Bigg{\}}\,,\] (55) \[f_{T}^{\rm S1}(q^{2}) =\frac{\left(m_{c}+m_{s}\right)\left(m_{D_{s}}+m_{f_{0}}\right)}{ m_{D_{s}}^{2}f_{D_{s}}}\left\{\int_{u_{0}}^{1}\frac{du}{u}\left[-\frac{\phi(u)}{2}+ \frac{m_{f_{0}}\phi^{\sigma}(u)}{6}\frac{m_{c}}{uM^{2}}\right]e^{\frac{-s_{2}(u )+m_{D_{s}}^{2}}{M^{2}}}\right.\] \[+\left.\frac{m_{f_{0}}\phi^{\sigma}(u_{0})}{6}\frac{m_{c}}{\left[ m_{c}^{2}-q^{2}+u_{0}^{2}m_{f_{0}}^{2}\right]}\,e^{\frac{-s_{0}^{1}+m_{D_{s}}^{2}}{M^{2}}}\right\},\] (56) \[f_{+}^{\rm S2}(q^{2}) =-\frac{m_{c}\left(m_{c}+m_{s}\right)}{m_{D_{s}}^{2}f_{D_{s}}} \int_{u_{0}}^{1}\frac{du}{u}\phi(u)e^{\frac{-s_{2}(u)+m_{D_{s}}^{2}}{M^{2}}},\] (57) \[f_{+}^{\rm S2}(q^{2}) +f_{-}^{\rm S2}(q^{2})=0,\] (58) \[f_{T}^{\rm S2}(q^{2}) =-\frac{\left(m_{c}+m_{s}\right)\left(m_{D_{s}}+m_{f_{0}}\right)}{ m_{D_{s}}^{2}f_{D_{s}}}\int_{u_{0}}^{1}\frac{du}{u}\phi(u)e^{\frac{-s_{2}(u)+m_{D_{s}}^{2}}{ M^{2}}},\] (59) \[f_{+}^{\rm S3}(q^{2}) =\frac{\left(m_{c}+m_{s}\right)m_{f_{0}}}{2m_{D_{s}}^{2}f_{D_{s}}} \left\{\int_{u_{0}}^{1}\frac{du}{u}\left[2u\phi^{s}(u)+\frac{2\phi^{\sigma}(u)}{3 }+\frac{\phi^{\sigma}(u)}{3}\frac{m_{c}^{2}+q^{2}-u^{2}m_{f_{0}}^{2}}{uM^{2}} \right]e^{\frac{-s_{2}(u)+m_{D_{s}}^{2}}{M^{2}}}\] \[+\int_{u_{0}}^{1}du\int_{0}^{u}d\alpha_{1}\int_{0}^{1-u}\frac{d \alpha_{2}}{\alpha_{3}}\frac{16m_{f_{0}}f_{3f_{0}}^{s}\phi_{3f_{0}}^{s}(\alpha_{ i})\left(\alpha_{2}+v_{0}\alpha_{3}\right)}{\left[m_{c}^{2}-q^{2}+\left(\alpha_{2}+v_{0} \alpha_{3}\right)^{2}m_{f_{0}}^{2}\right]}+\frac{\phi^{\sigma}(u_{0})}{3}\frac{m_ 
{c}^{2}+q^{2}-u_{0}^{2}m_{f_{0}}^{2}}{m_{c}^{2}-q^{2}+u_{0}^{2}m_{f_{0}}^{2}}\,e^{ \frac{-s_{0}^{1}+m_{D_{s}}^{2}}{M^{2}}}\] \[+\int_{0}^{u_{0}}d\alpha_{1}\int_{0}^{1-u_{0}}\frac{d\alpha_{2}}{ \alpha_{3}}\frac{16m_{f_{0}}f_{3f_{0}}^{s}\phi_{3f_{0}}^{s}(\alpha_{i})\left( \alpha_{2}+v_{0}\alpha_{3}\right)}{\left[m_{c}^{2}-q^{2}+\left(\alpha_{2}+v_{0} \alpha_{3}\right)^{2}m_{f_{0}}^{2}\right]}\,e^{\frac{-s_{0}^{1}+m_{D_{s}}^{2}}{M^{2}}}\Bigg{\}}\,,\] (60) \[f_{+}^{\rm S3}(q^{2}) +f_{-}^{\rm S3}(q^{2})=\frac{\left(m_{c}+m_{s}\right)m_{f_{0}}}{m_{D_{s}}^{2} f_{D_{s}}}\left\{\int_{u_{0}}^{1}\frac{du}{u}\left[2\phi^{s}(u)+\frac{\phi^{\sigma}(u)}{3u}- \frac{\phi^{\sigma}(u)}{3}\frac{m_{c}^{2}-q^{2}+u^{2}m_{f_{
The threshold momentum fraction is the solution of \(s_{i}(u_{0})=s_{0}\) with \(i=1,2\),
\[u_{0}^{i}=\frac{-(s_{0}^{i}-q^{2}-m_{f_{0}}^{2})+\sqrt{(s_{0}^{i}-q^{2}-m_{f_{0}}^ {2})^{2}+4m_{f_{0}}^{2}(m_{c}^{2}-q^{2})}}{2m_{f_{0}}^{2}}. \tag{63}\]
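For concreteness, Eq. (63) is straightforward to evaluate; the sketch below uses illustrative inputs close to the values quoted in this work (\(s_{0}=6\,{\rm GeV}^{2}\), \(m_{f_{0}}=0.99\,{\rm GeV}\), \(m_{c}=1.30\,{\rm GeV}\)).

```python
import numpy as np

def u0(s0, q2, mf0, mc):
    """Threshold momentum fraction solving s2(u0) = s0, Eq. (63)."""
    d = s0 - q2 - mf0**2
    return (-d + np.sqrt(d**2 + 4*mf0**2*(mc**2 - q2))) / (2*mf0**2)

print(u0(6.0, 0.0, 0.99, 1.30))   # about 0.32 at the full recoil point q^2 = 0
```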
The expressions in Eqs. (55-56), Eqs. (58-59) and Eqs. (60-62), under different scenarios of the currents, are consistent with the results obtained in Ref. [30], Ref. [31] and Ref. [25], respectively. The above calculations are based on the ideal two-quark configuration in which \(f_{0}\) is purely an \(s\bar{s}\) state. Nevertheless, several experimental measurements indicate the mixing between \(f_{0}\) and \(\sigma\),
\[\sigma=|\bar{n}n\rangle\cos\theta+|\bar{s}s\rangle\sin\theta,\qquad f_{0}=-| \bar{n}n\rangle\sin\theta+|\bar{s}s\rangle\cos\theta. \tag{64}\]
Then the form factors in Eqs. (55-56), Eqs. (58-59) and Eqs. (60-62) are modified3 by multiplying by the angle-dependent factor \(\cos\theta\). The mixing angle extracted from the data is not larger than \(40^{\circ}\) [32; 33; 34], and a recent LHCb measurement of the upper limit on the branching fraction \(\mathcal{B}(\bar{B}^{0}\to J/\Psi f_{0})\times\mathcal{B}(f_{0}\to\pi^{+}\pi^{-})\) leads to \(|\theta|<30^{\circ}\) [35]. In the following calculation, we take the mixing angle \(\theta=20^{\circ}\pm 10^{\circ}\).
Footnote 3: The mixing would not bring changes to the LCSRs in section II, since the angle dependence could be absorbed into the definition of the scalar decay constant.
The value of the Borel mass squared is implied by the internal virtuality of the propagator, which is smaller than the cutoff threshold value, namely \(M^{2}\sim\mathcal{O}(um_{D_{s}}^{2}+\bar{u}q^{2}-u\bar{u}m_{f_{0}}^{2})<s_{0}\); this value is a little larger than the factorization scale chosen at \(\mu_{f}^{2}=m_{D_{s}}^{2}-m_{c}^{2}=1.48^{2}\,\mathrm{GeV}^{2}\) with the quark mass \(\overline{m}_{c}(m_{c})=1.30\,\mathrm{GeV}\). In practice the selection of the Borel mass is a compromise between the dominance of the ground state in the hadron spectrum, which demands a small value, and the convergence of the OPE evaluation, which prefers a large one; this results in a region where \(H_{ij}(q^{2})\) shows an extremum in \(M^{2}\)
\[\frac{d}{d(1/M^{2})}{\rm ln}H_{ij}(q^{2})=0\,. \tag{65}\]
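Numerically, the extremum condition of Eq. (65) can be located on a grid; the following toy sketch uses a hypothetical shape for \(H_{ij}\), not our actual sum rule.

```python
import numpy as np

def borel_extremum(H, M2_grid):
    """Grid search for the extremum of ln H in 1/M^2, Eq. (65)."""
    M2 = np.asarray(M2_grid, dtype=float)
    lnH = np.log(np.abs(np.array([H(m2) for m2 in M2])))
    dlnH = np.gradient(lnH, 1.0 / M2)      # d ln H / d(1/M^2)
    return M2[np.argmin(np.abs(dlnH))]     # grid point closest to the zero

H = lambda m2: np.exp(-1.0/m2) * (1.0 + 2.0/m2**2)     # toy shape
print(borel_extremum(H, np.linspace(2.0, 10.0, 200)))  # about 3.4 GeV^2
```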
The continuum threshold is usually set close to the onset of the first excited state with the same quantum numbers as \(D_{s}\) and characterised by \(s_{0}\approx(m_{D_{s}}+\chi)^{2}\), and it is finally determined by considering the maximal stable evolution of physical quantities in the Borel mass squared. We take \(s_{0}\equiv s_{0}^{1}=s_{0}^{2}\) in the numerics. The choice of these two parameters should guarantee the convergence of the twist expansion in the truncated OPE calculation (the higher twist contributions are no more than thirty percent) and simultaneously the high energy cutoff in the hadron interpolation (the contributions from the highly excited states and the continuum spectrum are smaller than thirty percent). We finally set them at \(M^{2}=5.0\pm 0.5\) GeV\({}^{2}\) and \(s_{0}=6.0\pm 0.5\) GeV\({}^{2}\) in this work. The value of the Borel mass is a little larger than that chosen in the \(D_{s}\to\pi,K\) transitions [36], and close to those chosen in the \(D_{s}\to\phi\) [30], \(D_{s}\to\eta^{\prime}\) [2] and \(D_{s}\to f_{0}(980)\) transitions [3].
We show the LCSRs results of the \(D_{s}\to f_{0}(980)\) form factors in table 6; the results obtained from other approaches are also presented in parallel for comparison. We see that the results obtained by adopting different currents (\({\rm S}_{1},{\rm S}_{2},{\rm S}_{3}\)) are different, especially for the form factor \(f_{+}\), which contributes to the semileptonic \(D_{s}^{+}\to f_{0}e^{+}\nu\) decay. The difference can be traced back to the ill-defined sum rules with the chiral currents under scenarios \({\rm S}_{2}\) and \({\rm S}_{3}\), in which only the axial-vector current \(\bar{s}\gamma_{\mu}\gamma_{5}c\) is considered in the \(D_{s}\to f_{0}\) decay while the vector current \(\bar{s}\gamma_{\mu}c\) with the \(D_{s0}^{*}\to f_{0}\) decay is overlooked at the hadron level. Hereafter we pay attention to the sum rules with the current chosen under scenario \({\rm S}_{1}\). Our result \(f_{+}(0)=0.58\pm 0.07\) is much larger than the previous LCSRs calculation \(f_{+}(0)=0.30\pm 0.03\) [30], but more consistent with the recent measurement \(0.52\pm 0.05\) at BESIII [18]. The main reason for the difference is the input of the decay constant \(\bar{f}_{f_{0}}\), which is taken at \(180\) MeV [5] in the previous work but at \(335\) MeV as evaluated from the QCDSR here. Besides this, we have added the first Gegenbauer expansion terms in the LCDAs, whose contributions are partly ignored in the previous work.
We show the dependence of the form factors on the Borel mass in figure 5, where the gray and magenta curves correspond to the results obtained up to leading twist and subleading twist LCDAs, and the uncertainties come from the threshold value \(s_{0}\). We see that the subleading twist contribution is dominant in the form factor \(f_{+}\), while the leading twist contribution becomes more important for the form factors \(f_{-}\) and \(f_{T}\). We plot the \(q^{2}\) dependence of the form factors in figure 6 with the LCSRs maximal
momentum transfer \(q_{\rm max}^{2}=0.4\,{\rm GeV}^{2}\) [37], where the uncertainties associated with the Borel mass \(M^{2}\) and the threshold value \(s_{0}\) are shown in the gray and magenta bands, respectively. We find that the uncertainty of the LCSRs prediction for the \(D_{s}\to f_{0}\) form factors is fully dominated by \(M^{2}\). To estimate the uncertainty associated with the scale \(\mu_{f}\), we vary the charm quark mass in the region \(\bar{m}_{c}(m_{c})=1.30\pm 0.30\,{\rm GeV}\), which results in \(\mu_{f}=1.48\pm 0.30\,{\rm GeV}\); we find this variation brings another \(20\%\) uncertainty to \(f_{+}(q^{2})\) and \(f_{T}(q^{2})\), and brings much more significant corrections to \(f_{-}(q^{2})\) (larger than \(50\%\)).
### \(B_{s}\to f_{0}\) form factors
With the definitions of the heavy-to-light transition form factors in Eq. (44), we can directly obtain the \(B_{s}\to f_{0}\) form factors by substituting the charm quark with the bottom quark in the LCSRs results shown in Eqs. (55-56). We take the bottom quark mass at \(\bar{m}_{b}(m_{b})=4.2\,{\rm GeV}\) and the factorization scale at \(\mu_{f}={\cal O}(\sqrt{m_{B_{s}}^{2}-m_{b}^{2}})=3\) GeV following the work in Ref. [38]. The Borel mass and threshold value are chosen at \(M^{2}=18\pm 2\,{\rm GeV}^{2}\) and \(s_{0}=36\pm 2\,{\rm GeV}^{2}\), close to the choices in the previous LCSRs work [30]. In table 7, we present the \(B_{s}\to f_{0}\) form factors at the full recoil point obtained from various theoretical approaches, such as the perturbative QCD (PQCD) [39], the QCDSR [40], the LCSRs with chiral currents [31], the LCSRs with the light meson on-shell [30], and the LCSRs with the \(B\) meson on-shell [24]. Again, our results with different weak vertex currents (\({\rm S}_{1},{\rm S}_{2},{\rm S}_{3}\)) show large differences, and we focus on the result obtained under scenario \({\rm S}_{1}\) with the axial-vector weak current. Our result is very close to the previous LCSRs result with the chiral current [31], which we would like to regard as a coincidence. Using the same LCDAs of \(f_{0}\), our result differs from that obtained in the previous LCSRs [30]; this is again largely because of the different input of the scalar coupling \(\tilde{f}_{f_{0}}\). We can also see that our result for \(f_{+}(0)\) is very close to that obtained in the LCSRs with the \(B\) meson LCDAs [24], while the form factors \(f_{-}(0)\) and \(f_{T}(0)\) show large deviations, so higher-order corrections are highly desirable to improve the accuracy. Here we estimate the possible NLO correction by varying the factorization scale as \(\mu_{f}=3.0\pm 0.5\) GeV, which in turn brings another \(20\%-30\%\) uncertainty to \(f_{+}(q^{2})\), \(f_{-}(q^{2})\) and \(f_{T}(q^{2})\).
In figure 7 we depict the Borel mass dependence of the \(B_{s}\to f_{0}\) form factors, where the LCSRs results with accuracy up to the leading twist and subleading twist LCDAs are shown in gray and magenta curves, respectively. Due to the much better
Figure 5: The Borel mass dependence of \(D_{s}\to f_{0}\) form factors from the LCSRs.
heavy quark expansion convergence in contrast to the \(D_{s}\) decay, we see that the leading twist contribution is the dominant one in the \(B_{s}\to f_{0}\) transition. We plot the momentum transfer dependence in figure 8 with the largest momentum transfer at \(q_{\rm max}^{2}=12\,{\rm GeV}^{2}\), where the uncertainties associated with the Borel mass and the threshold value are overlaid in gray and magenta bands. We see that both of them are dominant sources of uncertainty.
### \(D_{s}^{+}\to(f_{0}\to)\left[\pi\pi\right]_{\rm S}e^{+}\nu_{e}\) decay
The second-order differential decay width of the semileptonic decay \(D_{s}^{+}(p)\to f_{0}(p_{1})e^{+}(p_{2})\nu_{e}(p_{3})\) is proportional to the transition form factor \(f_{1}\) as
\[\frac{d^{2}\Gamma(D_{s}^{+}\to f_{0}e^{+}\nu_{e})}{dE_{2}dq^{2}}=\frac{G_{F}^{ 2}m_{D_{s}}^{2}|V_{cs}|^{2}}{16\pi^{3}}|f_{1}(q^{2})|^{2}\left[2x(1+y-z)-4x^{2 }-y\right], \tag{66}\]
in which the dimensionless quantities are
\[x\equiv\frac{E_{2}}{m_{D_{s}}}\,,\qquad y\equiv\frac{q^{2}}{m_{D_{s}}^{2}}\,,\qquad z\equiv\frac{m_{f_{0}}^{2}}{m_{D_{s}}^{2}} \tag{67}\]
with \(E_{2}\) being the energy of the lepton and \(q^{2}\equiv m_{23}^{2}=(p_{2}+p_{3})^{2}=(p-p_{1})^{2}\) being the invariant mass squared of the lepton-neutrino pair. Integrating over the lepton energy, we obtain the differential decay width in the momentum transfer \(q^{2}\)
\[\frac{d\Gamma(D_{s}^{+}\to f_{0}e^{+}\nu_{e})}{dq^{2}}=\frac{G_{F}^{2}|V_{cs} |^{2}\lambda^{3/2}(m_{D_{s}}^{2},m_{f_{0}}^{2},q^{2})}{192\pi^{3}m_{D_{s}}^{3} }|f_{+}(q^{2})|^{2}. \tag{68}\]
\(\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2ab-2ac-2bc\) is the Källén function.
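A compact sketch of Eq. (68) then reads as below; the constants are standard PDG-like inputs that we supply, and the single-pole shape for \(f_{+}(q^{2})\), normalized to our central value \(f_{+}(0)=0.58\), is purely a hypothetical model.

```python
import numpy as np
from scipy.integrate import quad

GF, Vcs = 1.1664e-5, 0.973        # Fermi constant in GeV^-2; |V_cs|
mDs, mf0 = 1.968, 0.990           # meson masses in GeV

def kallen(a, b, c):
    """Kallen function lambda(a, b, c)."""
    return a*a + b*b + c*c - 2*(a*b + a*c + b*c)

def dGamma_dq2(q2, fplus):
    """Narrow-width differential width of Eq. (68)."""
    lam = kallen(mDs**2, mf0**2, q2)
    return GF**2 * Vcs**2 * lam**1.5 / (192 * np.pi**3 * mDs**3) * fplus(q2)**2

fp = lambda q2: 0.58 / (1 - q2 / 6.0)   # hypothetical single-pole shape
print(quad(dGamma_dq2, 0.0, (mDs - mf0)**2, args=(fp,))[0])   # width in GeV
```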
From the experimental side, \(f_{0}\) is measured via the \(\left[\pi\pi\right]_{\rm S}\) invariant mass spectrum. So the key question in the phenomenology is to look closely at the role of \(f_{0}\) in the \(\pi\pi\) invariant mass. A natural solution is to examine the effects of the resonance width. Notably, the \(D_{s}(p)\to\left[\pi(k_{1})\pi(k_{2})\right]_{\rm S}l(p_{2})\nu(p_{3})\) decay is the kinematically simplest channel, because the invariant amplitude depends only on the invariant mass of the dipion system \(s\equiv p_{1}^{2}=(k_{1}+k_{2})^{2}\) and does not rely on its angular orientation with respect to the remaining particles; meanwhile, it is the most important channel since the \(\left[\pi\pi\right]_{\rm S}\) phase shift shows a very broad rise.
To compare with the BESIII analysis [18], we take the Flatté model to describe the width effect of the intermediate resonance. The second-order differential decay width of \(D_{s}^{+}\to\left(f_{0}\to\right)\left[\pi\pi\right]_{\rm S}e^{+}\nu_{e}\) is then written as
\[\frac{d^{2}\Gamma(D_{s}^{+}\to\left[\pi\pi\right]_{\rm S}e^{+}\nu_{e})}{dE_{2}dq^{2}}=\frac{1}{\pi}\int_{4m_{\pi}^{2}}^{s_{\rm max}}ds\frac{g_{1}^{2}\beta_{\pi\pi}}{\left|s-m_{S}^{2}+i(g_{1}^{2}\beta_{\pi\pi}+g_{2}^{2}\beta_{KK})\right|^{2}}\frac{d^{2}\Gamma(D_{s}^{+}\to{\rm S}l^{+}\nu)}{dE_{2}dq^{2}}. \tag{69}\]
Figure 7: The Borel mass dependence of \(B_{s}\to f_{0}\) form factors from the LCSRs.
\(\beta_{\pi\pi}(s)=\sqrt{1-4m_{\pi}^{2}/s}\) and \(\beta_{KK}(s)=\sqrt{1-4m_{K}^{2}/s}\) are the dipion and dikaon phase space factors, and \(g_{1}^{2}=0.165\) GeV\({}^{2}\) and \(g_{2}^{2}=0.695\) GeV\({}^{2}\) are the weight parameters [41]. Integrating over the invariant mass, one arrives at the differential decay width in the momentum transfer
\[\frac{d\Gamma(D_{s}^{+}\to[\pi\pi]_{\rm S}\,e^{+}\nu_{e})}{dq^{2}}=\frac{1}{ \pi}\frac{G_{F}^{2}|V_{cs}|^{2}}{192\pi^{3}m_{D_{s}}^{3}}|f_{+}(q^{2})|^{2}\int _{4m_{\pi}^{2}}^{s_{\rm max}}ds\frac{\lambda^{3/2}(m_{D_{s}}^{2},s,q^{2})\,g_ {1}^{2}\beta_{\pi\pi}(s)}{|s-m_{\rm S}^{2}+i(g_{1}^{2}\beta_{\pi\pi}(s)+g_{2}^ {2}\beta_{KK}(s))|^{2}}. \tag{70}\]
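The folding in Eq. (70) can be sketched in the same way; here we set \(\beta_{KK}\) to zero below the \(K\bar{K}\) threshold, a simplification that ignores the analytic continuation of the Flatté line shape.

```python
import numpy as np
from scipy.integrate import quad

GF, Vcs = 1.1664e-5, 0.973                       # GeV^-2; |V_cs|
mDs, mpi, mK, mS = 1.968, 0.140, 0.494, 0.990    # masses in GeV
g1sq, g2sq = 0.165, 0.695                        # Flatte couplings in GeV^2 [41]

def kallen(a, b, c):
    return a*a + b*b + c*c - 2*(a*b + a*c + b*c)

def flatte_weight(s):
    """|Flatte line shape|^2 weight of Eq. (70); beta_KK -> 0 below threshold."""
    bpp = np.sqrt(1 - 4*mpi**2/s)
    bKK = np.sqrt(1 - 4*mK**2/s) if s > 4*mK**2 else 0.0
    return g1sq * bpp / abs(s - mS**2 + 1j*(g1sq*bpp + g2sq*bKK))**2

def dGamma_dq2_flatte(q2, fplus):
    """Eq. (70): the narrow-width result folded with the Flatte line shape."""
    smax = (mDs - np.sqrt(q2))**2                # kinematic limit of the dipion mass
    I = quad(lambda s: kallen(mDs**2, s, q2)**1.5 * flatte_weight(s),
             4*mpi**2, smax)[0]
    return GF**2 * Vcs**2 / (192 * np.pi**4 * mDs**3) * fplus(q2)**2 * I

fp = lambda q2: 0.58 / (1 - q2 / 6.0)            # same toy shape as above
print(dGamma_dq2_flatte(0.2, fp))
```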
In the left panel of figure 9, we depict the \(D_{s}\to f_{0}\) form factor \(f_{+}(q^{2})\) obtained from the LCSRs calculation in the large recoil region (gray band) and from the simple \(z\)-series expansion parameterization (SSE) [42] in the small recoil region (light blue band). For the sake of comparison, we also plot the form factor extracted from the differential width \(d\Gamma/dq^{2}\) measured at BESIII with the Flatté model. Our prediction is consistent with the data extraction, while being slightly larger. In the right panel, we depict the differential width of \(D_{s}^{+}\to[\pi\pi]_{\rm S}\,e^{+}\nu_{e}\) in the momentum transfer, where the results obtained from the narrow width approximation in Eq. (68) and from the Flatté resonance model in Eq. (70) are plotted in blue and black curves, respectively. The decay width obtained from the narrow width approximation is slightly lower than the data, while the result including the width effect through the resonance model is consistent with the data, though slightly larger. This difference indicates the sensitivity of the differential decay width to the resonance model. Based on these discussions, we would like to comment that the extraction of the \(D_{s}\to f_{0}\) form factor from the BESIII measurement of the differential decay width, as well as our result for the decay width from the LCSRs form factor, is model dependent. A model-independent analysis is highly anticipated for a high-accuracy phenomenological study.
## IV \(D_{s}\to[\pi\pi]_{\rm S}\) form factors
A model-independent study is to consider directly the stable on-shell \(\pi\pi\) state rather than the \(f_{0}\). In this case the \(D_{s}(p)\to[\pi(k_{1})\pi(k_{2})]_{\rm S}\,l(p_{2})\nu(p_{3})\) decay amplitude can be written as
\[\mathcal{M}(D_{s}^{+}\to[\pi\pi]_{\rm S}\,e^{+}\nu)=-\frac{iG_{F}}{\sqrt{2}}V _{cs}^{*}\bar{u}_{e}(p_{2})\gamma^{\mu}(1-\gamma_{5})v_{\nu_{4}}(p_{3})\langle \left[\pi(k_{1})\pi(k_{2})\right]_{\rm S}\,|\bar{s}\gamma_{\mu}(1-\gamma_{5})c |D_{s}^{+}(p)\rangle, \tag{71}\]
in which the hadron transition matrix element is decomposed in terms of the orthogonal form factors
\[\langle\left[\pi(k_{1})\pi(k_{2})\right]_{\rm S}\,|\bar{s}\gamma_{\mu}(1- \gamma_{5})c|D_{s}^{+}(p)\rangle=-iF_{t}(q^{2},s,\zeta)k_{\mu}^{t}-iF_{0}(q^{ 2},s,\zeta)k_{\mu}^{0}-iF_{\parallel}(q^{2},s,\zeta)k_{\mu}^{\parallel} \tag{72}\]
accompanying with the kinematical variables
\[k_{\mu}^{t}=\frac{q_{\mu}}{\sqrt{q^{2}}}\,,\quad k_{\mu}^{0}=\frac{2\sqrt{q^{ 2}}}{\sqrt{\lambda_{D_{s}}}}\left(k_{\mu}-\frac{k\cdot q}{q^{2}}q_{\mu}\right) \,,\quad k_{\mu}^{\parallel}=\frac{1}{\sqrt{k^{2}}}\left(\bar{k}_{\mu}-\frac{4 (q\cdot k)(q\cdot\bar{k})}{\lambda_{D_{s}}}k_{\mu}+\frac{4k^{2}(q\cdot\bar{k} )}{\lambda_{D_{s}}}q_{\mu}\right). \tag{73}\]
Besides the momentum transfer \(q^{2}\) in the weak decay and the invariant mass \(k^{2}\equiv(k_{1}+k_{2})^{2}\) of the dipion system, the \(D_{s}\rightarrow\left[\pi\pi\right]_{\rm S}\) form factors also carry the information about the angular momentum between the two collinear pions through the variable \(2q\cdot\bar{k}\equiv 2q\cdot(k_{1}-k_{2})=\sqrt{\lambda_{D_{s}}}(2\zeta-1)=\sqrt{\lambda_{D_{s}}}\beta_{\pi\pi}(k^{2})\cos\theta_{\pi}\). Here \(\theta_{\pi}\) is the angle between the 3-momenta of the \(\pi(k_{2})\) meson and the \(D_{s}(p)\) meson in the dipion rest frame, and the Källén function is \(\lambda_{D_{s}}=\lambda(m_{D_{s}}^{2},k^{2},q^{2})\).
Multiplying Eq. (72) by the polarization vector of weak (lepton-neutrino pair) current, we can define the helicity form factors
\[H_{i}^{D_{s}\rightarrow\left[\pi\pi\right]_{\rm S}}(q^{2},k^{2},\zeta)=\bar{ \epsilon}^{\mu}(i)\langle\left[\pi(k_{1})\pi(k_{2})\right]_{\rm S}\left|\bar{ s}\gamma_{\mu}(1-\gamma_{5})c|D_{s}^{+}(p)\right\rangle. \tag{74}\]
The subscript \(i=0,t\) denotes the polarization direction; hereafter we do not show the superscript explicitly for the sake of simplicity. We immediately obtain the simple relations between the helicity form factors and the orthogonal form factors,
\[H_{0}(q^{2},k^{2},\zeta)=-iF_{0}(q^{2},k^{2},\zeta),\qquad H_{t}(q^{2},k^{2}, \zeta)=-iF_{t}(q^{2},k^{2},\zeta). \tag{75}\]
We then obtain the third-order differential width of the semileptonic decay \(D_{s}^{+}(p)\rightarrow\left[\pi(k_{1})\pi(k_{2})\right]_{\rm S}e^{+}(p_{2})\nu_{e}(p_{3})\)
\[\frac{d^{3}\Gamma(D_{s}^{+}\rightarrow\left[\pi\pi\right]_{\rm S}l^{+}\nu)}{dk^{2}dq^{2}d(\cos\theta_{\pi})} = \frac{G_{F}^{2}|V_{cs}|^{2}}{192\pi^{3}m_{D_{s}}^{3}}\frac{\beta_{\pi\pi}(k^{2})\sqrt{\lambda_{D_{s}}}q^{2}}{16\pi}\left|F_{0}(q^{2},k^{2},\zeta)\right|^{2}. \tag{76}\]
The key task then becomes the calculation of the \(D_{s}\rightarrow\left[\pi\pi\right]_{\rm S}\) form factor \(F_{0}(q^{2},k^{2},\zeta)\).
### The chiral even generalized \(2\pi\) distribution amplitudes
In order to calculate the \(D_{s}\rightarrow\left[\pi\pi\right]_{\rm S}\) form factors, we should first know the LCDAs (\(2\pi\)DAs) of the \(\pi\pi\) system. We introduce a mixing angle to describe the mixing between \(\bar{n}n=(\bar{u}u+\bar{d}d)/\sqrt{2}\) and \(\bar{s}s\) in the isoscalar scalar \(\pi\pi\) and \(KK\) states
\[\left[\pi\pi\right]_{\rm S}=\left|\bar{n}n\right\rangle\cos\theta+\left|\bar{ s}s\right\rangle\sin\theta,\qquad\left[KK\right]_{\rm S}=-|\bar{n}n\rangle \sin\theta+|\bar{s}s\rangle\cos\theta. \tag{77}\]
The chiral-even two-quark isoscalar \(2\pi\)DA [43; 44] involved in our calculation is defined by
\[\langle\left[\pi^{a}(k_{1})\pi^{b}(k_{2})\right]_{\rm S}|\bar{s}(xn)\gamma_{ \mu}s(0)|0\rangle=2\delta^{ab}k_{\mu}\sin\theta\int due^{iux(k\cdot n)}\Phi_{ \parallel,\left[\pi\pi\right]_{\rm S}}^{I=0}(u,\zeta,k^{2})\,. \tag{78}\]
It is easy to check the \(C\)-parity symmetry properties
\[\Phi_{\parallel,\left[\pi\pi\right]_{\rm S}}^{I=0}(u,\zeta,k^{2})=-\Phi_{ \parallel,\left[\pi\pi\right]_{\rm S}}^{I=0}(1-u,\zeta,k^{2})=\Phi_{\parallel,\left[\pi\pi\right]_{\rm S}}^{I=0}(u,1-\zeta,k^{2})\,. \tag{79}\]
To describe the \(\pi\pi\) system, three independent kinematical variables are introduced: the momentum fraction \(u\) carried by the antiquark with respect to the total momentum \(k\), the longitudinal momentum fraction \(\zeta=k_{1}^{+}/k^{+}\) carried by one pion in the system, and the invariant mass squared \(k^{2}=s\).
The generalized isoscalar scalar \(2\pi\)DA is normalized by the quark part of the energy-momentum tensor form factor \(F_{\pi}^{\rm EMT}\),
\[\int du(2u-1)\Phi_{\parallel,\left[\pi\pi\right]_{\rm S}}^{I=0}(u,\zeta,k^{2}) =-2M_{2}^{(\pi)}\zeta(1-\zeta)F_{\pi}^{\rm EMT}(k^{2}), \tag{80}\]
in which \(M_{2}^{(\pi)}\) indicates the second moment of the quark distributions in the pion, \(M_{2}^{(\pi)}=2\int_{0}^{1}du\,u\left[q_{\pi}(u)+\bar{q}_{\pi}(u)\right]\), and \(F_{\pi}^{\rm EMT}(0)=1\).
With the double expansion in the eigenfunctions of the evolution equation and in partial waves, the generalized \(2\pi\)DAs are written in terms of factorized Gegenbauer and Legendre polynomials
\[\Phi_{\parallel,\left[\pi\pi\right]_{\rm S}}^{I}(u,\zeta,k^{2},\mu)=6u(1-u) \sum_{n=1,{\rm odd}}^{\infty}\sum_{l=0,{\rm even}}^{n+1}B_{\parallel,nl}^{I=0} (k^{2},\mu)C_{n}^{3/2}(2u-1)C_{l}^{1/2}(2\zeta-1)\,. \tag{81}\]
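A small sketch of this double expansion, truncated at \(n=1\) and evaluated with the coefficients quoted in Eq. (87) below at \(k^{2}\to 0\) (a truncation we adopt purely for illustration), reads:

```python
from scipy.special import eval_gegenbauer

def Phi_2pi(u, zeta, B):
    """Eq. (81) with B = {(n, l): B_nl} at fixed k^2; C_l^{1/2} = P_l."""
    total = sum(Bnl * eval_gegenbauer(n, 1.5, 2*u - 1)
                    * eval_gegenbauer(l, 0.5, 2*zeta - 1)
                for (n, l), Bnl in B.items())
    return 6 * u * (1 - u) * total

B = {(1, 0): -0.300, (1, 2): 0.300}   # n = 1 coefficients of Eq. (87) at k^2 -> 0
print(Phi_2pi(0.3, 0.5, B))
```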
The \(k^{2}\)-dependent expansion coefficients \(B_{nl}\) have a scale evolution similar to that of the Gegenbauer coefficients in the LCDAs of the \(f_{0}\) meson [45].
\[B_{\parallel,nl}^{I=0}(\mu,k^{2})=B_{\parallel,nl}^{I=0}(0)\left[\frac{\alpha_ {s}(\mu)}{\alpha_{s}(\mu_{0})}\right]^{\frac{\gamma_{n}^{(0)}-\gamma_{0}^{(0)} }{(2\beta_{0})}}\exp\left[\sum_{m=1}^{N-1}\frac{k^{2m}}{m!}\frac{d^{m}}{dk^{2m}} \ln B_{\parallel,nl}^{I=0}(0)+\frac{k^{2N}}{\pi}\int_{4m_{K}^{2}}^{\infty}ds \frac{\delta_{l}^{I=0}(s)}{s^{N}(s-k^{2}-i0)}\right]. \tag{82}\]
The pion-pion phase shift \(\delta_{l}^{I=0}(s)\) carries the resonance information in the dipion invariant mass spectrum. At the one-loop level, the anomalous dimension reads
\[\gamma_{n}^{\parallel,(0)}=8C_{F}\left(\sum_{k=1}^{n+1}\frac{1}{k}- \frac{3}{4}-\frac{1}{2(n+1)(n+2)}\right), \tag{83}\]
The \(\beta\)-function coefficient is \(\beta_{0}=11-2N_{f}/3\). Based on the Watson theorem for scattering amplitudes, the exponential function in Eq. (82) is the solution of the \(N\)-subtracted dispersion relation for the coefficients \(B_{nl}\), whose evolution is applicable up to \(\sim 2.5\,\mathrm{GeV}\).
Concerning the coefficients at the zero point of the invariant mass, the soft pion theorem and the crossing symmetry relate them to the Gegenbauer moments \(a_{n}\) and the moments \(M_{N}\) of the quark distribution in the pion.
\[\sum_{l=0}^{n+1}B_{\parallel,nl}^{I=0}=0\,,\quad B_{\parallel,N-1 N}^{I=0}(0)=\frac{1}{3}\frac{2N+1}{N+1}M_{N=\mathrm{even}}^{(\pi)}\,. \tag{84}\]
In the vicinity of the resonance, the isoscalar scalar \(2\pi\)DAs reduce to the distribution amplitudes of \(f_{0}\)
\[\bar{f}_{f_{0}}a_{n}^{f_{0}}=B_{\parallel,n0}^{I=0}(0)\mathrm{ Exp}\left[\sum_{m=1}^{n-1}c_{m}^{(n0)}m_{f_{0}}^{2m}\right],\quad\bar{f}_{f_{0}}a_{1} ^{f_{0}}=\frac{\Gamma_{f_{0}}\mathrm{Im}B_{\parallel,12}^{I=0}(m_{f_{0}}^{2}) }{g_{f_{0}\pi\pi}} \tag{85}\]
with the subtraction coefficient
\[c_{m}^{(nl)}=\frac{1}{m!}\frac{d^{m}}{dk^{2m}}\left[\ln B_{ \parallel,n1}^{I=0}(k^{2})-\ln B_{\parallel,1+1}^{I=0}(k^{2})\right]_{k^{2} \to 0}. \tag{86}\]
Based on the QCDSR calculation of the LCDAs of \(f_{0}\) in section II, we obtain the first expansion and subtraction coefficients of the isoscalar scalar dipion system.
\[\bar{f}_{f_{0}}a_{1}^{f_{0}}=B_{\parallel,10}^{I=0}(0)=\frac{\Gamma_{f_{0}}\mathrm{Im}B_{\parallel,10}^{I=0}(m_{f_{0}}^{2})}{g_{f_{0}\pi\pi}}=-0.300,\quad B_{\parallel,12}^{I=0}=-B_{\parallel,10}^{I=0}=0.300,\] \[\frac{d\ln B_{\parallel,10}^{I=0}(k^{2})}{dk^{2}}|_{k^{2}\to 0}=\frac{d\ln B_{\parallel,12}^{I=0}(k^{2})}{dk^{2}}|_{k^{2}\to 0}=\frac{N_{c}}{48\pi^{2}f_{\pi}^{2}}=0.375. \tag{87}\]
In figure 10, we depict the phase shift \(\delta_{l}^{I=0}(s)\) from the amplitude analysis with a combination of dispersion relations and unitarity [46; 47], and the resulting expansion coefficients \(B_{1l}^{I=0}(s)\). We can clearly see a sharp dip around the \(f_{0}\) region in the \(S\)-wave contribution to the phase shift, and hence in the first-order expansion coefficient; additionally, there is a rapid rise around the \(f_{0}(1370)\) region in the \(D\)-wave contribution.
### \(D_{s}\to\left[\pi\pi\right]_{\mathrm{S}}\) form factor \(F_{0}^{(l)}(q^{2},s)\) at leading twist
To evaluate the \(D_{s}\to\left[\pi\pi\right]_{\mathrm{S}}\) form factors, we consider the nonlocal correlation function
\[\Pi_{\mu}^{ab}(q,k_{1},k_{2})=i\int d^{4}xe^{iq\cdot x}\langle\pi^{a}(k_{1}) \pi^{b}(k_{2})|T\{j_{1,\mu}^{\mathrm{S1}}(x),j_{2}^{\mathrm{S1}}(0)\}|0\rangle, \tag{88}\]
and take the \(D_{s}\) interpolating current and the weak decay currents to be the same as in scenario I (\({\rm S_{1}}\)) of the calculation of the \(D_{s}\to f_{0}\) form factors. Since our knowledge of the \(2\pi\)DAs is so far limited to the leading twist level, we can only discuss the width effect in \(D_{s}\to\left[\pi\pi\right]_{\mathrm{S}}\) at leading twist. We furthermore consider an auxiliary correlation function
\[\Pi^{ab}(q,k_{1},k_{2})=i\int d^{4}xe^{iq\cdot x}\langle\pi^{a}(k_{1})\pi^{b}( k_{2})|T\{j_{5}(x),j_{2}^{\mathrm{S1}}(0)\}|0\rangle\,. \tag{89}\]
to evaluate the timelike helicity form factor \(F_{t}\), with the weak current \(j_{5}=-im_{c}\bar{s}\gamma_{5}c\). The auxiliary correlation function can be obtained by contracting Eq. (88) with \(q^{\mu}\).
For the sake of simplicity, we take the neutral dipion system with electric charge \(a=b=0\) as an example to show the LCSRs evaluation. The correlation functions in Eqs. (88,89) can be written in the hadron representation as
\[\Pi_{\mu}^{\rm had}(q,k_{1},k_{2}) = \frac{im_{D_{s}}^{2}f_{D_{s}}}{\left[m_{D_{s}}^{2}-(k+q)^{2} \right](m_{c}+m_{s})}\left[F_{t}(q^{2},k^{2},\zeta)k_{\mu}^{t}+F_{0}(q^{2},k^{2 },\zeta)k_{\mu}^{0}+F_{\parallel}(q^{2},k^{2},\zeta)k_{\mu}^{\parallel}\right] \tag{90}\] \[+ \frac{1}{\pi}\int_{s_{0}}^{\infty}ds\frac{\rho_{t}^{h}(q^{2},s)k_ {\mu}^{t}+\rho_{0}^{h}(q^{2},s)k_{\mu}^{0}+\rho_{\parallel}^{h}(q^{2},s)k_{\mu }^{\parallel}}{s-(k+q)^{2}}\,,\] \[\Pi^{\rm had}(q,k_{1},k_{2}) = \frac{m_{D_{s}}^{2}f_{D_{s}}}{\left[m_{D_{s}}^{2}-(k+q)^{2} \right](m_{c}+m_{s})}\left[\sqrt{q^{2}}F_{t}(q^{2},k^{2},\zeta)\right]+\frac{ 1}{\pi}\int_{s_{0}}^{\infty}ds\frac{\rho^{h}(q^{2},s)}{s-(k+q)^{2}}\,. \tag{91}\]
Meanwhile, the OPE calculation of these correlation functions result in
\[\Pi_{\mu}^{\rm OPE}(q,k_{1},k_{2}) = 2i\sin\theta m_{c}k_{\mu}\int_{0}^{1}\frac{du}{u}\frac{\Phi_{ \parallel,\left[\pi\pi\right]_{\rm S}}^{I=0,s\bar{s}}(u,\zeta,k^{2})}{s_{\nu}^ {2}(u)-(k+q)^{2}}\,, \tag{92}\] \[\Pi^{\rm OPE}(q,k_{1},k_{2}) = \frac{m_{c}\sin\theta}{2}\int_{0}^{1}\frac{du}{u}\frac{\Phi_{ \parallel,\left[\pi\pi\right]_{\rm S}}^{I=0,s\bar{s}}(u,\zeta,k^{2})\left[m_{ D_{s}}^{2}-q^{2}-(1-2u)k^{2}\right]}{s_{\nu}^{2}(u)-(k+q)^{2}}\,. \tag{93}\]
After applying the quark-hadron duality and the Borel transformation, we obtain the relations among the \(D_{s}\to\left[\pi\pi\right]_{\rm S}\) form factors
\[\frac{2\sqrt{q^{2}}}{\lambda_{D_{s}}}F_{0}(q^{2},k^{2},\zeta)- \frac{4(q\cdot k)(q\cdot\bar{k})}{\lambda_{D_{s}}\sqrt{k^{2}}}F_{\parallel}(q ^{2},k^{2},\zeta)=\frac{2m_{c}(m_{c}+m_{s})\sin\theta}{m_{D_{s}}^{2}f_{D_{s}} }\int_{u_{0}}^{1}\frac{du}{u}\,\Phi_{\parallel,\left[\pi\pi\right]_{\rm S}}^{I= 0,s\bar{s}}(u,\zeta,k^{2})\,e^{-\frac{s_{\nu}^{2}(u)-m_{D_{s}}^{2}}{M^{2}}}\,, \tag{94}\] \[\frac{F_{t}(q^{2},k^{2},\zeta)}{\sqrt{q^{2}}}-\frac{2(q\cdot k)F_ {0}(q^{2},k^{2},\zeta)}{\sqrt{\lambda_{D_{s}}q^{2}}}+\frac{4\sqrt{k^{2}}(q \cdot\bar{k})F_{\parallel}(q^{2},k^{2},\zeta)}{\lambda_{D_{s}}}=0\,. \tag{95}\]
Here \(s_{2}^{\prime}(u)=\bar{u}k^{2}+(m_{c}^{2}-\bar{u}q^{2})/u\). The sum rules are ultimately obtained as
\[\cos\theta_{\pi}F_{\parallel}(q^{2},k^{2},\zeta)=\frac{m_{c}(m_{ c}+m_{s})\sin\theta}{m_{D_{s}}^{2}f_{D_{s}}\sqrt{\lambda_{D_{s}}}\beta_{\pi\pi}(k^{2})} \int_{u_{0}}^{1}\frac{du}{u}\,\left[4u\left(k^{2}\right)^{3/2}\right]\Phi_{ \parallel,\left[\pi\pi\right]_{\rm S}}^{I=0,s\bar{s}}(u,\zeta,k^{2})\,e^{- \frac{s_{\nu}^{2}(u)-m_{D_{s}}^{2}}{M^{2}}}, \tag{96}\] \[F_{0}(q^{2},k^{2},\zeta)=\frac{m_{c}(m_{c}+m_{s})\sin\theta}{m_{ D_{s}}^{2}f_{D_{s}}\sqrt{\lambda_{D_{s}}}\sqrt{q^{2}}}\int_{u_{0}}^{1}\frac{du}{u} \,\left[\lambda_{D_{s}}+2uk^{2}\left(m_{D_{s}}^{2}+q^{2}-k^{2}\right)\right] \Phi_{\parallel,\left[\pi\pi\right]_{\rm S}}^{I=0,s\bar{s}}(u,\zeta,k^{2})\,e^ {-\frac{s_{\nu}^{2}(u)-m_{D_{s}}^{2}}{M^{2}}},\] (97) \[\sqrt{q^{2}}F_{t}(q^{2},k^{2},\zeta)=\frac{m_{c}(m_{c}+m_{s})\sin \theta}{m_{D_{s}}^{2}f_{D_{s}}}\int_{u_{0}}^{1}\frac{du}{u}\,\left[m_{D_{s}}^{ 2}-q^{2}-(1-2u)k^{2}\right]\Phi_{\parallel,\left[\pi\pi\right]_{\rm S}}^{I=0,s \bar{s}}(u,\zeta,k^{2})\,e^{-\frac{s_{\nu}^{2}(u)-m_{D_{s}}^{2}}{M^{2}}}. \tag{98}\]
We can check that \(F_{0}(q^{2},k^{2},\zeta)=F_{t}(q^{2},k^{2},\zeta)\) at the full recoil point \(q^{2}=0\). From the viewpoint of partial-wave analysis [48], the \(D_{s}\to\pi\pi\) form factors are expanded as
\[F_{0,t}(q^{2},k^{2},\zeta)=\sum_{\ell=0}^{\infty}\sqrt{2\ell+1}\,F_{0,t}^{(\ell)}(q^{2},k^{2})P_{\ell}^{(0)}(\cos\theta_{\pi})\,,\] \[F_{\parallel,\perp}(q^{2},k^{2},\zeta)=\sum_{\ell=0}^{\infty}\sqrt{2\ell+1}\,F_{\parallel,\perp}^{(\ell)}(q^{2},k^{2})\frac{P_{\ell}^{(1)}(\cos\theta_{\pi})}{\sin\theta_{\pi}}\,. \tag{99}\]
The associated Legendre polynomials have the orthogonality relations
\[\int_{-1}^{1}P_{\ell}^{n}(x)P_{k}^{n}(x)\,dx=\frac{2}{2\ell+1}\frac{(\ell+n)!}{(\ell-n)!}\delta_{k\ell}\,,\qquad\int_{-1}^{1}\frac{P_{\ell}^{m}(x)P_{\ell}^{n}(x)}{1-x^{2}}\,dx=\frac{(\ell+m)!}{m\,(\ell-m)!}\delta_{mn}\quad\text{with }m,n\neq 0. \tag{100}\]
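As a quick numerical cross-check of Eq. (100), both relations can be verified with scipy's associated Legendre functions; a minimal sketch (note that scipy's `lpmv` includes the Condon-Shortley phase, which cancels in these quadratic integrands):

```python
# Numerical check of the orthogonality relations in Eq. (100).
import math
from scipy.integrate import quad
from scipy.special import lpmv  # associated Legendre function P_l^m(x)

def check_first(l, k, n):
    val, _ = quad(lambda x: lpmv(n, l, x)*lpmv(n, k, x), -1, 1)
    exact = 2.0/(2*l + 1)*math.factorial(l + n)/math.factorial(l - n)*(k == l)
    return val, exact

def check_second(l, m, n):
    eps = 1e-9  # keep quad away from the endpoints x = +-1
    val, _ = quad(lambda x: lpmv(m, l, x)*lpmv(n, l, x)/(1 - x*x),
                  -1 + eps, 1 - eps)
    exact = math.factorial(l + m)/(m*math.factorial(l - m))*(m == n)
    return val, exact

print(check_first(2, 2, 1))   # (2.4000..., 2.4)
print(check_second(3, 2, 2))  # (60.000..., 60.0)
```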
Multiplying both sides of Eqs. (96,97) by \(P_{\ell^{\prime}}^{(0)}(\cos\theta_{\pi})\) and integrating over \(\cos\theta_{\pi}\), we obtain the sum rules of the \(D_{s}\to[\pi\pi]_{\mathrm{S}}\) form factors at the \(\ell^{\prime}\)-th wave (\(\ell^{\prime}\) even and \(\ell^{\prime}\leqslant n+1\))
\[\sum_{\ell=1}^{\infty}I_{\ell^{\prime}\ell}^{I=0}\,F_{\parallel}^{(\ell)}(q^{2},k^{2})=\frac{m_{c}(m_{c}+m_{s})\sin\theta}{m_{D_{s}}^{2}f_{D_{s}}\sqrt{\lambda_{D_{s}}}}\,\sum_{n=1,\operatorname{odd}}^{\infty}\frac{1}{2\ell^{\prime}+1}\,J_{n}^{\parallel}(q^{2},k^{2},M^{2},s_{0})\,B_{n\ell^{\prime},\parallel}^{I=0}(k^{2}), \tag{101}\] \[F_{0}^{(\ell^{\prime})}(q^{2},k^{2})=\frac{m_{c}(m_{c}+m_{s})\sin\theta}{m_{D_{s}}^{2}f_{D_{s}}\sqrt{\lambda_{D_{s}}}\sqrt{q^{2}}}\sum_{n=1,\operatorname{odd}}^{\infty}\frac{\beta_{\pi\pi}(k^{2})}{\sqrt{2\ell^{\prime}+1}}\,J_{n}^{0}(q^{2},k^{2},M^{2},s_{0})\,B_{n\ell^{\prime},\parallel}^{I=0}(k^{2}), \tag{102}\] \[F_{t}^{(\ell^{\prime})}(q^{2},k^{2})=\frac{m_{c}(m_{c}+m_{s})\sin\theta}{m_{D_{s}}^{2}f_{D_{s}}\sqrt{q^{2}}}\sum_{n=1,\operatorname{odd}}^{\infty}\frac{\beta_{\pi\pi}(k^{2})}{\sqrt{2\ell^{\prime}+1}}\,J_{n}^{t}(q^{2},k^{2},M^{2},s_{0})\,B_{n\ell^{\prime},\parallel}^{I=0}(k^{2}), \tag{103}\]
with the conformal expansion functions \(J_{n}\)
\[J_{n}^{\parallel}(q^{2},k^{2},M^{2},s_{0})=6\int_{u_{0}}^{1}du\,\bar{u}\,C_{n}^{3/2}(2u-1)\left[4\left(k^{2}\right)^{3/2}\right]e^{-\frac{s_{\nu}^{2}(u)-m_{D_{s}}^{2}}{M^{2}}}, \tag{104}\] \[J_{n}^{0}(q^{2},k^{2},M^{2},s_{0})=6\int_{u_{0}}^{1}du\,\bar{u}\,C_{n}^{3/2}(2u-1)\left[\lambda_{D_{s}}+2uk^{2}\left(m_{D_{s}}^{2}+q^{2}-k^{2}\right)\right]e^{-\frac{s_{\nu}^{2}(u)-m_{D_{s}}^{2}}{M^{2}}}, \tag{105}\] \[J_{n}^{t}(q^{2},k^{2},M^{2},s_{0})=6\int_{u_{0}}^{1}du\,\bar{u}\,C_{n}^{3/2}(2u-1)\left[m_{D_{s}}^{2}-q^{2}-(1-2u)k^{2}\right]e^{-\frac{s_{\nu}^{2}(u)-m_{D_{s}}^{2}}{M^{2}}}. \tag{106}\]
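To make the structure of these Borel-transformed integrals concrete, the following minimal sketch evaluates \(J_{n}^{t}\) of Eq. (106) numerically; the masses, Borel mass \(M^{2}\), threshold \(s_{0}\) and kinematic point below are illustrative placeholders, not the fitted inputs of this work:

```python
# Sketch: numerical evaluation of J_n^t from Eq. (106) for toy inputs.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gegenbauer  # Gegenbauer polynomials C_n^{alpha}

mc, mDs = 1.27, 1.968   # GeV, illustrative quark and meson masses
M2, s0 = 4.5, 6.0       # GeV^2, illustrative Borel mass and threshold
q2, k2 = 0.5, 0.3       # GeV^2, illustrative kinematic point

def s_nu2(u):
    # s_nu^2(u) = (1 - u) k^2 + (m_c^2 - (1 - u) q^2)/u
    return (1 - u)*k2 + (mc**2 - (1 - u)*q2)/u

# u0 solves s_nu^2(u0) = s0; s_nu^2 decreases from +inf to m_c^2 on (0, 1]
u0 = brentq(lambda u: s_nu2(u) - s0, 1e-8, 1.0)

def J_t(n):
    Cn = gegenbauer(n, 1.5)  # C_n^{3/2}
    integrand = lambda u: 6*(1 - u)*Cn(2*u - 1) \
        * (mDs**2 - q2 - (1 - 2*u)*k2) * np.exp(-(s_nu2(u) - mDs**2)/M2)
    val, _ = quad(integrand, u0, 1)
    return val

print(u0, [J_t(n) for n in (1, 3, 5)])  # odd n, as in the sums (101)-(103)
```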
The additional partial-wave expansion function \(I_{\ell^{\prime}\ell}\) is given by
\[I_{\ell^{\prime}\ell}^{I=0}=\sqrt{2\ell+1}\,\int_{-1}^{1}d(\cos\theta_{\pi})\frac{\cos\theta_{\pi}}{\sin\theta_{\pi}}\,P_{\ell^{\prime}}^{(0)}(\cos\theta_{\pi})P_{\ell}^{(1)}(\cos\theta_{\pi})\,. \tag{107}\]
We note that \(I_{\ell^{\prime}\ell}^{I=0}\) vanishes when \(\ell\) runs over odd numbers, \(I_{02}^{I=0}=-2\sqrt{5}\), \(I_{22}^{I=0}=-4/\sqrt{5}\), and \(I_{\ell^{\prime}2}^{I=0}=0\) when \(\ell^{\prime}>2\).
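These special values can be reproduced directly from Eq. (107); a minimal numerical sketch (again scipy's `lpmv` carries the Condon-Shortley phase, consistent with the signs quoted above):

```python
# Sketch: numerical evaluation of I_{l'l} from Eq. (107).
import math
from scipy.integrate import quad
from scipy.special import lpmv

def I(lp, l):
    # integrand: (x/sin(theta)) * P_{l'}(x) * P_l^1(x), with x = cos(theta)
    f = lambda x: x/math.sqrt(1 - x*x) * lpmv(0, lp, x) * lpmv(1, l, x)
    val, _ = quad(f, -1 + 1e-9, 1 - 1e-9)  # integrand stays finite at x = +-1
    return math.sqrt(2*l + 1) * val

print(I(0, 2), -2*math.sqrt(5))   # -4.472..., -4.472...
print(I(2, 2), -4/math.sqrt(5))   # -1.788..., -1.788...
print(I(4, 2))                    # ~ 0: I_{l'2} = 0 for l' > 2
print(I(0, 3))                    # ~ 0: vanishes for odd l
```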
In figure 11 we depict the evolution of the \(S\)-wave \(D_{s}\to[\pi\pi]_{\mathrm{S}}\) form factor with the momentum transfer (left panel) and the invariant mass (right panel), and in figure 12 the form factor for the \(D\)-wave \(\pi\pi\) system. We take the mixing angle between the isoscalar scalar \(\pi\pi\) and \(KK\) systems at \(\theta=20^{\circ}\pm 10^{\circ}\), which is similar to the angle in the \(\sigma\)-\(f_{0}\) mixing [32]. The uncertainty arising from the mixing angle is added to the uncertainty of the LCSRs parameters and shown in the shaded bands. We find that the \(D\)-wave form factor \(\sqrt{q^{2}}F_{0}^{(l=2)}(q^{2})\) is much smaller than the \(S\)-wave one \(\sqrt{q^{2}}F_{0}^{(l=0)}(q^{2})\) when the invariant mass is small, while in the resonant regions the \(D\)-wave contribution is comparable to or even larger than the \(S\)-wave one.
### \(D_{s}\to[\pi\pi]_{\mathrm{S}}\,e\nu_{e}\) decay at leading twist
In figure 13, we depict the evolution of the form factor with the momentum transfer \(q^{2}\), where the invariant mass dependence is integrated out:
\[\sqrt{q^{2}}F_{0}^{(l)}(q^{2})=\int_{4m_{\pi}^{2}}^{s_{\mathrm{max}}(q^{2})}ds\,\sqrt{q^{2}}F_{0}^{(l)}(q^{2},s), \tag{108}\]
here \(s_{\rm max}(q^{2})\) is the solution of \(\lambda(m_{D_{s}}^{2},q^{2},s)=0\). Again, the kinematical region with small recoil is extrapolated by the SSE parameterization.
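Assuming \(\lambda\) here is the usual Källén triangle function (as is standard for \(\lambda_{D_{s}}\)), the condition can be solved in closed form, \(s_{\rm max}(q^{2})=(m_{D_{s}}-\sqrt{q^{2}})^{2}\); a one-line numerical check with illustrative values:

```python
# Sketch: closed form of s_max(q^2) as the root of the Kallen function.
import math

def kallen(a, b, c):
    # Kallen (triangle) function lambda(a, b, c)
    return a*a + b*b + c*c - 2*(a*b + b*c + c*a)

mDs, q2 = 1.968, 0.5                      # GeV and GeV^2, illustrative
s_max = (mDs - math.sqrt(q2))**2          # smaller root of kallen(mDs^2, q2, s) = 0
print(s_max, kallen(mDs**2, q2, s_max))   # second entry ~ 0
```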
Considering the partial-wave expansion of the \(D_{s}\to\pi\pi\) form factors in Eq. (99) and the orthogonality conditions in Eq. (100), the threefold differential decay width in Eq. (76) reduces to the twofold one after integrating over the angle \(\theta_{\pi}\),
\[\frac{d^{2}\Gamma(D_{s}^{+}\to\left[\pi\pi\right]_{\rm S}\ell^{+}\nu)}{dk^{2}dq^{2}} = \frac{G_{F}^{2}|V_{cs}|^{2}}{192\pi^{3}m_{D_{s}}^{3}}\frac{\beta_{\pi\pi}(k^{2})\sqrt{\lambda_{D_{s}}}}{16\pi}\sum_{\ell=0}^{\infty}2|\sqrt{q^{2}}F_{0}^{(\ell)}(q^{2},k^{2})|^{2}. \tag{109}\]
After integrating over the invariant mass, we plot the momentum transfer dependence of the \(D_{s}^{+}\to\left[\pi\pi\right]_{\rm S}e^{+}\nu_{e}\) decay width (in units of \(\rm ns^{-1}/GeV^{2}/c^{4}\)) in the right panel of figure 14. We remark that this result is obtained at the leading twist level of the dipion LCDAs, so we compare it with the results obtained from the narrow width approximation and from the Flatte resonant model with the leading twist \(D_{s}\to f_{0}\) form factor, shown in the left panel. The three central-value curves indicate, on the one hand, the sensitivity of the predictions to the resonant model and, on the other hand, that the direct calculation from the \(D_{s}\to\left[\pi\pi\right]\) form factor shows a relatively moderate evolution with larger allowed momentum transfers. We remark again that the subleading twist contribution is the dominant one in the \(D_{s}\to f_{0}\) form factors, so a further study of the twist-three dipion LCDAs could provide a model-independent solution to the four-body semileptonic decays of the \(D_{s}\) meson.
## V Summary
In this work we first update the QCD sum rules predictions for the isoscalar scalar meson \(f_{0}\), especially the decay constant and the Gegenbauer expansion coefficients in the LCDAs, after which we calculate the \(D_{s}\to f_{0}\) transition form factors from the LCSRs with \(f_{0}\) LCDAs and obtain the differential width of the semileptonic decay \(D_{s}^{+}\to f_{0}e^{+}\nu_{e}\). With the updated decay constant of \(f_{0}\), which is about twice the one used in the previous LCSRs calculation, our result for the \(D_{s}\to f_{0}\) form factor \(f_{+}(q^{2})\), taking into account the mixing between the \(\bar{s}s\) and \(\bar{u}u+\bar{d}d\) components of \(f_{0}\), is consistent with the data extracted from the measurement of the differential width in the whole momentum transfer region, indicating that the energetic picture of \(f_{0}\) in charm meson decay is still reliable. The differential width obtained under the narrow width approximation lies slightly below the data, while the result obtained under the intermediate resonant model with the Flatte formula is consistent with it. In order to get rid of the model dependence and give a more powerful prediction, we suggest describing the unstable scalar meson by the dipion LCDAs and calculating the \(D_{s}\to[\pi\pi]_{\rm S}\) form factors. At leading twist, the direct calculation of the differential width of the \(D_{s}^{+}\to[\pi\pi]_{\rm S}e^{+}\nu_{e}\) decay exhibits a moderate evolution with the momentum transfer, compared to the results obtained under the narrow width approximation and the Flatte model.
Our calculation of the \(D_{s}\to[\pi\pi]_{\rm S}\) form factors is carried out at leading twist due to the limited knowledge of the dipion LCDAs, so an important direction for further development of this project is to construct the twist-three dipion LCDAs and take their contributions into account. Of course, the next-to-leading-order QCD radiative corrections to the correlation function are also imperative to improve the prediction accuracy. This work reveals a bright prospect for studying the four-body leptonic decays of heavy mesons with the dimeson light-cone distribution amplitudes; future experiments with larger integrated luminosity [49; 50] would help
Figure 14: The same as figure 9, but for the leading twist contribution.
us to understand the LCDAs of the dipion system much better.
## VI Acknowledgements
We are grateful to Ling-Yun Dai and Hai-Bo Li for fruitful discussions, especially to Hai-Yang Cheng for the careful reading of the draft and the helpful comments. SC is supported by the National Science Foundation of China (NSFC) under Grant No. 11975112. SLZ acknowledges the support from the Natural Science Foundation of Hunan Province, China under Contract No. 2021JJ40036 and the Fundamental Research Funds for the Central Universities under Contract No. 020400/531118010467.
|
2301.00848 | The topology of Liouville foliation for the Kovalevskaya integrable case
on the Lie algebra so(4) | In this paper we study topological properties of an integrable case for
Euler's equations on the Lie algebra $\textrm{so}(4)$, which can be regarded as
an analogue of the classical Kovalevskaya case in rigid body dynamics. In
particular, for all values of the parameters of the system under consideration
the bifurcation diagrams of the momentum mapping are constructed, the types of
critical points of rank $0$ are determined, the bifurcations of Liouville tori
are described and the loop molecules for all singular points of the bifurcation
diagram are computed. It follows from the obtained results that some
topological properties of the classical Kovalevskaya case can be obtained from
the corresponding properties of the considered integrable case on the Lie
algebra $\textrm{so}(4)$ by taking a natural limit. | Ivan Kozlov | 2023-01-02T19:24:01Z | http://arxiv.org/abs/2301.00848v1 | # The topology of Liouville foliation for the Kovalevskaya integrable case on the Lie algebra so(4).
###### Abstract
In this paper we study topological properties of an integrable case for Euler's equations on the Lie algebra so(4), which can be regarded as an analogue of the classical Kovalevskaya case in rigid body dynamics. In particular, for all values of the parameters of the system under consideration the bifurcation diagrams of the momentum mapping are constructed, the types of critical points of rank 0 are determined, the bifurcations of Liouville tori are described and the loop molecules for all singular points of the bifurcation diagram are computed. It follows from the obtained results that some topological properties of the classical Kovalevskaya case can be obtained from the corresponding properties of the considered integrable case on the Lie algebra so(4) by taking a natural limit.
Bibliography: 21 titles.
**Keywords:** integrable Hamiltonian systems, Kovalevskaya case, Liouville foliation, bifurcation diagram, topological invariants.
###### Contents
* 1 Introduction
* 2 Basic Definitions and Problem Formulation
* 3 Main results
* 3.1 Case \(\varkappa>0,b=0\)
* 4 Proof of the main statements
* 4.1 Critical points of rank 1
* 4.2 Types of bifurcation diagrams. (Case \(b\neq 0\))
* 4.3 Critical points of rank 0
* 4.4 Proof of Theorems 1, 2 and 3
* 5 Classical Kovalevskaya case (\(\varkappa=0\))
## 1 Introduction
The Kovalevskaya top is one of the most well-known integrable Hamiltonian systems in classical mechanics. It was proved by Sofia Kovalevskaya in her paper [1] that besides the cases of Euler, Lagrange and her own, discovered earlier in [2], there are no other rigid body systems that are integrable in the same way for any value of the area constant. The Kovalevskaya top is more complex than the Euler and Lagrange tops, hence various methods that allow one to simplify the work with this top are of interest. In this paper we demonstrate one such possible method. Namely, we consider a one-parameter family of integrable Hamiltonian systems defined on the pencil of Lie algebras \(\mathrm{so}(4)-\mathrm{e}(3)-\mathrm{so}(3,1)\) found in the paper [3] and show that some information about the classical Kovalevskaya case, which is an integrable Hamiltonian system on the Lie algebra \(\mathrm{e}(3)\), can be obtained by studying the integrable Hamiltonian systems on the Lie algebra \(\mathrm{so}(4)\). To be more precise, in this paper we calculate some topological invariants of these integrable cases using the theory of topological classification of integrable Hamiltonian systems (see [4]). The obtained results for the Lie algebra \(\mathrm{so}(4)\) allow us to make some conclusions about the topological properties of the classical Kovalevskaya case.
In this paper for the integrable cases on the algebra \(\mathrm{so}(4)\) under consideration it is done the following:
1. the bifurcation diagrams of the momentum mapping are constructed (Theorem 1),
2. the types of critical points of rank 0 are determined (Lemma 2),
3. the bifurcations of Liouville tori are described (Theorem 2) and the loop molecules for all singular points of the bifurcation diagram are computed (Theorem 3).
Using the results of this paper it is not hard to obtain the classification of isoenergetic surfaces up to the rough Liouville equivalence. In other words, for each isoenergetic surface \(H=\mathrm{const}\) it is not hard to construct the corresponding unmarked molecule (the Fomenko invariant). We do not do it in this paper because the description of all possible molecules would be rather cumbersome. The molecules depend not only on the type of bifurcation diagrams, but also on the value of the Hamiltonian at the critical points of rank 0. The knowledge of loop molecules also allows us to easily restore a number of marks for these rough molecules. Recall that the marked molecules (the Fomenko-Zieschang invariants) are important topological invariants of integrable
Hamiltonian systems, which completely describe the structure of the Liouville foliation on non-singular three-dimensional isoenergetic surfaces. Namely, the Fomenko-Zieschang theorem states that two integrable Hamiltonian systems on two isoenergetic surfaces are Liouville equivalent (that is, there exists a diffeomorphism taking one Liouville foliation to another) if and only if their marked molecules coincide (for more details about the Fomenko-Zieschang invariants see, for example, [4]).
The idea of considering integrable Hamiltonian systems on compact Lie algebras can be fruitful: the coadjoint orbits of compact Lie algebras are compact, which greatly simplifies the analysis of integrable Hamiltonian systems on them. For example, in the case under consideration, since the orbits are compact, for the construction of bifurcation diagrams and the calculation of loop molecules it suffices to find the curves that contain the image of critical points and to determine the types of critical points of rank \(0\). Earlier, integrable Hamiltonian systems on the Lie algebra so(4) were studied in the papers [5] (compact analogue of the Clebsch case), [6] (the Sokolov case) and [7] (compact analogue of the Steklov case). Let us also note that algebraic and topological properties of integrable systems related to the Lie algebra so(4) and other compact Lie algebras were studied in [8] and [9].
The topology of the classical Kovalevskaya case was studied in detail in the book [10] (see also [11] and [12]). In particular, in this book the bifurcation diagrams of the momentum mapping and the bifurcations of Liouville tori for the critical values of the momentum mapping are described. The loop molecules (as well as fine Liouville classification) for the classical case Kovalevskaya are contained in the paper [13]. All necessary results on the classical Kovalevskaya case are collected in a convenient for us form in the book [4]. The connection between the classical Kovalevskaya case and the considered in this paper system on the Lie algebra so(4) is discussed in Section 5.
The case of the zero area constant (\(b=0\)) was described earlier in the paper [14].
## 2 Basic Definitions and Problem Formulation
Recall that there exists a natural linear Poisson bracket on the dual space \(\mathfrak{g}^{*}\) to any finite-dimensional Lie algebra \(\mathfrak{g}\) given by the formula
\[\{f,g\}=\langle x,[df|_{x},\,dg|_{x}]\rangle. \tag{1}\]
Here \(\langle\cdot,\cdot\rangle\) denotes the value of the covector in \(\mathfrak{g}^{*}\) on a vector in \(\mathfrak{g}\), and \([\cdot,\cdot]\) denotes the commutator in the Lie algebra \(\mathfrak{g}\). In the formula (1) we use the canonical isomorphism \((\mathfrak{g}^{*})^{*}=\mathfrak{g}\). The bracket (1) is called the Lie-Poisson bracket.
Definition 1. Let \(\mathfrak{g}\) be a finite-dimensional Lie algebra, \(x_{1},\ldots,x_{n}\) be linear coordinates in the dual space \(\mathfrak{g}^{*}\), and \(H\) be a smooth function on \(\mathfrak{g}^{*}\). The equations
\[\dot{x}_{i}=\{x_{i},H\}, \tag{2}\]
which define a dynamical system on \(\mathfrak{g}^{*}\), are called _Euler's equations_ for the Lie algebra \(\mathfrak{g}\).
It is well-known (see, for example, [4]), that the classical Kovalevskaya case can be defined by Euler's equations on the Lie algebra \(\mathrm{e}(3)\). It turns out that the classical Kovalevskaya case can be included in a one-parameter family of integrable systems defined on the pencil of Lie algebras \(\mathrm{so}(4)-\mathrm{e}(3)-\mathrm{so}(3,1)\).
Consider the six-dimensional space \(\mathbb{R}^{6}\) and fix a basis \(e_{1},e_{2},e_{3},f_{1},f_{2},f_{3}\) in it. Consider the following one-parameter family of commutators in \(\mathbb{R}^{6}\) depending on the parameter \(\varkappa\in\mathbb{R}\):
\[[e_{i},e_{j}]=\varepsilon_{ijk}e_{k},\quad[e_{i},f_{j}]=\varepsilon_{ijk}f_{k },\quad[f_{i},f_{j}]=\varkappa\varepsilon_{ijk}e_{k}, \tag{3}\]
where \(\varepsilon_{ijk}\) is the sign of the permutation \(\{123\}\to\{ijk\}\). It is easy to check that the cases \(\varkappa>0\), \(\varkappa=0\) and \(\varkappa<0\) correspond to the Lie algebras \(\mathrm{so}(4)\), \(\mathrm{e}(3)\) and \(\mathrm{so}(3,1)\) respectively.
In the coordinates \(J_{1},J_{2},J_{3},x_{1},x_{2},x_{3}\) on the dual linear space corresponding to the basis \(e_{1},e_{2},e_{3},f_{1},f_{2},f_{3}\) the Lie-Poisson bracket has a similar form
\[\{J_{i},J_{j}\}=\varepsilon_{ijk}J_{k},\quad\{J_{i},x_{j}\}=\varepsilon_{ijk} x_{k},\quad\{x_{i},x_{j}\}=\varkappa\varepsilon_{ijk}J_{k}. \tag{4}\]
For any value of the parameter \(\varkappa\in\mathbb{R}\) the bracket (4) has two Casimir functions:
\[f_{1}=\mathbf{x}^{2}+\varkappa\mathbf{J}^{2},\qquad f_{2}=\langle\mathbf{x}, \mathbf{J}\rangle, \tag{5}\]
where \(\mathbf{x}\) and \(\mathbf{J}\) denote the three-dimensional vectors \((x_{1},x_{2},x_{3})\) and \((J_{1},J_{2},J_{3})\) respectively and \(\langle\cdot,\cdot\rangle\) denotes the Euclidean scalar product of two vectors in \(\mathbb{R}^{3}\). Recall that functions are called Casimir functions of a Poisson bracket if they commute with any other function with respect to this bracket. The joint level surfaces
\[M_{a,b}=\{(\mathbf{J},\mathbf{x})|\quad f_{1}(\mathbf{J},\mathbf{x})=a,\quad f _{2}(\mathbf{J},\mathbf{x})=b\} \tag{6}\]
are the orbits of the coadjoint representation except for the case \(\varkappa\leq 0,a=0,b=0\) (in this case, the level surface is a union of several orbits of the coadjoint representation). In all other cases the surfaces \(M_{a,b}\) are symplectic leaves of the bracket (4), in particular, the bracket (4) defines a symplectic structure on them. If \(\varkappa>0\) and \(a>2\sqrt{\varkappa}|b|\), then the orbits \(M_{a,b}\) are four-dimensional submanifolds of \(\mathbb{R}^{6}(\mathbf{J},\mathbf{x})\) diffeomorphic to the product of two two-dimensional spheres \(\mathbb{S}^{2}\times\mathbb{S}^{2}\). If \(\varkappa>0\), then the singular orbits \(a=2\sqrt{\varkappa}|b|\) are diffeomorphic to the two-dimensional sphere \(\mathbb{S}^{2}\). If \(\varkappa>0\), then there is no orbits satisfying the condition \(a<2\sqrt{\varkappa}|b|\). Let us also note that if \(\varkappa=0\), then the non-singular orbits \(a>0\) are diffeomorphic to the cotangent bundle of the two-dimensional sphere \(T^{*}\mathbb{S}^{2}\) (in particular, they are not compact).
In this paper we examine the following integrable case for Euler's equations defined on the pencil of Lie algebras \(\mathrm{so}(4)-\mathrm{e}(3)-\mathrm{so}(3,1)\) described above (see, for example, [3] or [15]). In the coordinates \((J_{i},x_{j})\) described above the Hamiltonian is equal to
\[H=J_{1}^{2}+J_{2}^{2}+2J_{3}^{2}+2c_{1}x_{1} \tag{7}\]
and the integral has the form
\[K=(J_{1}^{2}-J_{2}^{2}-2c_{1}x_{1}+\varkappa c_{1}^{2})^{2}+(2J_{1}J_{2}-2c_{1 }x_{2})^{2}, \tag{8}\]
where \(c_{1}\) is an arbitrary constant.
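For readers who want to verify the integrability statement directly, the commutation \(\{H,K\}=0\) and the Casimir property of \(f_{1},f_{2}\) under the bracket (4) can be checked symbolically; a minimal sympy sketch (with \(\varkappa\) and \(c_{1}\) kept symbolic):

```python
# Symbolic check: K is a first integral of H and f1, f2 are Casimirs
# of the Lie-Poisson bracket (4).
import sympy as sp

ka, c1 = sp.symbols('varkappa c_1')
J1, J2, J3, x1, x2, x3 = sp.symbols('J1 J2 J3 x1 x2 x3')
J, x = [J1, J2, J3], [x1, x2, x3]
coords = J + x

# Poisson tensor P[a, b] = {coords[a], coords[b]} from the relations (4)
P = sp.zeros(6, 6)
for i in range(3):
    for j in range(3):
        for l in range(3):
            e = sp.LeviCivita(i + 1, j + 1, l + 1)
            P[i, j] += e*J[l]              # {J_i, J_j}
            P[i, 3 + j] += e*x[l]          # {J_i, x_j}
            P[3 + i, j] += e*x[l]          # {x_i, J_j} = -{J_j, x_i}
            P[3 + i, 3 + j] += ka*e*J[l]   # {x_i, x_j}

def bracket(f, g):
    return sp.expand(sum(sp.diff(f, coords[a])*P[a, b]*sp.diff(g, coords[b])
                         for a in range(6) for b in range(6)))

H = J1**2 + J2**2 + 2*J3**2 + 2*c1*x1
K = (J1**2 - J2**2 - 2*c1*x1 + ka*c1**2)**2 + (2*J1*J2 - 2*c1*x2)**2
f1 = x1**2 + x2**2 + x3**2 + ka*(J1**2 + J2**2 + J3**2)
f2 = x1*J1 + x2*J2 + x3*J3

assert bracket(H, K) == 0                                          # integral
assert all(bracket(f, u) == 0 for f in (f1, f2) for u in coords)   # Casimirs
```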
Note that without loss of generality we may assume that \(c_{1}=1\) and \(\varkappa=-1,0\) or \(1\) since the change of coordinates and parameters of the system given by the formulas
\[J^{\prime}=\mu J,\quad x^{\prime}=\lambda\mu x,\quad a^{\prime}=\mu^{2}\lambda^{ 2}a,\quad b^{\prime}=\mu^{2}\lambda b,\quad c^{\prime}_{1}=\frac{\mu}{\lambda} c_{1},\quad\varkappa^{\prime}=\lambda^{2}\varkappa,\]
where \(\lambda,\mu\) are arbitrary constants, multiplies the Hamiltonian and the integral by \(\mu^{2}\) and \(\mu^{4}\) respectively. Nevertheless we will preserve both \(c_{1}\) and \(\varkappa\) in order to get "homogeneous" formulas (for example, the Hamiltonian and the integral are homogeneous functions of \({\bf J},{\bf x},c_{1}\)).
Remark 1. It is not hard to verify that if \(\varkappa=0\) (and \(c_{1}=1\)), then we obtain the classical Kovalevskaya case in the form in which it was described, for example, in the book [4]. More precisely, in the book [4] the Hamiltonian is half of the Hamiltonian (7) and the first integral is a quarter of the integral (8). Note also that in the book [4] the following notation is used: \(S_{i}=J_{i},R_{j}=x_{j}\), the value \(b\) of the integral \(f_{2}\) is denoted by \(g\) and the value \(a\) of the integral \(f_{1}\) is assumed to be equal to \(1\).
Remark 2. The change of coordinates \(({\bf J},{\bf x})\to(-{\bf J},{\bf x})\) preserves the integral \(f_{1}\), the Hamiltonian (7) and the integral (8) whereas it changes the sign of the integral \(f_{2}\). Therefore, without loss of generality, we can assume that \(b\geq 0\).
Remark 3. Note also that the system has the following two natural symmetries that preserve the Hamiltonian (7), the first integral (8) and both integral \(f_{1}\) and \(f_{2}\). The first symmetry \(\sigma_{2}\) changes the signs of the coordinates \(J_{2}\) and \(x_{2}\) and preserves the remaining coordinates:
\[\sigma_{2}:(J_{1},J_{2},J_{3},x_{1},x_{2},x_{3})\to(J_{1},-J_{2},J_{3},x_{1},- x_{2},x_{3}).\]
Similarly, the second symmetry \(\sigma_{3}\) simultaneously changes the signs of the coordinates \(J_{3}\) and \(x_{3}\):
\[\sigma_{3}:(J_{1},J_{2},J_{3},x_{1},x_{2},x_{3})\to(J_{1},J_{2},-J_{3},x_{1},x _{2},-x_{3}).\]
Further we construct the bifurcation diagrams of the momentum mapping, determine the types of critical points, describe the bifurcations of Liouville tori and calculate the loop molecules for singular points of the bifurcation diagrams for the given Hamiltonian systems with Hamiltonian (7) and integral (8) on non-singular orbits \(M_{a,b}\). We use some facts and notation from the theory of topological classification developed by A. T. Fomenko and his disciples. The definitions of topological invariants (atoms, molecules) as well as the basic facts about this theory can be found in the book [4]. For various applications of this theory in rigid body dynamics see, for example, [16]. Let us also note that this theory has recently played an important role in the study of symmetries of Liouville tori bifurcations and in the construction of a classification theory for such symmetries (see [17], [18], [19] and [20]).
In this paper we only briefly recall the idea of the method of loop molecules. A smooth curve without self-intersections in the plane \(\mathbb{R}^{2}(H,K)\) is called admissible if it intersects the bifurcation diagram transversally and does not pass through its singular
points. The preimage of any admissible curve is a three-dimensional manifold equipped with a Liouville foliation. An invariant of this foliation is a marked molecule, which is a graph whose edges correspond to the one-parameter families of Liouville tori and whose vertices correspond to the singular leaves of this foliation. Symbols are placed in the vertices of the graph which specify the types of the bifurcations. In addition the graph is endowed, in a certain way, with three types of marks (\(r\), \(\epsilon\) and \(n\)), which indicate the relation between different bifurcations. The loop molecule of a singular point \(x\) of a bifurcation diagram is the marked molecule that describes the Liouville foliation in the preimage of any sufficiently small admissible closed curve surrounding the point \(x\). Loop molecules can provide information about different molecules of admissible curves (for example, sometimes molecules of curves can be "glued" together from parts of loop molecules). Examples of loop molecules are given in Tables 2 and 3.
## 3 Main results
We now state the main results. First, we describe the results for the case \(b\neq 0\) and then for the case \(b=0\). Let us start with the description of the bifurcation diagrams of the momentum mapping.
Lemma 1._Let \(b\neq 0\) and \(\varkappa\neq 0\). Then for any non-singular orbit \(M_{a,b}\) (that is, for any orbit such that \(a^{2}-4\varkappa b^{2}\neq 0\)) the bifurcation diagram \(\Sigma_{h,k}\) for the integrable Hamiltonian system with Hamiltonian (7) and integral (8) is contained in the union of the following three families of curves on the plane \(\mathbb{R}^{2}(h,k)\):_
1. _The line_ \(k=0\)_;_
2. _The parametric curve_ \[h(z)=\frac{b^{2}c_{1}^{2}}{z^{2}}+2z,\quad k(z)=\left(4ac_{1}^{2}-\frac{4b^{2} c_{1}^{2}}{z}+\frac{b^{4}c_{1}^{4}}{z^{4}}\right)-2\varkappa c_{1}^{2}h(z)+ \varkappa^{2}c_{1}^{4},\] (9) _where_ \(z\in\mathbb{R}-\{0\}\)_._
3. _The union of two parabolas_ \[k=\left(h-\varkappa c_{1}^{2}-\frac{a}{\varkappa}+\frac{\sqrt{a^{2}-4 \varkappa b^{2}}}{\varkappa}\right)^{2}\] (10) _and_ \[k=\left(h-\varkappa c_{1}^{2}-\frac{a}{\varkappa}-\frac{\sqrt{a^{2}-4 \varkappa b^{2}}}{\varkappa}\right)^{2}.\] (11)
The proof of Lemma 1 is given in Section 4.1.
In order to construct the bifurcation diagrams of the momentum mapping it remains to throw away several parts of the curves described in Lemma 1. The precise description of the bifurcation diagrams is given in the following theorem.
**Theorem 1**: _Let \(\varkappa>0\) and \(b>0\). Then the functions \(f_{k},f_{r},f_{m},f_{t}\) and \(f_{l}\) given by the formulas_
\[f_{k}(b)=\frac{3b^{4/3}+6\varkappa b^{2/3}c_{1}^{4/3}-\varkappa^{2}c_{1}^{8/3}}{4 c_{1}^{2/3}} \tag{12}\]
\[f_{r}(b)=\frac{b^{4/3}}{c_{1}^{2/3}}+\varkappa b^{2/3}c_{1}^{2/3} \tag{13}\]
\[f_{m}(b)=\frac{b^{2}}{\varkappa c_{1}^{2}}+\varkappa^{2}c_{1}^{2} \tag{14}\]
\[f_{t}(b)=\left(\frac{\varkappa c_{1}^{2}+t^{2}}{2c_{1}}\right)^{2}+\varkappa t ^{2},\qquad\mbox{where}\quad b=t\left(\frac{\varkappa c_{1}^{2}+t^{2}}{2c_{1}}\right) \tag{15}\]
\[f_{l}(b)=2\sqrt{\varkappa}|b| \tag{16}\]
_divide the area \(\{b>0,a>2\sqrt{\varkappa}b\}\subset\mathbb{R}^{2}(a,b)\) into \(9\) areas (see Fig. 1 and 2). In Fig. 3 - 21 the corresponding bifurcation diagrams of the momentum mapping for the integrable Hamiltonian system with Hamiltonian (7) and integral (8) on the orbit \(M_{a,b}\) of the Lie algebra \(so(4)\) are shown for each of these areas. More precisely, in each case it is specified from which parts of the line \(k=0\), the curve (9) and the two parabolas (10) and (11) the bifurcation diagram of the momentum mapping is composed._
Recall that the bifurcation diagram for an orbit \(M_{a,b}\) with \(b<0\) coincides with the bifurcation diagram for the orbit \(M_{a,-b}\) (see Remark 2).
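To locate a given point \((a,b)\) among the areas I-IX one only needs the values of the five dividing functions; a small numerical sketch with the sample normalization \(\varkappa=c_{1}=1\) (the inversion of \(b(t)\) in \(f_{t}\) is done by root finding):

```python
# Sketch: the five dividing curves of Theorem 1, with ka = c1 = 1 as a sample.
import numpy as np
from scipy.optimize import brentq

ka, c1 = 1.0, 1.0

def f_k(b): return (3*b**(4/3) + 6*ka*b**(2/3)*c1**(4/3) - ka**2*c1**(8/3))/(4*c1**(2/3))
def f_r(b): return b**(4/3)/c1**(2/3) + ka*b**(2/3)*c1**(2/3)
def f_m(b): return b**2/(ka*c1**2) + ka**2*c1**2
def f_l(b): return 2*np.sqrt(ka)*abs(b)

def f_t(b):
    # invert b = t*(ka*c1**2 + t**2)/(2*c1) for t > 0, then evaluate (15)
    t = brentq(lambda t: t*(ka*c1**2 + t**2)/(2*c1) - b,
               0.0, (2*c1*b)**(1/3) + 1.0)
    return ((ka*c1**2 + t**2)/(2*c1))**2 + ka*t**2

b = 0.5
print({name: f(b) for name, f in
       [('f_k', f_k), ('f_r', f_r), ('f_m', f_m), ('f_t', f_t), ('f_l', f_l)]})
```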
**Remark 4**: _In Fig. 3 - 21 the arcs \(y_{8}z_{2},z_{2}z_{1},y_{5}z_{3},z_{8}z_{9},z_{8}z_{11}\) and \(z_{9}z_{11}\) belong to the parametric curve (9). The rest of the arcs of the bifurcation diagrams distribute between the curves in an obvious way._
**Remark 5**: In this paper 9 areas of the plane \(\mathbb{R}^{2}(a,b)\) are numbered with Roman numerals I-IX as it is shown in Fig. 1 and 2 (the relative positioning of the graphs of functions \(f_{k},f_{r},f_{m},f_{t}\) and \(f_{l}\) is also described in Assertion 14). We continue the numbering on the areas of the line \(b=0\) as follows:
* area X: \(\{b=0,\quad\varkappa^{2}c_{1}^{2}<a\}\);
* area XI: \(\{b=0,\quad\frac{\varkappa^{2}c_{1}^{2}}{4}<a<\varkappa^{2}c_{1}^{2}\}\);
* area XII: \(\{b=0,\quad 0<a<\frac{\varkappa^{2}c_{1}^{2}}{4}\}\);
A detailed description of the relative positioning of the curves that contain the bifurcation diagrams of the momentum mapping is given in Section 4.2. The proof of Theorem 1 is given in Section 4.4. In fact, in order to prove Theorem 1 it suffices to know the types of critical points of rank 0, which are described in the following lemma (for the definition of nondegenerate critical points of rank 0 and their types see, for example, [4]).
**Lemma 2**: _Let \(\varkappa>0\) and \(b>0\). Then the image of critical points of rank \(0\) is contained in the union of the following three families of points._
1. _The point of intersection of the parabolas (_10_) and (_11_) (the point_ \(z_{5}\) _in Fig._ 15_). It has coordinates_ \[h=\varkappa c_{1}^{2}+\frac{a}{\varkappa},\quad k=\frac{a^{2}-4\varkappa b^{2 }}{\varkappa^{2}}.\] _If_ \(a>f_{m}(b)\)_, where the function_ \(f_{m}(b)\) _is given by the formula (_14_), then there
are two critical points of rank \(0\) in the preimage of the point on the orbit \(M_{a,b}\). If \(a=f_{m}(b)\), then there is one critical point of rank \(0\) in the preimage, and if \(a<f_{m}(b)\), then there are no critical points of rank \(0\) in the preimage. If \(a>f_{m}(b)\), then all critical points from this series are nondegenerate critical points of saddle-center type._
2. _The point of intersection of the parabolas (_10_), (_11_) and the curve (_9_). The
corresponding values of the parameter \(z\) of the curve_ (9) _are given by the equation_
\[z^{2}=\frac{a\pm\sqrt{a^{2}-4\varkappa b^{2}}}{2}c_{1}^{2}.\]
_For each non-singular orbit_ \(M_{a,b}\) _there is exactly one critical point of rank_ \(0\) _in the preimage of each intersection point of the curve (9) with the parabolas. Both critical points such that_ \(z<0\) _(that is, the points_ \(y_{1}\) _and_ \(z_{4}\) _in Fig. 3-25) have center-center type. (A numerical check of these intersection points is sketched after the lemma.)_
Figure 21: Area IX: an enlarged fragment of Fig. 20
_The types of the remaining points are given in Table 1. Here \(z_{+l}\) and \(z_{+r}\) are the remaining points of intersection of the curve (9) with the left parabola (10) and the right parabola (11) respectively and the functions \(f_{r}(b)\), \(f_{m}(b)\) and \(f_{t}(b)\) are given by the formulas (13), (14) and (15) respectively. (In Fig. 3-21 the point \(z_{+r}\) is denoted by \(z_{3}\), \(z_{6}\) or \(z_{11}\) depending on the type of the point. The point \(z_{+l}\) is denoted by \(y_{3}\), \(y_{7}\), \(y_{12}\), \(z_{9}\) or \(z_{10}\).)_
3. _The points of intersection of the curve (9) and the line_ \(k=0\) _(the points_ \(y_{10}\)_,_ \(y_{11}\) _and_ \(z_{1}\) _in Fig. 3-21). In the preimage of each point of intersection lying to the right of the point_ \[h=\varkappa c_{1}^{2}+\frac{a}{\varkappa}-\frac{\sqrt{a^{2}-4\varkappa b^{2}}}{\varkappa}\] _(that is, to the right of the point of intersection of the parabola (11) and the line_ \(k=0\)_), there are exactly two critical points of rank_ \(0\) _and the types of these two points coincide. The preimages of all other points of intersection are empty. In other words,_ * _if_ \(0<b^{2}<\varkappa^{3}c_{1}^{4}\) _and_ \(a<f_{t}(b)\)_, where the function_ \(f_{t}(b)\) _is given by the formula (15), then there are no critical points of rank_ \(0\) _from this series in the preimage;_ * _if_ \(a>f_{t}(b)\) _and_ \(a>f_{k}(b)\)_, where_ \(f_{k}(b)\) _is given by the formula (12), then there are_ \(2\) _points from this series on the orbit;_ * _if_ \(f_{k}(b)>a>f_{t}(b)\) _(this is possible only if_ \(b^{2}>\varkappa^{3}c_{1}^{4}\)_), then the preimage contains_ \(6\) _points from this series;_ * _if_ \(b^{2}>\varkappa^{3}c_{1}^{4}\) _and_ \(a<f_{t}(b)\)_, then the preimage contains_ \(4\) _points from this series._ _The points corresponding to the points of intersection with the parameter_ \(z>z_{\text{cusp}}=\sqrt[3]{b^{2}c_{1}^{2}}\)_, that is, with the parameter greater than the parameter of the cusp of the curve (9) (the points_ \(y_{10}\) _and_ \(z_{1}\) _in Fig. 3-18), have center-center type. If_ \(a\neq f_{t}(b)\)_, then the points corresponding to the points of intersection with the parameter_ \(z<z_{\text{cusp}}=\sqrt[3]{b^{2}c_{1}^{2}}\) _(the point_ \(y_{11}\) _in Fig. 6-8) have center-saddle type._
\begin{table}
\begin{tabular}{|l|c||c|c|} \hline & & \(0<b^{2}<\varkappa^{3}c_{1}^{4}\) & \(b^{2}>\varkappa^{3}c_{1}^{4}\) \\ \hline \(a>f_{m}(b)\) & \(z_{+r}\) & center-center & center-center \\ & \(z_{+l}\) & saddle-saddle & saddle-saddle \\ \hline \(f_{m}(b)>a>f_{t}(b)\) & \(z_{+r}\) & center-center & center-saddle \\ \(a\neq f_{r}(b)\) & \(z_{+l}\) & center-saddle & saddle-saddle \\ \hline \(f_{t}(b)>a>f_{l}(b)\) & \(z_{+r}\) & center-center & center-saddle \\ \(a\neq f_{r}(b)\) & \(z_{+l}\) & center-center & center-saddle \\ \hline \end{tabular}
\end{table}
Table 1: Types of intersections of the curve (9) and the parabolas (10) and (11).
The proof of Lemma 2 is given in Section 4.3.
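As a sanity check of item 2 of Lemma 2, one can verify numerically that the curve (9) indeed meets the union of the parabolas (10) and (11) at the stated values of \(z\); a minimal sketch for the arbitrarily chosen sample values \(\varkappa=c_{1}=1\), \(a=5\), \(b=1\):

```python
# Sketch: the curve (9) passes through the parabolas (10), (11) at
# z^2 = (a +- sqrt(a^2 - 4*ka*b^2))/2 * c1^2 (sample parameter values).
import math

ka, c1, a, b = 1.0, 1.0, 5.0, 1.0
D = math.sqrt(a**2 - 4*ka*b**2)

def curve(z):
    # parametric curve (9)
    h = b**2*c1**2/z**2 + 2*z
    k = (4*a*c1**2 - 4*b**2*c1**2/z + b**4*c1**4/z**4) - 2*ka*c1**2*h + ka**2*c1**4
    return h, k

def parabolas(h):
    # right-hand sides of (10) and (11)
    return ((h - ka*c1**2 - a/ka + D/ka)**2,
            (h - ka*c1**2 - a/ka - D/ka)**2)

for sign in (+1, -1):
    for z in (math.sqrt((a + sign*D)/2)*c1, -math.sqrt((a + sign*D)/2)*c1):
        h, k = curve(z)
        print(z, min(abs(k - p) for p in parabolas(h)))  # ~ 0 in each case
```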
As it turned out, in the case under consideration the obtained information about the types of critical points allows us not only to construct the bifurcation diagrams of the momentum mapping but also to determine the types of bifurcations of Liouville tori and the loop molecules for singular points of the momentum mapping. Just as the proof of Theorem 1, the proofs of Theorems 2 and 3 are given in Section 4.4.
Just as in the classical Kovalevskaya case there are only four types of bifurcation of tori corresponding to the smooth regular arcs of the bifurcation diagram. In the terminology from [4] they correspond to the atoms \(A\), \(A^{*}\), \(B\) and \(C_{2}\).
Theorem 2._In Fig. 3-30 for each bifurcation diagram of the momentum mapping the bifurcations of Liouville tori corresponding to different arcs of the bifurcation diagrams are specified and all singular points of the bifurcation diagrams are marked._
The singular points of the bifurcation diagrams are denoted by \(y_{1}\)-\(y_{13}\) and \(z_{1}\)-\(z_{11}\). These points correspond to the cusps, the points of intersection and tangency of the bifurcation curves that form the bifurcation diagram of the momentum mapping (recall that these curves are described in Lemma 1).
The singular points \(y_{1},y_{3},y_{7},y_{10}\)-\(y_{12}\), \(z_{1},z_{3}\)-\(z_{7}\) and \(z_{9}\)-\(z_{11}\) correspond to nondegenerate singularities of the momentum mapping \({\cal F}=(H,K):M^{4}_{a,b}\to\mathbb{R}^{2}\). The points marked with the same letters correspond to singularities of the same type. The types of these nondegenerate singularities are given in Lemmas 2 and 4.
The points \(y_{2},y_{4},y_{5},y_{6},y_{8},y_{9},y_{13}\), \(z_{2}\) and \(z_{8}\) correspond to degenerate one-dimensional orbits of the action of the group \(\mathbb{R}^{2}\) on \(M_{a,b}\) generated by the Hamiltonian (7) and the integral (8). (In this paper we do not determine the types of critical points of rank 1, but it is possible to make an assumption about their types -- most likely they are typical singularities described in [4]. The type of a singularity can be easily guessed from its loop molecule.) The next statement ensures that the singularities in the preimage of all other points of the bifurcation diagrams are nondegenerate.
**Assertion 1**: _The considered integrable Hamiltonian systems with Hamiltonian (7) and (8) on all non-singular orbits \(M_{a,b}\) of the Lie algebras \(so(4)\), \(e(3)\) and \(so(3,1)\) are of Bott type. In other words, for all non-singular values of the parameters \(a\) and \(b\) all critical points in the preimage of non-singular points of the bifurcation diagrams (that is, the points that are not cusps or points of intersection or tangency for the smooth arcs of the bifurcation diagrams) are nondegenerate points of rank \(1\)._
The proof of Assertion 1 is given in Section 4.1. Let us emphasize that in this paper the fact that the systems are of Bott type is proved for all non-singular orbits of the pencil \(so(4)-e(3)-so(3,1)\).
Remark 6. It is not hard to understand the structure of the bifurcation diagrams for the remaining non-singular orbits that are not shown in Fig. 3-30. For example, in the case \(a=f_{k}(b),\varkappa>0\) the parametric curve (9) intersects the line \(k=0\) at the cusp point. There is no doubt that the critical points of rank 0 at which the structure of bifurcation diagrams changes are degenerate. In Section 4.3, during the proof of Lemma 2, we actually prove a more general statement: the remaining points -- in a neighbourhood of which the bifurcation diagram does not change its structure when
passing from one area of the plane \(\mathbb{R}^{2}(a,b)\) to another -- remain nondegenerate and their types do not change.
Theorem 3._The loop molecules for all singular points of the bifurcation diagrams shown in Fig. 3-30 are listed in Tables 2 and 3. The loop molecules for singular points of the diagrams marked with the same letters coincide._
Remark 7. For the points on the boundary of the bifurcation diagrams the loop molecules in Tables 2 and 3 are shown counterclockwise, although an ambiguity may arise only for the point \(z_{2}\): the loop molecule for this point must consist of two identical molecules, both having the same form as for the degenerate singularity called the elliptic period-doubling bifurcation. (For more about degenerate singularities, see, for example, [4].)
Table 2: Loop molecules of the singular points \(y_{1}\)-\(y_{13}\).
Let us emphasize that in Theorems 2 and 3 we consider not only the case \(\varkappa>0,b\neq 0\) but also the cases \(\varkappa>0,b=0\) and \(\varkappa=0\). Note that for \(\varkappa=0\) the obtained results completely coincide with the known results for the classical Kovalevskaya case (see, for example, [4]).
Remark 8. There is an inaccuracy in the book [4] in the list of loop molecules for the Kovalevskaya integrable case: the molecules for the points \(y_{8}\) and \(y_{9}\) should be repeated twice.
### Case \(\varkappa>0,b=0\)
We now describe the results in the case when the value of the second integral \(b=0\). The following lemma, like Lemma 1, is proved in Section 4.1.
Lemma 3.: _Let \(\varkappa\neq 0\) and \(b=0\). Then for any non-singular orbit \(M_{a,0}\) (that is, for orbits such that \(a\neq 0\)) the bifurcation diagram \(\Sigma_{h,k}\) for the integrable Hamiltonian system with Hamiltonian (7) and integral (8) is contained in the union of the following
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(z_{1}\) & A \(\xrightarrow{r=0}\) A & \(z_{6}\) & A \(\xrightarrow{r=0}\) A \\ & A \(\xrightarrow{r=0}\) A & & A \(\xrightarrow{r=\infty}\) A \\ \hline \(z_{2}\) & A \(\xrightarrow{r=0}\) A\({}^{*}\xrightarrow{r=1/2}\) A & \(z_{7}\) & A \(\xrightarrow{r=0}\) A \\ & A \(\xrightarrow{r=0}\) A\({}^{*}\xrightarrow{r=1/2}\) A & \(z_{7}\) & A \(\xrightarrow{r=0}\) A \\ \hline \(z_{3}\) & A \(\xrightarrow{r=\infty}\) B \(\xrightarrow{r=\infty}\) A & \(z_{8}\) & A \(\xrightarrow{r=0}\) B \(\xrightarrow{r=0}\) A \\ & A & A & A \(\xrightarrow{r=0}\) A \\ \hline \(z_{4}\) & A \(\xrightarrow{r=0}\) A & \(z_{9}\) & A \(\xrightarrow{r=0}\) A \\ \hline \(z_{5}\) & A \(\xrightarrow{r=\infty}\) C\({}_{2}\) & A & \(z_{10}\) & A \(\xrightarrow{r=\infty}\) B \(\xrightarrow{r=\infty}\) A \\ & A \(\xrightarrow{r=\infty}\) C\({}_{2}\) & A & \(z_{11}\) & A \(\xrightarrow{r=0}\) A \\ \hline \end{tabular}
\end{table}
Table 3: New loop molecules.
three families of curves on the plane \(\mathbb{R}^{2}(h,k)\):_
1. _The line_ \(k=0\)_;_
2. _The union of the parabola_ \[k=(h-\varkappa c_{1}^{2})^{2}+4ac_{1}^{2}\] (17) _and the tangent line to this parabola at the point_ \(h=0\)__ \[k=-2\varkappa c_{1}^{2}h+(4ac_{1}^{2}+\varkappa^{2}c_{1}^{4});\] (18)
3. _The union of two parabolas_ \[k=\left(h-\varkappa c_{1}^{2}\right)^{2}\] (19) _and_ \[k=\left(h-\varkappa c_{1}^{2}-\frac{2a}{\varkappa}\right)^{2}.\] (20)
Now in order to construct the bifurcation diagrams of the momentum mapping it remains to throw away several parts of the curves described in Lemma 3. A precise description of the bifurcation diagrams is given in Theorem 4 (see also Fig. 22-25).
Let us now describe the set of critical points of rank \(0\). The following lemma, like Lemma 2, is proved in Section 4.3.
**Lemma 4**: _Let \(\varkappa>0\) and \(b=0\). Then the image of critical points of rank \(0\) is contained in the union of the following three families of points:_
1. _The point of intersection of the parabolas (_19_) and (_20_) (the point_ \(z_{5}\) _in Fig. 22). This point has coordinates_ \[h=\varkappa c_{1}^{2}+\frac{a}{\varkappa},\quad k=\frac{a^{2}}{\varkappa^{2}}.\] _If_ \(a>\varkappa^{2}c_{1}^{2}\)_, then there are two critical points of rank_ \(0\) _in the preimage of the point on the orbit_ \(M_{a,0}\)_. If_ \(a=\varkappa^{2}c_{1}^{2}\)_, then there is one critical point of rank_ \(0\) _in the preimage, and if_ \(a<\varkappa^{2}c_{1}^{2}\)_, then there is no critical points of rank_ \(0\) _in the preimage. If_ \(a>\varkappa^{2}c_{1}^{2}\) _then all critical points from this series are nondegenerate critical points of saddle-center type._
2. _The point of intersection of the upper parabola (17) and the tangent line (18) with the parabolas (19) and (20). For any_ \(a>0\) _there are two points of intersection of the line (18) and the left parabola (19) with coordinates_ \[h=\pm 2\sqrt{a}c_{1},\quad k=(\pm 2\sqrt{a}c_{1}-\varkappa c_{1}^{2})^{2}\] _(a symbolic check of these coordinates is sketched below, after the lemma), and there is exactly one critical point of rank_ \(0\) _in the preimage of each of these points on the orbit_ \(M_{a,0}\)
_._ 1. \(0<a<\frac{\varkappa^{2}c_{1}^{2}}{4}\)_,_
2. \(\frac{\varkappa^{2}c_{1}^{2}}{4}<a<\varkappa^{2}c_{1}^{2}\)_,_
3. \(\varkappa^{2}c_{1}^{2}<a\)_._
_The corresponding bifurcation diagrams are shown in Fig. 22-25, where the formulas for the lines and parabolas are given in Lemma 3._
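A short symbolic check of the curves in Lemma 3 and of the intersection coordinates quoted in item 2 of Lemma 4 (a sympy sketch; \(\varkappa\), \(c_{1}\), \(a\) are treated as positive symbols):

```python
# Sympy sketch: tangency of the line (18) to the parabola (17) at h = 0,
# and its intersection points with the left parabola (19).
import sympy as sp

h = sp.symbols('h', real=True)
ka, c1, a = sp.symbols('varkappa c_1 a', positive=True)

par17 = (h - ka*c1**2)**2 + 4*a*c1**2
line18 = -2*ka*c1**2*h + 4*a*c1**2 + ka**2*c1**4
par19 = (h - ka*c1**2)**2

# tangency at h = 0: equal values and equal slopes there
assert sp.simplify(par17.subs(h, 0) - line18.subs(h, 0)) == 0
assert sp.simplify(sp.diff(par17, h).subs(h, 0) - sp.diff(line18, h)) == 0

# intersections with the left parabola (19): h = +-2*sqrt(a)*c1
roots = sp.solve(sp.Eq(line18, par19), h)
print(sorted(roots, key=sp.default_sort_key))
print([sp.factor(par19.subs(h, r)) for r in roots])  # k = (+-2 sqrt(a) c1 - ka c1^2)^2
```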
**Remark 9**: In Fig. 22-25 the arcs \(z_{2}z_{1},z_{8}z_{9}\) and \(z_{8}z_{10}\) belong to the line (18) and the arc \(y_{4}z_{7}\) belongs to the upper parabola (17). The rest of the arcs distribute between the curves in an obvious way.
Bifurcation diagrams for the case \(b=0\) were also previously described in the paper [14].
The bifurcations of Liouville tori and the loop molecules for the singular points are
described earlier in Theorems 2 and 3 respectively.
## 4 Proof of the main statements
### Critical points of rank \(1\)
In this section we prove Lemmas 1 and 3, which claim that the bifurcation diagrams are contained in the curves described in these lemmas. For this we first describe all critical points of the momentum mapping in Assertion 2 and then study their image under the momentum mapping. Note that all these critical points apart from the points from Assertion 15 are critical points of rank \(1\).
Let us emphasize that in this section we do not impose restrictions on the parameters \(\varkappa\) and \(b\) (that is \(\varkappa,b\in\mathbb{R}\), unless otherwise stated).
**Assertion 2**.: _The set of points where the Hamiltonian vector fields corresponding to the Hamiltonian (7) and the integral (8) are linearly dependent is the union of the following six families of points. The first three families are four-parametric and the last three families are three-parametric._
_Four-parameter families:_
1. \(x_{1}=\frac{\varkappa c_{1}^{2}+J_{1}^{2}-J_{2}^{2}}{2c_{1}},\qquad x_{2}= \frac{J_{1}J_{2}}{c_{1}}\)__
2. \(J_{2}=0,\qquad x_{3}=\frac{J_{1}J_{3}}{c_{1}}\)__
3. \(x_{1}=\varkappa c_{1}+(J_{1}-c_{1}\frac{x_{3}}{J_{3}})\frac{x_{2}}{J_{2}}, \qquad x_{2}=J_{2}\frac{(J_{1}x_{3}-\varkappa c_{1}J_{3})(J_{1}J_{3}-c_{1}x_{3 })+J_{2}^{2}J_{3}x_{3}}{(J_{1}J_{3}-c_{1}x_{3})^{2}+J_{2}^{2}J_{3}^{2}},\)__ _where_ \(J_{2}J_{3}\neq 0\)_._
_Three-parameter families:_
1. \(J_{2}=0,\qquad x_{2}=0,\qquad J_{1}x_{3}-J_{3}x_{1}=0\)__
2. \(J_{3}=0,\qquad x_{3}=0,\qquad((x_{1}-\varkappa c_{1})J_{1}+J_{2}x_{2})(J_{2}(x_{1}-\varkappa c_{1})-J_{1}x_{2})+c_{1}x_{2}(x_{1}(x_{1}-\varkappa c_{1})+x_{2}^{2})=0\)
3. \(J_{1}=0,\qquad J_{3}=0,\qquad x_{2}=0\)
Proof.: The points at which the Hamiltonian vector fields \(X_{H}\) and \(X_{K}\) are linearly dependent are exactly the points at which all \(15\) rank \(2\) minors of the matrix \(\begin{pmatrix}X_{H},&X_{K}\end{pmatrix}\) composed from the coordinates of the vectors \(X_{H}\) and \(X_{K}\) are equal to zero.
Note that the minor \(\Delta_{13}\) corresponding to the first and the third lines has form
\[\begin{vmatrix}\{J_{1},H\}&\{J_{1},K\}\\ \{J_{3},H\}&\{J_{3},K\}\end{vmatrix}=16c_{1}\left(c_{1}x_{2}-J_{1}J_{2}\right) \left(J_{2}J_{3}\left(\varkappa c_{1}-x_{1}\right)+\left(J_{1}J_{3}-c_{1}x_{3 }\right)x_{2}\right)\]
Therefore either \(x_{2}=\frac{J_{1}J_{2}}{c_{1}}\) or \(J_{2}J_{3}\left(\varkappa c_{1}-x_{1}\right)+\left(J_{1}J_{3}-c_{1}x_{3}\right) x_{2}=0\).
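This factorization can be confirmed symbolically; a self-contained sympy sketch (the bracket helper from the earlier sketch is re-declared here, and the comparison allows an overall sign so as not to depend on orientation conventions):

```python
# Symbolic check of the factorization of the minor Delta_13.
import sympy as sp

ka, c1 = sp.symbols('varkappa c_1')
J1, J2, J3, x1, x2, x3 = sp.symbols('J1 J2 J3 x1 x2 x3')
J, x = [J1, J2, J3], [x1, x2, x3]
coords = J + x

P = sp.zeros(6, 6)  # Poisson tensor of the bracket (4)
for i in range(3):
    for j in range(3):
        for l in range(3):
            e = sp.LeviCivita(i + 1, j + 1, l + 1)
            P[i, j] += e*J[l]
            P[i, 3 + j] += e*x[l]
            P[3 + i, j] += e*x[l]
            P[3 + i, 3 + j] += ka*e*J[l]

def bracket(f, g):
    return sp.expand(sum(sp.diff(f, coords[a])*P[a, b]*sp.diff(g, coords[b])
                         for a in range(6) for b in range(6)))

H = J1**2 + J2**2 + 2*J3**2 + 2*c1*x1
K = (J1**2 - J2**2 - 2*c1*x1 + ka*c1**2)**2 + (2*J1*J2 - 2*c1*x2)**2

minor = bracket(J1, H)*bracket(J3, K) - bracket(J3, H)*bracket(J1, K)
claimed = sp.expand(16*c1*(c1*x2 - J1*J2)
                    * (J2*J3*(ka*c1 - x1) + (J1*J3 - c1*x3)*x2))
print(sp.expand(minor - claimed) == 0 or sp.expand(minor + claimed) == 0)
```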
First, we examine the case \(x_{2}=\frac{J_{1}J_{2}}{c_{1}}\). Substituting \(x_{2}\) in the matrix consisting of minors we immediately obtain the first solution
\[x_{1}=\frac{\varkappa c_{1}^{2}+J_{1}^{2}-J_{2}^{2}}{2c_{1}},\qquad x_{2}=\frac{ J_{1}J_{2}}{c_{1}}. \tag{21}\]
The rest of the proof is by exhaustion. If \(x_{3}=\frac{J_{1}J_{3}}{c_{1}}\), then we obtain the following three families of solutions:
\[x_{1}=\varkappa c_{1},\qquad x_{2}=\frac{J_{1}J_{2}}{c_{1}},\qquad x_{3}=\frac {J_{1}J_{3}}{c_{1}}, \tag{22}\]
\[J_{2}=0,\qquad x_{2}=0,\qquad x_{3}=\frac{J_{1}J_{3}}{c_{1}}, \tag{23}\]
\[J_{1}=0,\qquad J_{3}=0,\qquad x_{2}=0,\qquad x_{3}=0. \tag{24}\]
If \(x_{3}\neq\frac{J_{1}J_{3}}{c_{1}}\), then we get the following two families of solutions:
\[J_{1}x_{3}-J_{3}x_{1}=0,\qquad J_{2}=0,\qquad x_{2}=0, \tag{25}\]
\[J_{1}=0,\qquad J_{3}=0,\qquad x_{2}=0. \tag{26}\]
Now suppose that \(J_{2}J_{3}\left(\varkappa c_{1}-x_{1}\right)+\left(J_{1}J_{3}-c_{1}x_{3} \right)x_{2}=0\), \(x_{2}\neq\frac{J_{1}J_{2}}{c_{1}}\). There are two variants: either \(J_{2}J_{3}=0\) or \(x_{1}=\frac{\varkappa c_{1}J_{2}J_{3}+J_{1}J_{3}x_{2}-c_{1}x_{2}x_{3}}{J_{2}J_{ 3}}\).
If \(J_{2}=0\), then we get the second four-parameter family of points of rank one:
\[J_{2}=0,\qquad x_{3}=\frac{J_{1}J_{3}}{c_{1}}. \tag{27}\]
And if \(J_{2}\neq 0\), \(J_{3}=0\), then we obtain the following solution:
\[J_{3}=0,\qquad x_{3}=0, \tag{28}\]
\[((x_{1}-\varkappa c_{1})J_{1}+J_{2}x_{2})(J_{2}(x_{1}-\varkappa c_{1})-J_{1}x _{2})+c_{1}x_{2}(x_{1}(x_{1}-\varkappa c_{1})+x_{2}^{2})=0.\]
Now let us consider the case \(x_{1}=\frac{\varkappa c_{1}J_{2}J_{3}+J_{1}J_{3}x_{2}-c_{1}x_{2}x_{3}}{J_{2}J_{3}}\). It is easy to check that in this case the minor \(\Delta_{12}\) is equal to
\[-16c_{1}\left(x_{2}(J_{2}^{2}J_{3}^{2}+(J_{1}J_{3}-c_{1}x_{3})^{2})-J_{2}(( \varkappa c_{1}^{2}+J_{1}^{2}+J_{2}^{2})J_{3}x_{3}-c_{1}J_{1}x_{3}^{2}- \varkappa c_{1}J_{1}J_{3}^{2})\right).\]
The coefficient by \(x_{2}\) is not equal to zero since the case \(J_{2}J_{3}=0\) has already been analyzed. Expressing \(x_{2}\) from this equation we obtain the ninth solution:
\[x_{1}=\frac{\varkappa c_{1}J_{2}J_{3}+J_{1}J_{3}x_{2}-c_{1}x_{2}x_{3}}{J_{2}J_ {3}},\]
\[x_{2}=J_{2}\frac{(\varkappa c_{1}^{2}+J_{1}^{2}+J_{2}^{2})J_{3}x_{3}-c_{1}J_{1 }x_{3}^{2}-\varkappa c_{1}J_{1}J_{3}^{2}}{(J_{1}J_{3}-c_{1}x_{3})^{2}+J_{2}^{2 }J_{3}^{2}}. \tag{29}\]
Thus we have considered all the cases. It remains to collect all the solutions together. It is obvious that the family (24) is a particular case of the solution (26) and that the solution (23) is a special case of the solution (27). It remains to note that the family (22) is contained in the families (27), (28) and (29). Assertion 2 is proved.
Now let us prove Lemma 1. For this, we show that the images of the critical points from Assertion 2 belong to the curves described in Lemma 1.
**Assertion 3**.: _Let \(\varkappa\neq 0\) and \(b\neq 0\). Then the images of the families of critical points described in Assertion 2 are arranged as follows:_
1. _The images of critical points from the family 1 lie on the line_ \(k=0\)_._
2. _The images of critical points from the family 2 belong to the curve (_9_)._
3. _The images of critical points from the families 3, 4, 5 and 6 lie on the union of two parabolas (_10_) and (_11_)._
Proof.:
1. It is explicitly checked that \(k=0\).
2. The equation (9) can be obtained as follows. Take the function \(J_{3}^{2}+c_{1}x_{1}\) as the parameter \(z\). Note that \(J_{3}^{2}+c_{1}x_{1}\neq 0\) since \(b\neq 0\). Therefore, \(J_{1}\) can be expressed from the formula for \(b\) and then \(x_{2}\) can be expressed from the formula for \(a\). It remains to substitute the obtained expressions for \(J_{1}\) and \(x_{2}\) in the equations for the Hamiltonian (7) and the first integral (8) and then replace \(J_{3}^{2}+c_{1}x_{1}\) by \(z\). As a result, the equations (7) and (8) take the form (9), as required.
3. It can be explicitly checked that for any point from the Families 3, 4 and 5 one of the two equations (10) and (11) holds. In this case it is easier to verify first that \(k=\left(-\frac{\lambda}{2}\right)^{2}\), where \(\lambda\) is the coefficient of proportionality between \(X_{K}\) and \(X_{H}\) (that is \(X_{K}+\lambda X_{H}=0\)), and then to check that \[-\frac{\lambda}{2}=h-\varkappa c_{1}^{2}-\frac{a}{\varkappa}\pm\frac{\sqrt{a^ {2}-4\varkappa b^{2}}}{\varkappa}.\] While testing this equality it is convenient to use the relation \[\frac{a}{b}=\frac{x_{3}}{J_{3}}+\varkappa\frac{J_{3}}{x_{3}},\] which holds if \(J_{3}\neq 0\) and \(x_{3}\neq 0\). Assertion 3 is proved.
In what follows we need the following statement about the critical points of the family 2 from Assertion 2.
**Assertion 4**.: _Let \(\varkappa\neq 0\) and \(b\neq 0\). Then for \(z^{2}>\frac{a+\sqrt{a^{2}-4\varkappa b^{2}}}{2}c_{1}^{2}\) either there is no critical points in the preimage of points of the curve (9) or the critical points in the preimage from two critical circles and the symmetry \((J_{3},x_{3})\to(-J_{3},-x_{3})\) interchanges these circles._
Proof.: It is not hard to check that for a fixed parameter \(z\) the critical points form the family 2 are given by the following equations
\[J_{1}=\frac{bc_{1}}{z},\quad J_{2}=0,\quad x_{1}=\frac{z-J_{3}^{2}}{c_{1}}, \quad x_{3}=\frac{b}{z}J_{3},\]
where the coordinates \(J_{3}\) and \(x_{2}\) satisfy an equation of the form
\[\left(\frac{J_{3}^{2}}{c_{1}}+d\right)^{2}+x_{2}^{2}=R^{2} \tag{30}\]
for some constants \(d\) and \(R\) depending on \(\varkappa,a,b\) and \(z\). Thus in order to prove the assertion it remains to prove that \(J_{3}\neq 0\) for \(z^{2}>\frac{a+\sqrt{a^{2}-4\varkappa b^{2}}}{2}c_{1}^{2}\). It is not hard to verify explicitly: if \(J_{3}=0\), then the equation (30) has the form
\[z^{4}-ac_{1}^{2}z^{2}+\varkappa b^{2}c_{1}^{4}=0,\]
which has a solution precisely when
\[\frac{a-\sqrt{a^{2}-4\varkappa b^{2}}}{2}c_{1}^{2}<z^{2}<\frac{a+\sqrt{a^{2}-4 \varkappa b^{2}}}{2}c_{1}^{2}.\]
Assertion 4 is proved.
Now let us prove Lemma 3.
**Assertion 5**.: _Let \(\varkappa\neq 0\) and \(b=0\). Then the images of the families of critical points described in Assertion 2 are arranged as follows:_
1. _The images of critical points from the family 1 lie on the line_ \(k=0\)_._
2. _The images of critical points from the family 2 belong to the union of the parabola (_17_) and the tangent line (_18_)._
3. _The images of critical points from the families 3, 4, 5 and 6 lie on the union of the two parabolas (19) and (20)._
Proof.: The proof of this statement is almost identical to the proof of Assertion 3 except for the following. First, the case \(J_{3}^{2}+c_{1}x_{1}=0\) should be considered while determining the image of the family 2. It is not hard to check explicitly that in this case the image lies on the parabola (17). Second, we have to consider the family 6 and show that its image lies on the left parabola (19).
We now prove Assertion 1 about the nondegeneracy of critical points of rank 1. In order to prove it we use the following simple criterion of nondegeneracy for critical points of rank 1 (see [4]).
**Assertion 6**.: _Let \((M^{4},\omega)\) be a symplectic manifold and \(y_{0}\in(M,\omega)\) be a critical point of rank \(1\) for an integrable system with Hamiltonian \(H\) and integral \(K\). Denote by \(F=\alpha H+\beta K\) a nontrivial linear combination for which the point \(y_{0}\) is a critical point and by \(A_{F}\) the linearization of the corresponding Hamiltonian vector field \(X_{F}\) at this point \(y_{0}\). The point \(y_{0}\) is nondegenerate if and only if the operator \(A_{F}\) has a non-zero eigenvalue._
In the case under consideration the system is defined on a Poisson manifold. Hence it is convenient to use the following statement.
**Assertion 7**: _Suppose that in local coordinates \((p^{1},\ldots,p^{k},q^{1},\ldots,q^{k},z^{1},\ldots,z^{m})\) in a neighbourhood of a point \(x_{0}\) a Poisson bracket has the form \(\sum_{i=1}^{k}\frac{\partial}{\partial p^{i}}\wedge\frac{\partial}{\partial q^{i}}\). Suppose also that \(x_{0}\) is a critical point for a Hamiltonian vector field \(X_{F}\) with Hamiltonian \(F\). Then the linearization \(A_{F}\) of the Hamiltonian vector field \(X_{F}\) has the form_
\[(\frac{\partial^{2}F}{\partial q^{i}\partial p^{j}}\frac{\partial}{ \partial p^{i}}\otimes dp^{j}+\frac{\partial^{2}F}{\partial q^{i}\partial q^{ j}}\frac{\partial}{\partial p^{i}}\otimes dq^{j}+\frac{\partial^{2}F}{\partial q^{i} \partial z^{j}}\frac{\partial}{\partial p^{i}}\otimes dz^{j})-\] \[(\frac{\partial^{2}F}{\partial p^{i}\partial p^{j}}\frac{ \partial}{\partial q^{i}}\otimes dp^{j}+\frac{\partial^{2}F}{\partial p^{i} \partial q^{j}}\frac{\partial}{\partial q^{i}}\otimes dq^{j}+\frac{\partial^ {2}F}{\partial p^{i}\partial z^{j}}\frac{\partial}{\partial q^{i}}\otimes dz^ {j}).\]
Thus if \(\hat{F}\) is the restriction of the function \(F\) to a symplectic leaf, then the spectrum of the linearization of the vector field \(X_{F}\) can be obtained from the spectrum of the linearization of the vector field \(X_{\hat{F}}\) by adding zeros in the amount equal to the codimension of the symplectic leaf. Therefore, as well as in the symplectic case, in order to check the nondegeneracy of points of rank \(1\) it is sufficient to verify that the spectrum of the corresponding operator does not consist solely of zeros.
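Assertion 7 can be phrased compactly as \(A_{F}=\Pi\operatorname{Hess}F(x_{0})\), where \(\Pi\) is the constant Poisson tensor in the coordinates \((p,q,z)\). The following minimal numerical sketch (ours; the toy function \(F\) is an arbitrary illustration, not taken from the text) shows this for \(k=m=1\):

```python
# Linearization of a Hamiltonian vector field at a critical point:
# A_F = Pi * Hess(F), with Pi the Poisson tensor d/dp ^ d/dq in coords (p, q, z).
import numpy as np

n = 3
Pi = np.array([[0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])      # z-direction gives the zero row

def hessian(F, x0, eps=1e-5):
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i]*eps, np.eye(n)[j]*eps
            H[i, j] = (F(x0 + ei + ej) - F(x0 + ei - ej)
                       - F(x0 - ei + ej) + F(x0 - ei - ej))/(4*eps**2)
    return H

F = lambda v: v[0]**2 - v[1]**2 + v[2]*v[1]**2   # toy F with critical point at 0
A_F = Pi @ hessian(F, np.zeros(n))
print(np.round(np.linalg.eigvals(A_F), 6))       # a pair +/-2 and one zero
```

The extra zero eigenvalue coming from the \(z\)-direction is exactly the phenomenon described in the paragraph above.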
This can be verified explicitly. To simplify the verification in the following assertion we specify the coefficient of proportionality \(\lambda\) between the Hamiltonian vector fields corresponding to the Hamiltonian (7) and the integral (8) as well as describe the spectrum of the linearization of the corresponding Hamiltonian vector field \(X_{K+\lambda H}\) for all critical points of rank \(1\) of the integrable Hamiltonian system under consideration (that is, for all points from Assertion 2 except for the points of rank \(0\) from Assertion 15).
**Assertion 8**: _For each critical point of rank \(1\) from Assertion 2 we specify \(\lambda\) such that \(X_{K}+\lambda X_{H}=0\) at this point and \(\mu\) such that the spectrum of the operator \(A_{K+\lambda H}\) consists of four zeros and \(\pm\mu\)._
_Four-parameter families:_
1. _Family 1. The coefficient of proportionality_ \(\lambda=0\)_, that is_ \(X_{K}=(0,0,0,0,0,0)\)_. The nontrivial eigenvalue:_ \[\mu=8i|\left(\varkappa c_{1}^{2}+J_{1}^{2}+J_{2}^{2}\right)J_{3}-2c_{1}J_{1}x_ {3}|.\]
2. _Family 2. The coefficient of proportionality:_ \[\lambda=2\left(\varkappa c_{1}^{2}-J_{1}^{2}\right),\] _the square of the eigenvalue:_ \[\mu^{2}=64c_{1}\left(J_{1}^{2}-J_{3}^{2}-c_{1}x_{1}\right)\left(\left(J_{1}^{ 2}-c_{1}x_{1}\right)\left(x_{1}-\varkappa c_{1}\right)-c_{1}x_{2}^{2}\right).\]
3. _Family 3. The coefficient of proportionality:_ \[\lambda=2\left(\varkappa c_{1}^{2}+J_{1}^{2}+J_{2}^{2}-2c_{1}J_{1}\frac{x_{3}} {J_{3}}\right),\] _the square of the eigenvalue:_ \[\mu^{2}=-32\lambda\left(\left(J_{1}J_{3}-c_{1}x_{3}\right){}^{2}+\left(\left(J_ {1}^{2}+J_{2}^{2}\right)-c_{1}\frac{J_{1}x_{3}}{J_{3}}\right){}^{2}+J_{2}^{2} \left(\varkappa c_{1}^{2}+J_{3}^{2}\right)\right).\]
_Three-parameter families:_
1. _Family 4. The coefficient of proportionality:_ \[\lambda=2\left(\varkappa c_{1}^{2}+J_{1}^{2}-2c_{1}x_{1}\right),\] _the square of the eigenvalue:_ \[\mu^{2}=-32\lambda\left(\left(J_{1}^{2}-c_{1}x_{1}\right){}^{2}+\left(J_{1}J_{ 3}-c_{1}x_{3}\right){}^{2}\right)\]
2. _Family 5. In this item we assume that_ \(x_{2}\neq 0\) _since all points from the family 5 that satisfy the condition_ \(x_{2}=0\) _either have rank_ \(0\) _or belong to the family 6. The coefficient of proportionality:_ \[\lambda=2(\varkappa c_{1}^{2}-J_{1}^{2}+J_{2}^{2})+\frac{4}{x_{2}}J_{1}J_{2}\left(x_{1}-\varkappa c_{1}\right).\] _If in addition_ \(x_{1}\neq\frac{\varkappa c_{1}^{2}+J_{1}^{2}-J_{2}^{2}}{2c_{1}}\)_, then the square of the eigenvalue is equal to:_ \[\mu^{2}=\frac{16\lambda^{2}J_{2}\gamma}{x_{2}\left(\varkappa c_{1}^{2}+J_{1}^{2}-J_{2}^{2}-2c_{1}x_{1}\right)},\] _where_ \[\gamma=\left(c_{1}J_{1}x_{2}^{2}-J_{1}\left(x_{1}-\varkappa c_{1}\right)\left(J_{1}^{2}-J_{2}^{2}-c_{1}x_{1}\right)-J_{2}x_{2}\left(\varkappa c_{1}^{2}+J_{1}^{2}-J_{2}^{2}-2c_{1}x_{1}\right)\right).\] _If_ \(x_{1}=\frac{\varkappa c_{1}^{2}+J_{1}^{2}-J_{2}^{2}}{2c_{1}}\)_, then either_ \(x_{2}=\frac{J_{1}J_{2}}{c_{1}}\) _or_ \(x_{2}=\pm\frac{\varkappa c_{1}^{2}-J_{1}^{2}+J_{2}^{2}}{2c_{1}}\)_. In the first case_ \(\mu=0\)_, in the second case_ \[\mu^{2}=-32J_{2}^{2}\lambda\left(\varkappa c_{1}^{2}+\left(J_{1}\mp J_{2}\right){}^{2}\right).\]
3. _Family 6. The coefficient of proportionality:_ \[\lambda=2\left(\varkappa c_{1}^{2}-J_{2}^{2}-2c_{1}x_{1}\right),\] _the square of the eigenvalue:_ \[\mu^{2}=-32\lambda c_{1}^{2}\left(\varkappa J_{2}^{2}+x_{1}^{2}+x_{3}^{2} \right).\]
_of Assertion 1._ We use Assertion 6 to prove the nondegeneracy of points of rank 1. The coefficients of proportionality and the spectrum of the corresponding operators are described in Assertion 8. After that the nondegeneracy is proved by exhaustion. Assertion 1 is proved.
### Types of bifurcation diagrams. (Case \(b\neq 0\))
In this section we show that the curves from Lemma 1 are positioned relative to each other as it is shown in Fig. 3-21 (for the corresponding values of the parameters \(a\) and \(b\)). Thereby we actually describe all possible bifurcation diagrams of the momentum mapping. We are interested first of all in the singular points of bifurcation diagrams, that is, in the cusps and in the points of intersection and tangency of these curves.
First of all, it is obvious that for any values of the parameters \(a\) and \(b\) the parabolas (10) and (11) intersect the line \(k=0\) at the points
\[h=\varkappa c_{1}^{2}+\frac{a}{\varkappa}-\frac{\sqrt{a^{2}-4\varkappa b^{2}} }{\varkappa},\qquad\mbox{ and }\qquad h=\varkappa c_{1}^{2}+\frac{a}{\varkappa}+\frac{\sqrt{a^{2}-4 \varkappa b^{2}}}{\varkappa}\]
respectively and intersect each other at the point
\[h=\varkappa c_{1}^{2}+\frac{a}{\varkappa},\qquad k=\frac{a^{2}-4\varkappa b^{ 2}}{\varkappa^{2}}. \tag{31}\]
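The explicit equations of the parabolas (10) and (11) are given earlier in the paper and are not reproduced in this excerpt; assuming (consistently with the \(\varkappa\to 0\) limit (47) below) that they have the form \(k=(h-h_{\mp})^{2}\), where \(h_{\mp}=\varkappa c_{1}^{2}+\frac{a\mp\sqrt{a^{2}-4\varkappa b^{2}}}{\varkappa}\) are exactly their intersections with the line \(k=0\) written above, the point (31) is recovered at once:

\[h=\frac{h_{-}+h_{+}}{2}=\varkappa c_{1}^{2}+\frac{a}{\varkappa},\qquad k=\left(\frac{h_{+}-h_{-}}{2}\right)^{2}=\frac{a^{2}-4\varkappa b^{2}}{\varkappa^{2}}.\]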
Thus, it remains to describe the relative position of the curve (9) with respect to the line \(k=0\) and the parabolas (10), (11).
In this section we first describe this curve (9) (see Assertion 9), then we determine the number of its points of intersection with the parabolas described above (see Assertion 10) and the line \(k=0\) (see Assertion 11). After that the rest of the section is devoted to the study of the mutual interposition of the found "singular" points. The final result can be formulated as follows.
**Lemma 5**: _The functions \(f_{k},f_{r},f_{m},f_{t}\) and \(f_{l}\) given by the formulas (12), (13), (14), (15) and (16) respectively divide the area \(\{b>0,a>2\sqrt{\varkappa}b\}\) into \(9\) sub-areas. For each of these sub-areas the cusps and the points of intersection and tangency of the line \(k=0\), the parabolas (10), (11) and the curve (9) are located on these four curves as it is shown in Fig. 3-21._
We begin with a description of the curve (9).
**Assertion 9**: _Let \(\varkappa\neq 0\). Then for any \(a,b\in\mathbb{R}\), \(b\neq 0\), the curve (9) has one cusp point_
\[z_{\mbox{\scriptsize cusp}}=\sqrt[3]{b^{2}c_{1}^{2}} \tag{32}\]
_and two points of local extrema_
\[z_{\mbox{\tiny+ext}}=\frac{|b|}{\sqrt{\varkappa}}\qquad\mbox{ and }\qquad z_{\mbox{\tiny-ext}}=-\frac{|b|}{\sqrt{\varkappa}}. \tag{33}\]
_The point \(z_{\mbox{\tiny-ext}}\) is a local minimum for any values of the parameters \(a\) and \(b\). If \(b>\varkappa^{3/2}c_{1}^{2}\), then the point \(z_{\mbox{\tiny+ext}}\) is a local maximum. If \(b<\varkappa^{3/2}c_{1}^{2}\), then the point \(z_{\mbox{\tiny+ext}}\) is a local minimum. (If \(b=\varkappa^{3/2}c_{1}^{2}\), then the point \(z_{\mbox{\tiny+ext}}\) coincides with the cusp \(z_{\mbox{\scriptsize cusp}}\).) In other words, the function \(k(z)\) monotonically increases between the points \(z_{\mbox{\tiny+ext}}\) and \(z_{\mbox{\scriptsize cusp}}\) and monotonically decreases on the remaining parts of the ray \(z>0\)._
_The graph of the corresponding function is convex upward for \(z<z_{\mbox{\scriptsize cusp}}\) and convex downward for \(z>z_{\mbox{\scriptsize cusp}}\)._
_As \(z\to\pm\infty\) the curve (9) asymptotically tends to the line_
\[k=-2\varkappa c_{1}^{2}h+(4ac_{1}^{2}+\varkappa^{2}c_{1}^{4}).\]
_Moreover, as \(z\to\pm 0\) both functions \(h(z)\) and \(k(z)\) simultaneously tend to \(+\infty\) and besides_
\[\frac{k(z)}{h^{2}(z)}\underset{z\to\pm 0}{\longrightarrow}1.\]
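For completeness (this step is not spelled out above), both limit statements follow directly from the parametrization of the curve (9), with \(h(z)=\frac{b^{2}c_{1}^{2}}{z^{2}}+2z\) and \(k(z)\) as written out in the proof of Assertion 11 below:

\[h(z)=2z+O(z^{-2}),\qquad k(z)=-4\varkappa c_{1}^{2}z+4ac_{1}^{2}+\varkappa^{2}c_{1}^{4}+O(z^{-1})=-2\varkappa c_{1}^{2}\,h(z)+4ac_{1}^{2}+\varkappa^{2}c_{1}^{4}+O(z^{-1})\]

as \(z\to\pm\infty\), while as \(z\to\pm 0\) the leading terms are \(h(z)\sim\frac{b^{2}c_{1}^{2}}{z^{2}}\) and \(k(z)\sim\frac{b^{4}c_{1}^{4}}{z^{4}}=h^{2}(z)\).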
We now describe the points of intersection of the curve (9) with the other curves: with the parabolas (10) and (11) and the line \(k=0\). We start with the points of intersection with the parabolas. The proof of the following assertion is by direct computation.
**Assertion 10**: _Let \(\varkappa\neq 0\) and \(b\neq 0\). Then the curve (9) and the left parabola (10) have two points of intersection and one point of tangency. For the points of intersection the corresponding values of the parameter \(z_{+l}\) and \(z_{-l}\) are given by the relation_
\[z^{2}=\frac{a+\sqrt{a^{2}-4\varkappa b^{2}}}{2}c_{1}^{2}. \tag{34}\]
_The tangency point corresponds to the value of the parameter_
\[z_{lt}=\frac{a-\sqrt{a^{2}-4\varkappa b^{2}}}{2\varkappa}. \tag{35}\]
_Analogously, the curve (9) and the right parabola (11) have two points of intersection with the corresponding values of the parameter \(z_{+r}\) and \(z_{-r}\) given by the relation_
\[z^{2}=\frac{a-\sqrt{a^{2}-4\varkappa b^{2}}}{2}c_{1}^{2} \tag{36}\]
_and one point of tangency corresponding to the value of the parameter_
\[z_{rt}=\frac{a+\sqrt{a^{2}-4\varkappa b^{2}}}{2\varkappa}. \tag{37}\]
**Remark 10**: _In Assertion 10 (and further in the text) we assume that \(z_{+l}>0\) and \(z_{+r}>0\). As a consequence \(z_{-l}<0\) and \(z_{-r}<0\)._
Now let us find the number of intersection points of the line \(k=0\) and the curve (9).
**Assertion 11**: _Suppose that \(\varkappa>0\). Then the number of intersection points of the line \(k=0\) and the curve (9) depends on the values of the parameters \(a\) and \(b\) as follows. Consider the function_
\[f_{k}(b)=\frac{3b^{4/3}+6\varkappa b^{2/3}c_{1}^{4/3}-\varkappa^{2}c_{1}^{8/3} }{4c_{1}^{2/3}}.\]
1. _Assume that_ \(b>\varkappa^{3/2}c_{1}^{2}\)_. If_ \(a<f_{k}(b)\)_, then the line_ \(k=0\) _and the curve (_9_) have exactly_ \(3\) _points of intersection. If_ \(a>f_{k}(b)\)_, then there is only_ \(1\) _point of intersection and if_ \(a=f_{k}(b)\)_, then there are_ \(2\) _points of intersection._
2. _Assume that_ \(0<b<\varkappa^{3/2}c_{1}^{2}\)_. If_ \(a<f_{k}(b)\)_, then the line_ \(k=0\) _and the curve (_9_) have exactly_ \(3\) _points of intersection. If_ \(a>f_{k}(b)\)_, then there is only_ \(1\) _point of intersection and if_ \(a=f_{k}(b)\)_, then there are_ \(2\) _points of intersection._
3. _If_ \(b=\varkappa^{3/2}c_{1}^{2}\)_, then the line_ \(k=0\) _and the curve (_9_) have exactly_ \(1\) _point of intersection for any value of the parameter_ \(a\)_._
The results of Assertion 11 are collected together in Table 4.
Proof.: It is clear that there are no points of intersection for \(z<0\) since
\[k(z)=4ac_{1}^{2}-4\varkappa c_{1}^{2}z-\frac{4b^{2}c_{1}^{2}}{z}+\left(\frac{b ^{2}c_{1}^{2}}{z^{2}}-\varkappa c_{1}^{2}\right)^{2}>0.\]
It follows from Assertion 9 that the function \(k(z)\) has two local extrema if \(z>0\):
\[z_{\mbox{\tiny+ext}}=\frac{b}{\sqrt{\varkappa}}\qquad\mbox{and}\qquad z_{\mbox {\tiny cusp}}=\sqrt[3]{b^{2}c_{1}^{2}}.\]
It remains to examine the location of these extrema and whether the value of the function \(k(z)\) at these points is greater than zero. Assertion 11 is proved.
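The count can be reproduced numerically from the expression for \(k(z)\) above; the sketch below (ours, not part of the original proof) does this in the regime \(b^{2}>\varkappa^{3}c_{1}^{4}\), in agreement with Assertion 11 and Table 4:

```python
# Count sign changes of k(z) on the ray z > 0 (k(z) as in the proof above).
import numpy as np

kappa, c1, b = 1.0, 1.0, 8.0              # regime b^2 > kappa^3*c1^4

def k(z, a):
    return (4*a*c1**2 - 4*kappa*c1**2*z - 4*b**2*c1**2/z
            + (b**2*c1**2/z**2 - kappa*c1**2)**2)

f_k = (3*b**(4/3) + 6*kappa*b**(2/3)*c1**(4/3) - kappa**2*c1**(8/3))/(4*c1**(2/3))

def n_roots(a, zmax=100.0, n=10**6):
    v = k(np.linspace(1e-4, zmax, n), a)
    return int(np.sum(v[1:]*v[:-1] < 0))

print(f_k)             # 17.75 for these parameters
print(n_roots(17.0))   # 2*sqrt(kappa)*b = 16 < a < f_k: 3 intersection points
print(n_roots(18.0))   # a > f_k: 1 intersection point
```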
Thus we found the points \(z_{\mbox{\scriptsize cusp}},z_{\pm\mbox{\scriptsize ext}},z_{\pm l},z_{lt},z_{\pm r}\) and \(z_{rt}\) on the curve (9) given by the formulas (32), (33), (34), (35), (36) and (37) respectively. We now determine the order of these points on the \(z\)-axis. It is obvious that
\[z_{-l}<z_{\mbox{\tiny-ext}}<z_{-r}<0,\]
and that the value of the parameter \(z\) for the other described points is greater than zero. The next assertion is proved by direct calculation.
**Assertion 12**.: _Let \(\varkappa>0\). Then, depending on the values of the parameters \(a\) and \(b\), the points \(z_{\mbox{\scriptsize cusp}},z_{+\mbox{\scriptsize ext}},z_{+l},z_{+r},z_{lt}\) and \(z_{rt}\) (described in Assertions 9 and 10) are arranged on the ray \(z>0\) as it is described in Tables 5 and 6. Here the functions \(f_{r}(b)\) and \(f_{m}(b)\) are given by the formulas (13) and (14) respectively._
We now describe when three curves intersect at one point.
**Assertion 13**.: _Suppose that \(\varkappa>0\), \(b\neq 0\) and \(a^{2}-4\varkappa b^{2}>0\). Then the line \(k=0\), the curve (9) and the right parabola (11) can not intersect at one point. The line \(k=0\), the curve (9) and the left parabola (10) intersect at one point if and only if_
\[a=\left(\frac{\varkappa c_{1}^{2}+t^{2}}{2c_{1}}\right)^{2}+\varkappa t^{2}, \qquad b=t\left(\frac{\varkappa c_{1}^{2}+t^{2}}{2c_{1}}\right) \tag{38}\]
_for some \(t\in\mathbb{R}\)._
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline & \(0<b^{2}<\varkappa^{3}c_{1}^{4}\) & \(b^{2}=\varkappa^{3}c_{1}^{4}\) & \(b^{2}>\varkappa^{3}c_{1}^{4}\) \\ \hline \hline \(a>f_{k}(b)\) & 1 & 1 & 1 \\ \hline \(a=f_{k}(b)\) & 2 & 1 & 2 \\ \hline \(a<f_{k}(b)\) & 3 & 1 & 3 \\ \hline \end{tabular}
\end{table}
Table 4: Number of intersection points of the curve (9) and the line \(k=0\).
Note that in the formula (38) the parameter \(a\) is uniquely determined by \(b\), therefore the formula (38) defines a function, which we denoted by \(a=f_{t}(b)\) (see the formula (15)).
_of Assertion 13_. It is not hard to check that under the conditions of the assertion the points of rank 1 that belong to the families 1, 2 and also to one of the families 3, 4, 5 and 6 are the points
\[x_{1}=\frac{\varkappa c_{1}^{2}+J_{1}^{2}}{2c_{1}},\quad J_{2}=0,\quad J_{3}=0,\quad x_{2}=0,\quad x_{3}=0.\]
For these points the equality (38) holds with \(t=J_{1}\). Therefore, if the equality (38) holds, then the curves intersect at one point.
We now show that if three curves intersect at one point, then the equality (38) holds. It is not hard to see that the only point of the curve (9) that can be a point of triple intersection is its point of intersection with the left parabola \(z_{+l}\) (it also easily follows from geometrical considerations). Since by assumption the curves intersect at one point, the equality
\[\varkappa c_{1}^{2}+\frac{a}{\varkappa}-\frac{\sqrt{a^{2}-4\varkappa b^{2}}}{ \varkappa}=\frac{b^{2}c_{1}^{2}}{z_{+l}^{2}}+2z_{+l}\]
holds. Substituting the equality (34) we get
\[(z-\varkappa c_{1}^{2})^{2}=\sqrt{a^{2}-4\varkappa b^{2}}c_{1}^{2}. \tag{39}\]
If we put
\[z=\frac{t^{2}+\varkappa c_{1}^{2}}{2},\]
where \(\mathrm{sgn}(t)=\mathrm{sgn}(b)\), then we get
\[\sqrt{a^{2}-4\varkappa b^{2}}=\frac{(t^{2}-\varkappa c_{1}^{2})^{2}}{4c_{1}^{ 2}}. \tag{40}\]
\begin{table}
\begin{tabular}{|l|l|} \hline \(a>f_{m}(b)\) & \(z_{rt}>z_{+l}>z_{\mathrm{cusp}}>z_{+\mathrm{ext}}>z_{+r}>z_{lt}\) \\ \hline \(a=f_{m}(b)\) & \(z_{rt}=z_{+l}>z_{\mathrm{cusp}}>z_{+r}>z_{lt}\) \\ \hline \(a=f_{r}(b)\) & \(z_{rt}>z_{+l}=z_{+\mathrm{ext}}>z_{\mathrm{cusp}}>z_{+r}=z_{lt}\) \\ \hline \(a<f_{r}(b)\) & \(z_{rt}>z_{+\mathrm{ext}}>z_{+l}>z_{\mathrm{cusp}}>z_{lt}>z_{+r}\) \\ \hline \end{tabular}
\end{table}
Table 6: Interposition of points of the curve (9) for \(b^{2}>\varkappa^{3}c_{1}^{4}\).
\begin{table}
\begin{tabular}{|l|l|} \hline \(a>f_{m}(b)\) & \(z_{rt}>z_{+l}>z_{\mathrm{cusp}}>z_{+\mathrm{ext}}>z_{+r}>z_{lt}\) \\ \hline \(a=f_{m}(b)\) & \(z_{rt}=z_{+l}>z_{\mathrm{cusp}}>z_{+\mathrm{ext}}=z_{+r}>z_{lt}\) \\ \hline \(f_{r}(b)<a<f_{m}(b)\) & \(z_{+l}>z_{rt}>z_{\mathrm{cusp}}>z_{+r}>z_{\mathrm{ext}}>z_{lt}\) \\ \hline \(a=f_{r}(b)\) & \(z_{+l}>z_{rt}=z_{\mathrm{cusp}}=z_{+r}>z_{+\mathrm{ext}}>z_{lt}\) \\ \hline \(a<f_{r}(b)\) & \(z_{+l}>z_{+r}>z_{\mathrm{cusp}}>z_{rt}>z_{+\mathrm{ext}}>z_{lt}\) \\ \hline \end{tabular}
\end{table}
Table 5: Interposition of points of the curve (9) for \(0<b^{2}<\varkappa^{3}c_{1}^{4}\).
Substituting (39) and (40) in the formula (34) we obtain the required expression for \(a\) from the formula (38). This immediately implies the formula (38) for \(b\). Assertion 13 is proved.
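The "if" direction can also be confirmed numerically. In the sketch below (ours, not part of the original proof) the values of \(h\) and \(k\) on the curve (9) are computed from the parametrization used above:

```python
# With a, b given by (38), the curve (9), the line k = 0 and the left
# parabola (10) meet at one point.
import numpy as np

kappa, c1, t = 1.0, 1.0, 2.0
a = ((kappa*c1**2 + t**2)/(2*c1))**2 + kappa*t**2
b = t*(kappa*c1**2 + t**2)/(2*c1)

D = np.sqrt(a**2 - 4*kappa*b**2)
z = np.sqrt((a + D)/2)*c1                    # the intersection value z_{+l}, eq. (34)
h = b**2*c1**2/z**2 + 2*z                    # h(z) on the curve (9)
k = (4*a*c1**2 - 4*kappa*c1**2*z - 4*b**2*c1**2/z
     + (b**2*c1**2/z**2 - kappa*c1**2)**2)   # k(z) on the curve (9)
h_left = kappa*c1**2 + (a - D)/kappa         # where (10) meets the line k = 0

assert np.isclose(k, 0.0) and np.isclose(h, h_left)
```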
It follows from Assertions 9, 10, 11, 12 and 13 that the interposition of the curves from Lemma 1 qualitatively differs depending on the position of the parameters \(a\) and \(b\) with respect to the functions \(f_{k},f_{r},f_{m}\) and \(f_{t}\), given by the formulas (12), (13), (14) and (15) respectively.
Since in the case \(\varkappa>0\) the values of the parameters \(a\) and \(b\) always satisfy the inequality \(a^{2}-4\varkappa b^{2}\geq 0\) we also consider the function
\[f_{l}(b)=2\sqrt{\varkappa}|b|.\]
We now describe how the graphs of the functions \(f_{k},f_{r},f_{m},f_{t}\) and \(f_{l}\) are positioned relative to each other. The next assertion is proved by direct calculation.
**Assertion 14**.: _Let \(\varkappa>0\). Denote by \(\alpha_{0}\) the only real root of the equation \(x^{3}+x^{2}+x-1=0\). The graphs of functions \(f_{k},f_{r},f_{m},f_{t}\) and \(f_{l}\) given by the formulas (12), (13), (14), (15) and (16) respectively are symmetrical about the axis \(b=0\) and in the case \(b>0\) they are positioned as it is shown in Fig. 1 and 2. In other words, for \(b>0\) all the graphs intersect at the point_
\[M=(\varkappa^{3/2}c_{1}^{2},\ 2\varkappa^{2}c_{1}^{2}),\]
_and the graphs of functions \(f_{r}\) and \(f_{t}\) intersect at the point_
\[N=(\alpha_{0}^{3}\varkappa^{3/2}c_{1}^{2},\ f_{r}(\alpha_{0}^{3}\varkappa^{3/ 2}c_{1}^{2})).\]
_Further, if \(0<b^{2}<\alpha_{0}^{6}\varkappa^{3}c_{1}^{4}\), then_
\[f_{k}<f_{l}<f_{r}<f_{t}<f_{m}.\]
_If \(\alpha_{0}^{6}\varkappa^{3}c_{1}^{4}<b^{2}<\varkappa^{3}c_{1}^{4}\), then_
\[f_{k}<f_{l}<f_{t}<f_{r}<f_{m}.\]
_And if \(b^{2}>\varkappa^{3}c_{1}^{4}\), then_
\[f_{l}<f_{t}<f_{k}<f_{r}<f_{m}.\]
_of Lemma 5._ Lemma 5 easily follows from the earlier-proved Assertions 9, 10, 11, 12, 13 and simple geometric considerations.
### Critical points of rank 0
In this section we prove Lemmas 2 and 4 about types of critical points of rank 0. First we describe all critical points of the momentum mapping of rank 0 and then we find their types and images under the momentum mapping. Recall that on non-singular orbits (that is, on orbits \(M_{a,b}\) such that \(a^{2}-4\varkappa b^{2}>0\)) the points of rank 0 are precisely the points at which both Hamiltonian vector fields \(X_{H}\) and \(X_{K}\) vanish. Let us emphasize that the following statement holds for any value of the parameter \(\varkappa\in\mathbb{R}\).
**Assertion 15**.: _The set of points where both Hamiltonian vector fields \(X_{H}\) and \(X_{K}\) with Hamiltonians (7) and (8) respectively vanish is the union of the following (two-parameter) families of points in \(\mathbb{R}^{6}(J,x)\):_
1. \((J_{1},J_{2},0,\varkappa c_{1},0,0),\)__
2. \((J_{1},0,0,x_{1},0,0),\)__
3. \((J_{1},0,J_{3},\frac{\varkappa c_{1}^{2}+J_{1}^{2}}{2c_{1}},0,\frac{J_{1}J_{3}}{c_{1}}).\)__
Proof.: The vector field \(X_{H}\) has the following coordinates:
\[\{J_{1},H\}=-2J_{2}J_{3},\qquad\{J_{2},H\}=2J_{1}J_{3}-2c_{1}x_{3}\] \[\{J_{3},H\}=2c_{1}x_{2},\qquad\{x_{1},H\}=2J_{2}x_{3}-4J_{3}x_{2}\] \[\{x_{2},H\}=4J_{3}x_{1}-2J_{1}x_{3}-2\varkappa c_{1}J_{3},\qquad \{x_{3},H\}=2J_{1}x_{2}-2J_{2}x_{1}+2\varkappa c_{1}J_{2}\]
It is easy to see that \(x_{2}=0\) and that either \(J_{2}=0\) or \(J_{3}=0,x_{3}=0\). The rest of the proof is by exhaustion.
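As a numerical cross-check (a sketch of ours, not part of the original proof), one can verify that \(X_{H}\) vanishes on all three families of Assertion 15 directly from the coordinate expressions above; \(X_{K}\) is not checked here since its components are not listed in this excerpt.

```python
# X_H vanishes on the three (two-parameter) families of Assertion 15.
import numpy as np

kappa, c1 = 0.7, 1.3                    # sample parameter values
J1, J2, J3, x1 = 0.4, -1.1, 0.9, 2.5    # free parameters of the families

def X_H(J1, J2, J3, x1, x2, x3):
    return np.array([
        -2*J2*J3,                           # {J1, H}
        2*J1*J3 - 2*c1*x3,                  # {J2, H}
        2*c1*x2,                            # {J3, H}
        2*J2*x3 - 4*J3*x2,                  # {x1, H}
        4*J3*x1 - 2*J1*x3 - 2*kappa*c1*J3,  # {x2, H}
        2*J1*x2 - 2*J2*x1 + 2*kappa*c1*J2,  # {x3, H}
    ])

families = [
    (J1, J2, 0.0, kappa*c1, 0.0, 0.0),                            # family 1
    (J1, 0.0, 0.0, x1, 0.0, 0.0),                                 # family 2
    (J1, 0.0, J3, (kappa*c1**2 + J1**2)/(2*c1), 0.0, J1*J3/c1),   # family 3
]
for p in families:
    assert np.allclose(X_H(*p), 0.0)
```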
of Lemmas 2 and 4.: It can be proved by direct calculation that the images of the points of rank \(0\) lie on these curves. It is possible to explicitly find all the points from the \(1\) and \(2\) series on each orbit \(M_{a,b}\). We now prove the statement about the number of points for the \(3\) series on the orbit \(M_{a,b}\). The case \(b=0\) is trivial, hence we assume that \(b\neq 0\). It is not hard to verify that the set of points (or rather the corresponding values of the parameters \(J_{1}\) and \(J_{3}\)) is given by the following system of equations
\[J_{1}^{5}-2\varkappa c_{1}^{2}J_{1}^{3}-4bc_{1}J_{1}^{2}+\left(4ac_{1}^{2}+ \varkappa^{2}c_{1}^{4}\right)J_{1}-4\varkappa bc_{1}^{3}=0\] \[J_{3}^{2}=\frac{2bc_{1}-\varkappa c_{1}^{2}J_{1}-J_{1}^{3}}{2J_{ 1}}. \tag{41}\]
It is clear that there are exactly two points in the preimage for each point in the image (the coordinates \(J_{1}\) for these points coincide and the coordinates \(J_{3}\) are opposite). In order to find the exact number of solutions let us first divide the set of parameters \((a,b)\) into areas for which this number is constant and then solve this problem for each of the areas. Let us slightly simplify the equation (41) by putting
\[\hat{b}=\frac{b}{\varkappa^{3/2}c_{1}^{2}},\qquad\hat{a}=\frac{a}{\varkappa^{ 2}c_{1}^{2}},\qquad s=\frac{J_{1}}{\varkappa^{1/2}c_{1}}.\]
Then the number of points in the image is equal to the number of solutions of the equation
\[s^{5}-2s^{3}-4\hat{b}s^{2}+(4\hat{a}+1)s-4\hat{b}=0 \tag{42}\]
on the segment
\[0\leq s+s^{3}\leq 2\hat{b}. \tag{43}\]
Note that the function \(s+s^{3}\) is monotonically increasing, so the inequality (43) defines a segment. Note that if we vary the parameters \(a\) and \(b\), then the number of solutions can change only in the following cases:
1. The equation (42) has multiple roots.
2. One of the endpoints of the segment (43) is a solution of the equation (42). (It is easy to see that the point \(0\) can not be a root of the equation (42), therefore we only need to check the point \(s+s^{3}=2\hat{b}\)).
It can be verified that the point \(s+s^{3}=2\hat{b}\) is a solution of the equation (42) if and only if \(a=f_{t}(b)\) (recall that the function \(f_{t}(b)\) is defined by the formula (15) and that \(a=f_{t}(b)\) if and only if three curves of the bifurcation diagram intersect at one point, see Assertion 13). It can also be verified that in a sufficiently small neighbourhood of any point of the curve \(a=f_{t}(b)\) (except, maybe, for the points at which the equation (42) has multiple roots) the points in the area \(a>f_{t}(b)\) have one more point in the preimage than the points in the area \(a<f_{t}(b)\).
Further, it is easy to verify that the equation (42) has multiple roots in the following cases: either \(a=\pm 2\sqrt{\varkappa}b\) or \(a=f_{k}(b)\), where \(f_{k}(b)\) is given by the formula (12). In the case \(a=2\sqrt{\varkappa}|b|\) the only multiple root is \(s=1\) of multiplicity \(2\). It follows that for \(b^{2}<\varkappa^{3}c_{1}^{4}\) and \(2\sqrt{\varkappa}|b|<a<f_{t}(b)\) there are no points from the \(3\) series in the preimage and for \(b^{2}>\varkappa^{3}c_{1}^{4}\) and \(2\sqrt{\varkappa}|b|<a<f_{t}(b)\) there are exactly two points from the \(3\) series in the preimage. Since the number of points in the preimage increases by \(1\) when passing through the curve \(a=f_{t}(b)\) by increasing the parameter \(a\) (for a fixed \(b\)) we conclude that in the area \(a>f_{t}(b),a>f_{k}(b)\) there is exactly one point in the preimage and there are three points in the preimage for the area \(f_{k}(b)>a>f_{t}(b)\).
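A short numerical check (ours) of the multiple root used in this argument: on the boundary \(a=2\sqrt{\varkappa}|b|\), that is \(\hat{a}=2\hat{b}\), the point \(s=1\) (which lies on the segment (43) whenever \(\hat{b}\geq 1\)) is a double root of (42):

```python
# s = 1 is a root of (42) of multiplicity 2 when a_hat = 2*b_hat.
import numpy as np

b_hat = 2.0
c = [1.0, 0.0, -2.0, -4*b_hat, 4*(2*b_hat) + 1.0, -4*b_hat]   # a_hat = 2*b_hat
assert abs(np.polyval(c, 1.0)) < 1e-12               # s = 1 is a root
assert abs(np.polyval(np.polyder(c), 1.0)) < 1e-12   # of multiplicity 2
```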
Thus the statement about the number of points on each orbit is proved. It remains to prove the statement about their types. It is not hard to do using the following criterion (for more details see [4]).
**Assertion 16**.: _Consider a symplectic manifold \((M^{4},\omega)\) and suppose that \(x_{0}\in(M,\omega)\) is a critical point of rank \(0\) for an integrable Hamiltonian system with Hamiltonian \(H\) and integral \(K\). Then the point \(x_{0}\) is nondegenerate if and only if the linearizations \(A_{H}\) and \(A_{K}\) of the Hamiltonian vector fields \(X_{H}\) and \(X_{K}\) at the point \(x_{0}\) satisfy the following properties:_
1. _the operators_ \(A_{H}\) _and_ \(A_{K}\) _are linearly independent,_
2. _there exists a linear combination_ \(\lambda A_{H}+\mu A_{K}\) _such that all its eigenvalues are different and not equal to_ \(0\)_._
_Moreover, if the point \(x_{0}\) is nondegenerate, then its type is completely determined by the spectrum of any linear combination \(\lambda A_{H}+\mu A_{K}\) that has no zero eigenvalues. More precisely the type of the point depends on the type of the spectrum as follows._
* _If the spectrum of a linear combination_ \(\lambda A_{H}+\mu A_{K}\) _has the form_ \(\alpha,-\alpha,\beta,-\beta\)_, where_ \(\alpha,\beta\in\mathbb{R}-\{0\}\)_, then the point_ \(x_{0}\) _is a critical point of saddle-saddle type._
* _If the spectrum has the form_ \(i\alpha,-i\alpha,i\beta,-i\beta\)_, where_ \(\alpha,\beta\in\mathbb{R}-\{0\}\)_, then the point_ \(x_{0}\) _is a critical point of center-center type._
* _If the spectrum has the form_ \(i\alpha,-i\alpha,\beta,-\beta\)_, where_ \(\alpha,\beta\in\mathbb{R}-\{0\}\)_, then the point_ \(x_{0}\) _is a critical point of center-saddle type._
* _If the spectrum has the form_ \(\alpha+i\beta,\alpha-i\beta,-\alpha+i\beta,-\alpha-i\beta\)_, where_ \(\alpha,\beta\in\mathbb{R}-\{0\}\)_, then the point_ \(x_{0}\) _is a critical point of focus-focus type._
We compute the spectrum of the linearization of the Hamiltonian vector field \(X_{H}\) with Hamiltonian (7) using Assertion 7.
**Assertion 17**: _For all three series of critical points of rank \(0\) from Assertion 15 the spectrum of the linearization of the Hamiltonian vector field \(X_{H}\) with Hamiltonian (7) contains a zero eigenvalue with multiplicity \(2\). The spectrum also contains the following elements:_
1. _For the 1 series of critical points of rank_ \(0\) _the spectrum also contains the eigenvalues_ \(\pm\sqrt{2}\sqrt{\alpha\pm\sqrt{\alpha^{2}+4\varkappa\beta^{2}}}\)_, where_ \[\alpha=\varkappa c_{1}^{2}-J_{1}^{2}-J_{2}^{2}=2\varkappa c_{1}^{2}-\frac{a}{\varkappa},\qquad\beta=c_{1}J_{2},\qquad\varkappa\beta^{2}=c_{1}^{2}a-\varkappa^{2}c_{1}^{4}-\frac{b^{2}}{\varkappa}.\]
2. _For the 2 series the spectrum also contains the eigenvalues_ \[\pm 2\sqrt{c_{1}(x_{1}-\varkappa c_{1})}\quad\text{and}\quad\pm 2\sqrt{-J_{1}^{ 2}+2c_{1}x_{1}-\varkappa c_{1}^{2}}\]
3. _For the 3 series the spectrum also contains the eigenvalues_ \[\pm 4iJ_{3}\quad\text{and}\quad\pm\sqrt{2}\sqrt{J_{1}^{2}-2J_{3}^{2}-\varkappa c _{1}^{2}}\]
We further note that in order to prove the nondegeneracy of the points it suffices to verify that all the 4 eigenvalues of the operator \(A_{H}\) are not equal to 0 and that there exists \(\lambda\in\mathbb{R}\) such that the spectrum of the operator \(A_{F}\), where \(F=K+\lambda H\), has exactly 2 non-zero eigenvalues. Indeed, then the restrictions of the operators \(A_{H}\) and \(A_{K}\) are linearly independent since otherwise the spectrum of any linear combination is obtained from the spectrum of the operator \(A_{H}\) by a multiplication by a constant (it follows from Assertion 7). Moreover, there is no need to verify that all the eigenvalues of the operator \(A_{H}\) are distinct since in this case the spectrum of some linear combination \(A_{H}+\mu A_{F}\) has 4 different non-zero eigenvalues.
For the series 2 and 3 the required coefficient of proportionality \(\lambda\) has already been found in Assertion 8 (we can put \(\lambda=2\left(\varkappa c_{1}^{2}-J_{1}^{2}\right)\) and \(\lambda=0\) for the series 2 and 3 respectively). For the series 1 we can use the fact that \(k=\frac{\lambda^{2}}{4}\) for the critical points in the preimage of points of the parabolas (10) and (11) and formally put \(\lambda=2\sqrt{k}\). Then for the function \(F=K+2\sqrt{k}\,H\) the spectrum of the operator \(A_{F}\) consists only of 4 zeroes and
\[\pm 4\sqrt{2}\sqrt{\frac{4\varkappa b^{2}-a^{2}}{\varkappa^{2}}\left(\frac{a^ {2}-2\varkappa c_{1}^{2}}{\varkappa}+\sqrt{\frac{a^{2}-4\varkappa b^{2}}{ \varkappa^{2}}}\right)}.\]
It is now not hard to check the nondegeneracy of points from Lemmas 2 and 4. (Note that the degenerate points appear only at the boundaries of the areas of the plane \(\mathbb{R}^{2}(a,b)\) and their degenerations are related to the restructuring of the bifurcation diagrams in a neighbourhood of these points.)
The types of critical points can be easily found using Assertions 16 and 17. Lemmas 2 and 4 are proved.
### Proof of Theorems 1, 2 and 3
Theorems 1, 2 and 3 easily follow from earlier-proved Lemmas 1, 2, 3, 4 and the following simple geometric considerations.
1. First, we use the fact that the orbits of the coadjoint representation \(M_{a,b}\) of the Lie algebra so(4) are compact. In particular, since the image of a compact set is compact, it allows us to discard all unbounded domains during the construction of bifurcation diagrams.
2. Second, we use some well known results about nondegenerate singularities. All the required statements are described in detail in the book [4] so we only briefly recall them.
    1. There exists only one, up to Liouville equivalence, singular point of center-center type. The bifurcation diagram in a neighbourhood of a center-center singularity is the union of two curves emanating from this point. The loop molecule of the singularity has the form \(A-A\) and the mark \(r=0\).
    2. Any singularity of center-saddle type is Liouville equivalent to a direct product of a saddle atom and the elliptic atom \(A\). In a neighbourhood of an image of a center-saddle point the bifurcation diagram is the union of a curve passing through this point and another curve emanating from this point. The loop molecule is obtained from the corresponding saddle atom by adding the atom \(A\) at the end of each edge, all marks \(r=\infty\). (For example the loop molecule of the point \(y_{12}\) in Table 2 has such form.)
    3. There exist exactly 4 singularities of saddle-saddle type of complexity 1 (that is, containing exactly one singular point on the leaf). These singularities are completely determined by their loop molecules. (The required two loop molecules for the saddle-saddle singularities are given in Table 2 for the points \(y_{3}\) and \(y_{7}\).)
3. Third, since we consider a two-parameter family of orbits \(M_{a,b}\) we can use the fact that some invariants of the system continuously depend on the parameters \(a\) and \(b\) (for example a small perturbation of the parameters does not change the number of tori in the preimage of regular points from the "same area").
We also use stability of bifurcations of types \(A\), \(B\) and \(A^{*}\).
In this section we do not consider the classical Kovalevskaya case (\(\varkappa=0\)) because it was considered earlier in the paper [10] and was described in detail in the book [4].
Also, we do not consider the case \(b=0\) in much detail: the proof of the statements in this case is similar to the proof in the case \(b\neq 0\).
_of Theorems 1 and 2_. Previously it was shown that the required bifurcation diagrams of the momentum mapping are contained in the union of the curves described in Lemma 1. Therefore, to prove the theorems it remains to discard several parts of the described curves and determine the bifurcations for the remaining arcs. Let us show how it can be done using the previously obtained information about the types of critical points of rank \(0\) (see Lemmas 2 and 4).
We prove Theorems 1 and 2 for the values of the parameters \(a\) and \(b\) from the area \(I\), that is in the case \(|b|>\varkappa^{3/2}c_{1}^{2}\), \(2\sqrt{\varkappa}b<a<f_{t}(b)\) (where the function \(f_{t}(b)\) is given by the formula (15)). The remaining cases are treated similarly.
In this case the bifurcation diagram of the momentum mapping should have the form shown in Fig. 3, 4, 5. Denote by \(P\), \(Q\) and \(R\) the point of intersection of the curve (9) with the line \(k=0\), the point of intersection of the parabolas (10) and (11) and the point of intersection of the right parabola (11) with the line \(k=0\) respectively.
First of all, we discard all unbounded domains (since the orbits of \(so(4)\) are compact) and all areas lying below the line \(k=0\) (obviously, \(k\geq 0\)). Then notice that the "curvilinear triangles" \(y_{12}y_{13}P\) and \(z_{1}z_{2}R\) do not belong to the image of the momentum mapping. Indeed, the point \(z_{1}\) is the rightmost point in the image of the coadjoint orbits because all the points in the preimage of the rightmost point on the line \(k=0\) must be of center-center type and there are no images of critical points of rank \(0\) to the right of the point \(z_{1}\). Similarly, if the point \(P\) belonged to the image of the momentum mapping, then it would be the leftmost and the lowest point of the image in some neighbourhood of it. Therefore, there would have to be points of rank \(0\) in its preimage, which is false.
Further, since we know the types of all points of rank \(0\) we can easily determine the bifurcations corresponding to all arcs that contain an image of a critical point of rank \(0\). In this case the only ambiguity occurs for the point \(y_{10}\). This point is the image of two points of center-center type, thus a priori there are \(3\) variants: for the area lying to the left of the point \(y_{10}\) there are either \(4\), or \(2\), or \(0\) tori in the preimage. Let us show that only the first case is possible. If we increase the parameter \(a\), then in the case \(f_{t}(b)<a<f_{k}(b)\) there appears the point \(y_{7}\) of saddle-saddle type, therefore there have to be \(4\) tori in the preimage of a point in the neighbouring chamber. For the reasons of continuity in the case \(a<f_{t}(b)\) there should also be \(4\) tori for the chamber to the left of the point \(y_{10}\).
It remains to determine whether the "curvilinear triangle" \(y_{8}z_{2}Q\) and the arc \(y_{8}y_{13}\) belong to the bifurcation diagram and what bifurcations correspond to these arcs. The curve \(y_{8}Q\) does not belong to the bifurcation diagram, and the curve \(z_{2}Q\) belongs, since in this case there are no critical points of rank \(0\) in the preimage of the point \(Q\). Here we use the following simple assertion.
**Assertion 18**: _Let \((M^{4},\omega)\) be a compact symplectic manifold, \(H,K\) be two commuting (with respect to the Poisson bracket) functions on \(M^{4}\), which are independent almost everywhere. Suppose that in a neighbourhood of a point \(x\in\mathbb{R}^{2}\) the bifurcation diagram has the same structure as for the critical points of rank \(0\) of center-center type (that is, two arcs from the boundary of the image intersect transversely) or of center-saddle type (that is, an arc transversely intersects a smooth arc from the boundary of the image of the momentum mapping). Then there is a critical point of rank \(0\) in the preimage of the point \(x\)._
It remains to show that the arcs \(y_{13}y_{8}\) and \(y_{8}z_{2}\) belong to the bifurcation diagram and that the corresponding bifurcations have types \(2B\) and \(2A^{*}\) respectively. Let us start with the arc \(y_{8}z_{2}\). We already know (see Assertion 4) that the preimage of the points of this arc is either empty or consists of two critical circles, and in the latter case the symmetry \((J_{3},x_{3})\to(-J_{3},-x_{3})\) interchanges these circles. First of all we show that if the preimage consists of two critical circles, then the bifurcations corresponding to each of them have type \(A^{*}\). Since there are two circles, they transform two tori into two, and the system has a symmetry, the only possible bifurcation apart from \(2A^{*}\) is the bifurcation \(C_{2}\). In order to show that the latter case can not occur let us consider the isoenergetic surfaces \(H=\mathrm{const}\) for the values of energy \(H\) close to \(h_{0}=\frac{b^{2}c_{1}^{2}}{z_{rt}^{2}}+2z_{rt}\), where \(z_{rt}\) is given by the formula (37) (this value of energy corresponds to the point of tangency of the right parabola (11) and the parametric curve (9)). Since there are no points where the Hamiltonian vector field \(X_{H}\) vanishes for the values of \(H\) close to \(h_{0}\), the type of the isoenergetic surface does not change in a neighbourhood of \(h_{0}\). However, once the energy parameter increases past \(h_{0}\) (that is, for \(H>h_{0}\)) the isoenergetic surface is obviously disconnected: its rough molecule consists of two copies of \(A-A\). Therefore, it is disconnected for the lower values of \(H\) as well. However, if the bifurcation for the curve \(y_{8}z_{2}\) had type \(C_{2}\), then the isoenergetic surface would have been connected. Thus the only bifurcation that can correspond to the curve \(y_{8}z_{2}\) is \(2A^{*}\).
Now let us show that the bifurcations for the curve \(y_{8}z_{2}\) really do exist. First of all we notice that for sufficiently large values of the parameter \(a\) (more precisely, for \(a>f_{m}(b)\)) these bifurcations exist (and they are of type \(2A^{*}\)). It suffices to show that the loop molecule of the point \(y_{3}\) has the form shown in Table 2. This is true because there are 3 tori in the preimage of the points to the "left" of the point \(y_{3}\) and there are two tori in the preimage of the points to the "right". In the case \(f_{r}(b)<a<f_{m}(b)\) this easily follows from the analysis of the types of critical points, and in the case \(a>f_{m}(b)\) it follows from the reasons of continuity. Note that for \(a>f_{m}(b)\) the bifurcation for the arc \(y_{3}z_{2}\) has type \(2A^{*}\) since the critical point of rank \(0\) in the preimage of the point \(z_{3}\) has center-saddle type and hence the bifurcation for the curve \(y_{3}z_{3}\) has to be orientable.
From the reasons of continuity it follows that the bifurcations for the curve \(y_{8}z_{2}\) exist in the case \(a<f_{m}(b)\). Let us describe the last transition in more detail. On one hand, the bifurcations of the type \(A^{*}\) are stable, hence they survive under a small perturbation of the parameters \(a\) and \(b\). Therefore, the set of points \((a,b)\) for which the bifurcation for the curve \(y_{8}z_{2}\) exists and has type \(2A^{*}\) is an open subset. On the other hand, the set of points where the Hamiltonian vector fields \(X_{H}\) and \(X_{K}\) are linearly dependent is a closed subset of \(\mathbb{R}^{7}=\mathbb{R}^{7}(\mathbf{J},\mathbf{x},\varkappa)\). Therefore, since the orbits of \(\mathrm{so}(4)\) are compact, the image of all critical points with \(\varkappa>0\) under the mapping \((H,K,a,b,\varkappa):\mathbb{R}^{7}(\mathbf{J},\mathbf{x},\varkappa)\to\mathbb{R}^{5}\) is a closed set. Therefore, the set of points \((a,b)\) for which the bifurcations for the curve \(y_{8}z_{2}\) exist is a closed subset. It follows that these bifurcations exist and have type \(2A^{*}\) for any value of the parameter \(a>2\sqrt{\varkappa}b\) (and \(b^{2}>\varkappa^{3}c_{1}^{4}\)).
Finally we prove that the bifurcation corresponding to the arc \(y_{8}y_{13}\) has type \(2B\). The arc \(y_{8}y_{13}\) belongs to the bifurcation diagram because there are 4 tori over the "bottom" region and 2 tori over the "top" region. The number of tori in the areas can be easily found by a careful examination of the types of critical points of rank 0. For example, 2 tori lie over the "top" region since the point \(y_{12}\) is the image of a single point of center-saddle type and hence the bifurcation for the curve \(y_{12}z_{5}\) has type \(B\) (that is, one torus transforms into two).
We further note that the bifurcation corresponding to the arc \(y_{8}y_{13}\) consists of two identical parts. Moreover, the following statement holds, which follows easily from Assertion 4 and the fact that there are no images of the points of rank 0 in a neighbourhood of the point \(y_{8}\).
**Assertion 19**: _The preimage of a sufficiently small neighbourhood of the singular point \(y_{8}\) consists of two connected components and the symmetry \(\sigma_{3}:(J_{3},x_{3})\to(-J_{3},-x_{3})\) interchanges these components._
Thus there are two identical bifurcations corresponding to the arc \(y_{8}y_{13}\), which transform two tori into four. As previously for the bifurcations \(2A^{*}\), it follows from considerations of continuity that both these bifurcations have type \(B\): for \(a>f_{t}(b)\) the bifurcation corresponding to the arc \(y_{7}y_{8}\) has type \(2B\) (the loop molecule of the saddle-saddle point in the preimage of the point \(y_{7}\) is uniquely determined by the fact that there are 4 tori in the preimage of points in one of the neighbouring chambers).
Thus we determined all the bifurcations. Theorems 1 and 2 are proved.
_of Theorem 3._ The structure of loop molecules for nondegenerate singularities of rank 0 is well known and is described in detail in the book [4]. It remains to prove Theorem 3 for the images of degenerate critical points of rank 1. In almost all cases the loop molecules (without marks) for degenerate critical points can be uniquely determined using the obtained information about the bifurcations and the number of tori for all areas. The ambiguity for the loop molecules of the points \(y_{8}\) and \(y_{9}\) can be easily resolved using the fact that the loop molecules of these points must consist of two identical parts (see Assertion 19). Marks for the degenerate singularities can be found using standard methods ("rule of summation of the marks", considerations of continuity), which are described, for example, in [4] or [21].
## 5 Classical Kovalevskaya case (\(\varkappa=0\))
In this section we show that the bifurcation diagrams for classical Kovalevskaya case defined on the Lie algebra e(3) by the Hamiltonian
\[H=J_{1}^{2}+J_{2}^{2}+2J_{3}^{2}+2x_{1} \tag{44}\]
and the integral
\[K=(J_{1}^{2}-J_{2}^{2}-2x_{1})^{2}+(2J_{1}J_{2}-2x_{2})^{2} \tag{45}\]
can be obtained from the bifurcation diagrams of the integrable Hamiltonian system with Hamiltonian (7) and the first integral (8) (where \(c_{1}=1\)) on the Lie algebra so(4)
by passing to the limit \(\varkappa\to 0\). This limit \(\varkappa\to 0\) preserves types of critical points of rank \(0\), the bifurcations of Liouville tori and the loop molecules of singular points of the momentum mapping.
The structure of bifurcation diagrams for the classical Kovalevskaya case is well known and is described, for example, in [10] and [4]. However, in this section we not only compare the answers but also show how to construct the bifurcation diagram for the classical Kovalevskaya case and compute some of its invariants (more precisely, in this section we determine the bifurcations of Liouville tori) using the obtained information about the Kovalevskaya case on the Lie algebra \(so(4)\), while performing as few additional calculations as possible.
In this section we denote by \(\Sigma(a,b,\varkappa)\) the bifurcation diagram of the momentum mapping for the orbit \(M_{a,b}\) of the Lie algebra for which the corresponding value of the parameter of the pencil is equal to \(\varkappa\).
**Lemma 6**: _Consider arbitrary \(a,b\in\mathbb{R}\) such that \(a>0\). Then a point \(x\) belongs to the bifurcation diagram \(\Sigma(a,b,0)\) if and only if there exists a sequence of points \(x_{n}\in\Sigma(a_{n},b_{n},\varkappa_{n})\) such that \(\lim_{n\to\infty}(a_{n},b_{n},\varkappa_{n})=(a,b,0)\)._
Proof.: In one direction, it follows from Assertion 2 that for any critical point \(z\) on the Lie algebra \(e(3)\) (that is, for a point with the parameter \(\varkappa=0\)) there exists a sequence of critical points \(z_{n}\) on the Lie algebra \(so(4)\) (that is, such that the value of the parameter \(\varkappa>0\)) converging to it. Therefore, the image of the point \(z\) is the limit of the images of points \(z_{n}\).
In the other direction, the proof is by contradiction. Suppose that a point \(x\in\mathbb{R}^{2}\) is regular, that is \(x\not\in\Sigma(a,b,0)\), but there exists a sequence \(x_{n}\in\Sigma(a_{n},b_{n},\varkappa_{n})\) such that \(x_{n}\to x\) and \((a_{n},b_{n},\varkappa_{n})\to(a,b,0)\) as \(n\to\infty\). In order to get a contradiction, we choose a point \(z_{n}\) in the preimage of each point \(x_{n}\) and prove that the sequence \(z_{n}\) contains a convergent subsequence. To do this we show that the sequence \(z_{n}\) is contained in a compact set \(A\subset\mathbb{R}^{7}(\mathbf{J},\mathbf{x},\varkappa)\).
Consider two sufficiently small closed discs \(\overline{D_{1}}\) and \(\overline{D_{2}}\subset\mathbb{R}^{2}\) that contain points \(x\) and \((a,b)\) respectively and a small segment \([0,T]\subset\mathbb{R}\). Then the set
\[A=\{(\mathbf{J},\mathbf{x},\varkappa)\,|\,(H,K,f_{1},f_{2},\varkappa)(\mathbf{J},\mathbf{x},\varkappa)\in\overline{D_{1}}\times\overline{D_{2}}\times[0,T]\}\]
is compact. Indeed, \(A\subset\mathbb{R}^{7}\) is a closed subset since \(\overline{D_{1}}\times\overline{D_{2}}\times[0,T]\) is a closed subset of \(\mathbb{R}^{5}\) and the mapping \((H,K,f_{1},f_{2},\varkappa):\mathbb{R}^{7}(\mathbf{J},\mathbf{x},\varkappa) \to\mathbb{R}^{5}\) is continuous. It remains to prove that the set \(A\) is bounded. Note that the set of numbers \((x_{1},x_{2},x_{3})\) is bounded since the integral \(f_{1}=\varkappa\mathbf{J}^{2}+\mathbf{x}^{2}\) is bounded above and below by some constants. It remains to note that the set of numbers \((J_{1},J_{2},J_{3})\) is also bounded because
\[J_{1}^{2}+J_{2}^{2}+2J_{3}^{2}=H-2c_{1}x_{1}\]
and the right side is bounded since \((H,K)\in\overline{D_{1}}\) for all points from \(A\). Lemma 6 is proved.
It is not hard to get the exact equations for the curves that contain the bifurcation diagrams of the momentum mapping for \(\varkappa=0\). To do this it suffices to pass to the limit \(\varkappa\to 0\) in the equations for the curves (9) and (10) (the right parabola (11) "shifts to the right to infinity" as \(\varkappa\to 0\) thus it has no limit points for \(\varkappa=0\)).
**Lemma 7**: _Let \(\varkappa=0\) and \(b\neq 0\). Then for any non-singular orbit \(M_{a,b}\) (that is, for any orbit such that \(a>0\)) the bifurcation diagram \(\Sigma_{h,k}\) for the integrable Hamiltonian system with Hamiltonian (7) and integral (8) is contained in the union of the following three families of curves on the plane \(\mathbb{R}^{2}(h,k)\):_
1. _The line_ \(k=0\)_;_
2. _The parametric curve_ \[h(z)=\frac{b^{2}c_{1}^{2}}{z^{2}}+2z,\qquad k(z)=4ac_{1}^{2}-\frac{4b^{2}c_{1}^ {2}}{z}+\frac{b^{4}c_{1}^{4}}{z^{4}},\] (46) _where_ \(z\in\mathbb{R}-\{0\}\)_._
3. _The parabola_ \[k=\left(h-\frac{2b^{2}}{a}\right)^{2}.\] (47)
We have an analogous statement for \(b=0\).
**Lemma 8**: _Let \(\varkappa=0\) and \(b=0\). Then for any non-singular orbit \(M_{a,0}\) (that is, for any orbit such that \(a>0\)) the bifurcation diagram \(\Sigma_{h,k}\) for the integrable Hamiltonian system with Hamiltonian (7) and integral (8) is contained in the union of the following three families of curves on the plane \(\mathbb{R}^{2}(h,k)\):_
1. _The line_ \(k=0\)_;_
2. _The union of the parabola_ \[k=h^{2}+4ac_{1}^{2}\] (48) _and the tangent line to this parabola at the point_ \(h=0\)__ \[k=4ac_{1}^{2}.\] (49)
3. _The parabola_ \[k=h^{2}.\] (50)
Now we determine which areas described in Theorems 1 and 4 survive as \(\varkappa\to 0\). It is not hard to check that in the limit the curves \(f_{k},f_{r},f_{t}\) and \(f_{l}\) given by the formulas (12), (13), (15) and (16) respectively go to the curves \(a=\frac{3}{4}\frac{b^{4/3}}{c_{1}^{2/3}}\), \(a=\frac{b^{4/3}}{c_{1}^{2/3}}\), \(a=\frac{1}{2^{2/3}}\frac{b^{4/3}}{c_{1}^{2/3}}\) and \(b=0\) respectively. (For a fixed \(b\neq 0\) the curve \(f_{m}(b)\) has no limit points in the area \(\{a>0,b>0\}\) as \(\varkappa\to 0\).) The found curves divide the area \(\{a>0,b>0\}\) into 4 sub-areas which we denote in this paper as follows (a numerical spot-check of these limits is given right after the list):
1. Area I\({}^{\prime}\) is the area \(\{\varkappa=0,\quad 0<b,\quad 0<a<\frac{1}{2^{2/3}}\frac{b^{4/3}}{c_{1}^{2/3}}\}\);
2. Area II\({}^{\prime}\): \(\{\varkappa=0,\quad 0<b,\quad\frac{1}{2^{2/3}}\frac{b^{4/3}}{c_{1}^{2/3}}<a< \frac{3}{4}\frac{b^{4/3}}{c_{1}^{2/3}}\}\);
3. Area III\({}^{\prime}\) : \(\{\varkappa=0,\quad 0<b,\quad\frac{3}{4}\frac{b^{4/3}}{c_{1}^{2/3}}<a< \frac{b^{4/3}}{c_{1}^{2/3}}\}\);
4. Area IV\({}^{\prime}\): \(\{\varkappa=0,\quad 0<b,\quad\frac{b^{4/3}}{c_{1}^{2/3}}<a\}\).
If \(\varkappa=0\), then all the curves \(f_{r},f_{k},f_{t}\) and \(f_{l}\) intersect only at the origin, therefore in this case we should consider only one additional area of the line \(b=0\):
1. Area V\({}^{\prime}\): \(\{\varkappa=0,\quad b=0,\quad 0<a\}\).
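Here is the numerical spot-check of the limits promised above (a sketch of ours); only \(f_{k}\) and \(f_{l}\), whose formulas (12) and (16) appear in this excerpt, are tested, since the formulas (13) and (15) for \(f_{r}\) and \(f_{t}\) are not reproduced here.

```python
# kappa -> 0 limit of f_k(b) is (3/4)*b^(4/3)/c1^(2/3); f_l(b) = 2*sqrt(kappa)*|b|
# degenerates, so only the line b = 0 remains special in the limit.
import numpy as np

c1, b = 1.3, 2.0
f_k = lambda kap: (3*b**(4/3) + 6*kap*b**(2/3)*c1**(4/3)
                   - kap**2*c1**(8/3))/(4*c1**(2/3))
f_l = lambda kap: 2*np.sqrt(kap)*abs(b)

for kap in [1e-2, 1e-4, 1e-6]:
    print(f_k(kap) - 0.75*b**(4/3)/c1**(2/3), f_l(kap))   # both tend to 0
```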
**Remark 11**: If \(\varkappa=0\), then without loss of generality, we can assume that \(a=1\) (other orbits can be obtained from the case \(a=1\) by a suitable change of variables). It is not hard to check that the line \(a=1\) intersects the areas I\({}^{\prime}\)-V\({}^{\prime}\) at the following subsets: it intersects the first four areas at the intervals \(2<b^{2}\), \((4/3)^{3/2}<b^{2}<2\), \(1<b^{2}<(4/3)^{3/2}\) and \(0<b^{2}<1\) (intervals are listed in ascending order of areas) and it intersects the area V\({}^{\prime}\) at the point \(b=0\).
Now it is not hard to understand how the bifurcation diagrams for the classical Kovalevskaya case look like: roughly speaking, they are obtained from the diagrams shown in Fig. 3-14 (or in Fig. 22 for \(b=0\)) by removing all the "right arcs", that is all the arcs belonging to the right parabola (11) and the arc \(z_{1}z_{2}\). Namely, the following theorem holds.
**Theorem 5** ([10]): _Let \(\varkappa=0\) and \(b>0\). The curves_
\[a=\frac{b^{4/3}}{c_{1}^{2/3}},\quad a=\frac{3}{4}\frac{b^{4/3}}{c_{1}^{2/3}} \quad\text{and}\quad a=\frac{1}{2^{2/3}}\frac{b^{4/3}}{c_{1}^{2/3}}\]
_divide the area \(\{a>0,b>0\}\) into \(4\) areas. In Fig. 27-30 the bifurcation diagrams of the momentum mapping for the integrable Hamiltonian system with Hamiltonian (7) and integral (8) on the orbit \(M_{a,b}\) of the Lie algebra \(e(3)\) are shown for each of these areas. The enlarged fragments of Fig. 27, 28, 29 and 30 have the same forms as Fig. 14, 11, 8 and 5 respectively. The bifurcation diagram for the orbit \(M_{a,0}\) of the Lie algebra \(e(3)\) (that is, in the case \(\varkappa=0,b=0,a>0\)) is shown in Fig. 26._
**Remark 12**: In Fig. 27-30 the arcs \(y_{1}y_{2},y_{2}y_{3},y_{3}y_{5},y_{2}y_{7},y_{7}y_{8},y_{1}y_{12},y_{12}y_{13}\) and \(y_{13}y_{8}\) belong to the parabola (47). The rest of the arcs are distributed between the curve (46) and the line \(k=0\) in the obvious way.
_of Theorem 5._ First of all, it is necessary to check the nondegeneracy of the critical points in the case \(\varkappa=0\) since in general critical points may become degenerate during a variation of parameters (for example, for the integrable system on the Lie algebra \(so(4)\) under consideration this happens when passing from one area of parameters \(a,b\) to another). Nevertheless, all calculations done in Sections 4.1 and 4.3 for the points of rank 1 and 0 respectively remain valid in the case \(\varkappa=0\), therefore it is not hard to verify that all critical points corresponding to non-singular points of the bifurcation diagram are nondegenerate critical points of rank 1 and that all critical points of rank 0 are nondegenerate and have the same type as the corresponding critical points of rank 0 for the Lie algebra \(so(4)\).
Furthermore, it follows from the reasons of continuity that the preimage of each regular point in the image of the momentum mapping for the Lie algebra \(e(3)\) must
contain the same number of tori as the preimage of a regular point from the corresponding region for the Lie algebra \(so(4)\). Similarly, the number of critical circles in the preimage of non-singular points of bifurcation diagrams must coincide. (The number of critical circles does not decrease because all singularities are nondegenerate. Under a small perturbation of parameters a complex singularity can decompose into several simple ones, but it does not occur in this case -- it follows from the explicit form of the bifurcation diagrams. The number of critical circles does not increase by the same arguments as in the proof in Lemma 6 of the fact that there are no points of bifurcation diagrams in a neighbourhood of a regular point.)
Now, since we know the types of critical points of rank \(0\), the numbers of tori and critical circles we can determine almost all bifurcations of Liouville tori. It remains to use the stability of bifurcations \(A,B\) and \(A^{*}\) and standard considerations of continuity to get rid of the ambiguity for some arcs. For example, in the case \(\varkappa=0,b=0\) the bifurcation corresponding to the arc \(P_{1}\) has type \(C_{2}\) and not \(2A^{*}\) since the bifurcation corresponding to the arc \(y_{3}z_{5}\) has type \(C_{2}\).
Theorem 5 is completely proved.
|
2304.13754 | Finding the effective dynamics to make rare events typical in chaotic
maps | Dynamical fluctuations or rare events associated with atypical trajectories
in chaotic maps due to specific initial conditions can crucially determine
their fate, as the may lead to stability islands or regions in phase space
otherwise displaying unusual behavior. Yet, finding such initial conditions is
a daunting task precisely because of the chaotic nature of the system. In this
work, we circumvent this problem by proposing a framework for finding an
effective topologically-conjugate map whose typical trajectories correspond to
atypical ones of the original map. This is illustrated by means of examples
which focus on counterbalancing the instability of fixed points and periodic
orbits, as well as on the characterization of a dynamical phase transition
involving the finite-time Lyapunov exponent. The procedure parallels that of
the application of the generalized Doob transform in the stochastic dynamics of
Markov chains, diffusive processes and open quantum systems, which in each case
results in a new process having the prescribed statistics in its stationary
state. This work thus brings chaotic maps into the growing family of systems
whose rare fluctuations -- sustaining prescribed statistics of dynamical
observables -- can be characterized and controlled by means of a
large-deviation formalism. | Ricardo Gutiérrez, Adrián Canella-Ortiz, Carlos Pérez-Espigares | 2023-04-26T18:00:08Z | http://arxiv.org/abs/2304.13754v3 | # Making rare events typical in chaotic maps
###### Abstract
Dynamical fluctuations or rare events associated with atypical trajectories in chaotic maps due to specific initial conditions can be very relevant, as the may lead to stability islands or regions in phase space with other features of interest. Yet, finding such initial conditions is a daunting task precisely because of the chaotic nature of the system. In this work, we circumvent this problem by proposing a framework for finding an effective topologically-conjugate map whose typical trajectories correspond to atypical ones of the original map. This is illustrated by means of examples which focus on counterbalancing the instability of fixed points and periodic orbits, as well as on the characterization of a dynamical phase transition involving the finite-time Lyapunov exponent. The procedure parallels that of the application of the generalized Doob transform in the stochastic dynamics of Markov chains, diffusive process and open quantum systems, which in each case results in a new process having the prescribed statistics in its stationary state. This work thus brings chaotic maps into the increasing family of systems whose rare fluctuations can be characterized and controlled by means of a large-deviation formalism.
_Introduction_-- The study of dynamical large deviations in stochastic systems deals with fluctuations of time-averaged observables whose probabilities are exponentially suppressed in time [1; 2; 3]. This field has been enriched in recent years by the possibility of constructing effective processes where those rare fluctuations are made typical, i.e. are transformed into high-probability events of the new stationary distribution. This offers the possibility of controlling on demand the statistics of trajectory observables, which is especially relevant in the context of dynamical phase transitions, allowing, e.g., for the selection of certain dynamical phases that are otherwise extremely unlikely to be observed [4; 5]. The methodology combines biased ensembles of time-averaged observables [6; 7] with the generalized Doob transform [8; 9; 10; 11; 12], and has been recently applied in a remarkable variety of contexts, including lattice gas models [4; 13; 14; 15], continuum diffusive systems [11; 12; 16], and many-body systems, both classical [17] and quantum [18; 19; 20], their main element in common being their stochasticity.
Deterministic dynamical systems are of a different nature, yet they also require a probabilistic description when their evolution is considered from a given distribution of initial conditions, which is particularly relevant in the study of chaotic systems [21]. In that respect, the focus of the literature on large deviations of chaotic dynamical systems from the last decades of the past century revolves around observables arising in the context of information theory and fractal geometry, including the characterization of strange attractors and multifractals [22]. A large-deviation approach to chaotic systems based on observables as general as those considered in stochastic systems, however, seems to have become available only relatively recently. Among those contributions, we highlight the so-called Lyapunov weighted dynamics [23; 24; 25], a computational adaptation of the cloning algorithm [26; 27] to Hamiltonian systems for selecting trajectories with unusual chaoticity, and, more recently, the extension of the large-deviation formalism to general time-averaged observables in chaotic maps [28]. Despite these recent advances, the adaptation of the generalized Doob transform, whereby the dynamics creating those rare trajectories is unveiled --thus giving a powerful handle on the analysis and control of large fluctuations--, has not yet
Figure 1: **Rare trajectories due to the repulsive effect of an unstable fixed point are made typical.** Fluctuations of the time-averaged indicator function, \(A=N^{-1}\sum_{n=1}^{N}\mathbb{I}_{[x^{*}\pm 0.05]}(x_{n})\), of the tent map \(x_{n+1}=1-|1-2x_{n}|\) around the unstable fixed point \(x^{*}=2/3\). (a) Cobweb plot for \(N=100\) iterations. The support of the indicator function is highlighted in light blue. (b) Trajectory illustrated in (a). (c) Histogram, \(P(A=a)\), based on \(10^{5}\) trajectories, with mean \(\langle A\rangle=a_{1}=0.1\). (d) Cobweb plot for \(N=100\) iterations of the Doob effective map with \(s_{0}=-1\), making typical the rare fluctuation highlighted in (c). (e) Trajectory illustrated in (d). (f) Histogram, \(P_{s_{0}}(A=a)\), based on \(10^{5}\) trajectories of the map in (d), with mean \(\langle A\rangle=a_{2}\approx 0.78\).
been accomplished in the context of chaotic maps. This is a conspicuous gap in the literature that we aim to fill with the present work.
In this Letter, we propose a framework for constructing effective maps whose natural invariant measures are tailored to the statistics of general trajectory observables of a given original map. The study of rare events of chaotic maps is thus brought to a level of development that is comparable to that found in recent studies on various types of stochastic systems [4; 13; 14; 17; 19]. The goal is illustrated in Fig. 1, which shows an application of our framework to the case of the tent map [22], \(x_{n+1}=1-|1-2x_{n}|\) [displayed in Fig. 1(a); see Fig. 1(b) for a representative trajectory corresponding to the cobweb plot]. Rare events given by trajectories with an unusually large time spent in a narrow interval centered around the unstable fixed point \(x^{*}=2/3\), which are in the tail of the long-time probability distribution of the time-averaged indicator function [see Fig. 1(c)], become typical in a new effective map [see Fig. 1(d)], as illustrated in the histogram [Fig. 1(f)] obtained from its trajectories [a representative one is displayed in Fig. 1(e)].
The structure is as follows. We first show how, by extending the generalized Doob transform to the context of Frobenius-Perron operators of chaotic maps, one can generate topologically-conjugate effective maps where rare fluctuations of the original dynamics become typical. Then we illustrate our framework by applying it to mitigate the repulsive effect of unstable periodic orbits. Finally, we employ it to characterize dynamical phases involved in a dynamical phase transition associated with the finite-time Lyapunov exponent in the logistic map. Concluding remarks and ideas for future work are presented at the end.
_Large-deviation formalism--_ We consider a chaotic discrete-time dynamical system \(x_{n+1}=f(x_{n})\), where \(f\!:\!I\to I\) is a smooth map and \(I\) is some compact interval of the real line. Starting from a probability density of initial values \(\alpha_{0}(x)\), the evolution \(\alpha_{n+1}(x)=L[\alpha_{n}(x)]\) for \(n=0,1,2,\ldots\) is given by the Frobenius-Perron operator
\[L[\alpha(x)]=\int_{I}\alpha(y)\delta(x-f(y))\,dy=\sum_{z\in f^{-1}(x)}\frac{ \alpha(z)}{|f^{\prime}(z)|}, \tag{1}\]
where \(\delta(x)\) is a Dirac delta, and \(f^{-1}(x)\) is shorthand for the set of pre-images of \(x\) under the (generally non-invertible) map \(f\)[22]. We assume that the map \(f\) is ergodic with respect to an invariant measure \(\rho(x)=L[\rho(x)]\). The adjoint Frobenius-Perron operator \(L^{\dagger}\) is defined by the equality \(\langle\beta,L[\alpha]\rangle=\langle L^{\dagger}[\beta],\alpha\rangle\), where the angular brackets denote the standard inner product, yielding \(L^{\dagger}[\alpha(x)]=\alpha(f(x))\); see the Supplemental Material (SM) for details [29]. Taking \(\beta(x)=\mathbb{1}(x)=1\) above, it is clear that probability conservation, i.e. \(\int L[\alpha(x)]dx=\int\alpha(x)dx=1\), implies that \(L^{\dagger}[\mathbb{1}(x)]=1\).
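As a concrete aside (ours, not part of the original text), the action of \(L\) can be approximated on a grid with the standard Ulam method. The sketch below is a minimal Python illustration, with grid size and per-cell sampling chosen arbitrarily by us; for the tent map it recovers the leading eigenvalue \(1\) and the uniform invariant density.

```python
import numpy as np

def ulam_matrix(f, K=200, samples=400):
    """Column-stochastic Ulam approximation of the Frobenius-Perron operator:
    P[i, j] ~ Prob(f(x) lands in cell i | x uniform in cell j)."""
    edges = np.linspace(0.0, 1.0, K + 1)
    P = np.zeros((K, K))
    for j in range(K):
        xs = np.linspace(edges[j], edges[j + 1], samples + 2)[1:-1]
        idx = np.clip(np.searchsorted(edges, f(xs)) - 1, 0, K - 1)
        np.add.at(P[:, j], idx, 1.0 / samples)
    return P

tent = lambda x: 1.0 - np.abs(1.0 - 2.0 * x)
P = ulam_matrix(tent)
vals, vecs = np.linalg.eig(P)
k = np.argmax(vals.real)
rho = vecs[:, k].real
rho *= P.shape[0] / rho.sum()            # normalize to a probability density on [0, 1]
print(f"leading eigenvalue = {vals[k].real:.6f}")            # ~1: probability conservation
print(f"density range: [{rho.min():.3f}, {rho.max():.3f}]")  # ~[1, 1]: uniform measure
```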
Under quite general conditions, the probability density of the time-averaged dynamical observable \(A=N^{-1}\sum_{n=0}^{N}g(x_{n})\) acquires the asymptotic large-deviation form \(P(A=a)\sim e^{-NI(a)}\) for long times \(N\gg 1\)[30; 31]. This probability concentrates around its average value, \(\langle A\rangle=\int g(x)\rho(x)dx\), at a rate given by \(I(a)\) --the so-called rate function--, which is positive and has a single zero located at \(\langle A\rangle\)[2]. Thus fluctuations different from \(\langle A\rangle\) become exponentially unlikely in time, and the expansion up to second order of \(I(a)\) around the mean displays Gaussian fluctuations with variance \(\sigma^{2}=[NI^{\prime\prime}(\langle A\rangle)]^{-1}\). This is illustrated in Fig. 1 (c), where the probability of the time-averaged indicator function \(A=N^{-1}\sum_{n=1}^{N}\mathbb{I}_{[x^{*}\pm 0.05]}(x_{n})\), where \(\mathbb{I}_{\Omega}(x)=1\) if \(x\in\Omega\) and zero otherwise, concentrates around \(\langle A\rangle=a_{1}\).
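A brute-force check of this concentration for the tent-map example (again our own sketch, not the authors' code): the \(10^{-12}\) jitter below is an assumption we add because, in double precision, the bit-shifting dynamics of the tent map otherwise collapses onto the fixed point \(0\) after roughly fifty iterations.

```python
import numpy as np

rng = np.random.default_rng(0)

def tent_time_average(x0, N=100, eps=1e-12):
    """Fraction of time spent in [2/3 - 0.05, 2/3 + 0.05] along a tent-map orbit."""
    x, hits = x0, 0
    for _ in range(N):
        x = 1.0 - abs(1.0 - 2.0 * x)
        # tiny noise keeps finite-precision arithmetic from killing the chaos
        x = min(max(x + eps * rng.standard_normal(), 0.0), 1.0)
        hits += abs(x - 2.0 / 3.0) <= 0.05
    return hits / N

A = np.array([tent_time_average(rng.random()) for _ in range(10_000)])
print(f"<A> = {A.mean():.3f}")                   # ~0.1: interval width under the uniform measure
print(f"P(A >= 0.5) = {np.mean(A >= 0.5):.1e}")  # fluctuations like a2 ~ 0.78 never show up
```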
The conventional method for biasing these probabilities towards specific values of \(A\) is to introduce an ensemble of trajectories --sometimes known as the \(s\)-ensemble [1]-- such that \(P_{s}(a)=e^{-sNa}P(a)/Z(s)\) with \(Z(s)=\int e^{-sNa}P(a)\,da\). Here \(s\) is a biasing field which favors (for \(s<0\)) or suppresses (for \(s>0\)) the probability of having values larger than \(\langle A\rangle\). Thus in Fig. 1 a suitable choice of \(s=s_{0}=-1\) transforms the probability \(P(a)\) with average \(a_{1}=0.1\) [Fig. 1 (c)], into the probability \(P_{s_{0}}(a)\) with average \(a_{2}\approx 0.78\) [Fig. 1 (f)]. This is an unusually large value in the case of the tent map if one considers a uniform distribution of initial conditions, corresponding to its natural invariant measure. Indeed, \(P(a_{2})\sim e^{-NI(a_{2})}\) is on the order of \(10^{-18}\) for \(N=100\) [see its position far into the right tail of \(P(a)\) in Fig. 1(c)].
In this biased ensemble, the complete statistics of time-averaged observables \(A\) for long times is given by the scaled cumulant generating function (SCGF) \(\theta(s)=\lim_{N\to\infty}N^{-1}\log Z(s)\)[7]. The latter is related to the rate function \(I(a)\) by a Legendre transform, \(\theta(s)=-\min_{a}[I(a)+sa]\)[2], highlighting the analogy with the (minus) free-energy and the entropy density in equilibrium statistical mechanics, with the biasing field \(s\) playing a role akin to that of the inverse temperature [7]. Since the derivatives of the SCGF provide the cumulants of the observable \(A\) in the tilted distribution \(P_{s}(a)\), the (minus) first derivative gives the average \(-\theta^{\prime}(s)=\langle A\rangle_{s}\). Thus the value of choice for \(s\) is the one matching the fluctuation \(a\), such that \(-\theta^{\prime}(s)=a\), or equivalently \(I^{\prime}(a)=s\). In Fig. 1, \(-\theta^{\prime}(s_{0})=a_{2}\) and \(I^{\prime}(a_{2})=s_{0}\), while in the absence of a bias \(-\theta^{\prime}(0)=a_{1}\) and \(I^{\prime}(a_{1})=0\).
The SCGF is obtained from the spectral problem \(L_{s}[r_{s}(x)]=e^{\theta(s)}r_{s}(x)\)[2; 28], where \(r_{s}(x)\) is the right eigenfunction associated with the eigenvalue with largest real part, which is \(e^{\theta(s)}\), of the so-called tilted Frobenius-Perron operator [28]
\[L_{s}[\alpha(x)]=\int_{I}e^{-sg(y)}\alpha(y)\delta(x-f(y))\,dy=\sum_{z\in f^{-1}(x)}\frac{e^{-sg(z)}\alpha(z)}{|f^{\prime}(z)|}. \tag{2}\]

This tilted operator is the analog of the tilted generators employed for
Markov chains [7] and open quantum systems [18], and has been recently studied in the context of chaotic maps in Ref. [28]. On the other hand, the left eigenfunction of (2) associated with \(e^{\theta(s)}\), denoted as \(l_{s}(x)\), satisfies \(L_{s}^{\dagger}[l_{s}(x)]=e^{\theta(s)}l_{s}(x)\), with \(L_{s}^{\dagger}\) being the tilted adjoint operator given by \(L_{s}^{\dagger}[\alpha(x)]=e^{-sg(x)}\alpha(f(x))\), see SM [29]. The eigenfunctions are normalized such that \(\int r_{s}(x)dx=1\) and \(\int l_{s}(x)r_{s}(x)dx=1\). Note that for \(s=0\), we have \(\theta(0)=0\), \(r_{0}(x)=\rho(x)\) and \(l_{0}(x)=\mathbbm{1}(x)\), while other eigenfunctions have associated eigenvalues smaller than \(1\). The tilted operator (2), however, does not represent a proper physical evolution, since it does not conserve probability, i.e. \(L_{s}^{\dagger}[\mathbbm{1}(x)]\neq 1\). Therefore it is not obvious that one can derive a map, associated with \(L_{s}\), generating the trajectories that sustain the fluctuation \(a\), though such trajectories have been computationally obtained by adapting the cloning algorithm [26] through the Lyapunov weighted dynamics [23]. Our contribution is to show below how to obtain the effective chaotic map [as displayed in Fig. 1(d)] generating those rare trajectories corresponding to \(s\neq 0\) [see Fig. 1(e)], which follow the biased distribution \(P_{s}(a)\) for long times [Fig. 1(f)].
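To make the spectral route concrete, here is a minimal sketch (ours; grid resolution, iteration count and finite-difference step are arbitrary choices) that power-iterates a grid discretization of \(L_{s}\) for the tent-map example, using its two pre-image branches \(z=x/2\) and \(z=1-x/2\) with \(|f^{\prime}(z)|=2\); it returns \(\theta(s)\), from which \(\langle A\rangle_{s}=-\theta^{\prime}(s)\) follows by finite differences.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
g = lambda z: (np.abs(z - 2.0 / 3.0) <= 0.05) * 1.0   # indicator observable of Fig. 1

def theta(s, iters=800):
    """theta(s) = log of the leading eigenvalue of the tilted operator
    (L_s a)(x) = [e^{-s g(x/2)} a(x/2) + e^{-s g(1-x/2)} a(1-x/2)] / 2."""
    a, lam = np.ones_like(x), 1.0
    for _ in range(iters):
        a_new = 0.5 * (np.exp(-s * g(0.5 * x)) * np.interp(0.5 * x, x, a)
                       + np.exp(-s * g(1.0 - 0.5 * x)) * np.interp(1.0 - 0.5 * x, x, a))
        lam = a_new.sum() / a.sum()    # growth factor -> leading eigenvalue e^{theta(s)}
        a = a_new / a_new.max()
    return np.log(lam)

for s in (-1.0, 0.0, 1.0):
    ds = 1e-3
    avg = -(theta(s + ds) - theta(s - ds)) / (2.0 * ds)
    # for s = -1 the biased average should approach a2 ~ 0.78 quoted above
    print(f"s = {s:+.0f}:  theta(s) = {theta(s):+.4f},  <A>_s = {avg:.3f}")
```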
_Doob operator and Doob effective map_-- By analogy with the auxiliary Doob process of discrete-time stochastic systems [32; 33], we define the Doob operator for a given \(s=s_{0}\), based on the tilted operator (2), its left eigenfunction \(l_{s_{0}}(x)\) and the SCGF \(\theta(s_{0})\), as
\[L_{s_{0}}^{D}[\alpha(x)]=e^{-\theta(s_{0})}l_{s_{0}}(x)L_{s_{0}}[(l_{s_{0}}(x ))^{-1}\,\alpha(x)]. \tag{3}\]
The right eigenfunction associated with the largest eigenvalue of \(L_{s_{0}}^{D}[\alpha(x)]\), which is \(1\), is given by \(\rho_{s_{0}}^{D}(x)=l_{s_{0}}(x)r_{s_{0}}(x)\), and corresponds to the stationary distribution of \(L_{s_{0}}^{D}\), since \(L_{s_{0}}^{D}[\rho_{s_{0}}^{D}(x)]=e^{-\theta(s_{0})}l_{s_{0}}(x)L_{s_{0}}[r_{s_{0}}(x)]=\rho_{s_{0}}^{D}(x)\). Indeed, the Doob operator (3) has the two crucial properties we sought: (i) conservation of probability, i.e. \((L_{s_{0}}^{D})^{\dagger}[\mathbbm{1}(x)]=1\), and (ii) generation of trajectories distributed according to \(P_{s_{0}}(a)\) for long times. Property (i) follows immediately, while (ii) can be shown by studying the tilted Doob operator, i.e. \(L_{s_{0},s}^{D}\), whose largest eigenvalue \(e^{\theta^{D}(s)}\), for \(\theta^{D}(s)=\theta(s_{0}+s)-\theta(s_{0})\), is associated with the right eigenfunction \(l_{s_{0}}(x)r_{s_{0}+s}(x)\), see SM [29] for details. By Legendre transforming \(\theta^{D}(s)\) we get \(I^{D}(a)=I(a)+\theta(s_{0})+s_{0}a\), which is the rate function of the tilted distribution, \(P_{s_{0}}(a)\sim e^{-NI^{D}(a)}\), as we wanted to prove. Notice that the Doob SCGF, \(\theta^{D}(s)\), amounts to a translation of the origin of the original SCGF to the point \((s_{0},\theta(s_{0}))\): all the cumulants at \(s_{0}\), after applying the Doob transformation, lie at \(s=0\). The atypical fluctuations of the natural dynamics, associated with some \(s_{0}\neq 0\) in Eq. (2), thus become typical in the Doob-transformed dynamics (3).
In summary, the Doob operator (3) has a stationary state \(\rho_{s_{0}}^{D}(x)\) that naturally yields the statistics for \(A\) corresponding to rare fluctuations of the original dynamics, which are exponentially suppressed in \(\rho(x)\), i.e. the invariant measure of \(f\). Nevertheless, we still need to find the Doob effective map, \(f_{s_{0}}^{D}\), generating the atypical trajectories \(y_{n+1}=f_{s_{0}}^{D}(y_{n})\). This requires finding a chaotic map with a prescribed invariant measure [34], which in this case is \(\rho_{s_{0}}^{D}(y)\). Assuming that \(\rho(x)\) and \(\rho_{s_{0}}^{D}(y)\) are strictly positive and integrable (as in all the examples considered below), so that their cumulative distributions \(F(x)=\int_{-\infty}^{x}\rho(u)du\) and \(F_{s_{0}}^{D}(y)=\int_{-\infty}^{y}\rho_{s_{0}}^{D}(u)du\) are continuous and increasing (hence invertible) functions, the transformation that is required is \(y=\gamma(x)=(F_{s_{0}}^{D})^{-1}(F(x))\), as it is easy to verify, see SM [29]. Applying this transformation it is straightforward to find the Doob effective map taking into account that \(y_{n+1}=f_{s_{0}}^{D}(y_{n})=f_{s_{0}}^{D}(\gamma(x_{n}))\) and that
Figure 2: **Rare trajectories due to the repulsive effect of unstable period-2 orbits are made typical.** Fluctuations of the time-averaged indicator function, \(A=N^{-1}\sum_{n=1}^{N}(\mathbb{I}_{[x_{+}^{*}\pm 0.025]}(x_{n})+\mathbb{I}_{[x_{-}^{*}\pm 0.025]}(x_{n}))\), of the logistic map \(x_{n+1}=4x_{n}(1-x_{n})\) around the period-2 orbit formed by \(x_{\pm}^{*}=(5\pm\sqrt{5})/8\). (a) SCGF \(\theta(s)\) and biased average \(\langle A\rangle_{s}=-\theta^{\prime}(s)\). The three points highlighted correspond to \(s=-1\) (square), \(s=0\) (circle), \(s=1\) (triangle). (b) Rate function \(I(a)\), and quadratic approximation corresponding to Gaussian fluctuations around its average \(\langle A\rangle\). (c) Cobweb plot of the Doob effective map for \(s_{0}=-1\). The support of the indicator function (in this and other panels) is highlighted in light blue. (d) Trajectory corresponding to the cobweb in (c). (e, f) Cobweb plot and trajectory of the original (unbiased) logistic map (\(s_{0}=0\)). (g, h) Cobweb plot and trajectory of the Doob effective map for \(s_{0}=1\). All trajectories are based on \(N=100\) iterations.
\(y_{n+1}=\gamma(x_{n+1})=\gamma(f(x_{n}))\). From these equations we obtain \(f_{s_{0}}^{D}(\gamma(x_{n}))=\gamma(f(x_{n}))\), so that the Doob effective map, which is topologically conjugate to \(f\), takes the form
\[f_{s_{0}}^{D}=\gamma\circ f\circ\gamma^{-1}\,. \tag{4}\]
The evolution is given by \(f\) after a change of coordinates, \(y=\gamma(x)\), such that \(y_{n+1}=f_{s_{0}}^{D}(y_{n})=\gamma(f(\gamma^{-1}(y_{n})))\). Indeed, mathematically speaking the conjugacy is smoother than that provided by a homeomorphism, as \(\gamma\) is differentiable. The Doob effective map sustaining the rare event corresponding to \(s_{0}=-1\) in the example based on the tent map is illustrated in Fig. 1(d); see the SM for the numerical method employed to obtain the eigenfunctions on which its construction is based [29]. While \(a_{2}\) is practically impossible to sample with the original dynamics \(f\), in the dynamics given by the effective map \(f_{s_{0}}^{D}\) it is the average value. Thus the fraction of time spent in the interval \(x^{*}\pm 0.05\) is much higher, \(78\%\), as illustrated in Fig. 1(e), and in the histogram of Fig. 1(f).
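The whole construction condenses into a short numerical sketch (ours; resolutions, iteration counts and the stabilizing jitter are our assumptions, not part of the paper): obtain \(r_{s_{0}}\) by power iteration on \(L_{s_{0}}\) and \(l_{s_{0}}\) by power iteration on the tilted adjoint \(L_{s_{0}}^{\dagger}[b](x)=e^{-s_{0}g(x)}b(f(x))\), form \(\rho_{s_{0}}^{D}=l_{s_{0}}r_{s_{0}}\) and its cumulative distribution, and build \(\gamma\) by interpolation; for the tent map \(F(x)=x\), since its invariant measure is uniform.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
f = lambda u: 1.0 - np.abs(1.0 - 2.0 * u)
g = lambda u: (np.abs(u - 2.0 / 3.0) <= 0.05) * 1.0
s0 = -1.0

def power(op, iters=1500):
    v = np.ones_like(x)
    for _ in range(iters):
        v = op(v)
        v /= v.max()
    return v

# right eigenfunction of the tilted operator (via the two tent pre-images) ...
r = power(lambda a: 0.5 * (np.exp(-s0 * g(0.5 * x)) * np.interp(0.5 * x, x, a)
                           + np.exp(-s0 * g(1.0 - 0.5 * x)) * np.interp(1.0 - 0.5 * x, x, a)))
# ... and left eigenfunction from the tilted adjoint
l = power(lambda b: np.exp(-s0 * g(x)) * np.interp(f(x), x, b))

rhoD = l * r                               # Doob stationary density (up to normalization)
FD = np.cumsum(rhoD) / rhoD.sum()          # its cumulative distribution F^D
gamma = lambda u: np.interp(u, FD, x)      # gamma = (F^D)^{-1} o F, with F(u) = u here
gamma_inv = lambda v: np.interp(v, x, FD)
fD = lambda v: gamma(f(gamma_inv(v)))      # Doob effective map, Eq. (4)

rng = np.random.default_rng(0)
y, time_near = 0.3, 0.0
for _ in range(20_000):
    y = min(max(fD(y) + 1e-12 * rng.standard_normal(), 0.0), 1.0)  # jitter for robustness
    time_near += g(gamma_inv(y))           # observable pulled back to x coordinates
print(f"fraction of time near x*: {time_near / 20_000:.2f}")  # should approach the ~78% quoted above
```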
Remarkably, while \(x^{*}=2/3\) is an unstable fixed point of the tent map \(f\), \(y^{*}=\gamma(x^{*})\) (which is close to, yet different from, \(2/3\)) is also an unstable fixed point of the Doob map \(f_{s_{0}}^{D}\). This is true in general and is imposed by the conjugacy: \(f_{s_{0}}^{D}(y^{*})=(\gamma\circ f\circ\gamma^{-1})(y^{*})=\gamma(f(x^{*}))=y^{*}\), and \((f_{s_{0}}^{D})^{\prime}(y^{*})=(\gamma\circ f)^{\prime}(x^{*})(\gamma^{-1})^{\prime}(y^{*})=\gamma^{\prime}(x^{*})f^{\prime}(x^{*})\left(\gamma^{\prime}(x^{*})\right)^{-1}=f^{\prime}(x^{*})\). Despite this, the peculiar shape of \(f_{s_{0}}^{D}\) makes the trajectory spend most of the time around \(x^{*}\) [see Fig. 1(d)]. One can similarly show that a fixed point of \(f^{n}=f\circ f\circ\cdots\circ f\) maps into a fixed point of \((f_{s_{0}}^{D})^{n}\) with the same stability. Those fixed points lie in periodic orbits of \(f\) (with period \(n\) or integer factors thereof), which is the topic we turn to next.
_Counterbalancing the instabilities of periodic orbits_--Unstable periodic orbits are very relevant, as many properties of chaotic systems are studied by focusing on such orbits embedded within chaotic attractors (see, e.g., Refs. [22; 35]). In Fig. 2 we illustrate how to use our methodology to counterbalance the repulsive effect of unstable periodic orbits. We focus on the logistic map \(f(x)=rx(1-x)\) with \(r=4\) (sometimes called the Ulam map), see the black line in Fig. 2(e). It has a period-2 orbit comprising \(x_{\pm}^{*}=(5\pm\sqrt{5})/8\), which is unstable, as \((f^{2})^{\prime}(x_{\pm}^{*})=-4\). Due to this instability, the average value of the indicator function \(A=N^{-1}\sum_{n=1}^{N}(\mathbb{I}_{[x_{+}^{*}\pm 0.025]}(x_{n})+\mathbb{I}_{[x_{-}^{*}\pm 0.025]}(x_{n}))\) is only \(\langle A\rangle\approx 0.09\). See Fig. 2(a), which shows the SCGF \(\theta(s)\) and its (minus) first derivative \(\langle A\rangle_{s}\), as well as Fig. 2(e) and Fig. 2(f), displaying, respectively, the cobweb plot and a typical trajectory of the unbiased dynamics (\(s=0\)). As \(s\) is moved towards negative (positive) values, the time average becomes larger (smaller). We will focus on \(s_{0}=-1\), which yields \(\langle A\rangle_{s_{0}}\approx 0.79\), associated with a much longer time spent in the vicinity of the period-2 orbit, and \(s_{0}=1\), corresponding to \(\langle A\rangle_{s_{0}}\approx 0.02\), for which the vicinity of the orbit is seldom visited, as displayed by Fig. 2(d) and Fig. 2(h), respectively. Those values of \(s_{0}\) correspond to large deviations of \(a\), well beyond the range of the Gaussian approximation, as shown by the rate function \(I(a)\) in Fig. 2(b).
The Doob map for \(s_{0}=-1\), see Fig. 2(c), is remarkably different from the logistic map, represented in Fig. 2(e). In the case of \(s_{0}=1\) [Fig. 2(g)] the difference is more subtle, yet sufficient for avoiding mapping values of \(x_{n}\) into values of \(x_{n+1}\) in the support of the indicator function. The trajectories shown in each case [Fig. 2(d), (f) and (h)] correspond to the cobweb plots in the panels immediately above, and confirm all expectations.
_A dynamical phase transition for the Lyapunov exponent--_ To conclude we focus on the timely topic of dynamical phase transitions (DPTs) [36; 37; 38; 39; 40; 41; 42]. Specifically, we characterize the dynamical phases sustaining the fluctuations of the finite-time Lyapunov exponent, \(A=N^{-1}\sum_{n=0}^{N}\ln|f^{\prime}(x_{n})|\), in the logistic map. For long times, the average of this fluctuating observable, which can be interpreted as a time-averaged information loss [22], converges to the Lyapunov exponent. The latter is \(\langle A\rangle=\ln 2\), as obtained from the topological conjugacy of the logistic map and the tent map [22; 35]. As the tilting parameter \(s\) is varied, one finds that there are just two possible values of the biased average \(\langle A\rangle_{s}\), namely \(\ln 4\) and \(\ln 2\) (including obviously \(s=0\)). Indeed the SCGF, which for this observable is closely related to the so-called topological pressure (see e.g. [22]),
Figure 3: **Characterization of phases in a DPT for the Lyapunov exponent of the logistic map.** Main panel: SCGF \(\theta(s)\) and biased average \(\langle A\rangle_{s}=-\theta^{\prime}(s)\). The three points highlighted correspond to \(s=-3\) (square), \(s=-2\) (circle), \(s=0\) (triangle). The latter corresponds to the logistic map, shown in Fig. 2(e) with a typical trajectory displayed in Fig. 2(f). Lower inset: Doob effective map and representative trajectory for \(s_{0}=-3\). Upper inset: Same as lower inset but at the critical point \(s_{0}=-2\), exhibiting coexistence between both dynamical phases. In both insets the original (logistic) map is also shown (see dashed lines).
is \(\theta(s)=-2(s+1)\ln 2\) for \(s\leq-2\) and \(\theta(s)=-s\ln 2\) for \(s\geq-2\), as discussed, with different conventions, in Refs. [43; 44] and references therein. Both the SCGF \(\theta(s)\) and the average \(\langle A\rangle_{s}=-\theta^{\prime}(s)\) are displayed in Fig. 3.
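The practical inaccessibility of the \(\ln 4\) phase without biasing is easy to verify empirically; the sketch below (ours, with an arbitrary ensemble size) samples the finite-time Lyapunov exponent of the logistic map from uniform initial conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 100, 100_000
x = rng.random(M)                          # ensemble of initial conditions
A = np.zeros(M)
for _ in range(N):
    A += np.log(np.abs(4.0 - 8.0 * x))     # log|f'(x_n)| for f(x) = 4x(1-x)
    x = 4.0 * x * (1.0 - x)
A /= N
print(f"<A> = {A.mean():.4f}   (ln 2 = {np.log(2.0):.4f})")
print(f"P(A > 1.2) = {np.mean(A > 1.2):.1e}")   # the ln 4 ~ 1.386 phase is never sampled
```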
We next characterize the two dynamical phases, as well as the critical point (\(s=s_{0}=-2\)) by means of the large-deviation framework. Recall that this is otherwise challenging, because of the exponential decay of probabilities for \(s\neq 0\). For \(s<-2\), the Doob effective map, presented on the left of the lower inset to Fig. 3 for \(s_{0}=-3\), generates trajectories that localize in the vicinity of the point \(x=0\), as displayed on the right of the same inset. There, small intervals expand at a rate \(\ln 4\) (instead of the common expansion rate \(\ln 2\) to be found elsewhere in phase space [22; 44]), leading to \(\langle A\rangle_{s}=\ln 4\). On the other side of the DPT, for \(s>-2\), \(\langle A\rangle_{s}=\ln 2\), as in the unbiased dynamics (\(s=0\)), whose trajectories are like the one displayed in Fig. 2(f), where the region around \(x\approx 0\) is hardly ever visited. Finally, the Doob effective map at the critical point \(s_{0}=-2\) is shown in the upper inset to Fig. 3. This map generates trajectories like the one presented on the right of the inset, which exhibits a remarkable intermittency between the behavior for \(s_{0}=-3\) and that obtained for \(s_{0}=0\), illustrating the coexistence between dynamical phases characteristic of first-order DPTs [16; 36; 39; 7].
_Concluding remarks--_ Given a physical system evolving in time, the problem of finding another one for which rare events of the former become typical has been previously explored in classical and quantum systems undergoing different kinds of stochastic dynamics. We have developed a theoretical framework that achieves this goal in chaotic maps. Apart from its obvious interest for dynamical control purposes, it allows for the characterization of phases involved in DPTs occurring far away from the unbiased dynamics, which we have illustrated in a DPT for the finite-time Lyapunov exponent of the logistic map. While our approach has been developed for 1D systems, the formalism can be extended to cover higher-dimensional maps, and perhaps also continuous-time flows. The adaptation of this framework to fluctuations at finite times by means of the finite-time Doob transform may also be feasible with currently-available techniques [12; 19; 45].
The authors thank P. Garrido, P. Hurtado, M. A. Munoz and R. Hurtado-Gutierrez for insightful discussions. The research leading to these results has been supported by Ministerio de Ciencia e Innovacion (Spain), by Agencia Estatal de Investigacion (AEI, Spain, 10.13039/501100011033) and by European Regional Development Fund (ERDF, A way of making Europe), through Grants PID2020-113681GB-I00, PID2021-128970OA-I00 and PID2021-123969NB-I00, by Junta de Andalucia (Spain)-Consejeria de Economia y Conocimiento 2014-2020 through grant A-FQM-644-UGR20, and by Comunidad de Madrid (Spain) under the Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M23), in the context of the V Plan Regional de Investigacion Cientifica e Innovacion Tecnologica (PRICIT). We are grateful for the computing resources and related technical support provided by PROTEUS, the supercomputing center of Institute Carlos I in Granada, Spain.
|
2304.02881 | The Westervelt--Pennes--Cattaneo model: local well-posedness and
singular limit for vanishing relaxation time | In this work, we investigate a mathematical model of nonlinear ultrasonic
heating based on a coupled system of the Westervelt equation and the hyperbolic
Pennes bioheat equation (Westervelt--Pennes--Cattaneo model). Using the energy
method together with a fixed point argument, we prove that our model is locally
well-posed and does not degenerate under a smallness assumption on the pressure
data in the Westervelt equation. In addition, we perform a singular limit
analysis and show that the Westervelt--Pennes--Fourier model can be seen as an
approximation of the Westervelt--Pennes--Cattaneo model as the relaxation
parameter tends to zero. This is done by deriving uniform bounds of the
solution with respect to the relaxation parameter. | Imen Benabbas, Belkacem Said-Houari | 2023-04-06T06:03:32Z | http://arxiv.org/abs/2304.02881v2 | The Westervelt-Pennes-Cattaneo model: local well-posedness and singular limit for vanishing relaxation time
###### Abstract.
In this work, we investigate a mathematical model of nonlinear ultrasonic heating based on a coupled system of the Westervelt equation and the hyperbolic Pennes bioheat equation (Westervelt-Pennes-Cattaneo model). Using the energy method together with a fixed point argument, we prove that our model is locally well-posed and does not degenerate under a smallness assumption on the pressure data in the Westervelt equation. In addition, we perform a singular limit analysis and show that the Westervelt-Pennes-Fourier model can be seen as an approximation of the Westervelt-Pennes-Cattaneo model as the relaxation parameter tends to zero. This is done by deriving uniform bounds of the solution with respect to the relaxation parameter.
Key words and phrases: ultrasonic heating, Westervelt's equation, nonlinear acoustics, Pennes bioheat equation, Cattaneo model, HIFU 2010 Mathematics Subject Classification: 35L70, 35K05 \({}^{\dagger}\)AMNEDP Laboratory, Faculty of Mathematics, USTHB ([email protected]). \({}^{\ddagger}\)Department of Mathematics, College of Sciences, University of Sharjah, P. O. Box: 27272, Sharjah, United Arab Emirates ([email protected])
## 1. Introduction
We are interested in the analysis of a nonlinear thermo-acoustic system modeling the propagation of ultrasonic waves, such as the high intensity focused ultrasound (HIFU), through thermo-viscous fluids. With various applications including medical and industrial use in lithotripsy, thermotherapy, ultrasound cleaning and sonochemistry [7, 9, 10, 18, 31], the behavior of high-intensity focused ultrasound (HIFU) and the mathematical models describing it are receiving a great deal of attention from researchers. For instance, in medical procedures, the focused ultrasound is used to generate localized heating that can destroy the targeted region. Indeed, this technique is proving its success in the treatment of both benign and malignant tumors. Due to the high frequency of the sound waves, the nonlinear effect of their propagation cannot be neglected, and it is well-established in nonlinear acoustics that the Westervelt equation [27] takes this effect into consideration.
In this paper, we consider a coupled system of the Westervelt for the pressure and a hyperbolic Pennes equation for the temperature. More precisely, we consider the system
\[\begin{cases}p_{tt}-c^{2}(\bar{\Theta})\Delta p-b\Delta p_{t}=K(\bar{\Theta}) \left(p^{2}\right)_{tt},&\text{in }\Omega\times(0,T),\\ \rho_{\text{a}}C_{\text{a}}\bar{\Theta}_{t}+\nabla\cdot q+\rho_{\text{b}}C_{ \text{b}}W(\bar{\Theta}-\Theta_{\text{a}})=\mathcal{Q}(p_{t}),&\text{in }\Omega \times(0,T),\\ \tau q_{t}+q+\kappa_{\text{a}}\nabla\bar{\Theta}=0,&\text{in }\Omega \times(0,T).\end{cases} \tag{1.1}\]
Here the acoustic pressure and the temperature fluctuations are denoted respectively by \(p\) and \(\bar{\Theta}\); \(c\) is the speed of sound and \(b>0\) is the sound diffusivity. The function \(K(\bar{\Theta})\) is allowed to depend on \(\bar{\Theta}\) and it is given by
\[K(\bar{\Theta})=\frac{\beta_{\rm acous}}{\rho c^{2}(\bar{\Theta})},\]
where \(\rho\) is the mass density and \(\beta_{\rm acous}\) is the parameter of nonlinearity. The source term in the second equation \(\mathcal{Q}(p_{t})\) represents the acoustic energy absorbed by the tissue. The medium parameters \(\rho_{\rm a},C_{\rm a}\) and \(\kappa_{\rm a}\) stand, respectively, for the ambient density, the ambient heat capacity and thermal conductivity of the tissue. The additional term \(\rho_{\rm b}C_{\rm b}W(\bar{\Theta}-\Theta_{\rm a})\) accounts for the heat loss due to blood circulation, with \(\rho_{\rm b},C_{\rm b}\) being the density and specific heat capacity of blood, and \(W\) expressing the tissue's volumetric perfusion rate measured in milliliters of blood per milliliter of tissue per second.
The second and the third equations in (1.1) constitute the hyperbolic version of the Pennes equation (1.3), where the heat flux is governed by the Cattaneo (or Maxwell-Cattaneo) law. We supplement (1.1) with the initial conditions
\[p|_{t=0}=p_{0},\quad p_{t}|_{t=0}=p_{1},\quad\bar{\Theta}|_{t=0}=\bar{\Theta} _{0},\quad q|_{t=0}=q_{0}\]
and Dirichlet boundary conditions
\[p|_{\partial\Omega}=0,\qquad\bar{\Theta}|_{\partial\Omega}=\Theta_{\rm a},\]
with \(\Theta_{\rm a}\) denoting the ambient temperature, that is typically taken in the human body to be \(37^{\circ}C\); see [4].
When \(c\) and \(K\) are constants, the first equation in (1.1) reduces to the Westervelt equation for the pressure \(p(x,t)\):
\[p_{tt}-c^{2}\Delta p-b\Delta p_{t}=K\left(p^{2}\right)_{tt}. \tag{1.2}\]
Equation (1.2) is widely used in acoustics; it describes the propagation of sound waves in a fluid medium and can be derived from the Navier-Stokes-Fourier model by assuming the Fourier law of heat conduction and taking into account thermoviscous effects. See [27], [5] and [16, Chapter 5] for more details. Significant progress has been made recently toward the understanding of the solutions to the Westervelt equation and their behavior; see [3, 12, 13, 15, 19, 26] and the references therein. The results in [13, 19, 26] established local well-posedness, global well-posedness and asymptotic behavior of solutions, for different types of boundary conditions and in various functional settings. Particularly, in [19, 26] the authors relied on maximal regularity in \(L^{p}\)-spaces to obtain the existence of a unique solution with low regularity assumptions on the initial data. This is feasible thanks to the strong damping represented by the term \(-b\Delta p_{t}\) when \(b>0\), that lends to the Westervelt equation its parabolic character.
Combining the second and the third equations in (1.1), we obtain the hyperbolic Pennes equation (see [11, Eq. 3] and [29, Eq. 7])
\[\begin{split}&\tau\rho_{\rm a}C_{\rm a}\bar{\Theta}_{tt}+(\rho_{ \rm a}C_{\rm a}+\tau\rho_{\rm b}C_{\rm b}W)\bar{\Theta}_{t}+\rho_{\rm b}C_{ \rm b}W(\bar{\Theta}-\Theta_{\rm a})-\kappa_{\rm a}\Delta\bar{\Theta}\\ &=\mathcal{Q}(p_{t})+\tau\partial_{t}\mathcal{Q}(p_{t}).\end{split} \tag{1.3}\]
The terms in (1.3) that appear with the \(\tau\) prefactor arise from the use of the Cattaneo law of heat conduction [2]:
\[\tau q_{t}+q+\kappa_{\rm a}\nabla\bar{\Theta}=0 \tag{1.4}\]
which is a modified version of the classical Fourier law of heat conduction:
\[q+\kappa_{\rm a}\nabla\bar{\Theta}=0. \tag{1.5}\]
Equation (1.5) implies instantaneous thermal energy deposition in the medium. That is, any temperature disturbance causes an instantaneous perturbation in the temperature at each point in the medium. The Cattaneo law was introduced to overcome this drawback of the infinite speed of thermal signals in the Fourier law. The idea is to introduce a time lag into the relationship between the heat flux and the temperature gradient, which results in the term \(\tau q_{t}\) in (1.4), where \(\tau\) is the relaxation time parameter.
Using (1.4) models heat propagation as a damped wave equation. This phenomenon is known as the second-sound effect; it is experimentally observed in materials at very low temperature, where heat seems to propagate as a thermal wave, which is the reason for the name (see the review paper [24]).
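For the reader's convenience, the elimination of the heat flux leading to (1.3) can be made explicit: applying the operator \(1+\tau\partial_{t}\) to the bioheat equation in (1.1) gives

\[\tau\rho_{\rm a}C_{\rm a}\bar{\Theta}_{tt}+\rho_{\rm a}C_{\rm a}\bar{\Theta}_{t}+\nabla\cdot(q+\tau q_{t})+\rho_{\rm b}C_{\rm b}W(\bar{\Theta}-\Theta_{\rm a})+\tau\rho_{\rm b}C_{\rm b}W\bar{\Theta}_{t}=\mathcal{Q}(p_{t})+\tau\partial_{t}\mathcal{Q}(p_{t}),\]

while taking the divergence of (1.4) yields \(\nabla\cdot(q+\tau q_{t})=-\kappa_{\rm a}\Delta\bar{\Theta}\); substituting this into the identity above gives exactly (1.3).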
For \(\tau=0\), equation (1.3) becomes
\[\rho_{\rm a}C_{\rm a}\bar{\Theta}_{t}-\kappa_{\rm a}\Delta\bar{\Theta}+\rho_{ \rm b}C_{\rm b}W(\bar{\Theta}-\Theta_{\rm a})=\mathcal{Q}(p_{t}). \tag{1.6}\]
Equation (1.6) is the parabolic Pennes equation which is a bioheat transfer equation that is widely used for studying heat transfer in biological systems. It takes into account the heat transfer by conduction in the tissues and the convective heat transfer due to blood perfusion. See [23] for the derivation of (1.6).
In [21] Nikolic and Said-Houari considered the Westervelt-Pennes-Fourier system, which is the coupling between the first equation in (1.1) and (1.6). That is, they investigated the system
\[\begin{cases}p_{tt}-c^{2}(\bar{\Theta})\Delta p-b\Delta p_{t}=k(\bar{\Theta}) \left(p^{2}\right)_{tt},\\ \rho_{\rm a}C_{\rm a}\bar{\Theta}_{t}-\kappa_{\rm a}\Delta\bar{\Theta}+\rho_ {\rm b}C_{\rm b}W(\bar{\Theta}-\Theta_{\rm a})=\mathcal{Q}(p_{t})\end{cases} \tag{1.7}\]
in \(\Omega\times(0,T)\) with Dirichlet-Dirichlet boundary conditions. They proved local well-posedness of (1.7) using the energy method together with a fixed point argument. The work in [21] was followed by [22], where the authors proved global existence and asymptotic behavior of the solution of (1.7) under a smallness assumption on the initial data. Using the maximal regularity estimate for parabolic systems, Wilke in [28] improved slightly the regularity assumptions in [21] and also considered the case \(b=b(\bar{\Theta})\).
In this paper, we consider the Westervelt-Pennes-Cattaneo system (1.1) and investigate the local well-posedness and the singular limit as the time relaxation parameter \(\tau\) tends to zero. To state and prove our result and to lighten the notation, we put
\[m=\rho_{\rm a}C_{\rm a}\qquad\text{and}\qquad\ell=\rho_{\rm b}C_{\rm b}W,\]
and make the change of variables \(\Theta=\bar{\Theta}-\Theta_{a}\) in the temperature and denote by
\[k(\Theta)=K(\Theta+\Theta_{\rm a})=\frac{\beta_{\rm acous}}{\rho c^{2}(\Theta+\Theta_{\rm a})}\quad\text{and}\quad h(\Theta)=c^{2}(\Theta+\Theta_{\rm a}) \tag{1.8}\]
to get the following system
\[\begin{cases}p_{tt}-h(\Theta)\Delta p-b\Delta p_{t}=k(\Theta)\left(p^{2}\right)_{tt},&\text{in }\Omega\times(0,T),\\ m\Theta_{t}+\nabla\cdot q+\ell\Theta=\mathcal{Q}(p_{t}),&\text{in }\Omega\times(0,T),\\ \tau q_{t}+q+\kappa_{\text{a}}\nabla\Theta=0,&\text{in }\Omega\times(0,T),\end{cases}\tag{1.9a}\]
complemented with the homogeneous boundary conditions
\[p|_{\partial\Omega}=0,\qquad\Theta|_{\partial\Omega}=0\tag{1.9b}\]
and the initial conditions
\[p|_{t=0}=p_{0},\quad p_{t}|_{t=0}=p_{1},\quad\Theta|_{t=0}=\Theta_{0}:=\bar{\Theta}_{0}-\Theta_{\text{a}},\quad q|_{t=0}=q_{0}.\tag{1.9c}\]
As in [21], here the medium parameters \(c\) and \(K\) in the Westervelt equation are not constant. They are taken to depend explicitly on the temperature in order to account for the fact that the heating generated by the ultrasound waves affects their speed of propagation and the position of the focal region: a phenomenon that is known as thermal lensing [4, 8]. Precisely, we assume this dependence to be polynomial, in agreement with the experimentally observed behavior documented in [1]. Also, for simplicity, we assume that the function \(\mathcal{Q}\) has the form
\[\mathcal{Q}(p_{t})=\frac{2b}{\rho_{\text{a}}C_{\text{a}}^{4}}(p_{t})^{2}\]
although our proof works for quite general \(\mathcal{Q}(p_{t})\) satisfying Assumption 2 in [21].
To establish well-posedness, we carry out an energy analysis for the linearization of the underlying system, which will allow us to apply a fixed-point argument to work out the existence of a unique solution to the nonlinear problem (1.9). In doing so, we encounter two main challenges. On the one hand, we have to take into account the interplay between the pressure and the temperature owing to the coupling of their respective equations. The temperature dependence of the coefficients \(h(\Theta)\) and \(k(\Theta)\) in the pressure equation necessitates that they be kept bounded. That is to say, the function \(\Theta\) needs to be in \(L^{\infty}((0,T)\times\Omega)\). Moreover, note that the first equation in (1.9) is quasilinear. To see this, it suffices to write the term on the right as \(k(\Theta)(p^{2})_{tt}=2k(\Theta)pp_{tt}+2k(\Theta)(p_{t})^{2}\). This brings about the risk of degeneracy of the term \((1-2k(\Theta)p)p_{tt}\), which is usually avoided by imposing a smallness constraint on the acoustic pressure [13]. Therefore, we are led to work with higher-order energies for both the pressure and the temperature. One key ingredient in obtaining the a priori estimates for these energies is the Sobolev embedding theorem, especially the continuous embedding \(H^{2}(\Omega)\hookrightarrow L^{\infty}(\Omega)\). On the other hand, obtaining uniform energy bounds with respect to \(\tau\) for the heat conduction system requires careful attention. Further, considering that in practical situations \(\tau\) takes very small values, it is interesting to investigate the behavior of the system when \(\tau\) goes to zero. In order to do this, we adopt the approach in [14, 20]. Even though the models under consideration in these works involve single equations, the method is quite constructive and could be applied to the coupled problem at hand. The key ideas are to first derive estimates that are uniform with respect to \(\tau\), which will justify taking the limit as \(\tau\) tends to zero; then we make use of the compactness of Sobolev embeddings to show that as \(\tau\) tends to zero, the solution of (1.9) converges to the solution of the system corresponding to \(\tau=0\).
Our paper is organized as follows. The main results are stated in Section 2. In Section 3, we collect some theoretical results that will prove useful in the sequel, and we state the general assumptions on the coefficients in system (1.9). Sections 4 is devoted to the energy analysis of the hyperbolic Pennes equation (the Cattaneo system), while Section 5 treats the linearized Westervelt equation. In Section 6, we prove the local well-posedness of the nonlinear problem (1.9). Finally, in Section 7, we perform the singular limit analysis and show that the solution of the Westervelt-Pennes-Cattaneo model converges to the solution of the Westervelt-Pennes-Fourier system when the time relaxation vanishes.
## 2. Main results
In this section, we state the main results of this paper and give the strategy of the proof. In order to give context to the main results stated below, we begin by specifying the functional setup adopted in this work. We define
\[\mathcal{X}:=X_{p}\times X_{\Theta}\times X_{q},\]
where the spaces \(X_{p},X_{\Theta}\) and \(X_{q}\) are given, respectively as
\[X_{p}= \Big{\{}p\in L^{\infty}(0,T;H^{3}(\Omega)\cap H^{1}_{0}(\Omega)),\] \[\quad p_{t}\in L^{\infty}(0,T;H^{2}(\Omega)\cap H^{1}_{0}(\Omega ))\cap L^{2}(0,T;H^{3}(\Omega)\cap H^{1}_{0}(\Omega)),\] \[\quad p_{tt}\in L^{\infty}(0,T;H^{1}_{0}(\Omega))\cap L^{2}(0,T; H^{2}(\Omega)\cap H^{1}_{0}(\Omega)),\] \[\quad p_{ttt}\in L^{2}(0,T;L^{2}(\Omega))\Big{\}};\] \[X_{\Theta}= \{\Theta\in L^{\infty}(0,T;H^{2}(\Omega)\cap H^{1}_{0}(\Omega)), \Theta_{t}\in L^{\infty}(0,T;H^{1}_{0}(\Omega)),\] \[\quad\Theta_{tt}\in L^{\infty}(0,T;L^{2}(\Omega))\};\] \[X_{q}= \{q\in L^{\infty}(0,T;(H^{1}(\Omega))^{d});q_{t},q_{tt}\in L^{2}( 0,T;(L^{2}(\Omega))^{d})\}. \tag{2.1}\]
Our first result ensures local in time well-posedness, which is uniform with respect to the relaxation parameter \(\tau\).
**Theorem 2.1**.: _Let \(T>0\) and \(\bar{\tau}>0\) be a fixed small constant. Let \(\tau\in(0,\bar{\tau}]\) and assume that_
\[(p_{0},p_{1})\in H^{3}(\Omega)\cap H^{1}_{0}(\Omega)\times H^{2}(\Omega)\cap H ^{1}_{0}(\Omega),\]
\[(\Theta_{0},q_{0})\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\times(H^{1}(\Omega) )^{d}.\]
_There exists \(\delta=\delta(T)>0\) such that if_
\[\left\|p_{0}\right\|_{H^{3}}+\left\|p_{1}\right\|_{H^{2}}+\left\|p_{tt}(0) \right\|_{H^{1}}\leq\delta\]
_then system (1.9) has a unique solution \((p,\Theta,q)\in\mathcal{X}\)._
Let us now outline the main steps in the proof of Theorem 2.1 and list some comments about the above result.
1. We consider a linearization of the underlying model (1.9) that will see the Westervelt equation decoupled from the Cattaneo system for heat transfer. This allows us to treat the first equation in (1.9) separately from the second and
third equations. However, the decoupling of the linearized system does not allow us to transfer the damping induced by the damped wave equation for the temperature to the linearized Westervelt equation. We use Galerkin approximations to prove the existence of a unique solution for the linearized Cattaneo system (4.1), together with uniform energy estimates with respect to \(\tau\). Next, motivated by [21], we often rely on the Sobolev embedding theorem to conduct an energy analysis yielding the well-posedness of the linearized Westervelt equation in a finite time horizon \(T>0\). In addition, we derive some energy bounds that will be useful in the analysis of the nonlinear problem.
2. Having all the necessary ingredients, we can tackle the nonlinear coupled problem (1.9) by defining the solution of the system (1.9) as the fixed point of a carefully defined mapping. Bringing together the already established energy estimates and using the Banach fixed-point theorem, we prove well-posedness for the nonlinear problem and show that the solution does not degenerate under a smallness assumption on the initial data of the acoustic pressure.
3. Note that the smallness condition in Theorem 2.1 is imposed only on the pressure data and not on the temperature data. This seems necessary to avoid the degeneracy of the Westervelt equation.
4. Using the same method, we can also treat the case \(b=b(\Theta)\). Our assumption that \(b\) is constant is made only to avoid technicalities in the proof. Also, the condition \(b>0\) is crucial in our analysis. It is an important open problem to study the case \(b=0\).
In the following theorem, we state the convergence of the solution of the Westervelt-Pennes-Cattaneo model to the solution of the Westervelt-Pennes-Fourier model as the relaxation time \(\tau\) tends to zero; as a consequence, we recover in the limit the well-posedness result in [21]. To facilitate relating the limit to the solution of the Westervelt-Pennes-Fourier model, in this part we use the wave equation (1.3) instead of Cattaneo's system. We denote by \(\Theta_{1}^{\tau}:=\Theta_{t}^{\tau}(0,x)\).
**Theorem 2.2**.: _Given \(T>0\) and \(\tau\in(0,\bar{\tau}]\). Let the initial data_
\[(p_{0}^{\tau},p_{1}^{\tau})\in H^{3}(\Omega)\cap H^{1}_{0}(\Omega )\times H^{2}(\Omega)\cap H^{1}_{0}(\Omega),\] \[(\Theta_{0}^{\tau},\Theta_{1}^{\tau})\in H^{2}(\Omega)\cap H^{1} _{0}(\Omega)\times H^{1}_{0}(\Omega),\]
_satisfy the assumptions of Theorem 2.1. Then, the family of solutions \((p^{\tau},\Theta^{\tau})_{\tau\in(0,\bar{\tau}]}\) converges weakly (see (7.4), (7.2)) to the solution \((p,\Theta)\in X_{p}\times X_{\Theta}\) of the Westervelt-Pennes-Fourier system:_
\[\begin{cases}(1-2k(\Theta)p)p_{tt}-h(\Theta)\Delta p-b\Delta p_{t}=2k(\Theta) (p_{t})^{2},&\text{in }\Omega\quad\times(0,T),\\ m\Theta_{t}+\ell\Theta-\kappa_{\text{a}}\Delta\Theta=\mathcal{Q}(p_{t}),&\text {in }\Omega\quad\times(0,T),\\ (p,p_{t})|_{t=0}=(p_{0},p_{1}),\quad\Theta|_{t=0}=\Theta_{0},&\end{cases} \tag{2.2}\]
_with homogeneous Dirichlet conditions (1.9b)._
It is essential to note that the main difficulty in proving Theorem 2.2 lies in obtaining energy estimates that are uniform with respect to \(\tau\). In other words, our goal is to prevent the constants from becoming infinitely large as \(\tau\) approaches zero in the estimates.
This justifies the process of passing to the limit when the parameter \(\tau\) tends to zero. The uniformity in \(\tau\) is not a requirement for the well-posedness result in Theorem 2.1, but it is essential in the proof of Theorem 2.2. Lastly, we remark that as the estimates are dependent on the time \(T>0\), the existence of solutions is local in nature. However, we can expect to reach global well-posedness by taking the approach in [22].
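Although the analysis that follows is purely functional-analytic, the singular limit of Theorem 2.2 can already be observed in a toy computation. The sketch below is entirely ours and not part of the paper: a 1D geometry with unit coefficients, a constant placeholder source standing in for \(\mathcal{Q}(p_{t})\), and a crude semi-implicit finite-difference scheme whose flux update stays well defined uniformly in \(\tau\) and reduces to the Fourier law at \(\tau=0\).

```python
import numpy as np

def bioheat_1d(tau, m=1.0, ell=1.0, kappa=1.0, N=100, T=0.2, dt=2e-5):
    """Cattaneo/Fourier bioheat system on (0,1) with Theta = 0 at both walls.
    Semi-implicit relaxation: q_new = (tau*q - dt*kappa*Theta_x)/(tau + dt),
    which becomes the Fourier law q = -kappa*Theta_x when tau = 0."""
    dx = 1.0 / N
    xc = (np.arange(N) + 0.5) * dx         # cell centers carrying Theta
    theta = np.sin(np.pi * xc)             # initial temperature profile
    q = np.zeros(N + 1)                    # heat flux on cell faces
    src = np.ones(N)                       # toy stand-in for the source Q(p_t)
    for _ in range(int(T / dt)):
        grad = np.empty(N + 1)
        grad[1:-1] = (theta[1:] - theta[:-1]) / dx
        grad[0] = theta[0] / (0.5 * dx)    # Dirichlet Theta(0) = 0
        grad[-1] = -theta[-1] / (0.5 * dx) # Dirichlet Theta(1) = 0
        q = (tau * q - dt * kappa * grad) / (tau + dt)
        theta += dt / m * (-(q[1:] - q[:-1]) / dx - ell * theta + src)
    return theta

ref = bioheat_1d(tau=0.0)                  # the Westervelt-Pennes-Fourier limit
for tau in (1e-1, 1e-2, 1e-3):
    err = np.max(np.abs(bioheat_1d(tau) - ref))
    print(f"tau = {tau:.0e}:  max|Theta_tau - Theta_0| = {err:.3e}")
```

The printed deviations shrink with \(\tau\), mirroring numerically the convergence established below.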
## 3. Preliminaries and assumptions
In this section, we introduce a few notations and collect some helpful embedding results and inequalities that we will repeatedly use in the proofs.
### Notation
Throughout the paper, we assume that \(\Omega\subset\mathbb{R}^{d}\), where \(d\in\{1,2,3\}\), is a bounded and smooth domain. We denote by \(T>0\) the final propagation time. The letter \(C\) denotes a generic positive constant that does not depend on time, and can have different values on different occasions. We write \(f\lesssim g\) when there exists a constant \(C>0\), independent of the parameters of interest, such that \(f\leq Cg\). We often omit the spatial and temporal domain when writing norms; for example, \(\|\cdot\|_{L^{p}L^{q}}\) denotes the norm in \(L^{p}(0,T;L^{q}(\Omega))\).
### Inequalities and embedding results
In the upcoming analysis, we shall employ the continuous embeddings \(H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)\), \(H^{1}(\Omega)\hookrightarrow L^{6}(\Omega)\) and \(H^{2}(\Omega)\hookrightarrow L^{\infty}(\Omega)\). In particular, using Poincare's inequality we obtain for \(v\in H^{1}_{0}(\Omega)\) (see [25, Theorem 7.18])
\[\text{if }d>2,\quad\|v\|_{L^{p}}\leq C\|\nabla v\|_{L^{2}}\quad \text{for}\quad 2\leq p\leq\frac{2d}{d-2},\] \[\text{if }d=2,\quad\|v\|_{L^{p}}\leq C\|\nabla v\|_{L^{2}}\quad \text{for}\quad 2\leq p<\infty.\]
Moreover, taking into account the boundedness of the operator \((-\Delta)^{-1}:L^{2}(\Omega)\to H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\), we find the inequality
\[\|v\|_{L^{\infty}}\leq C_{1}\|v\|_{H^{2}}\leq C_{2}\|\Delta v\|_{L^{2}}.\]
We will also call on the 1D embedding \(H^{1}(0,T;L^{2}(\Omega))\hookrightarrow C(0,T;L^{2}(\Omega))\). In fact, this last embedding, combined with Poincare's inequality, yields for all \(v\in H^{1}(0,T;L^{2}(\Omega))\)
\[\max_{t\in[0,T]}\|v(t)\|_{L^{2}} \leq C(\|v\|_{L^{2}L^{2}}+\|v_{t}\|_{L^{2}L^{2}})\] \[\leq C(\|\nabla v\|_{L^{2}L^{2}}+\|v_{t}\|_{L^{2}L^{2}}), \tag{3.1}\]
where the constant \(C>0\) depends only on \(T\) (see, e.g. [6, Theorem 2, p. 286]).
We recall Young's \(\varepsilon\)-inequality
\[xy\leq\varepsilon x^{n}+C(\varepsilon)y^{m},\quad\text{where}\quad x,y>0, \quad 1<m,n<\infty,\quad\frac{1}{m}+\frac{1}{n}=1,\]
and \(C(\varepsilon)=(\varepsilon n)^{-m/n}m^{-1}\).
Further, we will make use of Ladyzhenskaya's inequality for \(u\in H^{1}(\Omega)\)
\[\|u\|_{L^{4}}\leq C\|u\|_{L^{2}}^{1-d/4}\|u\|_{H^{1}}^{d/4},\qquad 1\leq d\leq 4. \tag{3.2}\]
We state a version of Gronwall's inequality which will be utilized in the proofs and has been provided in [21].
**Lemma 3.1**.: _Let \(I=[0,t]\) and let \(\alpha,\beta:I\to\mathbb{R}\) be locally integrable functions. Given \(u,v:I\to\mathbb{R}\) such that \(v\) is non-negative and integrable and \(u\) is in \(C^{1}(I)\). We assume that_
\[u^{\prime}(t)+v(t)\leq\alpha(t)u(t)+\beta(t),\text{ for }t\in I,\quad u(0)=u_{0}.\]
_Then, it holds that_
\[u(t)+\int_{0}^{t}v(s)\,\mathrm{d}s\leq u_{0}e^{A(t)}+\int_{0}^{t}\beta(s)e^{A( t)-A(s)}\,\mathrm{d}s,\]
_where_
\[A(t)=\int_{0}^{t}\alpha(s)\,\mathrm{d}s.\]
### Assumptions
In accordance with the observed polynomial growth of the speed of sound \(c=c(\Theta)\), we make the following assumptions on the functions \(h\) and \(k\).
We assume that \(h\in C^{2}(\mathbb{R})\) and there exists \(h_{1}>0\) such that
(H1) \[h(s)\geq h_{1},\quad\forall s\in\mathbb{R}.\]
Moreover, assume that there exist \(\gamma_{1}>0\) and \(C>0\), such that
(H2) \[|h^{\prime\prime}(s)|\leq C(1+|s|^{\gamma_{1}}),\quad\forall s\in\mathbb{R}.\]
Using Taylor's formula, we also have
(H3) \[|h^{\prime}(s)|\leq C(1+|s|^{1+\gamma_{1}}),\quad\forall s\in\mathbb{R}.\]
Since the function \(k\) is related to the speed of sound by the formula (1.8), it follows that
(K1) \[|k(s)|\leq k_{1}:=\frac{\beta_{\text{acous}}}{\rho h_{1}}.\]
Further, we have
\[|k^{\prime\prime}(s)|\lesssim k_{1}^{2}|h^{\prime\prime}(s)|+k_{1}^{3}|h^{ \prime}(s)|^{2}\lesssim k_{1}^{2}(1+|s|^{\gamma_{1}})+k_{1}^{3}(1+|s|^{1+ \gamma_{1}})^{2},\]
which by using Taylor's formula, implies that there exists \(\gamma_{2}>0\), such that
(K2) \[|k^{\prime}(s)|\lesssim(1+|s|^{1+\gamma_{2}}),\qquad|k^{\prime\prime}(s)| \lesssim(1+|s|^{\gamma_{2}}).\]
## 4. The hyperbolic bioheat equation
In this section, we consider the hyperbolic heat equation
\[\begin{cases}m\Theta_{t}+\nabla\cdot q+\ell\Theta=f,&\text{ in }\Omega\times(0,T),\\ \tau q_{t}+q+\kappa_{\text{a}}\nabla\Theta=0,&\text{ in }\Omega\times(0,T), \end{cases} \tag{4.1}\]
together with the initial conditions in (1.9c) and the boundary conditions in (1.9b). Our main goal is to prove a priori estimates under minimal assumptions on the initial data \(\Theta_{0}\) and \(q_{0}\) and on the source term \(f\).
In order to state and prove our main result, we define the total energy associated to (4.1) as
\[E^{\tau}[\Theta,q](t):=\sum_{k=0}^{2}E_{k}[\Theta,q](t),\quad t\geq 0 \tag{4.2}\]
where the energies \(E_{k},k=0,1,2\) are given by
\[E_{k}[\Theta,q](t):=\frac{1}{2}\Big{(}m\kappa_{\mathrm{a}}\|\partial_{t}^{k} \Theta(t)\|_{L^{2}}^{2}+\tau\|\partial_{t}^{k}q(t)\|_{L^{2}}^{2}\Big{)}. \tag{4.3}\]
We also define the associated dissipation
\[D[\Theta,q](t):=\sum_{k=0}^{2}D_{k}[\Theta,q](t) \tag{4.4}\]
with
\[D_{k}[\Theta,q](t):=\ell\kappa_{\mathrm{a}}\|\partial_{t}^{k}\Theta(t)\|_{L^{ 2}}^{2}+\|\partial_{t}^{k}q(t)\|_{L^{2}}^{2},\quad k=0,1,2. \tag{4.5}\]
Since the coefficients in (1.9a) depend on \(\Theta\), we need to establish some higher-order estimates on \(\Theta\), which will allow us to control the \(L^{\infty}\)-norm of \(\Theta\) through the Sobolev embedding theorem. Hence, we introduce the following energy in terms of \(\Theta\) only
\[\mathcal{E}[\Theta](t):=\mathcal{E}_{0}[\Theta](t)+\mathcal{E}_{1}[\Theta](t ),\quad t\geq 0\]
where \(\mathcal{E}_{0}\) and \(\mathcal{E}_{1}\) are defined as follows
\[\begin{cases}\mathcal{E}_{0}[\Theta](t):=\frac{m\kappa_{\mathrm{a}}}{2}(\| \Theta(t)\|_{L^{2}}^{2}+\|\Theta_{t}(t)\|_{L^{2}}^{2}+\|\Theta_{tt}(t)\|_{L^{2 }}^{2}),\\ \mathcal{E}_{1}[\Theta](t):=\frac{m+\tau\ell}{2}\|\nabla\Theta(t)\|_{L^{2}}^{2 }+\kappa_{\mathrm{a}}\|\nabla\Theta_{t}(t)\|_{L^{2}}^{2}+\kappa_{\mathrm{a}} \|\Delta\Theta(t)\|_{L^{2}}^{2}.\end{cases} \tag{4.6}\]
The dissipation rate associated with \(\mathcal{E}[\Theta](t)\) is
\[\mathcal{D}[\Theta](t):=\mathcal{D}_{0}[\Theta](t)+\mathcal{D}_{1}[\Theta](t),\]
with
\[\begin{cases}\mathcal{D}_{0}[\Theta](t):=\ell\kappa_{\mathrm{a}}(\|\Theta(t) \|_{L^{2}}^{2}+\|\Theta_{t}(t)\|_{L^{2}}^{2}+\|\Theta_{tt}(t)\|_{L^{2}}^{2}), \\ \mathcal{D}_{1}[\Theta](t):=\ell\|\nabla\Theta(t)\|_{L^{2}}^{2}+\kappa_{ \mathrm{a}}\|\nabla\Theta_{t}(t)\|_{L^{2}}^{2}+\kappa_{\mathrm{a}}\|\Delta \Theta(t)\|_{L^{2}}^{2}.\end{cases} \tag{4.7}\]
The definitions of \(\mathcal{E}[\Theta](t)\) and \(\mathcal{D}[\Theta](t)\) are inspired by a damped wave equation for \(\Theta\) that can be obtained by combining the two equations in system (4.1); see equation (4.15) below.
The following lemma will allow us to estimate \(\mathcal{E}[\Theta]\) in terms of \(E^{\tau}[\Theta,q]\) uniformly with respect to \(\tau\).
**Lemma 4.1**.: _Let \(\bar{\tau}>0\) be a fixed small number. Then for all \(t\geq 0\), the estimate_
\[\mathcal{E}[\Theta](t)\lesssim(1+\bar{\tau}+\bar{\tau}^{2})\big{(}E^{\bar{ \tau}}[\Theta,q](t)+\|f\|_{H^{1}L^{2}}^{2}\big{)}. \tag{4.8}\]
_holds uniformly in \(\tau\in(0,\bar{\tau}]\). The hidden constant in (4.8) does not depend on \(\tau\)._
Proof.: It is clear that we have for all \(t\geq 0\),
\[\mathcal{E}_{0}[\Theta](t)\leq E^{\tau}[\Theta,q](t). \tag{4.9}\]
Now, we want to show that \(\mathcal{E}_{1}[\Theta](t)\) is also bounded by \(E^{\tau}[\Theta,q](t)\), we can simply make use of the second equation in (4.1) to get
\[\kappa_{\mathrm{a}}^{2}\|\nabla\Theta\|_{L^{2}}^{2}\leq 2\tau^{2}\|q_{t}\|_{L^{ 2}}^{2}+2\|q\|_{L^{2}}^{2}\leq C(\tau)E^{\tau}[\Theta,q].\]
However the constant \(C(\tau)\to\infty\) as \(\tau\to 0\). To avoid this, we take the time derivative \(\partial_{t}^{k},k=0,1\) of the system (4.1) to obtain
\[\begin{cases}m\partial_{t}^{k}\Theta_{t}+\nabla\cdot(\partial_{t}^{k}q)+\ell \partial_{t}^{k}\Theta=\partial_{t}^{k}f,&\text{in }\Omega\times(0,T),\\ \tau\partial_{t}^{k}q_{t}+\partial_{t}^{k}q+\kappa_{\text{a}}\nabla(\partial_ {t}^{k}\Theta)=0,&\text{in }\Omega\times(0,T).\end{cases} \tag{4.10}\]
We multiply the second equation in (4.10) by \(\nabla(\partial_{t}^{k}\Theta),k=0,1\) and integrate over \(\Omega\). Using integration by parts, we obtain
\[\kappa_{\text{a}}\|\nabla(\partial_{t}^{k}\Theta)\|_{L^{2}}^{2}=-\tau\int_{ \Omega}\partial_{t}^{k}q_{t}\cdot\nabla(\partial_{t}^{k}\Theta)\,\mathrm{d}x +\int_{\Omega}\nabla\cdot(\partial_{t}^{k}q)\partial_{t}^{k}\Theta\,\mathrm{d}x. \tag{4.11}\]
Note that we can recover the last term on the right from the first equation in (4.10) by testing with \(\partial_{t}^{k}\Theta\) and integrating over \(\Omega\). Hence, we get
\[\int_{\Omega}\nabla\cdot(\partial_{t}^{k}q)\partial_{t}^{k}\Theta\,\mathrm{d }x=-m\int_{\Omega}\partial_{t}^{k}\Theta_{t}\partial_{t}^{k}\Theta\,\mathrm{d }x-\ell\|\partial_{t}^{k}\Theta\|_{L^{2}}^{2}+\int_{\Omega}\partial_{t}^{k}f \partial_{t}^{k}\Theta\,\mathrm{d}x. \tag{4.12}\]
Collecting (4.11) and (4.12), we have
\[\kappa_{\text{a}}\|\nabla(\partial_{t}^{k}\Theta)\|_{L^{2}}^{2} =-\tau\int_{\Omega}\partial_{t}^{k}q_{t}\cdot\nabla(\partial_{t}^ {k}\Theta)\,\mathrm{d}x-m\int_{\Omega}\partial_{t}^{k}\Theta_{t}\partial_{t}^ {k}\Theta\,\mathrm{d}x\] \[\qquad\qquad-\ell\|\partial_{t}^{k}\Theta\|_{L^{2}}^{2}+\int_{ \Omega}\partial_{t}^{k}f\partial_{t}^{k}\Theta\,\mathrm{d}x.\]
Thus, using the Cauchy-Schwarz and Young inequalities, it follows that
\[\begin{split}\frac{\kappa_{\text{a}}}{2}\|\nabla\partial_{t}^{k }\Theta\|_{L^{2}}^{2}+\frac{\ell}{2}\|\partial_{t}^{k}\Theta\|_{L^{2}}^{2}& \leq\frac{\tau^{2}}{2\kappa_{\text{a}}}\|\partial_{t}^{k}q_{t} \|_{L^{2}}^{2}+\frac{m^{2}}{\ell}\|\partial_{t}^{k}\Theta_{t}\|_{L^{2}}^{2}+ \frac{1}{\ell}\|\partial_{t}^{k}f\|_{L^{2}}^{2}\\ &\leq\Big{(}\frac{2m}{\ell\kappa_{\text{a}}}+\frac{\tau}{\kappa_ {\text{a}}}\Big{)}E^{\tau}[\Theta,q]+\frac{1}{\ell}\|\partial_{t}^{k}f\|_{L^{2 }}^{2},\quad k=0,1.\end{split} \tag{4.13}\]
The above estimate also implies
\[\frac{\tau\ell}{2}\|\nabla\Theta\|_{L^{2}}^{2}\lesssim(\tau+\tau^{2})E^{\tau} [\Theta,q]+\tau\|f\|_{L^{2}}^{2}. \tag{4.14}\]
Next, we focus on estimating the term \(\|\Delta\Theta\|_{L^{2}}\) in \(\mathcal{E}_{1}[\Theta](t)\). First, we take the time derivative of the first equation in (4.1) and apply the divergence operator to the second equation, so that we have
\[\begin{cases}m\Theta_{tt}+\nabla\cdot q_{t}+\ell\Theta_{t}=f_{t},\\ \tau\nabla\cdot q_{t}+\nabla\cdot q+\kappa_{\text{a}}\Delta\Theta=0.\end{cases}\]
Combining these two equations, and using the first equation in (4.1), we obtain
\[\tau m\Theta_{tt}+(m+\tau\ell)\Theta_{t}+\ell\Theta-\kappa_{\text{a}}\Delta \Theta=f+\tau f_{t}. \tag{4.15}\]
From here, we infer that
\[\begin{split}\kappa_{\text{a}}^{2}\|\Delta\Theta\|_{L^{2}}^{2}& \lesssim\ell^{2}\|\Theta\|_{L^{2}}^{2}+(m+\tau\ell)^{2}\|\Theta_{t}\|_{L^{2 }}^{2}+\tau^{2}m^{2}\|\Theta_{tt}\|_{L^{2}}^{2}+\|f\|_{L^{2}}^{2}+\tau^{2}\|f_{ t}\|_{L^{2}}^{2}\\ &\lesssim(1+\tau^{2})\big{(}E^{\tau}[\Theta,q]+\|f\|_{H^{1}L^{2}} ^{2}\big{)}.\end{split} \tag{4.16}\]
Putting together the estimates (4.9), (4.13), (4.14), (4.16) and noting that \(\tau\in(0,\bar{\tau}]\), we get the estimate (4.8). This completes the proof of Lemma 4.1.
**Proposition 4.1**.: _Given \(T>0\), and \(f\in H^{2}(0,T;L^{2}(\Omega))\). Assume that the initial data satisfy_
\[(\Theta_{0},q_{0})\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\times(H^{1}(\Omega))^{ d}.\]
_Then, the system (4.1) has a unique solution \((\Theta,q)\in X_{\Theta}\times X_{q}\). Furthermore, the solution satisfies_
\[\mathcal{E}[\Theta](t)+\int_{0}^{t}\mathcal{D}[\Theta](s)\,\mathrm{d}s\lesssim (1+\bar{\tau}+\bar{\tau}^{2})\big{(}E^{\bar{\tau}}[\Theta,q](0)+\|f\|_{H^{2}L^ {2}}^{2}\big{)} \tag{4.17}\]
_and_
\[\begin{split}&\|q(t)\|_{H^{1}}^{2}+\sum_{k=0}^{2}\int_{0}^{t}\| \partial_{t}^{k}q(s)\|_{L^{2}}^{2}\,\mathrm{d}s\\ &\lesssim\!\|q_{0}\|_{H^{1}}^{2}+(1+\bar{\tau}+\bar{\tau}^{2}) \big{(}E^{\bar{\tau}}[\Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}\big{)}\end{split} \tag{4.18}\]
_for all \(0\leq t\leq T\), where the hidden constants in (4.17) and (4.18) are uniform with respect to \(\tau\in(0,\bar{\tau}]\)._
We rely on the Faedo-Galerkin approach to prove the result above. In the first step, we establish uniform _a priori_ estimates for the approximations of the solution \((\Theta,q)\). Precisely, we take the eigenfunctions of the Dirichlet-Laplacian as approximations for the temperature \(\Theta\). As for the heat flux \(q\), we exploit the fact that \((H^{1}(\Omega))^{d}\) is separable, yielding the existence of a dense sequence that can be used to approximate \(q\) (see [17]). In the second step, since the estimates are uniform, we can pass to the limit as in [6, Chapter 7] (see also [17, Chapter 1]) to obtain the existence of solutions. Further, uniqueness follows by noting that the only solution to the homogeneous problem (\(f=0\), \(\Theta_{0}=0\) and \(q_{0}=0\)) is \((\Theta,q)=(0,0)\). Lastly, from the weak and weak-\(\star\) lower semi-continuity of norms, we get that the estimate (4.17) also holds for the solution \((\Theta,q)\). Since the approach is quite classical, our emphasis in the following is on the energy analysis, and we refer to [6, 17] for the procedure of passing to the limit. To start with, we prove the following intermediate estimates.
**Lemma 4.2**.: _Let \(\tau\in(0,\bar{\tau}]\). Assume that \(\partial_{t}^{k}f(t)\in L^{2}(\Omega)\), \(k=0,1,2\), for all \(t\geq 0\). Then for all \(t\geq 0\), we have the following estimates_
\[\frac{\mathrm{d}}{\mathrm{d}t}E_{k}[\Theta,q](t)+D_{k}[\Theta,q](t)\lesssim\| \partial_{t}^{k}f\|_{L^{2}}^{2},\quad k=0,1,2, \tag{4.19}\]
_where the hidden constant is independent of \(\tau\)._
Proof.: Applying \(\partial_{t}^{k},\ k=0,1,2\) to system (4.1), we get
\[\begin{cases}m\partial_{t}^{k}\Theta_{t}+\nabla\cdot\partial_{t}^{k}q+\ell \partial_{t}^{k}\Theta=\partial_{t}^{k}f,&\text{in }\Omega\times(0,T),\\ \tau\partial_{t}^{k}q_{t}+\partial_{t}^{k}q+\kappa_{\mathrm{a}}\nabla\partial _{t}^{k}\Theta=0,&\text{in }\Omega\times(0,T).\end{cases} \tag{4.20}\]
Multiplying the first equation in (4.20) by \(\kappa_{\mathrm{a}}\partial_{t}^{k}\Theta\) and integrating over \(\Omega\), we obtain
\[\frac{m\kappa_{\mathrm{a}}}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\partial_{t}^{k} \Theta\|_{L^{2}}^{2}+\kappa_{\mathrm{a}}\int_{\Omega}\partial_{t}^{k}\Theta \nabla\cdot\partial_{t}^{k}q\,\mathrm{d}x+\ell\kappa_{\mathrm{a}}\|\partial _{t}^{k}\Theta\|_{L^{2}}^{2}=\kappa_{\mathrm{a}}\int_{\Omega}\partial_{t}^{k}f \partial_{t}^{k}\Theta\,\mathrm{d}x.\]
Integrating by parts the second term on the left and using the fact that \(\partial_{t}^{k}\Theta|_{\partial\Omega}=0\) (due to (1.9b)), we obtain
\[\frac{m\kappa_{\mathrm{a}}}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\partial_{t}^{k} \Theta\|_{L^{2}}^{2}-\kappa_{\mathrm{a}}\int_{\Omega}\partial_{t}^{k}q\cdot \nabla\partial_{t}^{k}\Theta\,\mathrm{d}x+\ell\kappa_{\mathrm{a}}\|\partial_{t }^{k}\Theta\|_{L^{2}}^{2}=\kappa_{\mathrm{a}}\int_{\Omega}\partial_{t}^{k}f \partial_{t}^{k}\Theta\,\mathrm{d}x. \tag{4.21}\]
Next, multiplying the second equation in (4.20) by \(\partial_{t}^{k}q\) and integrating over \(\Omega\), we get
\[\frac{\tau}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\partial_{t}^{k}q\|_{L^{2}}^{2} +\|\partial_{t}^{k}q\|_{L^{2}}^{2}+\kappa_{\mathrm{a}}\int_{\Omega}\partial_{ t}^{k}q\cdot\nabla\partial_{t}^{k}\Theta\,\mathrm{d}x=0. \tag{4.22}\]
Collecting (4.21) and (4.22) yields
\[\frac{\mathrm{d}}{\mathrm{d}t}E_{k}[\Theta,q]+D_{k}[\Theta,q]=\kappa_{ \mathrm{a}}\int_{\Omega}\partial_{t}^{k}f\partial_{t}^{k}\Theta\,\mathrm{d}x,\]
where \(E_{k}[\Theta,q]\) and \(D_{k}[\Theta,q]\) are given by (4.3), (4.5), respectively. Using the Cauchy-Schwarz inequality together with Young's inequality on the right-hand side, \(\kappa_{\mathrm{a}}\int_{\Omega}\partial_{t}^{k}f\,\partial_{t}^{k}\Theta\,\mathrm{d}x\leq\frac{\ell\kappa_{\mathrm{a}}}{2}\|\partial_{t}^{k}\Theta\|_{L^{2}}^{2}+\frac{\kappa_{\mathrm{a}}}{2\ell}\|\partial_{t}^{k}f\|_{L^{2}}^{2}\), and absorbing the first term into \(D_{k}[\Theta,q]\), gives the desired estimate (4.19).
**Remark 1**.: _Recalling (4.2) and using (4.19), we obtain_
\[\frac{\mathrm{d}}{\mathrm{d}t}E^{\tau}[\Theta,q]+cE^{\tau}[\Theta,q]\lesssim\|f \|_{L^{2}}^{2}+\|f_{t}\|_{L^{2}}^{2}+\|f_{tt}\|_{L^{2}}^{2} \tag{4.23}\]
_with_
\[c=\min\left\{\frac{\ell}{m},\frac{2}{\tau}\right\}.\]
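_Indeed, assuming that \(E_{k}\) and \(D_{k}\) take the forms read off from (4.21)-(4.22), namely \(E_{k}[\Theta,q]=\frac{m\kappa_{\mathrm{a}}}{2}\|\partial_{t}^{k}\Theta\|_{L^{2}}^{2}+\frac{\tau}{2}\|\partial_{t}^{k}q\|_{L^{2}}^{2}\), with \(\frac{\ell\kappa_{\mathrm{a}}}{2}\|\partial_{t}^{k}\Theta\|_{L^{2}}^{2}+\|\partial_{t}^{k}q\|_{L^{2}}^{2}\) remaining on the dissipative side after the absorption step, one checks directly that_

\[\frac{\ell\kappa_{\mathrm{a}}}{2}\|\partial_{t}^{k}\Theta\|_{L^{2}}^{2}+\|\partial_{t}^{k}q\|_{L^{2}}^{2}=\frac{\ell}{m}\cdot\frac{m\kappa_{\mathrm{a}}}{2}\|\partial_{t}^{k}\Theta\|_{L^{2}}^{2}+\frac{2}{\tau}\cdot\frac{\tau}{2}\|\partial_{t}^{k}q\|_{L^{2}}^{2}\geq cE_{k}[\Theta,q],\]

_which, summed over \(k=0,1,2\), gives (4.23)._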
_Multiplying (4.23) by \(e^{ct}\), we obtain_
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(e^{ct}E^{\tau}[\Theta,q]\right)(t)\lesssim e ^{ct}(\|f\|_{L^{2}}^{2}+\|f_{t}\|_{L^{2}}^{2}+\|f_{tt}\|_{L^{2}}^{2}),\quad t\geq 0.\]
_Hence, integrating in time, we have_
\[E^{\tau}[\Theta,q](t) \lesssim e^{-ct}E^{\tau}[\Theta,q](0)+\int_{0}^{t}e^{-c(t-s)}(\| f(s)\|_{L^{2}}^{2}+\|f_{t}(s)\|_{L^{2}}^{2}+\|f_{tt}(s)\|_{L^{2}}^{2})\, \mathrm{d}s\] \[\lesssim e^{-ct}E^{\tau}[\Theta,q](0)+\frac{1-e^{-ct}}{c}\|f\|_{H ^{2}L^{2}}^{2}. \tag{4.24}\]
_It is clear from (4.24) that if \(f\in H^{2}(0,t;L^{2}(\Omega))\), then \(E^{\tau}[\Theta,q](t)\) remains uniformly bounded in time, with the contribution of the initial energy decaying exponentially._
### Proof of Proposition 4.1
First, we integrate the estimate (4.19) in time, sum over \(k=0,1,2\), and recall (4.2) and (4.4) to find
\[E^{\tau}[\Theta,q](t)+\int_{0}^{t}D[\Theta,q](s)\,\mathrm{d}s\lesssim E^{\tau }[\Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}, \tag{4.25}\]
which also implies, by recalling (4.6) and (4.7), that
\[\mathcal{E}_{0}[\Theta](t)+\int_{0}^{t}\mathcal{D}_{0}[\Theta](s)\,\mathrm{d} s\lesssim E^{\tau}[\Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}. \tag{4.26}\]
On the other hand, applying the divergence operator to the second equation in (4.1) and testing the resulting equation by \(\Delta\Theta\), we obtain for all \(t\geq 0\)
\[\tau\int_{\Omega}\nabla\cdot q_{t}\Delta\Theta\,\mathrm{d}x+\int_{\Omega}\nabla \cdot q\Delta\Theta\,\mathrm{d}x+\kappa_{\mathrm{a}}\|\Delta\Theta\|_{L^{2}}^{2 }=0. \tag{4.27}\]
Next, we multiply the first equation in (4.1) by \(\Delta\Theta\) and integrate by parts, to get
\[\int_{\Omega}\nabla\cdot q\Delta\Theta\,\mathrm{d}x=\frac{m}{2}\frac{\mathrm{d }}{\mathrm{d}t}\|\nabla\Theta\|_{L^{2}}^{2}+\ell\|\nabla\Theta\|_{L^{2}}^{2}+ \int_{\Omega}f\Delta\Theta\,\mathrm{d}x. \tag{4.28}\]
Further, differentiating the first equation in (4.1) with respect to \(t\), we find
\[m\Theta_{tt}+\nabla\cdot q_{t}+\ell\Theta_{t}=f_{t}.\]
Testing the above equation by \(\tau\Delta\Theta\) and integrating over \(\Omega\), it follows
\[\tau\int_{\Omega}\nabla\cdot q_{t}\Delta\Theta\,\mathrm{d}x=-\tau m\int_{ \Omega}\Theta_{tt}\Delta\Theta\,\mathrm{d}x+\frac{\tau\ell}{2}\frac{\mathrm{d }}{\mathrm{d}t}\|\nabla\Theta\|_{L^{2}}^{2}+\tau\int_{\Omega}f_{t}\Delta\Theta \,\mathrm{d}x. \tag{4.29}\]
Plugging (4.28) and (4.29) into (4.27), we deduce
\[\frac{1}{2}(m+\tau\ell)\frac{\mathrm{d}}{\mathrm{d}t}\|\nabla \Theta\|_{L^{2}}^{2}+\ell\|\nabla\Theta\|_{L^{2}}^{2}+\kappa_{\mathrm{a}}\| \Delta\Theta\|_{L^{2}}^{2}\] \[=\tau m\int_{\Omega}\Theta_{tt}\Delta\Theta\,\mathrm{d}x-\int_{ \Omega}f\Delta\Theta\,\mathrm{d}x-\tau\int_{\Omega}f_{t}\Delta\Theta\, \mathrm{d}x.\]
Therefore, by applying Cauchy-Schwarz and Young inequalities, we have
\[\frac{1}{2}(m+\tau\ell)\frac{\mathrm{d}}{\mathrm{d}t}\|\nabla \Theta\|_{L^{2}}^{2}+\ell\|\nabla\Theta\|_{L^{2}}^{2}+\frac{\kappa_{\mathrm{a }}}{4}\|\Delta\Theta\|_{L^{2}}^{2}\lesssim\tau^{2}\|\Theta_{tt}\|_{L^{2}}^{2}+ \|f\|_{L^{2}}^{2}+\tau^{2}\|f_{t}\|_{L^{2}}^{2}.\]
Integrating with respect to \(t\), we get
\[\begin{split}&\frac{1}{2}(m+\tau\ell)\|\nabla\Theta(t)\|_{L^{2}}^{ 2}+\ell\int_{0}^{t}\|\nabla\Theta\|_{L^{2}}^{2}\,\mathrm{d}s+\frac{\kappa_{\mathrm{a}}}{4} \int_{0}^{t}\|\Delta\Theta\|_{L^{2}}^{2}\,\mathrm{d}s\\ \lesssim&\frac{1}{2}(m+\tau\ell)\|\nabla\Theta_{0} \|_{L^{2}}^{2}+\tau^{2}\int_{0}^{t}\|\Theta_{tt}\|_{L^{2}}^{2}\,\mathrm{d}s+\|f\|_{L^{ 2}L^{2}}^{2}+\tau^{2}\|f_{t}\|_{L^{2}L^{2}}^{2}.\end{split} \tag{4.30}\]
Using inequality (4.8), we have for all \(\tau\in(0,\bar{\tau}]\)
\[(1+\tau)\|\nabla\Theta(t)\|_{L^{2}}^{2}\lesssim(1+\bar{\tau}+\bar{\tau}^{2}) \big{(}E^{\bar{\tau}}[\Theta,q](t)+\|f(t)\|_{H^{1}L^{2}}^{2}\big{)}.\]
Further, since
\[\|\partial_{t}^{k}f(0)\|_{L^{2}}^{2}\leq C_{T}\big{(}\|\partial_{t}^{k}f\|_{L^ {2}L^{2}}^{2}+\|\partial_{t}^{k}f_{t}\|_{L^{2}L^{2}}^{2}\big{)},\quad k=0,1,\]
it follows that
\[(1+\tau)\|\nabla\Theta_{0}\|_{L^{2}}^{2}\lesssim(1+\bar{\tau}+\bar{\tau}^{2}) \big{(}E^{\bar{\tau}}[\Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}\big{)}. \tag{4.31}\]
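The trace-type inequality used above is elementary: writing \(\partial_{t}^{k}f(0)=\partial_{t}^{k}f(t)-\int_{0}^{t}\partial_{t}^{k+1}f(s)\,\mathrm{d}s\), squaring, and averaging over \(t\in(0,T)\) yields

\[\|\partial_{t}^{k}f(0)\|_{L^{2}}^{2}\leq\frac{2}{T}\int_{0}^{T}\|\partial_{t}^{k}f(t)\|_{L^{2}}^{2}\,\mathrm{d}t+2T\int_{0}^{T}\|\partial_{t}^{k+1}f(s)\|_{L^{2}}^{2}\,\mathrm{d}s,\quad k=0,1,\]

which explains the \(T\)-dependence of the constant \(C_{T}\).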
Moreover, from estimate (4.25), we have
\[\tau^{2}\int_{0}^{t}\|\Theta_{tt}\|_{L^{2}}^{2}\,\mathrm{d}s\lesssim\bar{\tau}^{2}\big{(}E^ {\bar{\tau}}[\Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}\big{)}. \tag{4.32}\]
Combining the estimates (4.30), (4.31) and (4.32), we obtain
\[\begin{split}\frac{1}{2}(m+&\tau\ell)\|\nabla\Theta(t )\|_{L^{2}}^{2}+\ell\int_{0}^{t}\|\nabla\Theta\|_{L^{2}}^{2}\,\mathrm{d}s+ \frac{\kappa_{\mathrm{a}}}{4}\int_{0}^{t}\|\Delta\Theta\|_{L^{2}}^{2}\,\mathrm{ d}s\\ &\lesssim(1+\bar{\tau}+\bar{\tau}^{2})\big{(}E^{\bar{\tau}}[ \Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}\big{)}.\end{split} \tag{4.33}\]
Next, according to the first inequality in (4.13) for \(k=1\) and the estimate (4.25), we have for all \(t\geq 0\)
\[\begin{split}\kappa_{\mathrm{a}}\int_{0}^{t}\|\nabla\Theta_{t}(s )\|_{L^{2}}^{2}\,\mathrm{d}s&\lesssim\tau^{2}\int_{0}^{t}\|q_{tt} (s)\|_{L^{2}}^{2}\,\mathrm{d}s+\int_{0}^{t}\|\Theta_{tt}(s)\|_{L^{2}}^{2}\, \mathrm{d}s+\|f_{t}\|_{L^{2}L^{2}}^{2}\\ &\lesssim(1+\bar{\tau}^{2})(E^{\bar{\tau}}[\Theta,q](0)+\|f\|_{H ^{2}L^{2}}^{2}).\end{split} \tag{4.34}\]
On the other hand, thanks to the 1D embedding \(H^{1}(0,T)\hookrightarrow L^{\infty}(0,T)\), we have \(f,f_{t}\in L^{\infty}(0,T;L^{2}(\Omega))\). Then, the estimate (4.8) together with (4.25) gives for all \(t\geq 0\)
\[\kappa_{\mathrm{a}}\|\nabla\Theta_{t}(t)\|_{L^{2}}^{2}\lesssim(1+\bar{\tau}+ \bar{\tau}^{2})\big{(}E^{\bar{\tau}}[\Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}\big{)}. \tag{4.35}\]
Similarly, we make use of (4.8), (4.25) and the fact that \(f,f_{t}\in L^{\infty}(0,T;L^{2}(\Omega))\) to get
\[\kappa_{\mathrm{a}}\|\Delta\Theta(t)\|_{L^{2}}^{2}\lesssim(1+\bar{\tau}+\bar{ \tau}^{2})\big{(}E^{\bar{\tau}}[\Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}\big{)}. \tag{4.36}\]
Collecting the estimates (4.26), (4.33), (4.34), (4.35) and (4.36), we arrive at the estimate (4.17).
To establish estimate (4.18), first observe that from (4.25), we immediately have for \(t\geq 0\)
\[\int_{0}^{t}(\|q(s)\|_{L^{2}}^{2}+\|q_{t}(s)\|_{L^{2}}^{2}+\|q_{tt}(s)\|_{L^{2} }^{2})\,\mathrm{d}s\lesssim E^{\tau}[\Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}. \tag{4.37}\]
Thus, it remains to prove that \(q\) is in \((H^{1}(\Omega))^{d}\). Solving the second equation in (4.1) as a linear ODE in time by means of the integrating factor \(e^{t/\tau}\), we obtain

\[q(x,t)=q_{0}(x)e^{-t/\tau}-\frac{\kappa_{\mathrm{a}}}{\tau}\int_{0}^{t}\nabla\Theta(x,s)\,e^{-(t-s)/\tau}\,\mathrm{d}s,\]
which implies that for all \(t\geq 0\)
\[\|q(t)\|_{H^{1}}\leq\|q_{0}\|_{H^{1}}+\frac{\kappa_{\mathrm{a}}}{\tau}\int_{0} ^{t}e^{-(t-s)/\tau}\|\nabla\Theta(s)\|_{H^{1}}\,\mathrm{d}s.\]
Using elliptic regularity, one finds
\[\begin{split}\|q(t)\|_{H^{1}}&\leq\|q_{0}\|_{H^{1}} +\kappa_{\mathrm{a}}(1-e^{-t/\tau})\|\Delta\Theta\|_{L^{\infty}L^{2}}\\ &\lesssim\|q_{0}\|_{H^{1}}+\|\Delta\Theta\|_{L^{\infty}L^{2}}. \end{split}\]
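Note that the factor \(1/\tau\) is harmless here, since the exponential kernel integrates to

\[\frac{\kappa_{\mathrm{a}}}{\tau}\int_{0}^{t}e^{-(t-s)/\tau}\,\mathrm{d}s=\kappa_{\mathrm{a}}\big{(}1-e^{-t/\tau}\big{)}\leq\kappa_{\mathrm{a}},\]

uniformly with respect to \(\tau>0\), which is precisely why the resulting bound does not degenerate as \(\tau\to 0\).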
Combining the above bound on \(\|q(t)\|_{H^{1}}\) with (4.36) gives
\[\|q(t)\|_{H^{1}}^{2}\lesssim\|q_{0}\|_{H^{1}}^{2}+(1+\bar{\tau}+\bar{\tau}^{2} )\big{(}E^{\bar{\tau}}[\Theta,q](0)+\|f\|_{H^{2}L^{2}}^{2}\big{)}. \tag{4.38}\]
Summing up the estimates (4.37) and (4.38), we obtain (4.18). Further, from (4.17), (4.18), we deduce that \((\Theta,q)\in X_{\Theta}\times X_{q}\). This concludes the proof of Proposition 4.1.
## 5. The linearized Westervelt equation
In this section, we consider the linearization of the Westervelt equation (1.1):
\[\alpha(x,t)p_{tt}-r(x,t)\Delta p-b\Delta p_{t}=g(x,t),\qquad x\in\Omega,\quad t \geq 0, \tag{5.1}\]
supplemented by initial conditions (1.9c) and boundary conditions (1.9b).
We define the energies associated to (5.1) as follows
\[E_{1}[p](t) :=\frac{1}{2}\big{(}\|\sqrt{\alpha(t)}p_{t}(t)\|_{L^{2}}^{2}+\| \sqrt{r(t)}\nabla p(t)\|_{L^{2}}^{2}\big{)},\] \[E_{2}[p](t) :=\frac{1}{2}\big{(}\|\sqrt{\alpha(t)}p_{tt}(t)\|_{L^{2}}^{2}+\| \sqrt{r(t)}\nabla p_{t}(t)\|_{L^{2}}^{2}+\|\sqrt{b}\Delta p(t)\|_{L^{2}}^{2} \big{)},\] \[E_{3}[p](t) :=\frac{1}{2}\big{(}\|\sqrt{b}\nabla p_{tt}(t)\|_{L^{2}}^{2}+\| \sqrt{b}\nabla\Delta p(t)\|_{L^{2}}^{2}\big{)}.\]
The total acoustic energy is given by
\[\mathfrak{E}[p](t):=\sum_{k=1}^{3}E_{k}[p](t),\qquad t\geq 0. \tag{5.2}\]
We denote by \(\mathfrak{D}[p]\) its associated dissipation rate given by
\[\mathfrak{D}[p](t):=\mathfrak{D}_{0}[p](t)+b\|\nabla\Delta p_{t}(t)\|_{L^{2}}^{2} +b\|\Delta p_{tt}(t)\|_{L^{2}}^{2},\]
where \(\mathfrak{D}_{0}[p]\) is given by
\[\begin{split}\mathfrak{D}_{0}[p](t):=b\|\nabla p_{t}(t)\|_{L^{2} }^{2}+b\|\nabla p_{tt}(t)\|_{L^{2}}^{2}+\|\sqrt{r}\Delta p(t)\|_{L^{2}}^{2}+b \|\Delta p_{t}(t)\|_{L^{2}}^{2}\\ +\|\sqrt{r}\nabla\Delta p(t)\|_{L^{2}}^{2}+\|\sqrt{\alpha}p_{ttt} (t)\|_{L^{2}}^{2}.\end{split} \tag{5.3}\]
We make the following regularity and non-degeneracy assumptions on the coefficients \(\alpha\) and \(r\).
**Assumption 1**.: _Assume that_
* \(\alpha\in L^{\infty}(0,T;L^{\infty}(\Omega))\cap L^{2}(0,T;W^{1,3}(\Omega))\)_,_
* \(\alpha_{t}\in L^{2}(0,T;L^{3}(\Omega))\cap L^{\frac{4}{4-d}}(0,T;L^{2}(\Omega))\)_,_
* _There exist_ \(0<\alpha_{0}\leq\alpha_{1}\) _such that_ \[\alpha_{0}\leq\alpha(x,t)\leq\alpha_{1}\quad\text{a.e. in}\quad\Omega\times(0,T).\]
* \(r\in L^{\infty}(0,T;L^{\infty}(\Omega))\cap L^{2}(0,T;W^{1,3}(\Omega))\)_,_
* \(r_{t}\in L^{2}(0,T;L^{3}(\Omega))\cap L^{\frac{4}{4-d}}(0,T;L^{2}(\Omega))\)_,_
* _There exist_ \(0<r_{0}\leq r_{1}\) _such that_ \[r_{0}\leq r(x,t)\leq r_{1}\quad\text{a.e. in}\quad\Omega\times(0,T).\]
The main result of this section reads as follows.
**Proposition 5.1**.: _Given \(T>0\) and \(g\in H^{1}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{1}_{0}(\Omega))\). Let_
\[(p_{0},p_{1})\in H^{3}(\Omega)\cap H^{1}_{0}(\Omega)\times H^{2}(\Omega)\cap H ^{1}_{0}(\Omega).\]
_Then under Assumption 1, the linearized pressure equation (5.1) has a unique solution \(p\in X_{p}\), which satisfies for all \(0\leq t\leq T\)_
\[\begin{split}\mathfrak{E}[p](t)+b\|\Delta p_{t}(t)\|_{L^{2}}^{2}+ \int_{0}^{t}\mathfrak{D}[p](s)\,\mathrm{d}s\lesssim&\,\mathfrak{ E}[p](0)\exp\Big{(}\int_{0}^{t}(1+\Lambda(s))\,\mathrm{d}s\Big{)}\\ &+\int_{0}^{t}\mathfrak{F}(s)\exp\Big{(}\int_{s}^{t}(1+\Lambda( \sigma))\mathrm{d}\sigma\Big{)}\,\mathrm{d}s\end{split} \tag{5.4}\]
_where_
\[\begin{split}\Lambda(t)=\|\alpha_{t}(t)\|_{L^{2}}^{2}& +\|\alpha_{t}(t)\|_{L^{2}}^{\frac{4}{4-d}}+\|r_{t}(t)\|_{L^{2}}^{ \frac{4}{4-d}}+\|\nabla r(t)\|_{L^{2}}^{2}+\|r_{t}(t)\|_{L^{3}}^{2}\\ &+\|\alpha_{t}(t)\|_{L^{3}}^{2}+\|\nabla r(t)\|_{L^{3}}^{2}+\| \nabla\alpha(t)\|_{L^{3}}^{2},\end{split} \tag{5.5}\]
_and_
\[\mathfrak{F}(t)=\|\nabla g(t)\|_{L^{2}}^{2}+\|g_{t}(t)\|_{L^{2}}^{2}. \tag{5.6}\]
We carry out the proof of Proposition 5.1 by means of the Faedo-Galerkin method using the smooth eigenfunctions of the Dirichlet-Laplacian as approximations of the solution in space, see [6]. Below, we present the energy analysis for the mentioned approximation, which leads to uniform _a priori_ estimates, and we direct the reader interested in the process of taking the limit towards [6, Chapter 7].
Before embarking on the proof of Proposition 5.1, we first prove two lemmas.
**Lemma 5.1**.: _Given \(g\in L^{2}(\Omega)\) such that \(g_{t}\in L^{2}(\Omega)\). Let Assumption 1 hold. Then for all \(t\geq 0\), we have the following energy estimate_
\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}E_{1}[p](t)+E_{2 }[p](t)\Big{)}+b\|\nabla p_{t}(t)\|_{L^{2}}^{2}\\ &+b\|\nabla p_{tt}(t)\|_{L^{2}}^{2}+\|\sqrt{r}\Delta p(t)\|_{L^{ 2}}^{2}+b\|\Delta p_{t}(t)\|_{L^{2}}^{2}\\ \lesssim&\,(1+\Lambda_{1}(t))(E_{1}[p](t)+E_{2}[p] (t))+\|g\|_{L^{2}}^{2}+\|g_{t}\|_{H^{-1}}^{2},\end{split} \tag{5.7}\]
_where_
\[\begin{split}\Lambda_{1}(t)=&\,\|r_{t}(t)\|_{L^{3 }}^{2}+\|\nabla r(t)\|_{L^{3}}^{2}+\|r_{t}(t)\|_{L^{2}}^{\frac{4}{4-d}}+\| \nabla r(t)\|_{L^{2}}^{2}\\ &+\|\alpha_{t}(t)\|_{L^{2}}^{\frac{4}{4-d}}+\|\alpha_{t}(t)\|_{L^ {2}}^{2}.\end{split}\]
Proof.: The proof is based on ideas that are drawn from [21]. Multiplying the equation (5.1) by \(p_{t}\), integrating over \(\Omega\) and using integration by parts, we find
\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}E_{1}[p](t)+b\int_{ \Omega}|\nabla p_{t}|^{2}\,\mathrm{d}x\\ =&\,\int_{\Omega}gp_{t}\,\mathrm{d}x+\frac{1}{2}\int_ {\Omega}\alpha_{t}p_{t}^{2}\,\mathrm{d}x-\int_{\Omega}\nabla r\cdot\nabla pp _{t}\,\mathrm{d}x+\frac{1}{2}\int_{\Omega}r_{t}|\nabla p|^{2}\,\mathrm{d}x. \end{split} \tag{5.8}\]
Our goal now is to estimate the terms on the right-hand side of (5.8). Applying the Young and Poincaré inequalities, we obtain
\[\int_{\Omega}gp_{t}\,\mathrm{d}x\leq C(\varepsilon)\|g\|_{L^{2}}^{2}+ \varepsilon\|\nabla p_{t}\|_{L^{2}}^{2}.\]
The next two terms on the right-hand side of (5.8) can be estimated using Hölder's, Young's and Poincaré's inequalities as well as the embedding \(H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)\). Hence, we have
\[\int_{\Omega}\alpha_{t}p_{t}^{2}\,\mathrm{d}x \leq\|\alpha_{t}\|_{L^{2}}\|p_{t}\|_{L^{4}}^{2}\] \[\leq C(\varepsilon)\|\alpha_{t}\|_{L^{2}}^{2}\|\sqrt{r}\nabla p_{ t}\|_{L^{2}}^{2}+\varepsilon\Big{\|}\frac{1}{\sqrt{r}}\Big{\|}_{L^{\infty}}^{2}\| \nabla p_{t}\|_{L^{2}}^{2}.\]
Considering \(\varepsilon\) suitably small and using Assumption 1, we can absorb the last term in the above estimate into the dissipation terms on the left-hand side of (5.8).
Likewise, we obtain
\[\int_{\Omega}\nabla r\cdot\nabla pp_{t}\,\mathrm{d}x \leq\|\nabla r\|_{L^{2}}\|\nabla p\|_{L^{4}}\|p_{t}\|_{L^{4}}\] \[\leq C(\varepsilon)\|\nabla r\|_{L^{2}}^{2}\|\sqrt{r}\nabla p_{ t}\|_{L^{2}}^{2}+\varepsilon\Big{\|}\frac{1}{r}\Big{\|}_{L^{\infty}}^{2}\| \sqrt{r}\Delta p\|_{L^{2}}^{2}\]
where we have also taken into account the elliptic estimate
\[\|\nabla p\|_{H^{1}}\leq\|p\|_{H^{2}}\leq C\|\Delta p\|_{L^{2}}. \tag{5.9}\]
To estimate the last term on the right-hand side of (5.8), we employ the Ladyzhenskaya inequality (3.2) together with (5.9) to find
\[\int_{\Omega}r_{t}|\nabla p|^{2}\,\mathrm{d}x \leq\|r_{t}\|_{L^{2}}\|\nabla p\|_{L^{4}}^{2}\] \[\lesssim\|r_{t}\|_{L^{2}}\Big{\|}\frac{1}{r}\Big{\|}_{L^{\infty}} \|\sqrt{r}\nabla p\|_{L^{2}}^{2(1-\frac{d}{4})}\|\sqrt{r}\Delta p\|_{L^{2}}^{ \frac{d}{2}}\] \[\leq C(\varepsilon)\|r_{t}\|_{L^{2}}^{\frac{4}{4-d}}\|\sqrt{r} \nabla p\|_{L^{2}}^{2}+\varepsilon\Big{\|}\frac{1}{r}\Big{\|}_{L^{\infty}}^{ \frac{4}{d}}\|\sqrt{r}\Delta p\|_{L^{2}}^{2}.\]
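Here and in what follows, the exponent \(\frac{4}{4-d}\) arises from Young's inequality with the conjugate pair \(\big{(}\frac{4}{4-d},\frac{4}{d}\big{)}\): for \(A,B\geq 0\) and \(\varepsilon>0\),

\[AB\leq C(\varepsilon)A^{\frac{4}{4-d}}+\varepsilon B^{\frac{4}{d}},\qquad\text{since}\qquad\frac{4-d}{4}+\frac{d}{4}=1,\]

applied with \(B\) of the form \(\|\nabla u\|_{L^{2}}^{d/2}\) or \(\|\Delta u\|_{L^{2}}^{d/2}\), so that \(B^{4/d}\) is the squared norm that can be absorbed. This is also how the space \(L^{\frac{4}{4-d}}(0,T;L^{2}(\Omega))\) in Assumption 1 enters the analysis.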
Altogether, the estimates above yield for all \(t\geq 0\)
\[\frac{\mathrm{d}}{\mathrm{d}t}E_{1}[p](t)+b\|\nabla p_{t}(t)\|_{L^ {2}}^{2}\lesssim \,\|r_{t}(t)\|_{L^{2}}^{\frac{4}{4-d}}E_{1}[p](t)\] \[+(\|\alpha_{t}(t)\|_{L^{2}}^{2}+\|\nabla r(t)\|_{L^{2}}^{2})E_{2} [p](t)\] \[+\|g(t)\|_{L^{2}}^{2}+\varepsilon\|\sqrt{r(t)}\Delta p(t)\|_{L^{2 }}^{2}. \tag{5.10}\]
In order to get an estimate for the energy \(E_{2}[p]\), we differentiate (5.1) with respect to \(t\), to obtain
\[\alpha p_{ttt}-r\Delta p_{t}-b\Delta p_{tt}=-\alpha_{t}p_{tt}+r_{t}\Delta p+g_ {t}. \tag{5.11}\]
Next, we multiply (5.11) by \(p_{tt}\) and integrate over \(\Omega\), using integration by parts, we find
\[\frac{1}{2}\,\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\|\sqrt{\alpha }p_{tt}\|_{L^{2}}^{2}+\|\sqrt{r}\nabla p_{t}\|_{L^{2}}^{2}\Big{)}+b\|\nabla p _{tt}\|_{L^{2}}^{2}\] \[=-\frac{1}{2}\int_{\Omega}\alpha_{t}p_{tt}^{2}\,\mathrm{d}x+\frac {1}{2}\int_{\Omega}r_{t}|\nabla p_{t}|^{2}\,\mathrm{d}x-\int_{\Omega}\nabla r \cdot\nabla p_{t}p_{tt}\,\mathrm{d}x\] \[\qquad+\int_{\Omega}r_{t}\Delta pp_{tt}\,\mathrm{d}x+\int_{\Omega }g_{t}p_{tt}\,\mathrm{d}x. \tag{5.12}\]
First, the Ladyzhenskaya and Young inequalities along with the embedding \(H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)\) allow us to get the following upper bound for the first term on the right-hand side of (5.12):
\[-\frac{1}{2}\int_{\Omega}\alpha_{t}p_{tt}^{2}\,\mathrm{d}x \leq\frac{1}{2}\|\alpha_{t}\|_{L^{2}}\|p_{tt}\|_{L^{4}}^{2}\] \[\lesssim\Big{\|}\frac{1}{\alpha}\Big{\|}_{L^{\infty}}^{1-\frac{d} {4}}\|\alpha_{t}\|_{L^{2}}\|\sqrt{\alpha}p_{tt}\|_{L^{2}}^{2(1-\frac{d}{4})}\| \nabla p_{tt}\|_{L^{2}}^{\frac{d}{2}}\] \[\leq C(\varepsilon)\|\alpha_{t}\|_{L^{2}}^{\frac{4}{4-d}}\|\sqrt {\alpha}p_{tt}\|_{L^{2}}^{2}+\varepsilon\Big{\|}\frac{1}{\alpha}\Big{\|}_{L^{ \infty}}^{\frac{4}{d}-1}\|\nabla p_{tt}\|_{L^{2}}^{2}.\]
We can take \(\varepsilon\) as small as needed in order to absorb the last term in the estimate above into the dissipation term on the left-hand side of (5.12).
Similarly, we can derive the following estimate for the second term on the right-hand side of (5.12)
\[\int_{\Omega}r_{t}|\nabla p_{t}|^{2}\,\mathrm{d}x \leq\|r_{t}\|_{L^{2}}\|\nabla p_{t}\|_{L^{4}}^{2}\] \[\lesssim\Big{\|}\frac{1}{r}\Big{\|}_{L^{\infty}}^{1-\frac{d}{4}} \|r_{t}\|_{L^{2}}\|\sqrt{r}\nabla p_{t}\|_{L^{2}}^{2(1-\frac{d}{4})}\|\Delta p _{t}\|_{L^{2}}^{\frac{d}{2}}\] \[\leq C(\varepsilon)\|r_{t}\|_{L^{2}}^{\frac{4}{4-d}}\|\sqrt{r} \nabla p_{t}\|_{L^{2}}^{2}+\varepsilon\Big{\|}\frac{1}{r}\Big{\|}_{L^{\infty}} ^{\frac{4}{d}-1}\|\Delta p_{t}\|_{L^{2}}^{2}.\]
Moreover, making use of the embedding \(H^{1}(\Omega)\hookrightarrow L^{6}(\Omega)\) and Poincaré's inequality, it follows
\[\int_{\Omega}\nabla r\cdot\nabla p_{t}p_{tt}\,\mathrm{d}x \leq\|\nabla r\|_{L^{3}}\|\nabla p_{t}\|_{L^{2}}\|p_{tt}\|_{L^{6}}\] \[\leq C(\varepsilon)\|\nabla r\|_{L^{3}}^{2}\|\sqrt{r}\nabla p_{t} \|_{L^{2}}^{2}+\varepsilon\Big{\|}\frac{1}{\sqrt{r}}\Big{\|}_{L^{\infty}}^{2} \|\nabla p_{tt}\|_{L^{2}}^{2}.\]
Again, we call on the same tools to estimate the two remaining terms on the right of (5.12). So we can show that
\[\int_{\Omega}r_{t}\Delta pp_{tt}\,\mathrm{d}x \leq\Big{\|}\frac{r_{t}}{\sqrt{b}}\Big{\|}_{L^{3}}\|p_{tt}\|_{L^{6}}\| \sqrt{b}\Delta p\|_{L^{2}}\] \[\leq C(\varepsilon)\|r_{t}\|_{L^{3}}^{2}\|\sqrt{b}\Delta p\|_{L^{ 2}}^{2}+\varepsilon\|\nabla p_{tt}\|_{L^{2}}^{2}.\]
In addition, we have
\[\int_{\Omega}g_{t}p_{tt}\,\mathrm{d}x\leq C(\varepsilon)\|g_{t}\|_{L^{2}}^{2} +\varepsilon\|\nabla p_{tt}\|_{L^{2}}^{2}.\]
Collecting the above estimates and selecting \(\varepsilon\) small enough we obtain for all \(t\geq 0\)
\[\begin{split}&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}\| \sqrt{\alpha(t)}p_{tt}(t)\|_{L^{2}}^{2}+\|\sqrt{r(t)}\nabla p_{t}(t)\|_{L^{2}} ^{2}\Big{)}+\frac{b}{2}\|\nabla p_{tt}(t)\|_{L^{2}}^{2}\\ \lesssim&\Big{(}\|r_{t}(t)\|_{L^{3}}^{2}+\|r_{t}(t) \|_{L^{2}}^{\frac{4}{4-d}}+\|\alpha_{t}(t)\|_{L^{2}}^{\frac{4}{4-d}}+\|\nabla r (t)\|_{L^{3}}^{2}\Big{)}E_{2}[p](t)\\ &+\varepsilon\Big{\|}\frac{1}{r}\Big{\|}_{L^{\infty}}^{\frac{4}{d}-1}\|\Delta p_{t}(t)\|_{L^{2}}^{2}+\|g_{t}(t)\|_{L^{2}}^{2}.\end{split} \tag{5.13}\]
Now, we focus on establishing estimates for \(\|\Delta p\|_{L^{2}}\) and \(\|\Delta p_{t}\|_{L^{2}}\). Testing the equation (5.1) by \(-\Delta p\) and integrating over \(\Omega\) gives
\[\frac{b}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\Delta p\|_{L^{2}}^{2}+\int_{\Omega} r|\Delta p|^{2}\,\mathrm{d}x=-\int_{\Omega}g(\Delta p)\,\mathrm{d}x+\int_{\Omega} \alpha p_{tt}\Delta p\,\mathrm{d}x.\]
Applying the Young and Poincaré inequalities, we get the following bound
\[\begin{split}&\frac{b}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\Delta p \|_{L^{2}}^{2}+\|\sqrt{r}\Delta p\|_{L^{2}}^{2}\\ \leq& 2\|g\|_{L^{2}}^{2}+\frac{2}{b}\|\sqrt{b} \Delta p\|_{L^{2}}^{2}+C(\varepsilon)\|\sqrt{b}\Delta p\|_{L^{2}}^{2}+ \varepsilon\|\alpha\|_{L^{\infty}}^{2}\|\nabla p_{tt}\|_{L^{2}}^{2}.\end{split} \tag{5.14}\]
Adding (5.14) to (5.13), the last term on the right of (5.14) can now be absorbed by the left-hand side of (5.13). Hence, we get
\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}E_{2}[p](t)+\frac{b}{ 3}\|\nabla p_{tt}(t)\|_{L^{2}}^{2}+\|\sqrt{r(t)}\Delta p(t)\|_{L^{2}}^{2}\\ \lesssim&\,(1+\|r_{t}(t)\|_{L^{3}}^{2}+\|r_{t}(t)\|_ {L^{2}}^{\frac{4}{4-d}}+\|\alpha_{t}(t)\|_{L^{2}}^{\frac{4}{4-d}}+\|\nabla r(t )\|_{L^{3}}^{2})E_{2}[p](t)\\ &+\varepsilon\Big{\|}\frac{1}{r}\Big{\|}_{L^{\infty}}^{\frac{4}{d}-1}\|\Delta p_{t}(t)\|_{L^{2}}^{2}+\|g\|_{L^{2}}^{2}+\|g_{t}\|_{L^{2}}^{2}. \end{split} \tag{5.15}\]
Next, we multiply the equation (5.1) by \(-\Delta p_{t}\) and integrate over \(\Omega\), to get
\[b\|\Delta p_{t}\|_{L^{2}}^{2}=\int_{\Omega}\alpha p_{tt}\Delta p_{t}\,\mathrm{ d}x-\int_{\Omega}r\Delta p\Delta p_{t}\,\mathrm{d}x+\int_{\Omega}g(-\Delta p_{t}) \,\mathrm{d}x.\]
Using Young's inequality and recalling Assumption 1, we obtain
\[b\|\Delta p_{t}\|_{L^{2}}^{2}\leq \,C(\varepsilon)\Big{(}\|\sqrt{\alpha}p_{tt}\|_{L^{2}}^{2}+\| \sqrt{b}\Delta p\|_{L^{2}}^{2}+\|g\|_{L^{2}}^{2}\Big{)}+\varepsilon\|\Delta p _{t}\|_{L^{2}}^{2}.\]
Taking \(\varepsilon\) small enough, we find
\[b\|\Delta p_{t}(t)\|_{L^{2}}^{2}\lesssim E_{2}[p](t)+\|g(t)\|_{L^{2}}^{2}, \quad t\geq 0. \tag{5.16}\]
Thus, putting (5.15) and (5.16) together, we have
\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}E_{2}[p](t)+\frac{b}{ 4}\|\nabla p_{tt}(t)\|_{L^{2}}^{2}+\|\sqrt{r(t)}\Delta p(t)\|_{L^{2}}^{2}+ \frac{b}{2}\|\Delta p_{t}(t)\|_{L^{2}}^{2}\\ \lesssim&\,\Big{(}1+\|r_{t}(t)\|_{L^{3}}^{2}+\|r_{t} (t)\|_{L^{2}}^{\frac{4}{4-d}}+\|\alpha_{t}(t)\|_{L^{2}}^{\frac{4}{4-d}}+\| \nabla r(t)\|_{L^{3}}^{2}\Big{)}E_{2}[p](t)\\ &+\|g(t)\|_{L^{2}}^{2}+\|g_{t}(t)\|_{L^{2}}^{2}.\end{split} \tag{5.17}\]
Finally, summing up the estimates (5.10), (5.17) and selecting \(\varepsilon\) small enough in the last term on the right-hand side of (5.10), we obtain (5.7). This completes the proof of Lemma 5.1.
**Lemma 5.2**.: _Let \(g\in H^{1}_{0}(\Omega)\) and \(g_{t}\in L^{2}(\Omega)\). Then, under Assumption 1, the energy \(E_{3}\) satisfies for \(t\geq 0\) the estimate_
\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}E_{3}[p](t)+\|\sqrt{r} \nabla\Delta p(t)\|_{L^{2}}^{2}+\|\sqrt{\alpha}p_{ttt}(t)\|_{L^{2}}^{2}\\ \lesssim&\,(1+\Lambda_{2}(t))E_{3}[p](t)+\|\sqrt{b} \Delta p_{t}(t)\|_{L^{2}}^{2}+\varepsilon\|\nabla p_{tt}(t)\|_{L^{2}}^{2}\\ &+\|\nabla g(t)\|_{L^{2}}^{2}+\|g_{t}(t)\|_{L^{2}}^{2}\end{split} \tag{5.18}\]
_where_
\[\Lambda_{2}(t)=\|r_{t}(t)\|_{L^{3}}^{2}+\|\alpha_{t}(t)\|_{L^{3}}^{2}+\|\nabla \alpha(t)\|_{L^{3}}^{2}+\|\nabla r(t)\|_{L^{3}}^{2}.\]
Proof.: We test the linearized Westervelt equation (5.1) by \(\Delta^{2}p\), integrate over \(\Omega\), and use integration by parts, to obtain
\[\int_{\Omega}(r|\nabla\Delta p|^{2}+b\nabla\Delta p_{t}\cdot\nabla \Delta p)\,\mathrm{d}x\] \[= \int_{\Omega}(-\alpha\nabla p_{tt}-p_{tt}\nabla\alpha-\Delta p \nabla r-\nabla g)\cdot\nabla\Delta p\,\mathrm{d}x.\]
Using Hölder's inequality, we find
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\sqrt{b}\nabla\Delta p \|_{L^{2}}^{2}+\|\sqrt{r}\nabla\Delta p\|_{L^{2}}^{2}\] \[\leq \|\alpha\|_{L^{\infty}}\|\nabla p_{tt}\|_{L^{2}}\|\nabla\Delta p \|_{L^{2}}+\|\nabla\alpha\|_{L^{3}}\|p_{tt}\|_{L^{6}}\|\nabla\Delta p\|_{L^{2}}\] \[\quad+\|\nabla r\|_{L^{3}}\|\Delta p\|_{L^{6}}\|\nabla\Delta p\|_ {L^{2}}+\|\nabla g\|_{L^{2}}\|\nabla\Delta p\|_{L^{2}}.\]
Moreover, taking advantage of Assumption 1, the embedding \(H^{1}(\Omega)\hookrightarrow L^{6}(\Omega)\) and applying Young's inequality, we find
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\sqrt{b}\nabla\Delta p \|_{L^{2}}^{2}+\|\sqrt{r}\nabla\Delta p\|_{L^{2}}^{2}\] \[\leq \,C(\varepsilon)\Big{(}1+\|\nabla\alpha\|_{L^{3}}^{2}\Big{)}\| \sqrt{b}\nabla\Delta p\|_{L^{2}}^{2}+\varepsilon\|\nabla p_{tt}\|_{L^{2}}^{2}\] \[+C(\varepsilon)\Big{(}\|\nabla r\|_{L^{3}}^{2}\|\sqrt{b}\nabla \Delta p\|_{L^{2}}^{2}+\|\nabla g\|_{L^{2}}^{2}\Big{)}+\varepsilon\Big{\|} \frac{1}{r}\Big{\|}_{L^{\infty}}^{2}\|\sqrt{r}\nabla\Delta p\|_{L^{2}}^{2}.\]
We can fix \(\varepsilon\) as small as needed to absorb the last term on the right-hand side into the dissipative term on the left-hand side. Hence, we have
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\sqrt{b}\nabla\Delta p \|_{L^{2}}^{2}+\|\sqrt{r}\nabla\Delta p\|_{L^{2}}^{2}\] \[\lesssim \,(1+\|\nabla\alpha\|_{L^{3}}^{2}+\|\nabla r\|_{L^{3}}^{2})\| \sqrt{b}\nabla\Delta p\|_{L^{2}}^{2}+\|\nabla g\|_{L^{2}}^{2}+\varepsilon\| \nabla p_{tt}\|_{L^{2}}^{2}. \tag{5.19}\]
Next, we multiply the time-differentiated equation (5.11) by \(p_{ttt}\) and we integrate over \(\Omega\) to find
\[\frac{b}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\nabla p_{tt}\|_{L^{2}}^{2}+\|\sqrt{ \alpha}p_{ttt}\|_{L^{2}}^{2}=\int_{\Omega}(-\alpha_{t}p_{tt}+r_{t}\Delta p+r \Delta p_{t}+g_{t})p_{ttt}\,\mathrm{d}x. \tag{5.20}\]
Our goal now is to estimate the terms of the right-hand side of (5.20).
First, using Hölder's inequality, we have
\[\frac{b}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\nabla p_{tt}\|_{L^{2} }^{2}+\|\sqrt{\alpha}p_{ttt}\|_{L^{2}}^{2}\] \[\leq \,\|\alpha_{t}\|_{L^{3}}\|p_{ttt}\|_{L^{6}}\|p_{ttt}\|_{L^{2}}+\| r_{t}\|_{L^{3}}\|\Delta p\|_{L^{6}}\|p_{ttt}\|_{L^{2}}\] \[+\|r\|_{L^{\infty}}\|\Delta p_{t}\|_{L^{2}}\|p_{ttt}\|_{L^{2}}+\| g_{t}\|_{L^{2}}\|p_{ttt}\|_{L^{2}}.\]
Furthermore, making use of Young's inequality together with the embedding \(H^{1}(\Omega)\hookrightarrow L^{6}(\Omega)\), we obtain
\[\frac{b}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\nabla p_{tt}\|_{L^{2}}^{ 2}+\|\sqrt{\alpha}p_{ttt}\|_{L^{2}}^{2}\] \[\leq C(\varepsilon)\Big{(}\|\alpha_{t}\|_{L^{3}}^{2}\|\sqrt{b} \nabla p_{tt}\|_{L^{2}}^{2}+\|r_{t}\|_{L^{3}}^{2}\|\sqrt{b}\nabla\Delta p\|_{L ^{2}}^{2}+\|\sqrt{b}\Delta p_{t}\|_{L^{2}}^{2}+\|g_{t}\|_{L^{2}}^{2}\Big{)}\] \[\quad+\varepsilon\Big{\|}\frac{1}{\sqrt{\alpha}}\Big{\|}_{L^{ \infty}}^{2}\|\sqrt{\alpha}p_{ttt}\|_{L^{2}}^{2}.\]
By selecting \(\varepsilon\) small enough, we find
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\sqrt{b}\nabla p_{tt} \|_{L^{2}}^{2}+\|\sqrt{\alpha}p_{ttt}\|_{L^{2}}^{2}\] \[\lesssim \,\|\alpha_{t}\|_{L^{3}}^{2}\|\sqrt{b}\nabla p_{tt}\|_{L^{2}}^{2 }+\|r_{t}\|_{L^{3}}^{2}\|\sqrt{b}\nabla\Delta p\|_{L^{2}}^{2}+\|\sqrt{b} \Delta p_{t}\|_{L^{2}}^{2}+\|g_{t}\|_{L^{2}}^{2}. \tag{5.21}\]
Consequently, collecting all the estimates (5.19) and (5.21), we arrive at (5.18).
Proof of Proposition 5.1.: Collecting the estimates (5.7) and (5.18) and employing Poincaré's inequality, we find for all \(t\geq 0\)
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathfrak{E}[p](t)+\mathfrak{D}_{0}[ p](t)\] \[\lesssim \,(1+\Lambda_{1}(t)+\Lambda_{2}(t))\mathfrak{E}[p](t)+\|\sqrt{b} \Delta p_{t}(t)\|_{L^{2}}^{2}+\varepsilon\|\nabla p_{tt}(t)\|_{L^{2}}^{2}\] \[+\|\nabla g(t)\|_{L^{2}}^{2}+\|g_{t}(t)\|_{L^{2}}^{2},\]
where \(\mathfrak{E}[p]\) and \(\mathfrak{D}_{0}[p]\) are defined in (5.2) and (5.3) respectively. So now the terms depending on \(\varepsilon\) can be absorbed into \(\mathfrak{D}_{0}[p]\) for small values of \(\varepsilon\). In addition, recalling (5.16) and using Poincaré's inequality, we obtain the following estimate for the total acoustic energy
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathfrak{E}[p](t)+\mathfrak{D}_{0}[ p](t)\lesssim(1+\Lambda(t))\mathfrak{E}[p](t)+\mathfrak{F}(t), \tag{5.22}\]
where the functions \(\Lambda\) and \(\mathfrak{F}\) are given in (5.5) and (5.6), respectively.
Next, to estimate the second term on the left-hand side of (5.4), we multiply the equation (5.11) by \(-\Delta p_{tt}\) and integrate over \(\Omega\), to get
\[b\|\Delta p_{tt}\|_{L^{2}}^{2}= \int_{\Omega}(\alpha p_{ttt}+\alpha_{t}p_{tt}-r_{t}\Delta p-r \Delta p_{t}-g_{t})\Delta p_{tt}\,\mathrm{d}x.\]
Using Hölder's inequality, it follows that
\[b\|\Delta p_{tt}\|_{L^{2}}^{2} \leq\|\sqrt{\alpha}\|_{L^{\infty}}\|\sqrt{\alpha}p_{ttt}\|_{L^{2}} \|\Delta p_{tt}\|_{L^{2}}+\|\alpha_{t}\|_{L^{3}}\|p_{tt}\|_{L^{6}}\|\Delta p_{ tt}\|_{L^{2}}\] \[\quad+\|r_{t}\|_{L^{3}}\|\Delta p\|_{L^{6}}\|\Delta p_{tt}\|_{L^ {2}}+\|r\|_{L^{\infty}}\|\Delta p_{t}\|_{L^{2}}\|\Delta p_{tt}\|_{L^{2}}\] \[\quad+\|g_{t}\|_{L^{2}}\|\Delta p_{tt}\|_{L^{2}}.\]
Applying Young's inequality, making use of the continuous embedding \(H^{1}(\Omega)\hookrightarrow L^{6}(\Omega)\), and keeping in mind Assumption 1, we find
\[b\|\Delta p_{tt}\|_{L^{2}}^{2}\lesssim \,\|\sqrt{\alpha}p_{ttt}\|_{L^{2}}^{2}+\|\alpha_{t}\|_{L^{3}}^{2} \|\sqrt{b}\nabla p_{tt}\|_{L^{2}}^{2}+\|r_{t}\|_{L^{3}}^{2}\|\sqrt{b}\nabla \Delta p\|_{L^{2}}^{2}\] \[+\|\sqrt{b}\Delta p_{t}\|_{L^{2}}^{2}+\|g_{t}\|_{L^{2}}^{2}+ \varepsilon\|\Delta p_{tt}\|_{L^{2}}^{2}.\]
Fixing \(\varepsilon>0\) small enough and keeping in mind (5.16), we get for all \(t\geq 0\)
\[\begin{split}\frac{b}{2}\|\Delta p_{tt}(t)\|_{L^{2}}^{2}\lesssim& \,\|\sqrt{\alpha}p_{ttt}(t)\|_{L^{2}}^{2}+(\|\alpha_{t}(t)\|_{L^{3}}^{2}+\|r_{ t}(t)\|_{L^{3}}^{2})E_{3}[p](t)\\ &+E_{2}[p](t)+\|g(t)\|_{L^{2}}^{2}+\|g_{t}(t)\|_{L^{2}}^{2}.\end{split} \tag{5.23}\]
We sum up \(\gamma\times\)(5.23) and the estimate (5.22), and then take \(\gamma>0\) suitably small in order to absorb the first term on the right of (5.23) into the dissipation \(\mathfrak{D}_{0}[p]\) in (5.22); it then follows that
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathfrak{E}[p](t)+\mathfrak{D}_{0}[p](t)+b\| \Delta p_{tt}(t)\|_{L^{2}}^{2}\lesssim(1+\Lambda(t))\mathfrak{E}[p](t)+\mathfrak{F}(t). \tag{5.24}\]
On the other hand, by multiplying the equation (5.1) by \(\Delta^{2}p_{t}\) and integrating over \(\Omega\), we obtain
\[b\|\nabla\Delta p_{t}\|_{L^{2}}^{2}=\int_{\Omega}(\alpha\nabla p_{tt}+p_{tt} \nabla\alpha-\Delta p\nabla r-r\nabla\Delta p-\nabla g)\cdot\nabla\Delta p_{t} \,\mathrm{d}x.\]
Applying Hölder's inequality, we find
\[\begin{split} b\|\nabla\Delta p_{t}\|_{L^{2}}^{2}\leq& \,\|\alpha\|_{L^{\infty}}\|\nabla p_{tt}\|_{L^{2}}\|\nabla\Delta p _{t}\|_{L^{2}}+\|\nabla\alpha\|_{L^{3}}\|p_{tt}\|_{L^{6}}\|\nabla\Delta p_{t}\| _{L^{2}}\\ &+\|r\|_{L^{\infty}}\|\nabla\Delta p\|_{L^{2}}\|\nabla\Delta p_{t }\|_{L^{2}}+\|\nabla r\|_{L^{3}}\|\Delta p\|_{L^{6}}\|\nabla\Delta p_{t}\|_{L^ {2}}\\ &+\|\nabla g\|_{L^{2}}\|\nabla\Delta p_{t}\|_{L^{2}}.\end{split}\]
Furthermore, the embedding \(H^{1}(\Omega)\hookrightarrow L^{6}(\Omega)\) together with Young's inequality and Assumption 1 yield
\[\begin{split} b\|\nabla\Delta p_{t}\|_{L^{2}}^{2}\lesssim& \,\|\sqrt{b}\nabla p_{tt}\|_{L^{2}}^{2}+\|\nabla\alpha\|_{L^{3}}^{2}\|\sqrt{b }\nabla p_{tt}\|_{L^{2}}^{2}+\|\sqrt{b}\nabla\Delta p\|_{L^{2}}^{2}\\ &+\|\nabla r\|_{L^{3}}^{2}\|\sqrt{b}\nabla\Delta p\|_{L^{2}}^{2}+ \|\nabla g\|_{L^{2}}^{2}+\varepsilon\|\nabla\Delta p_{t}\|_{L^{2}}^{2}.\end{split}\]
Hence, selecting \(\varepsilon\) as small as needed, we infer that
\[\frac{b}{2}\|\nabla\Delta p_{t}(t)\|_{L^{2}}^{2}\lesssim \,(1+\|\nabla\alpha\|_{L^{3}}^{2}+\|\nabla r\|_{L^{3}}^{2})E_{3}[p](t)+\| \nabla g(t)\|_{L^{2}}^{2},\quad t\geq 0. \tag{5.25}\]
Adding up the estimates (5.24) and (5.25) gives
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathfrak{E}[p](t)+\mathfrak{D}[p](t)\lesssim(1+ \Lambda(t))\mathfrak{E}[p](t)+\mathfrak{F}(t).\]
Consequently, applying Gronwall's inequality (cf. Lemma 3.1), we infer that
\[\begin{split}\mathfrak{E}[p](t)+\int_{0}^{t}\mathfrak{D}[p](s) \,\mathrm{d}s\lesssim&\,\mathfrak{E}[p](0)\exp\Big{(}\int_{0}^{t} (1+\Lambda(s))\,\mathrm{d}s\Big{)}\\ &+\int_{0}^{t}\mathfrak{F}(s)\exp\Big{(}\int_{s}^{t}(1+\Lambda( \sigma))\mathrm{d}\sigma\Big{)}\,\mathrm{d}s.\end{split} \tag{5.26}\]
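Here Gronwall's inequality is used in its classical differential form: if \(y^{\prime}(t)\leq a(t)y(t)+b(t)\) with \(a,b\geq 0\) integrable, then

\[y(t)\leq y(0)\exp\Big{(}\int_{0}^{t}a(s)\,\mathrm{d}s\Big{)}+\int_{0}^{t}b(s)\exp\Big{(}\int_{s}^{t}a(\sigma)\,\mathrm{d}\sigma\Big{)}\,\mathrm{d}s,\]

applied with \(y=\mathfrak{E}[p]\), \(a\lesssim 1+\Lambda\) and \(b\lesssim\mathfrak{F}\); the dissipation integral on the left of (5.26) is then recovered by integrating the differential inequality once more and using the resulting bound on \(\mathfrak{E}[p]\).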
Furthermore, the estimates (5.26) and (5.16) yield
\[\begin{split} b\|\Delta p_{t}(t)\|_{L^{2}}^{2}\lesssim& \,\mathfrak{E}[p](t)+\|g(t)\|_{L^{2}}^{2}\\ \lesssim&\,\mathfrak{E}[p](0)\exp\Big{(}\int_{0}^{t} (1+\Lambda(s))\,\mathrm{d}s\Big{)}\\ &+\int_{0}^{t}\mathfrak{F}(s)\exp\Big{(}\int_{s}^{t}(1+\Lambda (\sigma))\mathrm{d}\sigma\Big{)}\,\mathrm{d}s,\end{split} \tag{5.27}\]
where we employed the inequality (3.1) to bound from above the term \(\|g(t)\|_{L^{2}}^{2}\). Then, it suffices to sum up (5.26) and (5.27) in order to reach the estimate (5.4) for the smooth eigenfunctions of the Dirichlet-Laplacian. A standard compactness argument allows passing to the limit, thus proving the existence of a solution \(p\in X_{p}\) to (5.1). From the weak and weak-\(\star\) lower semi-continuity of norms, we get that \(p\) satisfies the same energy bound (5.4). The uniqueness of the solution \(p\) is guaranteed, since the only solution to the homogeneous problem
\[\alpha(x,t)p_{tt}-r(x,t)\Delta p-b\Delta p_{t}=0,\quad p(x,0)=p_{t}(x,0)=0, \quad p|_{\partial\Omega}=0\]
is zero. Indeed, from (5.4), we have \(\mathfrak{E}[p](t)=0\) which immediately gives \(p=0\). We conclude the proof by noting that \(p\) belonging to \(X_{p}\) implies that (see [17, Chapter 1, Lemma 2.1] and [6, Chapter 5, Theorem 2])
\[p\in C(0,T;H^{3}(\Omega)\cap H^{1}_{0}(\Omega)),\qquad p_{t}\in C(0,T;H^{2}( \Omega)\cap H^{1}_{0}(\Omega)).\]
This completes the proof of Proposition 5.1.
## 6. Uniform local well-posedness: Proof of Theorem 2.1
In this section we prove Theorem 2.1. The proof is accomplished using Banach's fixed point theorem. For this purpose, we recall (2.1) and define the ball \(B\) as
\[B=\Big{\{}(p^{*},\Theta^{*},q^{*})\in\mathcal{X}:(p^{*}(0),p_{t}^{*}(0),\Theta ^{*}(0),q^{*}(0))=(p_{0},p_{1},\Theta_{0},q_{0}),\]
\[\|p^{*}\|_{L^{\infty}L^{\infty}}\leq\gamma<\frac{1}{2k_{1}},\quad\|p^{*}\|_{X_ {p}}\leq R_{1},\quad\|(\Theta^{*},q^{*})\|_{X_{\Theta}\times X_{q}}\leq R_{2} \Big{\}},\]
where \(\mathcal{X}:=X_{p}\times X_{\Theta}\times X_{q}\). Note that we have two different radii. The reason behind this will be made clear in the proof below, as we will need to impose a smallness condition on \(R_{1}>0\) but not on \(R_{2}>0\).
To check that the ball \(B\) is a non-empty subset of \(\mathcal{X}\), one can take for instance (see [21]) \(\alpha=r=1\) and \(f=g=0\) in (5.1) and (4.1). Then the solution \((p,\Theta,q)\in\mathcal{X}\) lies in \(B\) if we choose \(R_{1},R_{2}\) and \(\gamma\) such that
\[C_{T}\mathfrak{E}[p](0)\leq R_{1}^{2}\leq\gamma^{2}<\frac{1}{(2k_{1})^{2}}, \quad C_{T}\big{(}\|q_{0}\|_{H^{1}}^{2}+(1+\bar{\tau}+\bar{\tau}^{2})E^{\bar{ \tau}}[\Theta,q](0)\big{)}\leq R_{2}^{2}.\]
The solution spaces \(X_{p}\), \(X_{\Theta}\) and \(X_{q}\) (defined in (2.1)) are endowed with the norms
\[\|p\|_{X_{p}}:= \|p\|_{L^{\infty}H^{3}}+\|p_{t}\|_{L^{\infty}H^{2}}+\|\nabla \Delta p_{t}\|_{L^{2}L^{2}}+\|\nabla p_{tt}\|_{L^{\infty}L^{2}}\] \[+\|\Delta p_{tt}\|_{L^{2}L^{2}}+\|p_{ttt}\|_{L^{2}L^{2}},\] \[\|\Theta\|_{X_{\Theta}}:= \|\Theta\|_{L^{\infty}H^{2}}+\|\Theta_{t}\|_{L^{\infty}H^{1}}+\| \Theta_{tt}\|_{L^{\infty}L^{2}},\] \[\|q\|_{X_{q}}:= \|q\|_{L^{\infty}H^{1}}+\sum_{k=1}^{2}\|\partial_{t}^{k}q\|_{L^{ 2}L^{2}}.\]
Then, the product space \(\mathcal{X}\) is equipped with the norm
\[\|(p,\Theta,q)\|_{\mathcal{X}}^{2}=\|p\|_{X_{p}}^{2}+\|\Theta\|_{X_{\Theta}}^{ 2}+\|q\|_{X_{q}}^{2}.\]
These norms are clearly equivalent to the energies \(\mathcal{E}[\Theta]\) and \(\mathfrak{E}[p]\). Indeed, we have
\[\sup_{t\in(0,T)}\mathcal{E}[\Theta](t)+\|q\|_{L^{\infty}H^{1}}^{2}+\sum_{k=1}^{2} \int_{0}^{T}\|\partial_{t}^{k}q\|_{L^{2}}^{2}\lesssim\|(\Theta,q)\|_{X_{\Theta }\times X_{q}}^{2}\]
and
\[\|(\Theta,q)\|_{X_{\Theta}\times X_{q}}^{2}\lesssim\sup_{t\in(0,T)}\mathcal{E} [\Theta](t)+\|q\|_{L^{\infty}H^{1}}^{2}+\sum_{k=1}^{2}\int_{0}^{T}\|\partial_{t }^{k}q\|_{L^{2}}^{2},\]
where \(\|(\Theta,q)\|_{X_{\Theta}\times X_{q}}^{2}=\|\Theta\|_{X_{\Theta}}^{2}+\|q\|_ {X_{q}}^{2}\). Furthermore, taking into account Assumption 1, especially the boundedness of the functions \(\alpha\) and \(r\), we obtain
\[\|p\|_{X_{p}}^{2} \lesssim\sup_{t\in(0,T)}\mathfrak{E}[p](t)+\sup_{t\in(0,T)}b\| \Delta p_{t}(t)\|_{L^{2}}^{2}+\int_{0}^{T}\|\sqrt{\alpha}p_{ttt}\|_{L^{2}}^{2 }\,\mathrm{d}s\] \[\quad+\int_{0}^{T}b\|\nabla\Delta p_{t}\|_{L^{2}}^{2}\,\mathrm{d} s+\int_{0}^{T}b\|\Delta p_{tt}\|_{L^{2}}^{2}\,\mathrm{d}s\] \[\lesssim\sup_{t\in(0,T)}\mathfrak{E}[p](t)+\sup_{t\in(0,T)}b\| \Delta p_{t}(t)\|_{L^{2}}^{2}+\int_{0}^{T}\mathfrak{D}[p](s)\,\mathrm{d}s.\]
The reverse inequality also holds:
\[\sup_{t\in(0,T)}\mathfrak{E}[p](t)+\sup_{t\in(0,T)}b\|\Delta p_{t} (t)\|_{L^{2}}^{2}+\int_{0}^{T}\|\sqrt{\alpha}p_{ttt}\|_{L^{2}}^{2}\,\,\mathrm{d}s\] \[\qquad\qquad\qquad+\int_{0}^{T}b\|\nabla\Delta p_{t}\|_{L^{2}}^{2 }\,\mathrm{d}s+\int_{0}^{T}b\|\Delta p_{tt}\|_{L^{2}}^{2}\,\mathrm{d}s\, \lesssim\|p\|_{X_{p}}^{2}.\]
Notice that the norm \(\|(p,\Theta,q)\|_{\mathcal{X}}\) is independent of \(\tau\). This plays a crucial role in the study of the limit \(\tau\to 0\).
We consider the operator \(\mathcal{T}\) that maps \((p^{*},\Theta^{*},q^{*})\in B\subset\mathcal{X}\) to \((p,\Theta,q)\in\mathcal{X}\), the solution of the coupled problem
\[\begin{cases}(1-2k(\Theta^{*})p^{*})p_{tt}-h(\Theta^{*})\Delta p-b\Delta p_{t} =2k(\Theta^{*})(p_{t}^{*})^{2},&\text{ in }\quad\Omega\times(0,T),\\ m\Theta_{t}+\nabla\cdot q+\ell\Theta=\mathcal{Q}(p_{t}^{*}),&\text{ in }\quad\Omega\times(0,T),\\ \tau q_{t}+q+\kappa_{\mathrm{a}}\nabla\Theta=0,&\text{ in }\quad\Omega\times(0,T).\end{cases} \tag{6.1}\]
The existence of a unique solution of system (1.9) is equivalent to the existence of a unique fixed point in \(B\) to the mapping \(\mathcal{T}\), which will be guaranteed by Banach's fixed-point theorem. Therefore, to ensure applicability of the latter, we want to show that for \(R_{2}\) large enough, \(R_{1}\) small enough and for \(\delta\) small enough, we have:
**(i):**: The mapping \(\mathcal{T}:B\to B\) is well defined.
**(ii):**: \(\mathcal{T}\) is a contraction mapping.
This is achieved by following the lines of the analysis presented in [21, Section 4]. The results are contained in the next two lemmas.
**Lemma 6.1**.: _Given \(\tau\in(0,\bar{\tau}]\). Then, for small enough \(R_{1}\) and \(\delta\), the operator \(\mathcal{T}\) is a self-mapping; namely \(\mathcal{T}(B)\subset B\)._
Proof.: Given \((p^{*},\Theta^{*},q^{*})\in B\). We aim to show that \((p,\Theta,q)=\mathcal{T}(p^{*},\Theta^{*},q^{*})\), the solution of (6.1), also lies in \(B\). To do this, we write the system (6.1) in the framework of Propositions 4.1 and 5.1. To this end, we set
\[\alpha(x,t) =1-2k(\Theta^{*})p^{*},\qquad r(x,t)=h(\Theta^{*}),\] \[g(x,t) =2k(\Theta^{*})(p_{t}^{*})^{2},\qquad f(x,t)=\mathcal{Q}(p_{t}^{* })=\frac{2b}{\rho_{\mathrm{a}}\mathrm{C}_{\mathrm{a}}^{4}}(p_{t}^{*})^{2}\]
and our goal is to prove that these functions satisfy Assumption 1. We begin by checking the nondegeneracy condition. Indeed, we have
\[\|2k(\Theta^{*})p^{*}\|_{L^{\infty}L^{\infty}}\leq 2k_{1}\|p^{*}\|_{L^{\infty}L^{ \infty}}\leq 2k_{1}\gamma,\]
hence
\[0<\alpha_{0}=1-2k_{1}\gamma\leq\alpha(x,t)\leq 1+2k_{1}\gamma=\alpha_{1}.\]
In addition, from (H1), we have
\[0<r_{0}=h_{1}\leq h(\Theta^{*}),\]
which ensures that the functions \(r,\alpha\) do not degenerate.
Next, we focus on the function \(\Lambda(t)\) given in (5.5). Using the properties of \(k\) (see the assumption (K2)) and the embedding \(H^{2}(\Omega)\hookrightarrow L^{\infty}(\Omega)\), we have
\[\|\alpha_{t}\|_{L^{2}L^{2}} =\|\partial_{t}(2k(\Theta^{*})p^{*})\|_{L^{2}L^{2}}\] \[\leq 2\|k(\Theta^{*})p_{t}^{*}\|_{L^{2}L^{2}}+2\|k^{\prime}( \Theta^{*})\Theta_{t}^{*}p^{*}\|_{L^{2}L^{2}}\] \[\lesssim\|k(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|p_{t}^{*}\|_{L^ {2}L^{2}}+\|k^{\prime}(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\Theta_{t}^{*}\|_ {L^{2}L^{2}}\|p^{*}\|_{L^{\infty}L^{\infty}}\] \[\lesssim k_{1}\|p_{t}^{*}\|_{L^{2}L^{2}}+(1+\|\Theta^{*}\|_{L^{ \infty}L^{\infty}}^{\gamma_{2}+1})\|\Theta_{t}^{*}\|_{L^{2}L^{2}}\|p^{*}\|_{L^ {\infty}L^{\infty}}\] \[\leq C_{T}(R_{1}+(1+R_{2}^{1+\gamma_{2}})R_{1}R_{2}).\]
Let \(\beta=\frac{4}{4-d}\). Then, we obtain
\[\|\alpha_{t}\|_{L^{\beta}L^{2}} \leq 2\|k(\Theta^{*})p_{t}^{*}\|_{L^{\beta}L^{2}}+2\|k^{\prime}( \Theta^{*})\Theta_{t}^{*}p^{*}\|_{L^{\beta}L^{2}}\] \[\lesssim\|k(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|p_{t}^{*}\|_{L^ {\beta}L^{2}}+\|k^{\prime}(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\Theta_{t}^{ *}\|_{L^{\beta}L^{2}}\|p^{*}\|_{L^{\infty}L^{\infty}}\] \[\lesssim k_{1}\|p_{t}^{*}\|_{L^{\beta}L^{2}}+(1+\|\Theta^{*}\|_{L ^{\infty}L^{\infty}}^{\gamma_{2}+1})\|\Theta_{t}^{*}\|_{L^{\beta}L^{2}}\|p^{* }\|_{L^{\infty}H^{3}}\] \[\leq C_{T}(R_{1}+(1+R_{2}^{1+\gamma_{2}})R_{1}R_{2}),\]
where we used the embedding \(L^{\infty}(0,T)\hookrightarrow L^{\beta}(0,T)\). Further, using the embedding \(H^{1}(\Omega)\hookrightarrow L^{3}(\Omega)\), we can estimate \(\|\alpha_{t}\|_{L^{2}L^{3}},\|\nabla\alpha\|_{L^{2}L^{3}}\) as follows
\[\|\alpha_{t}\|_{L^{2}L^{3}} \leq 2\|k(\Theta^{*})p_{t}^{*}\|_{L^{2}L^{3}}+2\|k^{\prime}( \Theta^{*})\Theta_{t}^{*}p^{*}\|_{L^{2}L^{3}}\] \[\lesssim\|k(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|p_{t}^{*}\|_{L^ {2}L^{3}}+\|k^{\prime}(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\Theta_{t}^{*}\|_ {L^{2}L^{3}}\|p^{*}\|_{L^{\infty}L^{\infty}}\] \[\lesssim k_{1}\|p_{t}^{*}\|_{L^{2}H^{1}}+(1+\|\Theta^{*}\|_{L^{ \infty}L^{\infty}}^{\gamma_{2}+1})\|\Theta_{t}^{*}\|_{L^{2}H^{1}}\|p^{*}\|_{L^ {\infty}H^{3}}, \tag{6.2}\]
and
\[\|\nabla\alpha\|_{L^{2}L^{3}} \leq 2\|k(\Theta^{*})\nabla p^{*}\|_{L^{2}L^{3}}+2\|k^{\prime}( \Theta^{*})\nabla\Theta^{*}p^{*}\|_{L^{2}L^{3}}\] \[\lesssim\|k(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\nabla p^{*}\|_{L ^{2}L^{3}}+\|k^{\prime}(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\nabla\Theta^{*}\| _{L^{2}L^{3}}\|p^{*}\|_{L^{\infty}L^{\infty}}\] \[\lesssim k_{1}\|p^{*}\|_{L^{2}H^{2}}+(1+\|\Theta^{*}\|_{L^{\infty}L ^{\infty}}^{\gamma_{2}+1})\|\Theta^{*}\|_{L^{2}H^{2}}\|p^{*}\|_{L^{\infty}H^{3}}. \tag{6.3}\]
Hence, it results from (6.2) and (6.3) that
\[\|\alpha_{t}\|_{L^{2}L^{3}}+\|\nabla\alpha\|_{L^{2}L^{3}}\leq C_{T}(R_{1}+(1+R_{2 }^{1+\gamma_{2}})R_{1}R_{2}).\]
Similarly, we can derive estimates for the terms of \(\Lambda\) involving the function \(r\). On account of the properties of the function \(h\) (see (H3)), we find
\[\|r_{t}\|_{L^{\beta}L^{2}} =\|h^{\prime}(\Theta^{*})\Theta_{t}^{*}\|_{L^{\beta}L^{2}}\leq\|h ^{\prime}(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\Theta_{t}^{*}\|_{L^{\beta}L^{ 2}}\] \[\leq C_{T}(1+\|\Theta^{*}\|_{L^{\infty}L^{\infty}}^{\gamma_{1}+1} )\|\Theta_{t}^{*}\|_{L^{\infty}L^{2}}\leq C_{T}(1+R_{2}^{\gamma_{1}+1})R_{2},\]
and
\[\|\nabla r\|_{L^{2}L^{2}} =\|h^{\prime}(\Theta^{*})\nabla\Theta^{*}\|_{L^{2}L^{2}}\leq\|h^{ \prime}(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\nabla\Theta^{*}\|_{L^{2}L^{2}}\] \[\lesssim(1+\|\Theta^{*}\|_{L^{\infty}L^{\infty}}^{\gamma_{1}+1}) \|\nabla\Theta^{*}\|_{L^{2}L^{2}}\leq C_{T}(1+R_{2}^{\gamma_{1}+1})R_{2}.\]
Again, using the embedding \(H^{1}(\Omega)\hookrightarrow L^{3}(\Omega)\), we have
\[\|r_{t}\|_{L^{2}L^{3}} =\|h^{\prime}(\Theta^{*})\Theta_{t}^{*}\|_{L^{2}L^{3}}\lesssim(1+ \|\Theta^{*}\|_{L^{\infty}L^{\infty}}^{\gamma_{1}+1})\|\Theta_{t}^{*}\|_{L^{2} L^{3}}\] \[\lesssim(1+\|\Theta^{*}\|_{L^{\infty}L^{\infty}}^{\gamma_{1}+1}) \|\Theta_{t}^{*}\|_{L^{2}H^{1}}.\]
Moreover, elliptic regularity allows one to get
\[\|\nabla r\|_{L^{2}L^{3}} =\|h^{\prime}(\Theta^{*})\nabla\Theta^{*}\|_{L^{2}L^{3}}\leq\|h^ {\prime}(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\nabla\Theta^{*}\|_{L^{2}L^{3}}\] \[\lesssim(1+\|\Theta^{*}\|_{L^{\infty}L^{\infty}}^{\gamma_{1}+1}) \|\Theta^{*}\|_{L^{2}H^{2}}.\]
Thus, it follows that
\[\|r_{t}\|_{L^{2}L^{3}}+\|\nabla r\|_{L^{2}L^{3}}\leq C_{T}(1+R_{2}^{\gamma_{1} +1})R_{2}.\]
For convenience, we recall the definition of \(\Lambda\)
\[\Lambda(t)=\|\alpha_{t}(t)\|_{L^{2}}^{2} +\|\alpha_{t}(t)\|_{L^{2}}^{\frac{4}{4-d}}+\|r_{t}(t)\|_{L^{2}}^{ \frac{4}{4-d}}+\|\nabla r(t)\|_{L^{2}}^{2}+\|r_{t}(t)\|_{L^{3}}^{2}\] \[+\|\alpha_{t}(t)\|_{L^{3}}^{2}+\|\nabla r(t)\|_{L^{3}}^{2}+\| \nabla\alpha(t)\|_{L^{3}}^{2},\]
so altogether the above estimates imply that
\[\|\Lambda\|_{L^{1}(0,t)}\leq C_{1}(T,R_{1},R_{2}). \tag{6.4}\]
Now, we turn our attention to the source term \(g\). Using the embeddings \(H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)\), \(H^{1}(\Omega)\hookrightarrow L^{6}(\Omega)\) and the fact that \(\|(p_{t}^{*})^{2}\|_{L^{3}}=\|p_{t}^{*}\|_{L^{6}}^{2}\), we have
\[\|g\|_{L^{2}H^{1}}+\|g_{t}\|_{L^{2}L^{2}}\lesssim \|4k(\Theta^{*})\nabla p_{t}^{*}p_{t}^{*}+2k^{\prime}(\Theta^{*}) \nabla\Theta^{*}(p_{t}^{*})^{2}\|_{L^{2}L^{2}}\] \[+\|4k(\Theta^{*})p_{t}^{*}p_{tt}^{*}+2k^{\prime}(\Theta^{*}) \Theta_{t}^{*}(p_{t}^{*})^{2}\|_{L^{2}L^{2}}\] \[\lesssim \|k(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\nabla p_{t}^{*}\|_{L^{ 2}L^{4}}\|p_{t}^{*}\|_{L^{\infty}L^{4}}\] \[+\|k^{\prime}(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\nabla\Theta^{ *}\|_{L^{2}L^{6}}\|(p_{t}^{*})^{2}\|_{L^{\infty}L^{3}}\] \[+\|k(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|p_{t}^{*}\|_{L^{\infty} L^{4}}\|p_{tt}^{*}\|_{L^{2}L^{4}}\] \[+\|k^{\prime}(\Theta^{*})\|_{L^{\infty}L^{\infty}}\|\Theta_{t}^{*} \|_{L^{2}L^{6}}\|(p_{t}^{*})^{2}\|_{L^{\infty}L^{3}}\] \[\lesssim \|p_{t}^{*}\|_{L^{2}H^{2}}\|p_{t}^{*}\|_{L^{\infty}H^{1}}+(1+\| \Theta^{*}\|_{L^{\infty}L^{\infty}}^{\gamma_{2}+1})\|\Theta^{*}\|_{L^{2}H^{2}} \|p_{t}^{*}\|_{L^{\infty}H^{1}}^{2}\] \[+\|p_{t}^{*}\|_{L^{\infty}H^{1}}\|p_{tt}^{*}\|_{L^{2}H^{1}}+(1+ \|\Theta^{*}\|_{L^{\infty}L^{\infty}}^{\gamma_{2}+1})\|\Theta_{t}^{*}\|_{L^{2} H^{1}}\|p_{t}^{*}\|_{L^{\infty}H^{1}}^{2},\]
which gives
\[\|g\|_{L^{2}H^{1}}+\|g_{t}\|_{L^{2}L^{2}}\leq C_{T}R_{1}^{2}(1+R_{2}+R_{2}^{2+ \gamma_{2}}).\]
Thus, we can bound from above the function \(\mathfrak{F}\) defined in (5.6) as
\[\|\mathfrak{F}\|_{L^{1}(0,t)}=\int_{0}^{t}(\|\nabla g\|_{L^{2}}^{2}+ \|g_{t}\|_{L^{2}}^{2})\,\mathrm{d}s\leq \,\|g\|_{L^{2}H^{1}}^{2}+\|g_{t}\|_{L^{2}L^{2}}^{2}\] \[\leq C_{T}R_{1}^{4}(1+R_{2}^{2}(1+R_{2}^{2+2\gamma_{2}})).\]
Hence, it results that
\[\|\mathfrak{F}\|_{L^{1}(0,t)}\leq R_{1}^{4}C_{2}(T,R_{2}).\]
Consequently, from Proposition 5.1 we have the existence of a unique solution \(p\in X_{p}\) to the first equation in (6.1). Moreover, since \((p^{*},\Theta^{*},q^{*})\in B\), we have \(f=\mathcal{Q}(p_{t}^{*})\in H^{2}(0,T;L^{2}(\Omega))\). Then, according to Proposition 4.1, there exists a unique solution \((\Theta,q)\in X_{\Theta}\times X_{q}\) of the second and third equations in (6.1). That is to say, the mapping \(\mathcal{T}\) is well-defined.
On account of (5.4) and the fact that \(\mathfrak{E}[p](0)\leq\delta\), we obtain
\[\|p\|_{X_{p}}^{2} \lesssim\sup_{t\in(0,T)}\mathfrak{E}[p](t)+\sup_{t\in(0,T)}b\| \Delta p_{t}(t)\|_{L^{2}}^{2}+\int_{0}^{T}\mathfrak{D}[p](s)\,\mathrm{d}s\] \[\lesssim \,\delta\exp(T(1+C_{1}(T,R_{1},R_{2})))+T^{2}R_{1}^{4}\exp(T(1+C_ {1}(T,R_{1},R_{2})))C_{2}(T,R_{2}).\]
Thus, by choosing \(\delta\) and \(R_{1}\) small enough, we get
\[\|p\|_{X_{p}}\leq R_{1}.\]
Also, observing that
\[\|p\|_{L^{\infty}L^{\infty}}\lesssim\|\Delta p\|_{L^{\infty}L^{2}}\lesssim\|p \|_{X_{p}},\]
we obtain an upper bound \(\gamma<\dfrac{1}{2k_{1}}\) for \(\|p\|_{L^{\infty}L^{\infty}}\) by possibly reducing \(R_{1}\).
Therefore, it remains to verify that \(\|(\Theta,q)\|_{X_{\Theta}\times X_{q}}\leq R_{2}\). First, we have
\[\|f\|_{H^{2}L^{2}}^{2}\lesssim \,\|(p_{t}^{*})^{2}\|_{L^{2}L^{2}}^{2}+\|2p_{t}^{*}p_{tt}^{*}\|_{ L^{2}L^{2}}^{2}+\|2(p_{tt}^{*})^{2}+2p_{t}^{*}p_{ttt}^{*}\|_{L^{2}L^{2}}^{2}\] \[\lesssim \,\|p_{t}^{*}\|_{L^{\infty}L^{4}}^{2}\|p_{t}^{*}\|_{L^{2}L^{4}}^{ 2}+\|p_{t}^{*}\|_{L^{\infty}L^{4}}^{2}\|p_{tt}^{*}\|_{L^{2}L^{4}}^{2}\] \[+\|p_{tt}^{*}\|_{L^{\infty}L^{4}}^{2}\|p_{tt}^{*}\|_{L^{2}L^{4}}^{ 2}+\|p_{t}^{*}\|_{L^{\infty}L^{\infty}}^{2}\|p_{ttt}^{*}\|_{L^{2}L^{2}}^{2}.\]
Thanks to the embedding \(H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)\), we find
\[\|f\|_{H^{2}L^{2}}^{2}\leq C_{T}\|p\|_{X_{p}}^{4}\leq C_{T}R_{1}^{4}.\]
Then, according to Proposition 4.1 and particularly the estimates (4.17) and (4.18), we get
\[\|(\Theta,q)\|_{X_{\Theta}\times X_{q}}^{2}\lesssim C_{T}\Big{(}\|q_{0}\|_{H^ {1}}^{2}+(1+\bar{\tau}+\bar{\tau}^{2})(E^{\bar{\tau}}[\Theta,q](0)+\|f\|_{H^{2 }L^{2}}^{2})\Big{)}.\]
We emphasize that the constant \(C_{T}>0\) in this inequality does not depend on the parameter \(\tau\). So it suffices to take \(R_{2}\) large enough such that
\[C_{T}\Big{(}\|q_{0}\|_{H^{1}}^{2}+(1+\bar{\tau}+\bar{\tau}^{2})(E^{\bar{\tau}} [\Theta,q](0)+R_{1}^{4})\Big{)}\leq R_{2}^{2},\]
in order to conclude that the solution \((p,\Theta,q)\) of the system (6.1) remains in \(B\).
**Lemma 6.2**.: _Let \(\tau\in(0,\bar{\tau}]\). If \(R_{1}\) and \(\delta\) are sufficiently small, then the mapping \(\mathcal{T}\) is a contraction on \(B\)._
Proof.: Let \((p_{1}^{*},\Theta_{1}^{*},q_{1}^{*}),(p_{2}^{*},\Theta_{2}^{*},q_{2}^{*})\in B\) and let \((p_{1},\Theta_{1},q_{1}),(p_{2},\Theta_{2},q_{2})\) stand for their corresponding images by the operator \(\mathcal{T}\); that is
\[\mathcal{T}(p_{1}^{*},\Theta_{1}^{*},q_{1}^{*})=(p_{1},\Theta_{1},q_{1})\quad \text{and}\quad\mathcal{T}(p_{2}^{*},\Theta_{2}^{*},q_{2}^{*})=(p_{2},\Theta_{ 2},q_{2}).\]
Since \((p_{1},\Theta_{1},q_{1}),(p_{2},\Theta_{2},q_{2})\) are both solutions to system (6.1), clearly the differences
\[\hat{p}=p_{1}-p_{2},\quad\hat{\Theta}=\Theta_{1}-\Theta_{2},\quad\hat{q}=q_{1}- q_{2},\]
\[\hat{p}^{*}=p_{1}^{*}-p_{2}^{*},\quad\hat{\Theta}^{*}=\Theta_{1}^{*}-\Theta_{2} ^{*},\quad\hat{q}^{*}=q_{1}^{*}-q_{2}^{*}\]
solve the following system
\[\begin{cases}(1-2k(\Theta_{1}^{*})p_{1}^{*})\hat{p}_{tt}-h(\Theta_{1}^{*}) \Delta\hat{p}-b\Delta\hat{p}_{t}=g_{1},\\ m\hat{\Theta}_{t}+\nabla\cdot\hat{q}+\ell\hat{\Theta}=f_{1},&\text{in}\quad\Omega \times(0,T),\\ \tau\hat{q}_{t}+\hat{q}+\kappa_{\text{a}}\nabla\hat{\Theta}=0,\end{cases} \tag{6.5}\]
with the initial and boundary conditions
\[\hat{p}(x,0)=\hat{p}_{t}(x,0)=\hat{\Theta}(x,0)=0,\quad\hat{q}(x,0)=0,\quad \text{in}\quad\Omega.\]
\[\hat{p}=\hat{\Theta}=0,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\text{ on}\quad\partial\Omega\times(0,T).\]
The forcing terms \(f_{1}\) and \(g_{1}\) are given by
\[f_{1}= \,\mathcal{Q}(p_{1t}^{*})-\mathcal{Q}(p_{2t}^{*}),\] \[g_{1}= \,2(k(\Theta_{1}^{*})p_{1}^{*}-k(\Theta_{2}^{*})p_{2}^{*})p_{2tt} +(h(\Theta_{1}^{*})-h(\Theta_{2}^{*}))\Delta p_{2}+2k(\Theta_{1}^{*})(p_{1t}^ {*})^{2}-2k(\Theta_{2}^{*})(p_{2t}^{*})^{2}\] \[= \,2(k(\Theta_{1}^{*})-k(\Theta_{2}^{*}))(p_{2}^{*}p_{2tt}+(p_{2t} ^{*})^{2})+(h(\Theta_{1}^{*})-h(\Theta_{2}^{*}))\Delta p_{2}\] \[+2k(\Theta_{1}^{*})\big{(}(p_{1t}^{*}+p_{2t}^{*})\hat{p}_{t}^{*}+ \hat{p}^{*}p_{2tt}\big{)}\] \[:= g_{11}+g_{12}+g_{13}.\]
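The second expression for \(g_{1}\) follows from the elementary regroupings

\[k(\Theta_{1}^{*})p_{1}^{*}-k(\Theta_{2}^{*})p_{2}^{*}=(k(\Theta_{1}^{*})-k(\Theta_{2}^{*}))p_{2}^{*}+k(\Theta_{1}^{*})\hat{p}^{*},\]

\[k(\Theta_{1}^{*})(p_{1t}^{*})^{2}-k(\Theta_{2}^{*})(p_{2t}^{*})^{2}=(k(\Theta_{1}^{*})-k(\Theta_{2}^{*}))(p_{2t}^{*})^{2}+k(\Theta_{1}^{*})(p_{1t}^{*}+p_{2t}^{*})\hat{p}_{t}^{*}.\]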
We start by recalling some estimates derived in [21] on the functions \(k,h\) and their derivatives. We have
\[k(\Theta_{1}^{*})-k(\Theta_{2}^{*})=(\Theta_{1}^{*}-\Theta_{2}^{*})\int_{0}^{ 1}k^{\prime}(\Theta_{2}^{*}+\sigma(\Theta_{1}^{*}-\Theta_{2}^{*}))\mathrm{d}\sigma;\]
hence using (K2), together with the Sobolev embedding \(H^{2}\hookrightarrow L^{\infty}\), we get
\[\|k(\Theta_{1}^{*})-k(\Theta_{2}^{*})\|_{L^{\infty}L^{\infty}} =\Big{\|}(\Theta_{1}^{*}-\Theta_{2}^{*})\int_{0}^{1}k^{\prime}( \Theta_{2}^{*}+\sigma(\Theta_{1}^{*}-\Theta_{2}^{*}))\mathrm{d}\sigma\Big{\|} _{L^{\infty}L^{\infty}}\] \[\lesssim\|\Theta_{1}^{*}-\Theta_{2}^{*}\|_{L^{\infty}L^{\infty}} \Big{(}1+\|\Theta_{2}^{*}+\sigma(\Theta_{1}^{*}-\Theta_{2}^{*})\|_{L^{\infty}L ^{\infty}}^{1+\gamma_{2}}\Big{)}\] \[\lesssim\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{ X}}\Big{(}1+\|\Theta_{1}^{*}\|_{L^{\infty}L^{\infty}}^{1+\gamma_{2}}+\|\Theta_{2}^{*}\|_{L^{ \infty}L^{\infty}}^{1+\gamma_{2}}\Big{)}. \tag{6.6a}\]
In a similar fashion, on account of (H2), (H3) and (K2), we can show that
\[\|k^{\prime}(\Theta_{1}^{*})-k^{\prime}(\Theta_{2}^{*})\|_{L^{\infty}L^{\infty} }\lesssim\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}\Big{(}1+ \|\Theta_{1}^{*}\|_{L^{\infty}L^{\infty}}^{2}+\|\Theta_{2}^{*}\|_{L^{\infty}L^{ \infty}}^{\gamma_{2}}\Big{)}; \tag{6.7a}\] \[\|h(\Theta_{1}^{*})-h(\Theta_{2}^{*})\|_{L^{\infty}L^{\infty}} \lesssim\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{ X}}\Big{(}1+\|\Theta_{1}^{*}\|_{L^{\infty}L^{\infty}}^{1+\gamma_{1}}+\| \Theta_{2}^{*}\|_{L^{\infty}L^{\infty}}^{1+\gamma_{1}}\Big{)};\] (6.7b) \[\|h^{\prime}(\Theta_{1}^{*})-h^{\prime}(\Theta_{2}^{*})\|_{L^{\infty}L^{ \infty}}\lesssim\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}} \Big{(}1+\|\Theta_{1}^{*}\|_{L^{\infty}L^{\infty}}^{\gamma_{1}}+\|\Theta_{2}^{*} \|_{L^{\infty}L^{\infty}}^{\gamma_{1}}\Big{)}. \tag{6.6b}\]
These estimates will be repeatedly used in the course of this proof.
First, we focus on estimating the \(L^{2}\)-norm of the gradient of \(g_{1}\). Recall that
\[g_{11}=2(k(\Theta_{1}^{*})-k(\Theta_{2}^{*}))(p_{2}^{*}p_{2tt}+(p_{2t}^{*})^{2}).\]
This entails that
\[\nabla g_{11}= \,2\nabla(k(\Theta_{1}^{*})-k(\Theta_{2}^{*}))(p_{2}^{*}p_{2tt}+( p_{2t}^{*})^{2})\] \[+2(k(\Theta_{1}^{*})-k(\Theta_{2}^{*}))\nabla(p_{2}^{*}p_{2tt}+( p_{2t}^{*})^{2}).\]
Since the heat flux \(q\) is not present in the expression of the source term \(g_{1}\), we can proceed as in [21, Section 4] to estimate \(\|\nabla g_{11}\|_{L^{2}L^{2}}\). The fact that the operator \(\mathcal{T}\) is self-mapping along with the estimate
\[\|\Theta_{1}^{*}-\Theta_{2}^{*}\|_{X_{\Theta}}^{2}+\|p_{1}^{*}-p_{2}^{*}\|_{X_ {p}}^{2}\leq\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}^{2},\]
allows us to obtain the following bound
\[\|\nabla g_{11}\|_{L^{2}L^{2}}\leq C_{T}R_{1}^{2}(1+R_{2}^{\gamma_{2}}+R_{2}^ {1+\gamma_{2}})\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}.\]
Analogously, we can follow the arguments in [21] in order to deal with the gradient of the second contribution
\[\nabla g_{12}=\nabla(h(\Theta_{1}^{*})-h(\Theta_{2}^{*}))\Delta p_{2}+(h(\Theta_{1}^{*})-h(\Theta_{2}^{*}))\nabla\Delta p_{2}.\]
In fact, using the assumption (H3) on the function \(h\) together with (6.7a) and (6.7b), we find
\[\|\nabla g_{12}\|_{L^{2}L^{2}}\leq C_{T}R_{1}(1+R_{2}^{\gamma_{1}}+R_{2}^{1+ \gamma_{1}})\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}.\]
Thus, it remains to bound the gradient of the last component of \(g_{1}\)
\[\nabla g_{13}=2\big{(}\nabla(k(\Theta_{1}^{*}))\big{(}(p_{1t}^{*}+p_{2t}^{*}) \hat{p}_{t}^{*}+\hat{p}^{*}p_{2tt}\big{)}+k(\Theta_{1}^{*})\nabla\big{(}(p_{1t }^{*}+p_{2t}^{*})\hat{p}_{t}^{*}+\hat{p}^{*}p_{2tt}\big{)}\big{)}.\]
Again, we can proceed as in [21] to get
\[\|\nabla g_{13}\|_{L^{2}L^{2}}\leq C_{T}R_{1}(1+R_{2}+R_{2}^{\gamma_{2}+2})\| (\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}.\]
Collecting the estimates of the components of \(g_{1}\), it follows that
\[\|\nabla g_{1}\|_{L^{2}L^{2}}\leq R_{1}C(T,R_{1},R_{2})\|(\hat{p}^{*},\hat{ \Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}. \tag{6.8}\]
Next, we seek a similar estimate for the derivative in time of \(g_{1}\). Differentiating \(g_{11}\) with respect to \(t\), we have
\[\partial_{t}g_{11}= 2\Big{(}\partial_{t}(k(\Theta_{1}^{*})-k(\Theta_{2}^{*}))(p_{2}^{*}p_{2tt}+(p_{2t}^{*})^{2})+(k(\Theta_{1}^{*})-k(\Theta_{2}^{*}))\partial_{t}(p_{2}^{*}p_{2tt}+(p_{2t}^{*})^{2})\Big{)}\] \[= 2\Big{(}\big{(}k^{\prime}(\Theta_{1}^{*})\hat{\Theta}_{t}^{*}+(k^{\prime}(\Theta_{1}^{*})-k^{\prime}(\Theta_{2}^{*}))\Theta_{2t}^{*}\big{)}(p_{2}^{*}p_{2tt}+(p_{2t}^{*})^{2})\] \[+(k(\Theta_{1}^{*})-k(\Theta_{2}^{*}))(p_{2t}^{*}p_{2tt}+p_{2}^{*}p_{2ttt}+2p_{2t}^{*}p_{2tt}^{*})\Big{)}.\]
Using Hölder's inequality, we obtain
\[\|\partial_{t}g_{11}\|_{L^{2}L^{2}}\lesssim \big{(}\|k^{\prime}(\Theta_{1}^{*})\|_{L^{\infty}L^{\infty}}\|\hat{\Theta}_{t}^{*}\|_{L^{2}L^{4}}+\|k^{\prime}(\Theta_{1}^{*})-k^{\prime}(\Theta_{2}^{*})\|_{L^{\infty}L^{\infty}}\|\Theta_{2t}^{*}\|_{L^{2}L^{4}}\big{)}\] \[\times\big{(}\|p_{2}^{*}\|_{L^{\infty}L^{\infty}}\|p_{2tt}\|_{L^{\infty}L^{4}}+\|p_{2t}^{*}\|_{L^{\infty}L^{\infty}}\|p_{2t}^{*}\|_{L^{\infty}L^{4}}\big{)}\] \[+\|k(\Theta_{1}^{*})-k(\Theta_{2}^{*})\|_{L^{\infty}L^{\infty}}\big{(}\|p_{2t}^{*}\|_{L^{\infty}L^{4}}\|p_{2tt}\|_{L^{2}L^{4}}+\|p_{2}^{*}\|_{L^{\infty}L^{\infty}}\|p_{2ttt}\|_{L^{2}L^{2}}\] \[+\|p_{2t}^{*}\|_{L^{\infty}L^{4}}\|p_{2tt}^{*}\|_{L^{2}L^{4}}\big{)}.\]
Then, taking into account (K2), (6.6b) and using the embedding \(H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)\), it follows that
\[\|\partial_{t}g_{11}\|_{L^{2}L^{2}}\leq C_{T}R_{1}^{2}(1+R_{2}^{\gamma_{2}}+R_{ 2}^{1+\gamma_{2}})\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}.\]
The time derivative of \(g_{12}\) is given by
\[\partial_{t}g_{12}= \partial_{t}(h(\Theta_{1}^{*})-h(\Theta_{2}^{*}))\Delta p_{2}+(h( \Theta_{1}^{*})-h(\Theta_{2}^{*}))\Delta p_{2t},\] \[= \big{(}h^{\prime}(\Theta_{1}^{*})\hat{\Theta}_{t}^{*}+(h^{\prime }(\Theta_{1}^{*})-h^{\prime}(\Theta_{2}^{*}))\Theta_{2t}^{*}\big{)}\Delta p_{2 }+(h(\Theta_{1}^{*})-h(\Theta_{2}^{*}))\Delta p_{2t}.\]
Hence, it holds that
\[\|\partial_{t}g_{12}\|_{L^{2}L^{2}}\lesssim \big{(}\|h^{\prime}(\Theta_{1}^{*})\|_{L^{\infty}L^{\infty}}\|\hat{\Theta}_{t}^{*}\|_{L^{2}L^{4}}+\|h^{\prime}(\Theta_{1}^{*})-h^{\prime}(\Theta_{2}^{*})\|_{L^{\infty}L^{\infty}}\|\Theta_{2t}^{*}\|_{L^{2}L^{4}}\big{)}\|\Delta p_{2}\|_{L^{\infty}L^{4}}\] \[+\|h(\Theta_{1}^{*})-h(\Theta_{2}^{*})\|_{L^{\infty}L^{\infty}}\|\Delta p_{2t}\|_{L^{2}L^{2}},\]
which, thanks to (H3), (6.7a) and (6.7b), implies
\[\|\partial_{t}g_{12}\|_{L^{2}L^{2}}\leq C_{T}R_{1}(1+R_{2}^{\gamma_{1}}+R_{2} ^{1+\gamma_{1}})\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}.\]
Differentiating in time the last contribution \(g_{13}\), we find
\[\partial_{t}g_{13}= 2\big{(}k^{\prime}(\Theta_{1}^{*})\Theta_{1t}^{*}\big{(}(p_{1t}^ {*}+p_{2t}^{*})\hat{p}_{t}^{*}+\hat{p}^{*}p_{2tt}\big{)}\] \[+k(\Theta_{1}^{*})\big{(}(p_{1tt}^{*}+p_{2tt}^{*})\hat{p}_{t}^{*} +(p_{1t}^{*}+p_{2t}^{*})\hat{p}_{tt}^{*}+\hat{p}_{t}^{*}p_{2tt}+\hat{p}^{*}p_ {2ttt}\big{)}\big{)}.\]
Applying Hölder's inequality yields
\[\|\partial_{t}g_{13}\|_{L^{2}L^{2}}\] \[\lesssim \|k^{\prime}(\Theta_{1}^{*})\|_{L^{\infty}L^{\infty}}\|\Theta_{1t }^{*}\|_{L^{2}L^{4}}\big{(}\|p_{1t}^{*}+p_{2t}^{*}\|_{L^{\infty}L^{4}}\|\hat{ p}_{t}^{*}\|_{L^{\infty}L^{\infty}}+\|\hat{p}^{*}\|_{L^{\infty}L^{\infty}}\|p_{2tt}\|_{L^{ \infty}L^{4}}\big{)}\] \[+\|k(\Theta_{1}^{*})\|_{L^{\infty}L^{\infty}}\big{(}\|p_{1tt}^{*} +p_{2tt}^{*}\|_{L^{2}L^{4}}\|\hat{p}_{t}^{*}\|_{L^{\infty}L^{4}}+\|p_{1t}^{*} +p_{2t}^{*}\|_{L^{\infty}L^{4}}\|\hat{p}_{tt}^{*}\|_{L^{2}L^{4}}\] \[+\|\hat{p}_{t}^{*}\|_{L^{\infty}L^{4}}\|p_{2tt}\|_{L^{2}L^{4}}+\| \hat{p}^{*}\|_{L^{\infty}L^{\infty}}\|p_{2ttt}\|_{L^{2}L^{2}}\big{)}.\]
Then, from the assumption (K2) and the fact that the mapping \(\mathcal{T}\) is self-mapping, we arrive at
\[\|\partial_{t}g_{13}\|_{L^{2}L^{2}}\leq C_{T}R_{1}(1+R_{2}+R_{2}^{2+\gamma_{2} })\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}.\]
Putting together the above estimates results in
\[\|\partial_{t}g_{1}\|_{L^{2}L^{2}}\leq R_{1}C(T,R_{1},R_{2})\|(\hat{p}^{*}, \hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}. \tag{6.9}\]
As for the source term \(f_{1}\) in the temperature equation, we can handle it as follows
\[\|f_{1}\|_{H^{2}L^{2}}= \|f_{1}\|_{L^{2}L^{2}}+\|f_{1t}\|_{L^{2}L^{2}}+\|f_{1tt}\|_{L^{2}L^{2}}\] \[\lesssim \|(p_{1t}^{*}-p_{2t}^{*})(p_{1t}^{*}+p_{2t}^{*})\|_{L^{2}L^{2}}+\|(p_{1tt}^{*}-p_{2tt}^{*})p_{1t}^{*}\|_{L^{2}L^{2}}+\|(p_{1t}^{*}-p_{2t}^{*})p_{2tt}^{*}\|_{L^{2}L^{2}}\] \[+\|(p_{1ttt}^{*}-p_{2ttt}^{*})p_{1t}^{*}\|_{L^{2}L^{2}}+\|(p_{1tt}^{*}-p_{2tt}^{*})p_{1tt}^{*}\|_{L^{2}L^{2}}\] \[+\|(p_{1tt}^{*}-p_{2tt}^{*})p_{2tt}^{*}\|_{L^{2}L^{2}}+\|(p_{1t}^{*}-p_{2t}^{*})p_{2ttt}^{*}\|_{L^{2}L^{2}}.\]
Thus, we get
\[\|f_{1}\|_{H^{2}L^{2}}\lesssim \|p_{1t}^{*}-p_{2t}^{*}\|_{L^{\infty}L^{\infty}}\|p_{1t}^{*}+p_{2t}^{*}\|_{L^{2}L^{2}}+\|p_{1tt}^{*}-p_{2tt}^{*}\|_{L^{2}L^{4}}\|p_{1t}^{*}\|_{L^{\infty}L^{4}}\] \[+\|p_{1t}^{*}-p_{2t}^{*}\|_{L^{\infty}L^{4}}\|p_{2tt}^{*}\|_{L^{2}L^{4}}+\|p_{1ttt}^{*}-p_{2ttt}^{*}\|_{L^{2}L^{2}}\|p_{1t}^{*}\|_{L^{\infty}L^{\infty}}\] \[+\|p_{1tt}^{*}-p_{2tt}^{*}\|_{L^{\infty}L^{4}}\|p_{1tt}^{*}\|_{L^{2}L^{4}}+\|p_{1tt}^{*}-p_{2tt}^{*}\|_{L^{\infty}L^{4}}\|p_{2tt}^{*}\|_{L^{2}L^{4}}\] \[+\|p_{1t}^{*}-p_{2t}^{*}\|_{L^{\infty}L^{\infty}}\|p_{2ttt}^{*}\|_{L^{2}L^{2}}.\]
Using the embeddings \(H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)\), \(H^{2}(\Omega)\hookrightarrow L^{\infty}(\Omega)\) and elliptic regularity, we deduce that
\[\|f_{1}\|_{H^{2}L^{2}}\leq R_{1}C_{T}\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*} )\|_{\mathcal{X}}. \tag{6.10}\]
From the previous Lemma 6.1, we know that the coefficients of the first equation in the system (6.5)
\[\alpha_{1}=1-2k(\Theta_{1}^{*})p_{1}^{*},\qquad r_{1}=h(\Theta_{1}^{*})\]
satisfy the requirements of Proposition 5.1. Further, setting \(g=g_{1}\) and \(f=f_{1}\) and taking into account the above estimates, we deduce that the results of Propositions 4.1 and 5.1 hold. In particular, the energy estimates (4.17), (4.18) and (5.4) hold for the solution \((\hat{p},\hat{\Theta},\hat{q})\); that is
\[\|(\hat{p},\hat{\Theta},\hat{q})\|_{\mathcal{X}}^{2}= \|\mathcal{T}(p_{1}^{*},\Theta_{1}^{*},q_{1}^{*})-\mathcal{T}(p_ {2}^{*},\Theta_{2}^{*},q_{2}^{*})\|_{\mathcal{X}}^{2}\] \[\lesssim \sup_{t\in(0,T)}\mathfrak{E}[\hat{p}](t)+\sup_{t\in(0,T)}b\| \Delta\hat{p}_{t}(t)\|_{L^{2}}^{2}+\int_{0}^{T}\mathfrak{D}[\hat{p}](s)\, \mathrm{d}s\] \[+\sup_{t\in(0,T)}\mathcal{E}[\hat{\Theta}](t)+\|\hat{q}\|_{L^{ \infty}H^{1}}^{2}+\sum_{k=1}^{2}\int_{0}^{T}\|\partial_{t}^{k}\hat{q}(s)\|_{L^ {2}}^{2}\,\mathrm{d}s\] \[\lesssim \exp\big{(}\int_{0}^{T}(1+\Lambda(s))\,\mathrm{d}s\big{)}\int_{0 }^{T}(\|\nabla g_{1}\|_{L^{2}}^{2}+\|g_{1t}\|_{L^{2}}^{2})\,\mathrm{d}s\] \[+(1+\bar{\tau}+\bar{\tau}^{2})\|f_{1}\|_{H^{2}L^{2}}^{2}.\]
Hence, due to estimates (6.4), (6.8), (6.9) and (6.10), we find
\[\|(\hat{p},\hat{\Theta},\hat{q})\|_{\mathcal{X}}^{2}\lesssim \,R_{1}^{2}C(T,R_{1},R_{2})\exp(T+TC_{1}(T,R_{1},R_{2}))\big{\|}( \hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\big{\|}_{\mathcal{X}}^{2}\] \[+(1+\bar{\tau}+\bar{\tau}^{2})R_{1}^{2}C_{T}^{2}\|(\hat{p}^{*}, \hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}^{2},\]
where \(C_{1}(T,R_{1},R_{2})\) is the constant in (6.4). From here, we infer that
\[\|\mathcal{T}(p_{1}^{*},\Theta_{1}^{*},q_{1}^{*})-\mathcal{T}(p_{2}^{*}, \Theta_{2}^{*},q_{2}^{*})\|_{\mathcal{X}}\lesssim R_{1}C(T,R_{1},R_{2},\bar{ \tau})\|(\hat{p}^{*},\hat{\Theta}^{*},\hat{q}^{*})\|_{\mathcal{X}}.\]
Consequently, it suffices to select \(R_{1}\) as small as needed to ensure that the mapping \(\mathcal{T}\) is a strict contraction.
## 7. Limiting behaviour as \(\tau\searrow 0\): Proof of Theorem 2.2
We aim in this section to prove Theorem 2.2. More precisely, we prove that the solution of the Westervelt-Pennes-Cattaneo system (1.9) converges, as \(\tau\) goes to zero, to the solution of the limiting Westervelt-Pennes-Fourier system corresponding to \(\tau=0\). For the sake of simplicity, and to be able to relate it to the Westervelt-Pennes-Fourier problem analyzed in [21], we replace the system (4.1) by the equivalent telegraph equation derived in (4.15). Let \((p^{\tau},\Theta^{\tau})\) be the solution of the following \(\tau\)-dependent problem
\[\begin{cases}(1-2k(\Theta^{\tau})p^{\tau})p_{tt}^{\tau}-h(\Theta^{\tau})\Delta p ^{\tau}-b\Delta p_{t}^{\tau}=2k(\Theta^{\tau})(p_{t}^{\tau})^{2},&\text{in }\Omega \times(0,T),\\ \tau m\Theta_{tt}^{\tau}+(m+\tau\ell)\Theta_{t}^{\tau}+\ell\Theta^{\tau}- \kappa_{\mathrm{a}}\Delta\Theta^{\tau}=\mathcal{Q}(p_{t}^{\tau})+\tau\partial_ {t}\big{(}\mathcal{Q}(p_{t}^{\tau})\big{)},&\text{in }\Omega\times(0,T),\end{cases} \tag{7.1}\]
subject to homogeneous Dirichlet boundary conditions (1.9b) and initial conditions
\[(p^{\tau},p_{t}^{\tau})|_{t=0}=(p_{0}^{\tau},p_{1}^{\tau}),\quad(\Theta^{\tau}, \Theta_{t}^{\tau})|_{t=0}=(\Theta_{0}^{\tau},\Theta_{1}^{\tau})\]
with \(\Theta_{1}^{\tau}=\frac{1}{m}\big{(}-\nabla\cdot q_{0}^{\tau}-\ell\Theta_{0}^{ \tau}+\mathcal{Q}(p_{1}^{\tau})\big{)}\).
The proof presented below is based on the method employed in [14, 20] to examine the weak limit of the Jordan-Moore-Gibson-Thompson-type equations.
### Proof of Theorem 2.2
According to Theorem 2.1, we know that if the initial data \(p_{0}^{\tau},p_{1}^{\tau}\), \(\Theta_{0}^{\tau},\Theta_{1}^{\tau}\) satisfy
\[\mathfrak{E}[p](0)\leq\delta,\] \[\mathcal{E}[\Theta](0)\leq C\Big{(}\|q_{0}\|_{H^{1}}^{2}+(1+ \bar{\tau}+\bar{\tau}^{2})\big{(}E[\Theta,q](0)+R_{1}^{2}\big{)}\Big{)}\leq R_{ 2}^{2},\]
then the system (7.1) admits a unique solution verifying
\[\|p^{\tau}\|_{X_{p}}\leq R_{1},\quad\|\Theta^{\tau}\|_{X_{\Theta}}\leq R_{2}.\]
Since the radii \(R_{1},R_{2}\) and \(\delta\) are independent of \(\tau\), we can find subsequences such that
\[\begin{array}{ll}(p_{0}^{\tau},p_{1}^{\tau})\rightharpoonup(p_{0},p_{1})& \text{weakly in }&H^{3}(\Omega)\cap H_{0}^{1}(\Omega)\times H^{2}(\Omega)\cap H_{0}^{1}( \Omega),\\ (\Theta_{0}^{\tau},\Theta_{1}^{\tau})\rightharpoonup(\Theta_{0},\Theta_{1})& \text{weakly in }&H^{2}(\Omega)\cap H_{0}^{1}(\Omega)\times H_{0}^{1}( \Omega).\end{array} \tag{7.2}\]
as \(\tau\) tends to zero. Further, owing to the compactness of the embedding \(H^{s+1}\hookrightarrow H^{s},s\in\mathbb{N}\), it follows that there exist strongly convergent subsequences, still denoted by \((p_{0}^{\tau},p_{1}^{\tau}),(\Theta_{0}^{\tau},\Theta_{1}^{\tau})\), such that
\[\begin{array}{ll}(p_{0}^{\tau},p_{1}^{\tau})\to(p_{0},p_{1})&\text{in }&H^{2}(\Omega)\cap H_{0}^{1}(\Omega)\times H_{0}^{1}(\Omega),\\ (\Theta_{0}^{\tau},\Theta_{1}^{\tau})\to(\Theta_{0},\Theta_{1})&\text{in }&H_{0}^{1}(\Omega)\times L^{2}(\Omega).\end{array} \tag{7.3}\]
Similarly, since the solution \((p^{\tau},\Theta^{\tau})\) is bounded uniformly in \(\tau\), there exist subsequences (which we keep on denoting by the index \(\tau\)) converging in the weak-\(\star\) topology
\[\begin{array}{ll}p^{\tau}\rightharpoonup p&\text{weakly-$\star$}&\text{in }&L^{\infty}(0,T;H^{3}(\Omega)\cap H_{0}^{1}(\Omega));\\ p_{t}^{\tau}\rightharpoonup p_{t}&\text{weakly-$\star$}&\text{in }&L^{\infty}(0,T;H^{2}(\Omega)\cap H_{0}^{1}(\Omega));\\ p_{tt}^{\tau}\rightharpoonup p_{tt}&\text{weakly-$\star$}&\text{in }&L^{\infty}(0,T;H_{0}^{1}(\Omega));\\ p_{ttt}^{\tau}\rightharpoonup p_{ttt}&\text{weakly }&\text{in }&L^{2}(0,T;L^{2}(\Omega));\\ \Theta^{\tau}\rightharpoonup\Theta&\text{weakly-$\star$}&\text{in }&L^{\infty}(0,T;H^{2}(\Omega)\cap H_{0}^{1}(\Omega));\\ \Theta_{t}^{\tau}\rightharpoonup\Theta_{t}&\text{weakly-$\star$}&\text{in }&L^{\infty}(0,T;H_{0}^{1}(\Omega));\\ \Theta_{tt}^{\tau}\rightharpoonup\Theta_{tt}&\text{weakly-$\star$}&\text{in }&L^{\infty}(0,T;L^{2}(\Omega)).\end{array} \tag{7.4}\]
Next, we seek to connect the limit of the initial data sequence found in (7.3) to the initial state of the limit functions in (7.4). For this purpose, we invoke the Aubin-Lions lemma [17, Lemma 1.2], which we can apply here thanks to the compactness of the
embeddings \(H^{s+1}\hookrightarrow H^{s},s\in\mathbb{N}\). Then, we get (up to subsequences)
\[\begin{split} p^{\tau}\to p&\text{ strongly in }& C(0,T;H^{2}(\Omega)\cap H^{1}_{0}(\Omega));\\ p^{\tau}_{t}\to p_{t}&\text{ strongly in }& C(0,T;H^{1}_{0}(\Omega));\\ p^{\tau}_{tt}\to p_{tt}&\text{ strongly in }& L^{2}(0,T;L^{2}(\Omega));\\ \Theta^{\tau}\to\Theta&\text{ strongly in }& C(0,T;H^{1}_{0}(\Omega));\\ \Theta^{\tau}_{t}\to\Theta_{t}&\text{ strongly in }& C(0,T;L^{2}(\Omega)).\end{split} \tag{7.5}\]
This also leads to strong convergence of the initial data as follows
\[\begin{split}&(p^{\tau},p^{\tau}_{t})(0)\to(p,p_{t})(0)\qquad \quad\text{ in }\quad H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\times H^{1}_{0}(\Omega);\\ &(\Theta^{\tau},\Theta^{\tau}_{t})(0)\to(\Theta,\Theta_{t})(0) \qquad\text{ in }\quad H^{1}_{0}(\Omega)\times L^{2}(\Omega).\end{split} \tag{7.6}\]
Hence, the uniqueness of the limit along with (7.3), (7.6) and (7.2) gives
\[\begin{split}&(p,p_{t})(0)=(p_{0},p_{1})\in\,H^{3}(\Omega)\cap H ^{1}_{0}(\Omega)\times H^{2}(\Omega)\cap H^{1}_{0}(\Omega),\\ &(\Theta,\Theta_{t})(0)=(\Theta_{0},\Theta_{1})\in\,H^{2}(\Omega) \cap H^{1}_{0}(\Omega)\times H^{1}_{0}(\Omega).\end{split}\]
This means that the initial states of the limit functions in (7.4) are well-defined and have the necessary smoothness. Therefore, we can now focus on the task of passing to the limit in system (7.1) when \(\tau\) goes to zero. Let \(v\in L^{1}(0,T;L^{2}(\Omega))\). We denote by \((\hat{p},\hat{\Theta})\) the differences \(\hat{p}=p-p^{\tau},\hat{\Theta}=\Theta-\Theta^{\tau}\). Then, we have
\[\begin{split}&\int_{0}^{T}\int_{\Omega}\big{(}m\Theta_{t}-\kappa_{ \text{a}}\Delta\Theta+\ell\Theta-\mathcal{Q}(p_{t})\big{)}v\,\mathrm{d}x\, \mathrm{d}t\\ =&\int_{0}^{T}\int_{\Omega}\big{(}m\hat{\Theta}_{t}- \kappa_{\text{a}}\Delta\hat{\Theta}+\ell\hat{\Theta}-\tau m\Theta^{\tau}_{tt}- \tau\ell\Theta^{\tau}_{t}+\mathcal{Q}(p^{\tau}_{t})-\mathcal{Q}(p_{t})+\tau \partial_{t}\mathcal{Q}(p^{\tau}_{t})\big{)}v\,\mathrm{d}x\,\mathrm{d}t.\end{split} \tag{7.7}\]
From the weak-\(\star\) convergence (7.4), we get
\[\int_{0}^{T}\int_{\Omega}\big{(}m\hat{\Theta}_{t}-\kappa_{\text{a}}\Delta\hat{ \Theta}+\ell\hat{\Theta}\big{)}v\,\mathrm{d}x\,\mathrm{d}t\longrightarrow 0 \quad\text{as }\tau\to 0.\]
Moreover, recalling that \(\mathcal{Q}(p_{t})=\frac{2b}{\rho_{\text{a}}C^{4}_{\text{a}}}(p_{t})^{2}\) and keeping in mind the regularity provided by the fact that \((p^{\tau},\Theta^{\tau})\in X_{p}\times X_{\Theta}\), we obtain
\[\begin{split}&\int_{0}^{T}\int_{\Omega}\big{(}-\tau m\Theta^{ \tau}_{tt}-\tau\ell\Theta^{\tau}_{t}+\frac{4b\tau}{\rho_{\text{a}}C^{4}_{\text{ a}}}p^{\tau}_{t}p^{\tau}_{tt}\big{)}v\,\mathrm{d}x\,\mathrm{d}t\\ \lesssim&\,\tau\big{(}\|\Theta^{\tau}_{tt}\|_{L^{ \infty}L^{2}}+\|\Theta^{\tau}_{t}\|_{L^{\infty}L^{2}}+\|p^{\tau}_{t}\|_{L^{ \infty}L^{4}}\|p^{\tau}_{tt}\|_{L^{\infty}L^{4}}\big{)}\|v\|_{L^{1}L^{2}} \longrightarrow 0\end{split}\]
as \(\tau\) tends to zero. The two remaining terms on the right-hand side of (7.7) can be treated as follows
\[\begin{split}\int_{0}^{T}\int_{\Omega}\big{(}\mathcal{Q}(p^{\tau} _{t})-\mathcal{Q}(p_{t})\big{)}v\,\mathrm{d}x\,\mathrm{d}t&=-\int_ {0}^{T}\int_{\Omega}\frac{2b}{\rho_{\text{a}}C^{4}_{\text{a}}}(p^{\tau}_{t}+p_{t })\hat{p}_{t}v\,\mathrm{d}x\,\mathrm{d}t\\ &\lesssim\|(p^{\tau}_{t}+p_{t})\|_{L^{\infty}L^{4}}\|\hat{p}_{t}\|_ {L^{\infty}L^{4}}\|v\|_{L^{1}L^{2}}.\end{split}\]
Then, the embedding \(H^{1}(\Omega)\hookrightarrow L^{4}(\Omega)\) together with the boundedness of \(p_{t}^{\tau},p_{t}\) and the convergence (7.5) yields
\[\int_{0}^{T}\int_{\Omega}\big{(}\mathcal{Q}(p_{t}^{\tau})-\mathcal{Q}(p_{t}) \big{)}v\,\mathrm{d}x\,\mathrm{d}t\longrightarrow 0\quad\text{as $\tau\to 0$.}\]
Consequently, we infer that \((p,\Theta)\) satisfies in \(L^{\infty}(0,T;L^{2}(\Omega))\) the equation
\[m\Theta_{t}-\kappa_{\mathrm{a}}\Delta\Theta+\ell\Theta=\mathcal{Q}(p_{t}).\]
Next, we claim that for all \(v\in L^{1}(0,T;L^{2}(\Omega))\), it holds that
\[\int_{0}^{T}\int_{\Omega}\Big{(}(1-2k(\Theta)p)p_{tt}-h(\Theta)\Delta p-b \Delta p_{t}-2k(\Theta)(p_{t})^{2}\Big{)}v\,\mathrm{d}x\,\mathrm{d}t \longrightarrow 0\quad\text{as $\tau\to 0$.}\]
To see this, we first observe that
\[\int_{0}^{T}\int_{\Omega}\Big{(}(1-2k(\Theta)p)p_{tt}-h(\Theta) \Delta p-b\Delta p_{t}-2k(\Theta)(p_{t})^{2}\Big{)}v\,\mathrm{d}x\,\mathrm{d}t\] \[= \int_{0}^{T}\int_{\Omega}\Big{(}(1-2k(\Theta)p)\hat{p}_{tt}+(1-2k (\Theta)p)p_{tt}^{\tau}-h(\Theta)\Delta\hat{p}-h(\Theta)\Delta p^{\tau}\] \[-b\Delta\hat{p}_{t}-b\Delta p_{t}^{\tau}-2k(\Theta)((p_{t})^{2}-( p_{t}^{\tau})^{2})-2k(\Theta)(p_{t}^{\tau})^{2}\Big{)}v\,\mathrm{d}x\,\mathrm{d}t\] \[= \int_{0}^{T}\int_{\Omega}\Big{(}(1-2k(\Theta)p)\hat{p}_{tt}-h( \Theta)\Delta\hat{p}-b\Delta\hat{p}_{t}+(1-2k(\Theta^{\tau})p^{\tau})p_{tt}^{ \tau}+2\big{(}k(\Theta^{\tau})p^{\tau}-k(\Theta)p\big{)}p_{tt}^{\tau}\] \[-(h(\Theta)-h(\Theta^{\tau}))\Delta p^{\tau}-h(\Theta^{\tau}) \Delta p^{\tau}-b\Delta p_{t}^{\tau}-2k(\Theta)(p_{t}+p_{t}^{\tau})\hat{p}_{t} -2k(\Theta)(p_{t}^{\tau})^{2}\Big{)}v\,\mathrm{d}x\,\mathrm{d}t\] \[= \int_{0}^{T}\int_{\Omega}\Big{(}(1-2k(\Theta)p)\hat{p}_{tt}-h( \Theta)\Delta\hat{p}-b\Delta\hat{p}_{t}-2k(\Theta^{\tau})\hat{p}p_{tt}^{\tau}+ 2\big{(}k(\Theta^{\tau})-k(\Theta)\big{)}pp_{tt}^{\tau}\] \[-(h(\Theta)-h(\Theta^{\tau}))\Delta p^{\tau}-2k(\Theta)(p_{t}+p_{ t}^{\tau})\hat{p}_{t}+2\big{(}k(\Theta^{\tau})-k(\Theta)\big{)}(p_{t}^{\tau})^{2} \Big{)}v\,\mathrm{d}x\,\mathrm{d}t. \tag{7.8}\]
We want to prove that the integral on the right-hand side goes to zero as the relaxation time vanishes. Again, the weak-\(\star\) convergence of \(p^{\tau}\) to \(p\) in (7.4) along with the boundedness of \((p,\Theta)\in X_{p}\times X_{\Theta}\) implies that
\[h(\Theta)\Delta p^{\tau}\rightharpoonup h(\Theta)\Delta p \text{weakly-}\star \text{in $L^{\infty}(0,T;H^{1}_{0}(\Omega))$;}\] \[(1-2k(\Theta)p)p_{tt}^{\tau}\rightharpoonup(1-2k(\Theta)p)p_{tt} \text{weakly-}\star \text{in $L^{\infty}(0,T;H^{1}_{0}(\Omega))$.}\]
Then, we can conclude that
\[\int_{0}^{T}\int_{\Omega}\Big{(}(1-2k(\Theta)p)\hat{p}_{tt}-h(\Theta)\Delta \hat{p}-b\Delta\hat{p}_{t}\Big{)}v\,\mathrm{d}x\,\mathrm{d}t\longrightarrow 0 \quad\text{as $\tau\to 0$.}\]
Further, relying on (7.5), we have
\[\int_{0}^{T}\int_{\Omega}\Big{(}-2k(\Theta^{\tau})\hat{p}p_{tt}^{ \tau}-2k(\Theta)(p_{t}+p_{t}^{\tau})\hat{p}_{t}\Big{)}v\,\mathrm{d}x\,\mathrm{d}t\] \[\lesssim \Big{(}\|k(\Theta^{\tau})\|_{L^{\infty}L^{\infty}}\|\hat{p}\|_{L^ {\infty}L^{4}}\|p_{tt}^{\tau}\|_{L^{\infty}L^{4}}+\|k(\Theta)\|_{L^{\infty}L^{ \infty}}\|p_{t}+p_{t}^{\tau}\|_{L^{\infty}L^{4}}\|\hat{p}_{t}\|_{L^{\infty}L^{4} }\Big{)}\|v\|_{L^{1}L^{2}},\]
which yields that
\[\int_{0}^{T}\int_{\Omega}\Big{(}-2k(\Theta^{\tau})\hat{p}p_{tt}^{\tau}-2k(\Theta) (p_{t}+p_{t}^{\tau})\hat{p}_{t}\Big{)}v\,\mathrm{d}x\,\mathrm{d}t\longrightarrow 0 \quad\text{as }\tau\to 0.\]
In order to handle the terms involving the difference \(k(\Theta^{\tau})-k(\Theta)\) or \(h(\Theta^{\tau})-h(\Theta)\), we call upon analogous estimates to (6.6a), (6.7a). Since we can show as in (6.6a) that
\[\|k(\Theta^{\tau})-k(\Theta)\|_{L^{\infty}L^{4}}\lesssim\|\Theta^{\tau}-\Theta \|_{L^{\infty}L^{4}}\Big{(}1+\|\Theta^{\tau}\|_{L^{\infty}L^{4}}^{1+\gamma_{2} }+\|\Theta\|_{L^{\infty}L^{4}}^{1+\gamma_{2}}\Big{)},\]
the boundedness of \(\Theta,\Theta^{\tau}\) in \(X_{\Theta}\) allows us to obtain that
\[\int_{0}^{T}\int_{\Omega}2\Big{(}\big{(}k(\Theta^{\tau})-k(\Theta )\big{)}pp_{tt}^{\tau}+\big{(}k(\Theta^{\tau})-k(\Theta)\big{)}(p_{t}^{\tau})^ {2}\Big{)}v\,\mathrm{d}x\,\mathrm{d}t\] \[\leq \,2\Big{(}\|p\|_{L^{\infty}L^{\infty}}\|p_{tt}^{\tau}\|_{L^{ \infty}L^{4}}+\|p_{t}^{\tau}\|_{L^{\infty}L^{4}}^{2}\Big{)}\|k(\Theta^{\tau})- k(\Theta)\|_{L^{\infty}L^{4}}\|v\|_{L^{1}L^{2}}\to 0\quad\text{as }\tau\to 0.\]
Likewise, the following estimate
\[\|h(\Theta^{\tau})-h(\Theta)\|_{L^{\infty}L^{4}}\lesssim\|\Theta^{\tau}-\Theta \|_{L^{\infty}L^{4}}\Big{(}1+\|\Theta^{\tau}\|_{L^{\infty}L^{4}}^{1+\gamma_{1} }+\|\Theta\|_{L^{\infty}L^{4}}^{1+\gamma_{1}}\Big{)},\]
leads to
\[\int_{0}^{T}\int_{\Omega}\big{(}h(\Theta^{\tau})-h(\Theta)\big{)}\Delta p^{ \tau}v\,\mathrm{d}x\,\mathrm{d}t\lesssim\|h(\Theta^{\tau})-h(\Theta)\|_{L^{ \infty}L^{4}}\|\Delta p^{\tau}\|_{L^{\infty}L^{4}}\|v\|_{L^{1}L^{2}}\]
converging to zero as \(\tau\) goes to zero. Thus, the terms on the right-hand side of (7.8) all vanish as \(\tau\) tends to zero, and we infer that the limit \(p\) satisfies the equation
\[(1-2k(\Theta)p)p_{tt}-h(\Theta)\Delta p-b\Delta p_{t}=2k(\Theta)(p_{t})^{2},\]
which is understood in \(L^{\infty}(0,T;L^{2}(\Omega))\).
We conclude that there exists a subsequence converging in the sense of (7.2), (7.4) whose limit \((p,\Theta)\) is a solution to the limiting system (2.2). In addition, since we have slightly more regularity than that provided by the well-posedness result in [21], especially in terms of the time derivatives \(p_{t},p_{tt},\Theta_{t},\Theta_{tt}\), the uniqueness of the solution obtained in [21] extends to the functional setting at hand. Thus, we can confirm, based on [30, Proposition 10.13], that the whole sequence \((p^{\tau},\Theta^{\tau})_{\tau\in(0,\bar{\tau}]}\) converges to the solution of the parabolic system (2.2).
# SRTK: A Toolkit for Semantic-relevant Subgraph Retrieval

Yuanchun Shen
###### Abstract
Information retrieval based knowledge base question answering (KBQA) first retrieves a subgraph to reduce search space, then reasons on the subgraph to select answer entities. Existing approaches have three issues that impede the retrieval of such subgraphs. Firstly, there is no off-the-shelf toolkit for semantic-relevant subgraph retrieval. Secondly, existing methods are knowledge-graph-dependent, resulting in outdated knowledge graphs used even in recent studies. Thirdly, previous solutions fail to incorporate the best available techniques for entity linking or path expansion. In this paper, we present SRTK, a user-friendly toolkit for semantic-relevant subgraph retrieval from large-scale knowledge graphs. SRTK is the first toolkit that streamlines the entire lifecycle of subgraph retrieval across multiple knowledge graphs. Additionally, it comes with state-of-the-art subgraph retrieval algorithms, guaranteeing an up-to-date solution set out of the box.
**Resource Type**: Software
**License**: MIT
**DOI**: [https://doi.org/10.5281/zenodo.7895612](https://doi.org/10.5281/zenodo.7895612)
**Repository**: [https://github.com/happen2me/subgraph-retrieval-toolkit](https://github.com/happen2me/subgraph-retrieval-toolkit)
Keywords:Subgraph Retrieval Knowledge Graph KBQA
## 1 Introduction
Knowledge base question answering (KBQA) aims to answer natural language questions over large-scale knowledge graphs such as Wikidata [34], Freebase [2], and DBpedia [1]. One crucial step in KBQA is subgraph retrieval, which involves narrowing down the search space by retrieving a subset of entities and relations from the knowledge graph that are relevant to the question. By obtaining a smaller subgraph with more pertinent information, noise can be reduced, and the reasoning process can be facilitated.
Subgraph retrieval can be divided into two main steps: entity linking and path expansion. Entity linking identifies named entities in questions and anchors them to corresponding entities in knowledge graphs, while path expansion selectively includes neighbor entities and relations to form the subgraph. The scale and complexity of knowledge graphs pose challenges for both steps. Firstly, accurate entity linking requires detecting potential entity mentions and disambiguating
them to the correct entities in the knowledge graph. This task becomes challenging due to the large number of entities within the knowledge graph, where multiple entities may have similar names to those mentioned in the question. Once the entities are identified, selecting the most relevant paths for subgraph expansion is also non-trivial. Although numerous paths can be expanded from an entity, only a small subset of them are truly relevant. For instance, as of the time of writing, Wikidata has 3,278 entities connected to the United States as the subject of certain relations, exemplifying the vast number of potential paths to consider.
Existing KBQA works that involve subgraph retrieval have certain limitations. For entity linking, some recent works still rely on pure pattern matching, resulting in a significant number of irrelevant linked entities [38, 19, 7]. In terms of path expansion, many existing approaches either blindly expand entities within one or two hops [4], which quickly becomes infeasible for large knowledge graphs, or filter expansion paths using non-trainable embedding similarity [39], which may not capture the most relevant paths. Zhang et al. proposed a path expansion method called SR [44], which achieved state-of-the-art results on specific KBQA datasets, but it assumes pre-linked entities and only works with the outdated knowledge graph Freebase.
To address these gaps, we propose SRTK, a toolkit designed to simplify the retrieval of semantic-relevant subgraphs from large-scale knowledge graphs. SRTK integrates multiple off-the-shelf entity linking tools with unified interfaces and implements the SR path expansion algorithm [44] for both Freebase and up-to-date knowledge graphs such as Wikidata and DBpedia. Furthermore, we extend the SR algorithm to support contrastive losses and different base models during training, as well as varying beam width and search depth during inference. To our knowledge, SRTK is the first readily available toolkit for subgraph retrieval across multiple knowledge graphs. Its main features include:
* Out-of-the-box Functionality: SRTK provides command-line tool and Python library for subgraph retrieval. It comes with documentation and tutorials for a quick and effortless start.
* Full Lifecycle Support for Subgraph Retrieval: SRTK streamlines the complete lifecycle of subgraph retrieval. This includes not only retrieval itself (entity linking, path expansion), but also retrieval model training (data processing, scorer training, and retrieval evaluation).
* Multi-Knowledge Graph Support: SRTK provides support for Freebase, Wikidata, and DBpedia, employing unified access patterns. Furthermore, the toolkit can be extended to accommodate other knowledge graphs that have a SPARQL interface, thereby increasing its versatility.
* User-friendly Design: SRTK offers a user-friendly interface that is both intuitive and extensible. For instance, each step of the retrieval process can be executed with a single command; the various steps are seamlessly connected using standardized JSONL files, ensuring a smooth workflow; and extension to new knowledge graphs and entity linking tools can be accomplished by implementing standardized protocols.
* Inclusion of SOTA Algorithms: REL and DBpedia Spotlight are among the best available off-the-shelf entity linking tools. For path expansion, we employ the SR algorithm proposed by Zhang et al. [44], which achieves state-of-the-art results on two Freebase KBQA datasets, ensuring that path expansion is on par with the state-of-the-art methods.
* Huggingface Model Compatibility: SRTK supports training or evaluation with any language encoding models available on the Huggingface model hub.
* Interactive Visualization: Retrieved subgraphs can be visualized as interactive web pages, allowing users to explore and analyze the retrieved information in a user-friendly and intuitive manner.
The rest of the paper is structured as follows. We begin with a review of related works in section 2. Then we introduce the main functions and implementations of SRTK in section 3, exemplifying the usage of SRTK. Subsequently, in section 4, we discuss the impact, positioning to the state of the art, limitations, and future development plans. Finally, we conclude the paper in section 5.
## 2 Related Works
**KBQA** answers natural language questions with entities or relations from a knowledge graph [17]. Knowledge graph is a structured representation of knowledge. It usually represents real-world facts as _(subject, predicate, object)_ triples. Researchers have approached the problem of KBQA in different ways, which can be broadly categorized into two types. The first type is based on semantic parsing, where a graph query is constructed by filling patterns or end-to-end constructions to directly retrieve the targets [40, 7]. However, this method has limitations due to the difficulty in precisely interpreting human intentions and the fact that it excludes potentially helpful reasoning paths and connected nodes. The second type is based on information retrieval. It regards selecting targets as either binary classification or ranking over candidate entities [11]. They usually start with entity linking that matches mentions in question to entities in knowledge graphs, then expands the subgraphs by including neighbors. A reasoner is then applied to select correct answers from the retrieved subgraphs [44, 4, 39]. Our work mainly applies to this branch of KBQA solutions.
**Semantic-relevant Subgraph Retrieval** reduces the search space and noisy information for downstream reasoners by retrieving a pertinent subgraph that is likely to contain the target entities or relations, leveraging the semantic information conveyed in a natural language question. This retrieval process is typically divided into entity linking and path expansion. In entity linking, it first performs named entity recognition (NER) to detect entity mentions in the questions, then employs entity disambiguation to link entities to corresponding knowledge graphs. While in path expansion, neighbor entities and relations to existing entities are retrieved to form the subgraphs. The assumption behind this approach is that the target entities are within proximity of mentioned entities in the questions. There are also works that do not follow this approach.
For example, CLOCQ [6] uses a combination of features, such as lexical matching, question relevance, coherency, and connectivity, to directly retrieve the top-\(k\) relevant entities or relations as the reduced search space.
Earlier methods for subgraph retrieval, which are still adopted by recent works [39, 19, 7], usually start with keyword matching to link entities to knowledge graphs, then include all entities within a certain number of hops. Such methods are suitable for small-scale knowledge graphs, but for large knowledge graphs, lexically matching mentions may include many unrelated entities. Including all \(k\)-hop entities may further lead to a size explosion of the retrieved graph. Based on this approach, some optimizations were proposed to shrink the subgraph size. QA-GNN [39] uses GloVe embedding distance as a similarity metric to filter out less similar entities and relations, but it is not trainable and is thus incapable of handling situations where the expansion path is semantically dissimilar to the question itself.
Recent advancements in subgraph retrieval have demonstrated improvements in both entity linking and path expansion. Several entity linking tools, such as WAT [25], REL [14], and DBpedia Spotlight [20], have been developed for different knowledge graphs, incorporating lexical, contextual, and semantic information. In terms of path expansion, recent works take the semantic information of the retrieval paths into consideration. PullNet [30] assumes that entities are already linked. It iteratively _pulls_ neighboring entities, or entities from a related corpus, based on their probabilities predicted by a neural network, forming the subgraph. However, PullNet lacks the separation between path expansion and subgraph reasoning. Building upon this, Zhang et al. proposed SR [44], which decouples path expansion from reasoning to enable a plug-and-play framework that generalizes to different reasoners. SR performs iterative path expansion by comparing the trained semantic similarity of the next expansion path with the question concatenated with the previously expanded paths. By leveraging subgraphs retrieved through SR, Zhang et al. achieved state-of-the-art results on the WebQSP [41] and CWQ [32] KBQA datasets. However, it is important to note that SR assumes the entities are already linked and is specifically designed for the Freebase knowledge graph, which ceased to be updated in 20151.
Footnote 1: [https://en.wikipedia.org/wiki/Freebase_](https://en.wikipedia.org/wiki/Freebase_)(database)
## 3 SRTK: Subgraph Retrieval Toolkit
The SRTK project is publicly available on PyPI2, and it serves as both a command line toolkit and a Python library. We focus on demonstrating its usage as a command line toolkit. For information on using it as a Python library, please consult the Python API section of the documentation3. In this section, we first define the problem of subgraph retrieval, then we introduce each API and the underlying implementation. The SRTK CLI is divided into five subcommands: preprocess, train, link, retrieve, and visualize. We categorize them based
on two workflows: the first being retrieving subgraphs with trained models, and the second being training models for subgraph retrieval.
### Problem Definition
The objective of **Semantic-relevant Subgraph Retrieval for KBQA** is to retrieve a subgraph \(\mathcal{G}=(E_{\mathcal{G}},R_{\mathcal{G}})\) from a given knowledge graph \(G=(E,R)\) based on a natural language question \(q\), such that \(\mathcal{G}\subseteq G\) contains the answer entities or relations \(A\subseteq G\) that the question expects. Here, \(E\) and \(R\) represent all entities and relations in the knowledge graph. The retrieved subgraph \(\mathcal{G}\) is passed to a reasoner to predict answers. As reasoning is decoupled and not within the scope of this paper, we assume that the reasoner \(p_{\phi}(A|q,\mathcal{G})\) will select \(m\) items from \(\mathcal{G}\) as answers if it is known that there are \(m\) answers. In the best case, \(\mathcal{G}\) will contain all the desired answers (\(A\subseteq\mathcal{G}\)), while in the worst case, none of the answers will be present in \(\mathcal{G}\). If the answers included in \(\mathcal{G}\) are fixed, a smaller subgraph \(\mathcal{G}\) makes it easier for the reasoner \(p_{\phi}(A|q,\mathcal{G})\) to select the correct answers. On the other hand, when the size of \(\mathcal{G}\) is fixed, including more answers in the subgraph increases the likelihood of the reasoner selecting the correct answers. Therefore, the goal of subgraph retrieval is to obtain a subset of the knowledge graph that is as close as possible to the answers. The quality of the retrieval result is measured by both the size of the subgraph and the recall of the answers. One retrieved subgraph is considered strictly better than another if it has a smaller subgraph size while achieving a higher recall of the answers.
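To make the comparison criterion explicit, one way to write it down (our notation, following the description above, where \(|\mathcal{G}|\) denotes the number of triples in the subgraph) is:

\[\mathrm{recall}(\mathcal{G})=\frac{|A\cap\mathcal{G}|}{|A|},\qquad\mathcal{G}_{1}\succ\mathcal{G}_{2}\iff|\mathcal{G}_{1}|<|\mathcal{G}_{2}|\ \text{and}\ \mathrm{recall}(\mathcal{G}_{1})>\mathrm{recall}(\mathcal{G}_{2}).\]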
### Retrieval Workflow
#### 3.2.1 Entity Linking
Identifying named entities and aligning them with knowledge graph entities is typically the initial step in subgraph retrieval. If a dataset is already linked to a knowledge graph, this step may not be necessary.
Figure 1: The retrieval workflow of SRTK consists of three main steps: entity linking, subgraph retrieval, and visualization. Entity linking identifies knowledge graph entities mentioned in questions; subgraph retrieval retrieves semantic relevant subgraphs with trained models by iteratively including neighbors of the linked entities within certain proximity on a knowledge graph; visualization visualizes the retrieved subgraphs in the form of interactive webpages.
The srtk link subcommand builds upon existing entity linking services, enabling researchers to perform entity linking on various knowledge graphs through the same interface. Consider the example _Where is Hakata Ward?_: once it is stored in a JSONL file called question.jsonl as {"question": "Where is Hakata Ward?"}, users can link it to Wikidata with the following command:
srtk link --input question.jsonl \
    --output linked.jsonl \
    --knowledge-graph wikidata \
    --el-endpoint https://rel-entity-linker.d4science.org
The URL shown above is the public endpoint of the REL [14] service4. Each line of the output linked.jsonl has the following format:
Footnote 4: In practice, you also have to pass an authorization on the command line.
{"question_entities": ["Q1330839"], "spans": [[9,20]], "entity_names": ["Hakata-ku,_Fukuoka"]}
Here, the _question entities_ are the linked entities from Wikidata, the _spans_ store the corresponding character indices of the linked entities in the original text, and the _entity names_ show the names of the linked entities in the knowledge graph. Under the hood, SRTK first invokes REL [14] to link entity mentions to Wikipedia articles, and then utilizes _wikimapper_5 to map the corresponding Wikipedia IDs to their counterpart entities in Wikidata. The extra step is needed because most existing services only link entities to Wikipedia [14, 9, 25].
Footnote 5: [https://github.com/jcklie/wikimapper](https://github.com/jcklie/wikimapper)
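To illustrate the Wikipedia-to-Wikidata mapping step concretely, below is a minimal sketch using the wikimapper package. The index database path is an assumption (it depends on the locally built index), while the class and method mirror wikimapper's documented interface.

```python
# A minimal sketch of the Wikipedia-to-Wikidata mapping step.
# Assumes a wikimapper index has been built locally; the path is illustrative.
from wikimapper import WikiMapper

mapper = WikiMapper("index_enwiki-latest.db")
# REL returns Wikipedia article titles such as "Hakata-ku,_Fukuoka";
# wikimapper resolves them to Wikidata identifiers.
wikidata_id = mapper.title_to_id("Hakata-ku,_Fukuoka")
print(wikidata_id)  # expected: Q1330839
```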
For DBpedia, we integrate DBpedia Spotlight [20] to directly identify entity mentions and link them to DBpedia resources. By implementing the annotation interfaces defined in SRTK, entity linking can be extended to other knowledge graphs.
#### 3.2.2 Retrieval
The retrieval interface is at the core of the library. It retrieves a semantic-relevant subgraph with trained models and a given question.
The retrieval process can be divided into path search and fact retrieval. Path search identifies the most probable expansion paths from linked entities. An _expansion path_ comprises a sequence of relations from the knowledge graph, based on the idea that a question typically implies a reasoning chain [44]. For the example _Where is Hakata Ward?_, the corresponding expansion path would be _<locate in>_ from the triple _(Hakata, locate in, ?)_ implied by the question, where _<locate in>_ forms a path of length one. Path search starts by loading a trained path scoring model that measures the likelihood of expanding a relation upon the current path; it then iteratively compares, selects, and includes neighboring relations into the expansion paths. Path search initially regards the linked entities as tracked entities. Subsequently, at each expansion step, it first queries the knowledge graph to retrieve the set of relations connected to the tracked entities, then expands the paths by one hop by selecting the most probable relations with the trained scoring model, taking the question and the previously expanded paths
into consideration. A path may also stop expanding when a special end relation is selected as the next relation. Afterward, path search updates the tracked entities with the entities that the chosen relations connect to. During expansion, beam search is applied to keep track of the top-\(k\) most probable expansion paths; otherwise, the search space would grow explosively as the number of hops increases. Path search stops expanding once a certain hop limit is reached or all paths have ended with the end relation. Given the resulting set of probable expansion paths, fact retrieval then creates the subgraph by retrieving the entities and relations present along the paths. A sketch of one expansion step is given below.
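The following is a minimal, hypothetical sketch of one beam-search expansion step. Here `score` stands in for the trained scorer that rates a candidate relation given the question and the path so far, and `get_neighbor_relations` stands in for a SPARQL lookup; both names are illustrative assumptions, not SRTK's actual API.

```python
# Hypothetical sketch of one beam-search expansion step (not SRTK's real API).
from typing import Callable

def expand_one_hop(
    question: str,
    beams: list[tuple[list[str], float]],          # (path so far, accumulated score)
    get_neighbor_relations: Callable[[list[str]], list[str]],  # e.g. a SPARQL lookup
    score: Callable[[str, list[str], str], float], # trained scorer for p(r | question, path)
    beam_width: int,
) -> list[tuple[list[str], float]]:
    candidates = []
    for path, path_score in beams:
        if path and path[-1] == "END":             # this beam has already stopped
            candidates.append((path, path_score))
            continue
        # "END" is the special relation signalling that expansion should stop.
        for relation in get_neighbor_relations(path) + ["END"]:
            new_score = path_score + score(question, path, relation)
            candidates.append((path + [relation], new_score))
    # Keep only the top-k most probable expansion paths.
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:beam_width]

# Usage starts from an empty path: beams = [([], 0.0)], then the function is
# applied once per hop until the maximum depth is reached or all beams end.
```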
The retrieve subcommand is used to retrieve subgraphs given linked questions and a trained scoring model. Users can additionally specify the maximum number of hops to search with --max-depth, and the beam width with --beam-width. A larger beam width and a greater hop limit may increase the probability of arriving at the target entities, but will also increase the running time. Using the linked.jsonl from the last subsection, with questions and linked entities, one can retrieve subgraphs with the following command:
srtk retrieve --input linked.jsonl \
    --output subgraph.jsonl \
    --beam-width 2 \
    --max-depth 1 \
    --scorer-model-path drt/srtk-scorer \
    --sparql-endpoint https://query.wikidata.org/sparql
The --scorer-model-path specifies the location of a trained path scoring model. It can be either the folder of a saved local model or any language encoder model identifier from the Huggingface model hub6. Besides, if an encoder-decoder model like T5 [27] is fed in as the path scoring model, only the encoder part of the model will be used. The output has the following format, where the subgraph is saved as a list of _(subject, predicate, object)_ triples in the form of their identifiers on respective knowledge graphs.
Footnote 6: [https://huggingface.co/models](https://huggingface.co/models)
# Output subgraph.jsonl
{"triples": [["Q1330839", "P31", "Q26600"], ["Q1330839", "P17", "Q17"]]}
#### 3.2.3 Visualization
SRTK's visualization module generates interactive web pages that display the retrieved subgraphs. The labels of the unique identifiers are fetched from the knowledge graphs through their SPARQL endpoints, and the linked entities are highlighted for better visualization.
By feeding in the subgraph.jsonl output from the last subsection, one can visualize it with the following command. The resulting graph is depicted in Fig. 2.
srtk visualize --input subgraph.jsonl \
    --knowledge-graph wikidata \
    --sparql-endpoint https://query.wikidata.org/sparql
### Training Workflow
SRTK trains scorer models to align questions with their corresponding reasoning paths in the embedding space, enabling semantic-relevant subgraph retrieval.
The training process can rely on either full supervision or weak supervision. In full supervision, the correct subgraph or the gold expansion paths are known. In weak supervision, only the source and target entities are known, which is the more common setting in KBQA. SRTK supports both supervised and weakly supervised learning. For the latter, SRTK searches for the shortest paths from source to target entities in the knowledge graphs during the preprocessing stage, which are then used as weak supervision signals.
#### 3.3.1 Preprocessing
The preprocess subcommand simplifies the preprocessing of training data. In the fully supervised scenario, where the expansion paths from
Figure 3: The training workflow of SRTK involves three steps. Firstly, it preprocesses the raw data by identifying positive expansion paths and sampling negative ones. Next, it trains a scorer model that can selectively expand the subgraph according to a question by learning to pull together the question and the correct expansion path in the embedding space. Finally, if the answer entities are known, the scorer can be evaluated by calculating the answer coverage rate. The answer coverage rate in SRTK is calculated as the percentage of test samples where at least one answer entity is successfully retrieved.
Figure 2: The visualized subgraph with srtk visualize. The subgraph corresponds to the question _Where is Hakata Ward?_. It is visualized from a subgraph comprised of two triples: (_Hakata-ku_, _located in, Fukuoka_) and (_Hakata-ku, country, Japan_).
source entities to target entities are known, preprocessing creates training samples from each expansion step within an expansion path. For weakly supervised scenarios, where only the source and target entities are known, it additionally retrieves likely paths between source and target entities within two hops, then creates training samples in the same way as in full supervision. Here, we discuss the case of weak supervision for completeness.
To extract the most likely paths, we first query the SPARQL endpoints to search for the shortest paths from each source entity to the target entities. Since there may be multiple paths between the source and target entities, we then filter the paths by examining how close the entity sets derived from the paths are to the answers. This involves retrieving the set of terminal entities of each path and scoring the paths based on the Jaccard index between the retrieved entities and the answer entities. Paths with scores below a certain threshold are discarded, as sketched below.
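The filtering criterion can be summarized in a few lines. This is a minimal sketch of the scoring rule; `terminal_entities_of` stands in for the SPARQL query that collects a path's terminal entities, and the threshold value is illustrative.

```python
# Minimal sketch of Jaccard-based path filtering (threshold is illustrative).
def jaccard(retrieved: set[str], answers: set[str]) -> float:
    """Jaccard index between the terminal entities of a path and the answers."""
    if not retrieved and not answers:
        return 0.0
    return len(retrieved & answers) / len(retrieved | answers)

def filter_paths(paths, terminal_entities_of, answers: set[str], threshold: float = 0.5):
    """Keep only paths whose terminal-entity set is close enough to the answers."""
    return [p for p in paths if jaccard(terminal_entities_of(p), answers) >= threshold]
```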
Once the expansion paths are known, we convert the retrieved paths into training samples by sampling negative relations for each relation in an expansion path. The intuition of this conversion is to mimic the decision-making process during retrieval, where the goal is to select the most suitable relation from the set of connected relations so that the probability of reaching the target entities is maximized. This is a form of weak supervision, as the retrieved expansion path may not be the real reasoning chain. A \(K\)-hop path is decomposed into \(K+1\) positive samples. The first \(K\) samples are \(([q;r_{1},r_{2},\ldots,r_{k-1}],r_{k})\), with \(q\) being a given query, \(r_{k}\) being the \(k\)-th relation, and \(k\) ranging over \(1,2,\ldots,K\). The last sample is \(([q;r_{1},\ldots,r_{K}];\text{END})\), where \(\text{END}\) is a special relation signifying stopping expansion. For a positive sample \(([q;r_{1},r_{2},\ldots,r_{k-1}],r_{k})\), \(N\) negative samples are retrieved from the knowledge graph by sampling from the relations connected to relation \(r_{k-1}\) via one-hop intermediate entities, excluding \(r_{k}\). Each positive sample is then converted to a training sample along with its negative samples and the query, as sketched below.
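The decomposition can be sketched as follows. Here `sample_negative_relations` stands in for the knowledge graph query that draws relations connected via one-hop intermediate entities; the helper name and the exact sample fields are assumptions for illustration.

```python
# Hypothetical sketch of decomposing a K-hop path into K+1 training samples.
import random

END = "END"  # special relation signifying the end of expansion

def path_to_samples(query, path, sample_negative_relations, num_negative=2):
    """Turn a gold expansion path [r_1, ..., r_K] into K+1 training samples,
    each pairing the query and expansion history with a positive relation
    and a handful of sampled negative relations."""
    samples = []
    for k, positive in enumerate(path + [END]):
        history = path[:k]                      # r_1, ..., r_{k-1}
        pool = [r for r in sample_negative_relations(history) if r != positive]
        negatives = random.sample(pool, min(num_negative, len(pool)))
        samples.append({
            "query": query,        # the question, optionally concatenated
            "history": history,    # with the previously expanded relations
            "positive": positive,
            "negatives": negatives,
        })
    return samples
```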
For instance, if we know from the previous example that the source entity is Q1330839 (Hakata-ku), while the target entity is Q26600 (Fukuoka City), we can first arrange the information into a JSONL file with one line as follows:
```
# Input kbqa_dataset.jsonl
{"question": "Where is Hakata Ward?", "question_entities": ["Q1330839"], "answer_entities": ["Q26600"]}
```
By executing the srtk preprocess command as follows, we first identify the probable expansion paths between the question and answer entities within two hops, then create training samples from the paths:
```
srtk preprocess --input kbqa_dataset.jsonl \
    --search-path --output train.jsonl \
    --knowledge-graph wikidata \
    --sparql-endpoint https://query.wikidata.org/sparql \
    --metric jaccard --num-negative 2
```
In the command above, --search-path instructs SRTK to look for connections between question and answer entities. This argument can be omitted when the gold expansion paths are known. --num-negative specifies how many negative relations to sample for each relation. The first sample in train.jsonl is as follows:
{"query": "Where\({}_{\texttt{\_}}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt
{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\_\_\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\_\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\)\(\texttt{\_}\_}\)\(\texttt{\_}\)\(\texttt{\_}\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\\\\\_\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
--output-dirartifacts/score \ --model-name-or-pathroberta-base \ --acceleratorgpu
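For intuition, the following is a simplified sketch of the kind of contrastive objective such a scorer optimizes, written with Huggingface transformers and PyTorch. The mean pooling and the cross-entropy formulation are simplifying assumptions, not SRTK's exact implementation.

```python
# Simplified sketch of one contrastive scorer step (not SRTK's exact code).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

def embed(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    # Mean-pool the last hidden states as sentence embeddings (an assumption).
    hidden = encoder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

query = "Where is Hakata Ward?"
relations = ["located in",               # positive relation first
             "country", "instance of"]   # sampled negative relations

q = embed([query])                       # shape (1, d)
r = embed(relations)                     # shape (3, d)
logits = q @ r.T                         # similarity of the query to each relation
# Cross-entropy with the positive relation at index 0 pulls the query toward
# the positive relation and away from the negatives.
loss = F.cross_entropy(logits, torch.tensor([0]))
loss.backward()
```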
If the answer entities are known in a dataset, we can proceed to evaluate a scorer using SRTK. This works by retrieving subgraphs for a test dataset, then calculating how many answer entities are retrieved via the answer coverage rate. We define the answer coverage rate as the number of samples in which any of the correct answer entities is retrieved, divided by the total number of test samples. We can perform an evaluation using the srtk retrieve subcommand with the --evaluate option. The expected input fields include the question, question entities, and answer entities. If the test dataset is stored at test.jsonl, one can evaluate a trained scorer with the following command:
srtk retrieve --input test.jsonl \
    --scorer-model-path artifacts/scorer \
    --evaluate
>>>> Answer coverage rate: 0.9749 (4188 / 4296)
>>>> Average subgraph size: 7.5345 triples
In the provided example, the command line output reveals that out of the 4296 samples, a total of 4188 samples successfully retrieve at least one of the target entities. Additionally, the size of the subgraphs is measured by calculating the average number of triples within the subgraphs across all the samples. The numbers provided in this example are used purely for explanatory purposes.
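The two reported numbers are easy to reproduce from retrieval outputs. Below is a minimal sketch, assuming each output line carries the retrieved "triples" alongside the gold "answer_entities" (the latter field being an assumption about how the evaluation data is arranged).

```python
# Minimal sketch of computing answer coverage rate and average subgraph size.
import json

def evaluate(retrieved_path: str) -> tuple[float, float]:
    """Each line is assumed to hold 'triples' and 'answer_entities'."""
    covered, total, num_triples = 0, 0, 0
    with open(retrieved_path) as f:
        for line in f:
            sample = json.loads(line)
            entities = {e for triple in sample["triples"] for e in triple}
            # A sample counts as covered if any answer entity was retrieved.
            if entities & set(sample["answer_entities"]):
                covered += 1
            num_triples += len(sample["triples"])
            total += 1
    return covered / total, num_triples / total
```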
## 4 Discussions
### Impact
In this paper, we have primarily focused on discussing the use case of SRTK in the domain of KBQA. Within KBQA, our toolkit helps researchers retrieve subgraphs more easily, which reduces the search space and may improve downstream tasks by providing less noisy information. Moreover, SRTK promotes the migration from outdated knowledge graphs to up-to-date ones by providing unified interfaces. For example, research works that used to depend on Freebase can seamlessly migrate to Wikidata7 by updating the knowledge graph endpoint URL, while the interfaces to access the underlying knowledge graphs remain the same. More importantly, SRTK may promote the development of transferable and shareable8 retrieval algorithms, such that newly developed datasets or algorithms work across knowledge graphs and can be compared and benchmarked more easily.
Footnote 7: We additionally provide migration scripts from Freebase to Wikidata, including entities and relations: [https://github.com/happen2me/freebase-wikidata-convert](https://github.com/happen2me/freebase-wikidata-convert)
Footnote 8: The trained models can be easily shared on HuggingFace hub, e.g. we share our trained checkpoint as drt/srtk-scorer
SRTK has numerous potential use cases that extend beyond KBQA. In knowledge graph augmented language model pretraining, subgraphs retrieved with SRTK can be used to enhance language representations with either entity knowledge [45, 37, 36, 24, 10] or triple knowledge [31, 35, 26]. One possible approach is to use SRTK to identify entities, search for relations among the linked entities or their semantically relevant neighbors, and use the retrieved triples for language pretraining. In knowledge-enhanced language generation, SRTK may be used to retrieve relevant and accurate facts for the given prompts [16, 15, 18, 47]. In conversation reasoning and generation, SRTK can be used to identify mentioned entities and provide semantic-relevant subgraphs as extra information for responses [43, 21, 33]. In fact verification, SRTK may be used to retrieve subgraphs as reliable facts to verify statements [3]. One way to accomplish this is to examine whether certain links exist in the retrieved subgraph, as sketched below. Furthermore, the subgraphs retrieved by SRTK can be leveraged for improvements on a variety of downstream tasks [42, 46], like translation [22], summarization [13], etc.
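As a toy illustration of the fact-verification use case, the sketch below checks whether a claimed link appears among the retrieved triples; the claim format is an assumption for illustration.

```python
# Toy sketch of verifying a claimed fact against a retrieved subgraph.
def claim_supported(claim: tuple[str, str, str],
                    triples: list[list[str]]) -> bool:
    """True if the claimed (subject, predicate, object) occurs in the subgraph."""
    return any(tuple(t) == claim for t in triples)

# Example with the subgraph retrieved earlier for "Where is Hakata Ward?":
triples = [["Q1330839", "P31", "Q26600"], ["Q1330839", "P17", "Q17"]]
print(claim_supported(("Q1330839", "P17", "Q17"), triples))  # True
```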
### Positioning to the State of the Art
SRTK stands on the shoulders of giants. It builds upon existing entity linking services and extends an existing state-of-the-art path expansion algorithm to offer an off-the-shelf toolkit for subgraph retrieval across various knowledge graphs. Regarding entity linking on Wikidata, REL [14] is integrated, which currently achieves the highest micro-F1-strong score on the AIDA CoNLL-YAGO dataset [12]. Although REL and most other entity linking tools primarily link entities to Wikipedia, the presence of a maintained mapping between Wikipedia articles and Wikidata enables their utilization. For entity linking on DBpedia, DBpedia Spotlight [20] is supported. It relies on pattern matching and thus does not perform as well as neural models [28]. However, due to the absence of alternative off-the-shelf entity linking services for DBpedia, we include DBpedia Spotlight as part of the toolkit. For path expansion, we integrate SR, proposed by Zhang et al. [44]. They reported state-of-the-art results on CWQ [32] and WebQSP [41], two datasets for KBQA on Freebase, with the subgraphs retrieved by SR. The retrieved subgraphs are significantly smaller in size but still have high coverage of answer entities. We further extend SR to support other knowledge graphs. We did not compare KBQA results using subgraphs retrieved by SRTK on datasets of other knowledge graphs, owing to the complexities involved in setting up such comparisons. Besides, evaluating the performance of a specific implementation of the path expansion algorithm in various settings exceeds the scope of this paper.
### Limitations
SRTK relies on knowledge graph endpoints for up-to-date information, but network latency and iterative queries slow down retrieval. To address this, we propose two solutions. Firstly, setting up local endpoints can alleviate the problem, as demonstrated in our experiments (tutorial available in our documentation).
Secondly, caching known entities' \(k\)-hop facts in advance can build a smaller local graph, an approach currently under development.
Another limitation of SRTK is its dependency on preceding steps, where issues can accumulate and magnify. For instance, failure to link entities to knowledge graphs renders subgraph retrieval impossible. One possible solution is to combine the results from multiple entity linking services, or to fall back to n-gram based methods when neural methods fail, so that mentioned entities are more likely to be recognized. We plan to integrate more existing entity linking services into SRTK.
On the path expansion side, there are two notable limitations. One limitation is the restricted expansion direction. SRTK currently only expands outward along directed relations. This means that for a triple like _(Hakata, locate in, Fukuoka)_, _Fukuoka_ can be discovered by following the directed relation _locate in_ if _Hakata_ is known, but the inverse discovery of _Hakata_ is not possible if only _Fukuoka_ is known. This limitation arises in knowledge bases lacking paired inverse relations, like Wikidata. Another limitation lies in the proximity assumption. The current link-then-expand approach assumes that the target entities lie within a certain proximity of the linked entities. This holds for most KBQA scenarios but does not generally hold for all subgraph retrieval situations. The first limitation can be alleviated by allowing inverse expansion, e.g. constructing SPARQL queries with unknown variables in the subject position, as sketched below. But this significantly increases complexity, including the challenges of cycle avoidance and expansion direction determination. The second limitation may be solved by replacing entity linking with problem-specific entity discovery methods to locate entities near the target.
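To make the two expansion directions concrete, the following sketch contrasts the current forward query with the hypothetical inverse one; the query templates are illustrative rather than SRTK's actual implementation, and the entity and relation identifiers are placeholders:

```python
def forward_expansion(entity, relation):
    # known subject: follow the directed relation outward (current behavior)
    return f"SELECT ?o WHERE {{ wd:{entity} wdt:{relation} ?o . }}"

def inverse_expansion(entity, relation):
    # known object: the unknown variable sits in the subject position
    return f"SELECT ?s WHERE {{ ?s wdt:{relation} wd:{entity} . }}"

# placeholder identifiers: forward expansion finds Fukuoka from Hakata,
# while the inverse form would discover Hakata when only Fukuoka is known
print(forward_expansion("Q_HAKATA", "P_LOCATED_IN"))
print(inverse_expansion("Q_FUKUOKA", "P_LOCATED_IN"))
```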
Besides, SRTK currently retrieves subgraphs as sets of triples, excluding additional affiliated information available in knowledge graphs like Wikidata. For example, the triple _(Merkel, Position Held, Chancellor of Germany)_ includes start and end dates. Current methods retrieve this information as extra triples, e.g. _(position held, start time, 22.11.2005)_. Future algorithms could seek to retrieve all related information simultaneously.
### Future Development and Maintenance
The SRTK project has several directions for future development. In addition to the current capabilities, we plan to add support for additional knowledge graphs such as YAGO [23]. This would involve integrating an entity linking service and implementing graph access interfaces for each supported knowledge graph. Besides, since the abstract access interfaces are implementation-agnostic, SRTK has the potential to support knowledge graphs queried with interfaces other than SPARQL. Our ultimate goal is to encourage comparable methods across different knowledge graphs and to help researchers migrate from outdated knowledge graphs.
Moving forward, we aim to expand the range of algorithms supported by SRTK. The current retrieval algorithm is path-centric: it retrieves subgraphs by selecting the most probable outgoing expansion paths. However, we see room
for improvement in several areas. For example, we could take entities along the paths into account when expanding subgraphs, and consider both incoming and outgoing links when expanding from a known entity. This would undoubtedly make expansion more complex, but it holds promise for future research.
In the medium term, our plans for SRTK include incorporating support for YAGO, integrating more online entity linking services, and exploring the compatibility of SRTK with existing information retrieval-based KBQA methods across various datasets. Looking ahead to the long term, we aspire to incorporate support for additional retrieval algorithms and integrate SRTK with popular KBQA models, enabling direct comparisons and benchmarking of existing KBQA and retrieval methods. Additionally, we are also committed to continuously maintaining SRTK, addressing any bugs or security vulnerabilities. To achieve these visions, we also greatly value and encourage contributions from the open-source community.
## 5 Conclusion
SRTK is an extensible and user-friendly toolkit that focuses on retrieving semantic-relevant subgraphs. It is built on a state-of-the-art algorithm and comes with a command line interface and Python API. SRTK streamlines the full life cycle of subgraph retrieval development and application, providing customizable pipeline steps. It supports multiple knowledge graphs, including Freebase, DBpedia, and Wikidata. Future developments include support for entity linking services, more knowledge graphs, and other retrieval algorithms.
Resource Availability Statement: Source code for SRTK is available from Github9. Its canonical citation is [29].
Footnote 9: [https://github.com/happen2me/subgraph-retrieval-toolkit](https://github.com/happen2me/subgraph-retrieval-toolkit)
|
2304.04228 | Unsupervised Multi-Criteria Adversarial Detection in Deep Image
Retrieval | The vulnerability in the algorithm supply chain of deep learning has imposed
new challenges to image retrieval systems in the downstream. Among a variety of
techniques, deep hashing is gaining popularity. As it inherits the algorithmic
backend from deep learning, a handful of attacks are recently proposed to
disrupt normal image retrieval. Unfortunately, the defense strategies in
softmax classification are not readily available to be applied in the image
retrieval domain. In this paper, we propose an efficient and unsupervised
scheme to identify unique adversarial behaviors in the hamming space. In
particular, we design three criteria from the perspectives of hamming distance,
quantization loss and denoising to defend against both untargeted and targeted
attacks, which collectively limit the adversarial space. The extensive
experiments on four datasets demonstrate 2-23% improvements of detection rates
with minimum computational overhead for real-time image queries. | Yanru Xiao, Cong Wang, Xing Gao | 2023-04-09T12:46:35Z | http://arxiv.org/abs/2304.04228v1 | # Unsupervised Multi-Criteria Adversarial Detection in Deep Image Retrieval
###### Abstract
The vulnerability in the algorithm supply chain of deep learning has imposed new challenges to image retrieval systems in the downstream. Among a variety of techniques, deep hashing is gaining popularity. As it inherits the algorithmic backend from deep learning, a handful of attacks are recently proposed to disrupt normal image retrieval. Unfortunately, the defense strategies in softmax classification are not readily available to be applied in the image retrieval domain. In this paper, we propose an efficient and unsupervised scheme to identify unique adversarial behaviors in the hamming space. In particular, we design three criteria from the perspectives of hamming distance, quantization loss and denoising to defend against both untargeted and targeted attacks, which collectively limit the adversarial space. The extensive experiments on four datasets demonstrate \(2-23\%\) improvements of detection rates with minimum computational overhead for real-time image queries.
## 1 Introduction
Powered by neural networks, deep hashing enables image retrieval at a large scale [7, 8, 26, 47, 49, 27]. By representing high-dimensional images with compact binary codes, retrieval becomes an efficient similarity computation of Hamming distance. Google [3], Bing [2], Pinterest [4], and Taobao [1] have all incorporated image query as part of their products. Despite its great success, deep hashing also inherits the vulnerabilities of neural networks [36], with new attack vectors and effects. By introducing adversarial perturbations either on the query or database images, normal requests can be diverted to an irrelevant (_untargeted attack_) [46] or a specific (_targeted attack_) [42, 6, 40] category, e.g., turning a query of "husky dog" into retrieving branded "dog food" so the attacker can advertise their products for free.
With a handful of efforts on the attack side [6, 40, 42, 46], deep hashing still falls short in defending against adversarial examples in the hamming space. _Adversarial training_ and _detection_ are the two common defenses in softmax classification. Yet, adversarial training has to deal with the non-trivial trade-off between robustness and accuracy [48]. According to our implementation (see appendix), finding the min-max saddle points becomes even more difficult under the hash function, which suffers from a large accuracy loss. On the other hand, detection aims to unveil the adversarial behaviors on different levels of raw pixels [15, 17], feature distributions [28, 17, 23], softmax probabilities [19] and frequency components [39] in a _supervised_ [11] or _unsupervised_ manner [45]. Based on prior knowledge of attack methods, supervised detection trains a classifier to distinguish the adversarial images, but is hard to extrapolate to unknown attacks. To this end, we pursue the direction of unsupervised anomaly detection in this paper. Different from softmax classification on a closed set of class probabilities, deep hashing maps similar/dissimilar images into binary codes in an open Hamming space. Thus, the focus of our work is to tap into the unique adversarial behaviors in
Figure 1: Miss rate of different detections for untargeted/targeted adversarial examples in deep hashing. The solid and hollow markers are for CIFAR-10 and ImageNet respectively. The proposed UMCD has the lowest miss rate on both datasets.
deep hashing to detect both untargeted and targeted attacks.
Starting from the untargeted attacks [46], we first theoretically deduce the hamming distance distribution from the adversarial image to other categories, which asymptotically approaches a Gaussian distribution. For targeted attacks, we discover an interesting adversarial behavior on the quantization loss: when the adversarial objective is to produce the same hash code of a targeted category [6, 40], it unintentionally brings the quantization loss close to zero. Thus, we first develop two thresholding methods that take hamming distance and quantization loss as the proxies. Then we combine the two criteria with a denoising-based detection to measure the disagreement between an input and its denoised transformation. We demonstrate that this combination can successfully defend against both _gray-box_ attackers, who have no prior knowledge of the detection method, and the strongest white-box attackers, who know the existence of the detection and can implement countermeasures. The overall framework is shown in Fig. 2.
The main contributions of this paper are summarized below. To the best of our knowledge, this is the first unsupervised effort to defend against adversarial attacks in deep hashing. Based on novel discoveries and analysis, we propose three criteria to unveil adversarial behaviors of targeted and untargeted attacks in the hamming space, and demonstrate their complementary relations against the strongest white-box attackers. The extensive experiments on the CIFAR-10, ImageNet, MS-COCO and NUSWIDE datasets show that the proposed method surpasses the state-of-the-art defenses by up to 23% in detection rates with negligible computational overhead for real-time image queries.
## 2 Preliminary
This section illustrates the fundamentals of deep hashing and adversarial attacks.
### Deep Hashing
Given a dataset of \(N\) samples \(X=\{x_{1},x_{2},\ldots,x_{N}\}\), \(x_{i}\in\mathbb{R}^{D}\) and their corresponding labels \(Y=\{y_{1},y_{2},\ldots,y_{N}\}\), \(y_{i}\in\mathbb{R}^{C}\), where \(x_{i}\) is the \(i\)-th sample and \(y_{c,i}=1\) if the \(i\)-th image is associated with class \(c\). Deep hashing learns a function \(f_{\theta}(x)\) that maps the input image \(x\) into a \(K\)-bit binary code \(h(x)\) via a sign operation,
\[h(x)=sign(f_{\theta}(x))\in\{-1,+1\}^{K}, \tag{1}\]
where \(\theta\) are the parameters learned from minimizing the weighted combination of the similarity loss \(\mathcal{L}_{S}\) and quantization loss \(\mathcal{L}_{Q}\)[7, 8, 26, 47, 49],
\[\theta=\operatorname*{arg\,min}_{\theta}\mathcal{L}_{S}+\lambda\mathcal{L}_{Q}. \tag{2}\]
\(\mathcal{L}_{S}\) represents the hamming distance \(D_{h}(h(x_{i}),h(x_{j}))\) between two images \(x_{i}\) and \(x_{j}\) with their similarity \(s(y_{i},y_{j})\),
\[s(y_{i},y_{j})=\begin{cases}+1,&\text{if }y_{i}y_{j}^{T}>0\\ -1,&\text{otherwise}.\end{cases} \tag{3}\]
\(\mathcal{L}_{Q}\) is the quantization loss that minimizes the difference between the continuous output of \(f_{\theta}(x)\) and its binary code \(h(x)\). The objective is to minimize the hamming distance \(D_{h}(h(x_{i}),h(x_{j}))\) between two samples \(x_{i}\) and \(x_{j}\) when they are similar, maximize the hamming distance when they are dissimilar, and, meanwhile, represent the continuous \(f_{\theta}(x)\) as binary codes. Both \(D_{h}(h_{1},h_{2})\) and \(h(x)\) are non-differentiable with respect to their inputs. A common technique is to use the differentiable form of \(D_{h}(h_{1},h_{2})\), denoted as \(\frac{1}{2}(K-{h_{1}}^{T}h_{2})\), during backpropagation, where \(h_{1},h_{2}\) are the continuous floating point representations in \([-1,+1]\), and the binary hash codes \(h(x)\) are represented by the continuous output of \(f_{\theta}(x)\). The gap between such continuous and binary representations is considered the quantization loss \(\mathcal{L}_{Q}\), which is minimized in Eq. (2).
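As a hedged illustration of these definitions (a toy stand-in, not the CSQ implementation), the continuous relaxation and the two loss ingredients can be sketched in PyTorch as follows:

```python
import torch

K = 64  # number of hash bits
# stand-in for the actual backbone f_theta; tanh keeps outputs in [-1, +1]
backbone = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.LazyLinear(K), torch.nn.Tanh())

def differentiable_hamming(f1, f2):
    # differentiable surrogate of D_h: (K - f1^T f2) / 2
    return 0.5 * (K - (f1 * f2).sum(dim=-1))

def quantization_loss(logits):
    # L1 gap between the continuous output and its binary code sign(f(x))
    return torch.abs(torch.sign(logits) - logits).sum(dim=-1)

x1, x2 = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
f1, f2 = backbone(x1), backbone(x2)
print(differentiable_hamming(f1, f2), quantization_loss(f1))
```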
Deep hashing consists of two main components, a _database_ and a _model_. The database stores the images and their pre-computed hash codes. Given a query image \(x\) with hash code \(h(x)\), the system returns the top-\(k\) images from the database which are \(h(x)\)'s \(k\)-nearest neighbors determined by hamming distance. The retrieval performance is measured by the mean average precision (mAP), which is the ratio of images similar to \(x\). In this paper, we base the hashing framework on the state-of-the-art method called Central Similarity Quantization (CSQ) [47]. CSQ pre-determines optimal hash centers by randomly selecting a set of hash codes with sufficient mutual distance from the Hadamard matrix (or from a random binary matrix if a Hadamard matrix is not available). Since different hashing techniques share the general objective of Eq. (2), our defense applies to other techniques as well [7, 8, 26, 49, 27].
### Adversarial Attacks
**Untargeted Attack**[46] finds an adversarial image \(x^{\prime}\) by maximizing the hamming distance between the hash codes of adversarial examples and original images, subject to the \(\mathcal{L}_{\infty}\) bound of \(\epsilon\).
\[\begin{split}\max_{x^{\prime}}&\,D_{h}\big{(}h(x^{ \prime}),h(x)\big{)}\\ &\text{s.t.}\,\left\|x-x^{\prime}\right\|_{\infty}\leq\epsilon \end{split} \tag{4}\]
It works effectively to reduce the mAP by pushing the original image towards the furthest hamming distance in the hash space.
**Targeted Attack**[6, 40, 42] attempts to minimize the hamming distance from \(x^{\prime}\) to the targeted hash code \(h_{t}\) of a
specific category,
\[\begin{split}&\min_{x^{\prime}}D_{h}(h(x^{\prime}),h_{t})\\ &\text{s.t. }\left\lVert x-x^{\prime}\right\rVert_{\infty}\leq \epsilon\end{split} \tag{5}\]
Once the attacker has embedded the adversarial images in the database, targeted attacks enable image retrieval from a specific category upon user queries. For example, as illustrated in [42], the database could mistakenly return the advertisements of branded beer from the database upon the query of facial lotions.
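For concreteness, a PGD-style sketch of the untargeted objective in Eq. (4) is given below; it is an illustrative re-implementation under assumed \([0,1]\) input normalization, not the attack code of [46]. The targeted variant of Eq. (5) is obtained by flipping the sign of the loss and substituting \(h_{t}\) for \(h(x)\).

```python
import torch

def pgd_untargeted(f, x, eps=8 / 255, alpha=1 / 255, steps=100):
    h_x = torch.sign(f(x)).detach()      # original hash code h(x)
    x_adv = x.clone().detach().requires_grad_(True)
    K = h_x.shape[-1]
    for _ in range(steps):
        # differentiable surrogate of D_h(h(x'), h(x)), to be maximized
        dist = 0.5 * (K - (f(x_adv) * h_x).sum())
        dist.backward()
        with torch.no_grad():
            x_adv += alpha * x_adv.grad.sign()      # gradient ascent step
            x_adv.clamp_(min=x - eps, max=x + eps)  # project to L_inf ball
            x_adv.clamp_(0.0, 1.0)                  # keep valid pixel range
        x_adv.grad.zero_()
    return x_adv.detach()
```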
**Attack Model.** Attackers can carry out both untargeted and targeted attacks. In particular, we consider two types of attackers: _gray-box_ and _white-box_. _Gray-box_ attackers have access to all the information including the network architecture, weights and data, but are not aware of the existence of adversarial detection. The stronger _white-box_ attackers are aware of both the model function/parameters and the existence of the detection, so they implement different bypassing strategies as discussed in Section 4.
## 3 Adversarial Behaviors in the Hamming Space
Among a variety of artifacts left by adversarial images in classification networks, one of the most evident "adversarial behaviors" is from the softmax function [19, 23]. Due to the fast-growing exponentiation, it magnifies small changes in the logits [19] and becomes overconfident in the presence of adversarial images by regularizing other categories [23]. In contrast to softmax, which makes the decision from a closed set of categories, hashing maps similar images into compact hamming balls in an open hamming space of \(\{-1,1\}^{K}\). In this section, we define three criteria to identify adversarial behaviors in the hamming space.
### Detecting Untargeted Attacks (\(C_{1}\))
We start with untargeted attacks that maximize the hamming distance between \(x^{\prime}\) and \(x\)[46]. Though such behavior is straightforward to discern, we seek a theoretical answer to the distribution of \(h(x^{\prime})\) when the attacking capacity is maximized.
**Assumption 1.** The network is capable of learning _perfect_ hash codes with the minimum intra-class distance (i.e., equals to zero) and maximum margin between each other.
If Assumption 1 holds, what is the hamming distance from the adversarial \(h(x^{\prime})\) to the rest of the hash codes? To answer this question, we first establish the distribution of the maximum inter-class distance as illustrated in the following Lemma.
**Lemma 1.** Given a number of \(C\) classes in the \(K\)-bit hamming space with (ideally) compact hash codes, the inter-class hamming distance follows a Binomial Distribution of \(\mathcal{X}\sim B(K,p)\), where \(p=\frac{C}{2(C-1)}\) and \(p\approx\frac{1}{2}\) when \(C\) is large.
Proof.: For all \(C\) classes, consider only one bit location at a time. The maximum hamming distance is achieved when there is an equal number (\(\frac{C}{2}\)) of \(\{+1\}\) and \(\{-1\}\) codes among the \(C\) classes. The hamming distance between two bits is either \(0\) or \(1\). Thus, among the \(\binom{C}{2}\) pairs, the probability that the hamming distance equals \(1\) is \((\frac{C}{2}\cdot\frac{C}{2})/\binom{C}{2}=\frac{C}{2(C-1)}\). Since all \(K\) bits can be selected independently, the probability that the inter-class hamming distance between \(h_{i}\) and \(h_{j}\) equals \(d\) is,
\[\begin{split} Pr\big{(}D_{h}(h_{i},h_{j})=d\big{)}=\binom{K}{d}p ^{d}(1\!-\!p)^{K-d},\quad p=\frac{C}{2(C-1)},\end{split} \tag{6}\]
whose mean is \(Kp\) and variance is \(Kp(1-p)\).
From _Lemma 1_, we can further deduce the next theorem.
**Theorem 1.** For untargeted attacks, the hamming distance from the adversarial image to any other class follows a Gaussian distribution \(\mathcal{N}(K(1-p),\,Kp(1-p))\).
Proof.: In the ideal situation, the untargeted attack maximizes the hamming distance from \(D_{h}(h(x^{\prime}),h(x_{i}))\) to \(K\)
Figure 2: The proposed detection framework: highlighted by the dash lines.
Thus, for any other hash code \(h(x_{j})\), the hamming distance is \(D_{h}(h(x^{\prime}),h(x_{j}))=K-D_{h}(h_{i},h_{j})\), which is also a Binomial distribution with mean \(K(1-p)\) and the same variance. When the number of hash bits \(K\) is large, it can be approximated by a Gaussian distribution \(\mathcal{N}(K(1-p),\,Kp(1-p))\) [34].
**Example.** When \(K=64\) bits and \(C\) is large (\(p\rightarrow\frac{1}{2}\)), using the three-sigma rule, the confidence interval is \((K(1-p)-3\sqrt{Kp(1-p)},K(1-p)+3\sqrt{Kp(1-p)})\). In other words, there is 99.73% confidence that the hamming distance from an untargeted adversarial image to any other class lies within the \([20,44]\) interval with mean \(K/2=32\), which is sufficiently distinguishable in the hamming space. Note that the above analysis serves as a theoretical upper bound because achieving Assumption 1 is still an ongoing effort [20, 47]. To see some examples, we visualize the t-SNE of untargeted adversarial images vs. benign images on CIFAR-10 and MS-COCO in Fig. 3. It is observed that, apart from a few samples, the majority of the adversarial images are sufficiently distinguishable based on hamming distance. Hence, we formalize the first detection criterion.
**Criterion 1 (Hamming Distance).** For query \(x\), collect the set of top-\(k\) hash codes \(\mathcal{H}_{k}\) and calculate the average hamming distance to \(h(x)\).
\[C_{1}=\frac{1}{|\mathcal{H}_{k}|}\sum_{h(x_{k})\in\mathcal{H}_{k}}D_{h}\big{(} h(x_{k}),h(x)\big{)} \tag{7}\]
\(C_{1}\) is the average hamming distance of the top-\(k\) retrieval results, i.e., a scalar value, which we can compare with a threshold \(\mathcal{T}_{1}\) calculated on benign samples. The computational process of \(C_{1}\) follows the normal retrieval procedure using the top-\(k\) hash codes. To detect targeted attacks, we develop the next criterion.
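A minimal sketch of Criterion 1, assuming hash codes in \(\{-1,+1\}^{K}\); the threshold shown is purely illustrative, taken from the three-sigma bound of Theorem 1 rather than calibrated on benign data:

```python
import torch

def criterion_1(h_query, h_topk):
    # Eq. (7): average hamming distance from the query to its top-k codes
    K = h_query.shape[-1]
    return (0.5 * (K - h_topk @ h_query)).mean()

K = 64
T_1 = K / 2 - 3 * (K / 4) ** 0.5           # = 20, lower 3-sigma edge
h_query = torch.sign(torch.randn(K))
h_topk = torch.sign(torch.randn(10, K))    # random codes mimic an outlier
print(criterion_1(h_query, h_topk) < T_1)  # tensor(False): rejected
```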
### Detecting Targeted Attacks (\(C_{2}\))
While untargeted attacks attempt to induce a bit flip that makes \(h(x^{\prime})=-h(x)\), targeted attacks minimize the hamming distance between \(h(x^{\prime})\) and an arbitrary target code \(h_{t}\) (e.g., such as computed from consensus voting [6] of a category). To find an appropriate metric to identify them, we have the following observation.
**Observation 1.** For the quantization loss of benign images \(\mathcal{L}_{Q}^{b}\) and targeted images \(\mathcal{L}_{Q}^{t}\), the relation \(\mathcal{L}_{Q}^{b}>\mathcal{L}_{Q}^{t}\approx 0\) holds.
To illustrate this observation, recall that the original targeted objective in Eq. (5) is not differentiable with respect to the targeted binary code of \(x^{\prime}\). The implementation approximates it via a continuous relaxation, and the goal is to minimize the distance between the continuous output from the \(\tanh(\cdot)\) function and the target code [8]. As more gradient descent steps are taken, the quantization loss \(\mathcal{L}_{Q}^{t}\to 0\) when their inter-distance is minimized. This is in close analogy with adversarial images in softmax classification, where the targeted probabilities become overconfident [23]; surprisingly, a similar phenomenon is reflected in the quantization loss in deep hashing. In contrast, for all the benign samples, it is difficult to find the optimal model parameters to push \(\mathcal{L}_{Q}^{b}\) towards zero during the training process. An example on ImageNet is shown in Fig. 4(a), where the targeted attacks leave a distinguishable gap from benign samples. It is also interesting to compare with the quantization loss of untargeted attacks in Fig. 4(a), which is larger than zero. This is because, for untargeted attacks, finding an adversarial subspace that reduces the mAP to zero can be achieved before flipping all the bits, which is much easier than for targeted attacks. Based on these observations, we develop the second detection criterion.
**Criterion 2 (Quantization Loss).** Calculate the \(\mathbf{L}_{p}\) distance from the output of network \(f_{\theta}(x)\) (logits before the sign function) and its hash code \(h(x)\),
\[C_{2}=\|h(x)-f_{\theta}(x)\|_{p} \tag{8}\]
\(C_{2}\) is the quantization loss of query \(x\). Here, we use the \(\mathbf{L}_{1}\) distance (\(p=1\)) and obtain a threshold \(\mathcal{T}_{2}\) on benign samples offline. Figs. 4(b)(c) show the distribution of quantization loss between the adversarial and the benign images. As \(C_{2}\to 0\) for targeted attacks, we can see that using
Figure 3: t-SNE visualization of untargeted adversarial images vs. original images of different datasets (a) CIFAR-10. (b) MS-COCO.
\(C_{2}\) can effectively identify most of the targeted attacks; using \(C_{2}\) also identifies about 60% of the untargeted attacks.
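A sketch of Criterion 2 with the \(\mathbf{L}_{1}\) distance (\(p=1\)); the logits and the threshold below are synthetic illustrations only:

```python
import torch

def criterion_2(logits):
    # Eq. (8): L1 gap between the logits and their binary hash code
    return torch.abs(torch.sign(logits) - logits).sum(dim=-1)

benign = 0.7 * torch.sign(torch.randn(64))      # imperfectly saturated
targeted = 0.999 * torch.sign(torch.randn(64))  # logits driven toward +-1
T_2 = 5.0                                        # illustrative threshold
print(criterion_2(benign) > T_2)     # tensor(True): passes the check
print(criterion_2(targeted) > T_2)   # tensor(False): flagged as targeted
```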
### Detecting Prediction Inconsistency (\(C_{3}\))
\(C_{1}\) and \(C_{2}\) alone are not sufficient. In principle, detection works by limiting the attacker's action space to a confined region. Perturbations can generally be treated as artificial noise with high-frequency components [39]. Thus, a common approach is to apply local or non-local smoothing filters [45], an auto-encoder denoiser [30], color bit reduction [45], or quantization [25], and measure the response sensitivity to the denoised images. The adversarial images are more prone to produce a different result, while the benign samples are less sensitive. These denoising operations reduce the entropy (randomness) of the input and the dimensions of the adversarial space that the perturbations can act upon.
We extend this principle in deep hashing to formulate Criterion 3. Denote the transformation [25, 30, 45] as \(t(\cdot)\). For query \(x\), \(C_{3}\) measures the hamming distance between a transformed \(t(x)\) and \(x\) based on the output before the sign function.
**Criterion 3 (Prediction Inconsistency).**
\[C_{3}=D_{h}\big{(}f_{\theta}(t(x)),f_{\theta}(x)\big{)} \tag{9}\]
In other words, \(C_{3}\) quantifies the disagreement between the original and transformed inputs, which can be evaluated against a threshold \(\mathcal{T}_{3}\) calculated offline on benign samples.
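Criterion 3 can be sketched as follows, using a simple mean blur as the transformation \(t(\cdot)\); the filter size and the evaluation on the signed network outputs are assumptions made for illustration:

```python
import torch
import torch.nn.functional as F

def mean_blur(x, k=3):
    # simple per-channel local mean filter as the denoising transform t(.)
    c = x.shape[1]
    kernel = torch.ones(c, 1, k, k) / (k * k)
    return F.conv2d(x, kernel, padding=k // 2, groups=c)

def criterion_3(f, x):
    # Eq. (9): hamming disagreement between the input and its denoised copy
    h = torch.sign(f(x))
    h_t = torch.sign(f(mean_blur(x)))
    return 0.5 * (h.shape[-1] - (h * h_t).sum(dim=-1))
```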
### Put Everything Together
The overall detection combines the three criteria: given a query image \(x\), we calculate \(\{C_{1},C_{2},C_{3}\}\) and compare with the thresholds \(\{\mathcal{T}_{1},\mathcal{T}_{2},\mathcal{T}_{3}\}\). If (a) \(C_{1}<\mathcal{T}_{1}\); (b) \(C_{2}>\mathcal{T}_{2}\); (c) \(C_{3}<\mathcal{T}_{3}\), the input is considered benign; otherwise, if any of them is not satisfied, the input is rejected as an adversarial example. The computation time is bounded by \(C_{3}\) since it requires two retrievals. To minimize the compute time, the system can combine the original query and its denoised copy into a batched query. In case the GPU has sufficient resources, this should have minimal overhead, as discussed in Section 4.5.
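The resulting decision rule is a simple conjunction; the sketch below assumes \(C_{1}\), \(C_{2}\), \(C_{3}\) are computed as in the previous sketches and that the thresholds were calibrated offline at a fixed false-positive rate on benign queries:

```python
def is_benign(c1, c2, c3, t1, t2, t3):
    # accept only if all three criteria pass; any violation rejects the query
    return (c1 < t1) and (c2 > t2) and (c3 < t3)
```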
## 4 Experiments
### Implementation
We evaluate our mechanism on the CIFAR-10, ImageNet, MS-COCO and NUSWIDE datasets, which are commonly used for deep hashing [7, 8, 26, 47, 49, 27], and adopt CSQ [47] with ResNet50 [18] as the base model. The RMSProp optimizer [37] with learning rate \(10^{-5}\) is used for training of 150 epochs. The weight of the quantization loss is set to \(10^{-4}\). For the four datasets, our trained models achieve mAP values of 0.854, 0.883, 0.884, and 0.843, respectively.
We compare with several benchmarks originally designed for softmax classification: Local Intrinsic Dimensionality (LID) [28], Median Smoothing (FS-Median), Non-local Means (FS-NLM) [45], FS-Adaptive [25], and MeanBlur [23]. FS-Adaptive uses the entropy of the input as a metric to adaptively reduce the input space using scalar quantization and a smoothing spatial filter. We select MeanBlur as the denoising technique for our method. True Positive Rate (TPR), False Negative Rate (FNR, Miss Rate), and Area-Under-Curve (AUC) are used as the evaluation metrics, where adversarial examples are considered as Positive and benign samples as Negative. Thus, a detected adversarial example is counted as a true positive, while a misidentified benign sample is counted as a false positive.
All detection methods are evaluated against both untargeted [46] and targeted [6] deep hashing adversarial attacks (based on the PGD attack) and an untargeted deep hashing CW [10] attack1. The step sizes of the former two are set to 1.0 with \(100\) steps, limited by an \(\mathbf{L}_{\infty}\) norm of perturbation \(\epsilon=8\). For the CW attack, the learning rate is set to \(0.01\) with
Figure 4: Example of identifying targeted attacks based on quantization loss on ImageNet. (a) The quantization loss for targeted attacks concentrates around zero vs. the benign samples. (b) Targeted attacks push the quantization loss to zero compared to untargeted attacks. (c) 60% of the untargeted attacks also concentrate around zero.
\(500\) steps. Additional experimental details and results are available in appendix.
### Detection of Gray-Box Attacks
**Our Method.** Table 1 shows the detection miss rates of different methods when we fix the FPR at 5%. The proposed method achieves robust detection of targeted attacks with less than 1% miss rate in most cases. Compared to the SOTA benchmarks, our method is \(2.13\%\) to \(23.44\%\) better than the other methods on average. Untargeted attacks are relatively easier to detect, so the overall miss rates are lower than for targeted attacks. Our method still improves over the baselines by \(1.15\%\)-\(14.33\%\).
**Compare with LID.** Although LID performs well on most of the targeted attacks, it does not generalize to the CW attack. Furthermore, unlike softmax networks, where the last few layers often offer better detection [28], LID in deep hashing is quite sensitive to the choice of layers from which features are extracted, which adds extra configuration overhead. In sum, LID is not always effective against all types of attacks.
**Compare with Spatial Denoising Methods.** Unlike the other four benchmarks, LID does not take advantage of spatial information such as non-local means or median filtering. As shown in Table 1, though these methods generally have less than 10% miss rates on untargeted attacks, there is a 0.5-17.8% gap in detecting targeted attacks. Such gaps can be explained by the attack mechanisms, as targeted attacks take more gradient steps. This lands the image deep in the adversarial space, which is more robust to pixel-level modifications such as denoising [42, 21]. However, our method provides an extra layer of defense from \(C_{2}\), which specifically monitors the value of the quantization loss as targeted attacks bring it to zero, thereby complementing \(C_{3}\) when targeted attacks push the inputs deep into the adversarial space.
### Detection of White-box Attacks
Next, we demonstrate the detection of white-box attackers, who know the existence of our detection and conduct countermeasures accordingly. The attacker adopts _backward pass differentiable approximation_[5] to estimate the gradients and develops different strategies against \(C_{1}\), \(C_{2}\) and \(C_{3}\):
**Against \(C_{1}\).**\(C_{1}\) relies on the hamming distance between hash codes to detect outliers. Thus, an effective evasion is to drive the adversarial examples into the neighborhoods of benign images, e.g., generating the same binary hash codes of certain targeted images \(h_{t}\):
\[\mathcal{L}_{1}=\underbrace{D_{h}(f_{\theta}(x^{\prime}),h_{t})}_{\text{adv loss}} \tag{10}\]
**Against \(C_{2}\).**\(C_{2}\) detects near-zero quantization loss by accessing the logits before the sign function. To bypass this detection, the attacker aims to maximize the quantization loss, which amortizes the adversarial behavior identified from \(C_{2}\).
\[\mathcal{L}_{2}=-\underbrace{\left\|h(x^{\prime})-f_{\theta}(x^{\prime}) \right\|_{1}}_{\text{quantization loss}} \tag{11}\]
**Against \(C_{3}\).**\(C_{3}\) detects the disagreement between \(f_{\theta}(x^{\prime})\) and the denoised copy \(f_{\theta}(t(x^{\prime}))\). The attacker minimizes such difference by enforcing distance between \(f_{\theta}(x^{\prime})\) and \(f_{\theta}(t(x^{\prime}))\) to be small,
\[\mathcal{L}_{3}=\underbrace{D_{h}(f_{\theta}(t(x^{\prime})),f_{\theta}(x^{ \prime}))}_{\text{denoised adv loss}} \tag{12}\]
We use MeanBlur [23] as the transformation \(t(\cdot)\) here. By combining them, the white-box attacker constructs a joint optimization objective,
\[\min_{x^{\prime}}\mathcal{L}=\mathcal{L}_{1}+\lambda_{1}\mathcal{L}_{2}+ \lambda_{2}\mathcal{L}_{3} \tag{13}\]
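A hedged sketch of the attacker's joint objective in Eq. (13), with the sign and hamming operations replaced by their differentiable relaxations; the model \(f\) and transformation \(t\) are assumed callables, and the default weights mirror the values explored below:

```python
import torch

def whitebox_loss(f, t, x_adv, h_target, lam1=0.01, lam2=0.3):
    out = f(x_adv)        # continuous logits f_theta(x')
    out_d = f(t(x_adv))   # logits of the denoised copy f_theta(t(x'))
    K = out.shape[-1]
    l1 = 0.5 * (K - (out * h_target).sum())                # Eq. (10)
    l2 = -torch.abs(torch.sign(out).detach() - out).sum()  # Eq. (11)
    l3 = 0.5 * (K - (out * out_d).sum())                   # Eq. (12)
    return l1 + lam1 * l2 + lam2 * l3                      # Eq. (13)
```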
**Detection of White-box Attacks.** Optimizing (13) turns out to be quite difficult, namely finding \(\lambda_{1}\), \(\lambda_{2}\) that breach all three criteria. We demonstrate the best effort in Fig. 5, which fixes \(\lambda_{2}\) to \(0.3\) and adjusts \(\lambda_{1}\) from \(0\) to \(0.03\). The best
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
 & & Untgt [46] & Tgt [6] & Untgt CW [10] \\ \hline
\multirow{6}{*}{CIFAR-10} & LID [28] & 14.90 & **4.00** & 12.16 \\
 & FS-Median [45] & 13.40 & 54.30 & **2.40** \\
 & FS-NLM [45] & 18.80 & 18.10 & 14.41 \\
 & FS-Adaptive [25] & 54.00 & 48.30 & 52.55 \\
 & MeanBlur [23] & **11.60** & 26.90 & **2.40** \\
 & **UMCD** (Ours) & **7.50** & **0.40** & **0.15** \\ \hline
\multirow{6}{*}{ImageNet} & LID [28] & 21.78 & 6.40 & 75.95 \\
 & FS-Median [45] & 8.54 & 19.20 & 1.41 \\
 & FS-NLM [45] & 2.62 & **3.40** & 8.02 \\
 & FS-Adaptive [25] & 4.05 & 4.30 & 13.68 \\
 & MeanBlur [23] & **0.76** & 4.54 & **0.94** \\
 & **UMCD** (Ours) & **0.42** & **0.34** & **0.47** \\ \hline
\multirow{6}{*}{MS-COCO} & LID [28] & 3.24 & **1.20** & 54.59 \\
 & FS-Median [45] & 0.26 & 7.10 & **0.00** \\
 & FS-NLM [45] & **0.00** & 1.72 & 0.08 \\
 & FS-Adaptive [25] & **0.00** & 2.84 & 0.08 \\
 & MeanBlur [23] & **0.00** & 2.12 & **0.00** \\
 & **UMCD** (Ours) & **0.00** & **1.12** & **0.00** \\ \hline
\multirow{6}{*}{NUSWIDE} & LID [28] & 25.34 & **0.08** & 47.96 \\
 & FS-Median [45] & 3.52 & 16.33 & **0.00** \\
 & FS-NLM [45] & **0.00** & 2.90 & **0.00** \\
 & FS-Adaptive [25] & **0.00** & 4.57 & **0.00** \\
 & MeanBlur [23] & 0.19 & 4.29 & **0.00** \\
 & **UMCD** (Ours) & **0.00** & **1.29** & **0.00** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: A comparison with state-of-the-art methods of _Detection Miss Rate_ (False Negative Rate) against adversarial attacks for deep hashing, when allowing 5% FPR on benign samples. A lower miss rate is better. The top two numbers of each column are **bolded**, with the best in **red** and the second in **blue**.
case is \(\lambda_{1}=0.01\), where the detection rate is lowered to around 0.4 (the yellow bars). However, the number of adversarial examples that can successfully optimize (13) also decreases to around 45% (the blue bars of success rate). Hence, although the strongest white-box attackers still have some chance, our detection has successfully confined the adversarial space by enlarging the attacker's efforts. It is also interesting to see that the different criteria form compensating relations, as indicated by the AUC values. When \(C_{2}\) (green curve) declines, \(C_{1}\) and \(C_{3}\) quickly rise, and vice versa. This relation is further validated by the ablation study next.
### Ablation Study
We present an ablation study to quantify the contribution of each criterion. We use \(C_{3}\) alone as the baseline and add \(C_{1}\) and \(C_{2}\), with their averaged gains shown in Table 2. The result is consistent with the defense objectives, as the addition of \(C_{1}\) and \(C_{2}\) helps improve the detection rates of the untargeted and targeted attacks by \(\mathbf{0.0533}\) and \(\mathbf{0.1377}\), respectively. Meanwhile, \(C_{1}\) and \(C_{2}\) contribute almost independently to the overall detection, e.g., the gain of \(C_{1}+C_{3}\) (\(\mathbf{0.0533}\)) plus that of \(C_{2}+C_{3}\) (\(\mathbf{0.0161}\)) is approximately equal to that of \(C_{1}+C_{2}+C_{3}\) (\(\mathbf{0.0692}\)) for untargeted attacks, and the same also holds for targeted attacks. This validates that all three criteria act as indispensable parts of the detection.
### Computational Time
Finally, we evaluate the computational overhead of the detection mechanism. In practice, the system can accumulate queries into a batch to enhance the utilization of GPU resources and reduce cost. Fig. 6 shows the average retrieval time per sample/batch. First, it is observed that the average time per sample is under 50 ms and is further reduced as we increase the batch size. When the batch size is small, detection introduces negligible overhead because the GPU is underutilized; as the batch size increases, the additional retrieval of the denoised copy in \(C_{3}\) enlarges the gap relative to normal retrieval, since the GPU resources are fully utilized. Thus, our detection introduces minimal overhead when the system accumulates relatively small batches and responds to queries in real time.
## 5 Related Work
### Deep Hashing
Image retrieval uses nearest neighbor search to return the semantically related images of query inputs. Traditionally, it relies on hand-crafted visual descriptors to reduce the computational cost of the similarity measure [14, 31]. Powered by deep learning, end-to-end hash learning improves the performance to a new level [7, 8, 26, 27, 49, 47]. These methods use the similarities between image pairs to train deep hashing models in a supervised manner, transforming the high-dimensional images into compact hash codes on which neighbor search can be efficiently performed based on hamming distance. To convert the continuous outputs into discrete binary codes, common approaches use continuous relaxations such as the sigmoid or hyperbolic
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
 & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{ImageNet} & \multicolumn{2}{c}{MS-COCO} & \multicolumn{2}{c}{NUSWIDE} & \multicolumn{2}{c}{_Avg. Gain_} \\ \cline{2-11}
 & Untgt & Tgt & Untgt & Tgt & Untgt & Tgt & Untgt & Tgt & Untgt & Tgt \\ \hline
\(C_{3}\) Alone & 0.8160 & 0.7460 & 0.9110 & 0.8338 & 0.9954 & 0.9076 & 0.9757 & 0.8904 & – & – \\
\(C_{1}+C_{3}\) & 0.9870 & 0.7580 & 0.9522 & 0.8460 & 0.9966 & 0.9098 & 0.9757 & 0.8904 & _0.0533_ & _0.0066_ \\
\(C_{2}+C_{3}\) & 0.8170 & 0.9830 & 0.9504 & 0.9828 & 0.9992 & 0.9784 & 0.9961 & 0.9847 & _0.0161_ & _0.1377_ \\
\(C_{1}+C_{2}+C_{3}\) & 0.9880 & 0.9950 & 0.9916 & 0.9956 & 0.9992 & 0.9784 & 0.9961 & 0.9847 & _0.0692_ & _0.1439_ \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Ablation study: detection rates of different combinations (\(\epsilon=32\))
Figure 5: White-box attack results. x-axis is the \(\lambda_{1}\) value and y-axis is the percentage. Once the white-box attacks achieve a lower detection rate at \(\lambda_{1}=\)1e-2, the success rate of generating such adversarial examples also drops significantly. The three detection criteria compensate each other by observing the AUC values: when \(C_{1},C_{3}\) are low, \(C_{2}\) is high and vice versa.
tangent functions to approximate the discrete binary thresholding [7, 8, 26, 47, 49, 27]. Our work exploits the adversarial behaviors originating from this approximation process and can thus be applied to a variety of deep hashing models.
### Adversarial Attacks
Deep neural networks are known to be vulnerable to the non-perceptible perturbations [36]. The Fast Gradient Sign Method (FGSM) [16] generates perturbations in the direction of the signed gradient to maximize the loss function in one-shot computation. The Basic Iterative Method (BIM) [22] and Projected Gradient Descent (PGD) [29] take iterative steps (from random initialization) to achieve higher attack success. There are several other variants [5, 10, 32, 38], e.g., the CW attack aims at minimizing the perturbations to evade detection.
By using deep learning backends, deep hashing inherits the vulnerability of neural networks. With some slight adaptation, recent works have shown that adversarial attacks can also mislead image retrieval systems [40, 42, 6, 43, 46]. The attacks can be generally categorized into _untargeted_ and _targeted attacks_. _Untargeted attacks_ divert the query away from the correct results, making the system retrieve irrelevant images or simply nothing. [46] proposes an untargeted attack to maximize the hamming distance between adversarial and benign samples. [24, 12] craft adversarial examples based on iterative retrievals from a black-box model. [43] hides private images in the database by pushing them into a non-retrievable subspace, minimizing the number of samples around the private images. _Targeted attacks_ make the systems return images from a targeted category different from the inputs. [40, 6] minimize the average hamming distance between the adversarial examples and a set of images with a target label. [42] enhances targeted transferability to a black-box model via injecting random noise into the adversarial generation. Our work defends against both untargeted and targeted attacks in deep hashing.
### Adversarial Defenses
Most of the defense mechanisms are based on softmax classification. As proactive measures, gradient masking [33] and adversarial training [41, 29, 13] learn a robust model. The early defense of [33] starts with an incorrect conjecture that ascribes adversarial examples to high nonlinearity/overfitting, and develops defensive distillation to reduce the variations around the input. The method was quickly subverted by [5, 9, 10]; as argued in [16], the primary cause is instead the local linearity of neural networks.
Hence, a large body of work focuses on adversarial training [41, 35, 13, 29] by solving a min-max saddle point problem. However, it is non-trivial to tackle the trade-off between robustness and accuracy [48], which often leads to a significant loss in clean image accuracy alongside extensive training efforts. Applying adversarial training in the deep hashing domain suffers from an even higher accuracy loss, as our experiments show (see appendix). For an image retrieval system, as long as the adversarial images are detected at the input, we can equivalently thwart the attacks without accuracy loss and training complexities.
Adversarial detections extract the artifacts left by adversarial examples at different levels: raw pixels [17, 15], feature distributions [23, 17], softmax distributions [19] and frequency components [39]. By analyzing the contrastive distributions of adversarial and natural images, a detector can be efficiently trained in a supervised or unsupervised manner. Another thread of works relies on prediction inconsistency, exploiting denoising methods and measuring the disagreement between the results [30, 45, 25]. All these works are based on softmax classification. In this work, we discover adversarial behaviors in the hamming space and propose a set of detection criteria, including defending against the strongest white-box attackers.
## 6 Conclusion
In this paper, we propose an efficient, unsupervised detection of adversarial examples in deep hashing based image
Figure 6: Computation time of different batch size: a) per sample; b) per batch.
retrieval. We design three criteria to identify adversarial behaviors of both targeted and untargeted attacks in the hamming space and consider white-box attackers who are aware of the existence of the defense. The extensive evaluations demonstrate that the proposed detection surpasses previous defense techniques by a large margin and is also robust against white-box attackers by limiting their action space.
|
2306.12233 | Epitaxy enhancement in oxide/tungsten heterostructures by harnessing the
interface adhesion | The conditions whereby epitaxy is achieved are commonly believed to be mostly
governed by misfit strain. We report on a systematic investigation of growth
and interface structure of single crystalline tungsten thin films on two
different metal oxide substrates, Al$_{2}$O$_{3}$ ($11\bar{2}0$) and MgO
($001$). We demonstrate that despite a significant mismatch, enhanced crystal
quality is observed for tungsten grown on the sapphire substrates. This is
promoted by stronger adhesion and chemical bonding with sapphire compared to
magnesium oxide, along with the restructuring of the tungsten layers close to
the interface. The latter is supported by ab initio calculations using density
functional theory. Finally, we demonstrate the growth of magnetic
heterostructures consisting of high-quality tungsten layers in combination with
ferromagnetic CoFe layers, which are relevant for spintronic applications. | Anna L. Ravensburg, Rimantas Brucas, Denis Music, Lennart Spode, Gunnar K. Pálsson, Peter Svedlindh, Vassilios Kapaklis | 2023-06-21T12:49:38Z | http://arxiv.org/abs/2306.12233v2 | # Epitaxy enhancement in oxide/tungsten heterostructures by harnessing the interface adhesion
###### Abstract
The fundamental understanding of the metal/ceramic interface is of crucial importance to diverse fields such as spintronics, energy conversion and storage devices, as well as fusion reactors. The conditions whereby epitaxy is achieved are commonly believed to be mostly governed by misfit strain. We report on a systematic investigation of growth and interface structure of single crystalline tungsten thin films on two different metal oxide substrates, Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) and MgO (001). X-ray scattering techniques and high-resolution transmission electron microscopy have been used to assess the overall epitaxial quality of the tungsten layers. We demonstrate that despite the significant mismatch for both substrates, enhanced crystal quality is observed for tungsten grown on the sapphire substrates. This is promoted by stronger adhesion and chemical bonding with sapphire compared to magnesium oxide, along with the restructuring of the tungsten layers close to the sapphire/tungsten interface. The latter is supported by _ab initio_ calculations using density functional theory. Finally, we demonstrate the growth of magnetic heterostructures consisting of high-quality tungsten layers in combination with ferromagnetic CoFe layers, which are relevant for spintronic applications.
## I Introduction
Spintronic devices, consisting of ferromagnetic layers separated by a nonmagnetic metal or an insulating layer, use spin-dependent electron transport to detect changes in magnetic fields [1]. In light of this, heterostructures of ferromagnetic layers in proximity to 4\(d\) and 5\(d\) nonmagnetic metals are of particular interest. For example, such heterostructures can be used to tune the strength and type of interlayer exchange coupling in trilayers and to fine tune the magnetization dynamics [2; 3]. Most of these heterostructures have to be grown on oxide substrates, making the oxide/metal interface with its chemistry and structure on multiple length scales an important parameter to consider while designing and evaluating the performance of a device.
Spin-orbit torques (SOTs) in heavy metal/ferromagnetic heterostructures are gaining increasing attention for providing an efficient pathway for manipulating the free layer magnetization in magnetic random-access memories. The origin of the SOT is the pure spin current \(J_{s}\) generated by a charge current \(J_{c}\) in the heavy metal via the spin-orbit coupling. The charge-to-spin current conversion efficiency can be described by the spin Hall angle \(\theta_{H}=J_{s}/J_{c}\)[4]. Tungsten has, in this respect, been in focus due to large reported values of \(\theta_{H}\) of around \(-0.3\) to \(-0.4\). Furthermore, W has been the subject of investigations as fusion reactor plasma-facing material due to its combination of high atomic number and relatively low activation decay time [5; 6].
In thin film form, besides the ground-state bcc \(\alpha\)-W phase, tungsten can be stabilized in its \(\beta\)-W phase with an A15 structure [7; 8]. The \(\beta\)-W phase exhibits a large spin Hall angle and therefore also a large charge-to-spin current conversion efficiency, similar to the case of \(\beta\)-Ta [4; 9; 8; 10]. The \(\beta\)-W phase growth depends strongly on the deposition conditions during the sputtering process as well as on the film thickness [9; 11; 12]. Thin films may exhibit the \(\beta\)-W phase, while intermediate thicknesses and/or annealing yield mixtures of the \(\alpha\)-W and \(\beta\)-W phases [7; 13]. Thick films tend to be almost pure \(\alpha\)-W phase. Furthermore, the majority of these films are polycrystalline, forming more complicated interfaces at grain boundaries, with the adjacent substrates, and with additional buffer layers. Hetero-epitaxial growth of \(\alpha\)-W thin films, on the other hand, may be enabled through a lattice match between in-plane atomic distances in the film and substrate. The crystal structure of bcc \(\alpha\)-W is reported to have a cubic lattice parameter of 3.155 Å [14] to 3.17 Å [15; 16].
Here, we study the sputter growth of highly epitaxial \(\alpha\)-W thin films on sapphire, Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), and MgO (001) substrates, giving emphasis to the overall quality of the layering and crystal structure. Magnetron sputtering was selected for thin film growth since it offers the best compromise between low defect density and flat layering as compared to other physical vapor deposition processes. We further shed more light on interdependencies between the oxide/film interface and the accommodation of the lattice mismatch, with support from _ab initio_ calculations. For the case of sapphire substrates, we argue that the interface structure and epitaxial quality are results of the strong adhesion and bonding, similar to the sapphire/Nb system [17; 18]. This has as a prerequisite a well-defined epitaxial relationship at the metal/oxide interface and is thus coordination specific. Having built a solid foundation for growth of epitaxial
tungsten, we proceed to the growth of epitaxial \(\alpha\)-W and ferromagnetic CoFe bilayers. These bilayers are of technological importance, as they might be essential in future spintronic applications, such as THz emitters [19].
## II Methods
### Growth
Thin layers of W and W/CoFe bilayers of different thicknesses were deposited on single crystalline Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) and MgO (001) substrates (both 10\(\times\)10 mm\({}^{2}\)) at floating potential, using direct current (dc) and radio frequency (rf) magnetron sputtering. While for the bilayers on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) first the W and then the CoFe layer was grown, the order of the layers was reversed for bilayers grown on MgO (001) due to the better lattice matching of CoFe on MgO. Prior to deposition, the substrates were cleaned in acetone and 2-propanol using ultrasonic agitation for 120 s. This was followed by annealing in vacuum at 873(2) K for 3600 s. The base pressure of the growth chamber was below 5\(\times\)10\({}^{-7}\) Pa. In order to prevent surface oxidation of the films, the samples were capped at ambient temperature (\(<\) 313(2) K) with Al; selected samples were capped with Al\({}_{2}\)O\({}_{3}\) instead. The depositions were carried out in an Ar atmosphere (gas purity \(\geq\) 99.999 %, with a secondary getter-based purification) from elemental W (25 W, dc) and Al (50 W, dc) targets, and CoFe (13 W, dc) and Al\({}_{2}\)O\({}_{3}\) (90 W, rf) compound targets. The targets were cleaned by sputtering against closed shutters for at least 60 s prior to each deposition. The target-to-substrate distance in the deposition chamber was around 0.2 m. The deposition rates (W: 0.23 Å/s, Al: 0.30 Å/s, CoFe: 0.10 Å/s, Al\({}_{2}\)O\({}_{3}\): 0.03 Å/s) were calibrated prior to the growth using x-ray reflectivity. The W growth temperature was optimized with respect to W layering and crystal quality, yielding 843(2) K for single W layers (one selected sample was grown at 793(2) K instead). For the W/CoFe bilayers, W and CoFe were deposited at 843(2) K and 573(2) K, respectively, if W was grown first, while both layers were deposited at 573(2) K if the CoFe layer was grown first. Finally, in order to ensure thickness uniformity, the substrate holder was rotated during the deposition.
### Characterization
X-ray reflectometry (XRR) and diffraction (XRD) were carried out in a Bede D1 diffractometer equipped with a Cu \(K_{\alpha_{1}}\) x-ray source operated at 35 mA and 50 kV. A circular mask (diameter: 0.005 m) and an incidence and a detector slit (both 0.0005 m) were used. For monochromatizing the beam by reducing the Cu \(K_{\beta}\) and Cu \(K_{\alpha_{2}}\) radiation, the setup included a Göbel mirror and a 2-bounce crystal on the incidence side. The x-rays were detected with a Bede EDRc x-ray detector. The instrument angles were aligned to the sample surface for XRR and to the W crystal planes for XRD measurements. The measured XRR data was fitted using GenX [20; 21], enabling the determination of the scattering length density (SLD) profile, which includes information on layer thickness and roughness. However, atomic terraces in the Al\({}_{2}\)O\({}_{3}\) [22] and twinning in the MgO substrates [23], and therefore also in the epitaxial top layers, may lead to an overestimation of the layer roughnesses. In the diffraction experiments, the samples were measured with a combination of coupled 2\(\theta\)-\(\theta\) and rocking curve scans. Texture analysis was performed employing rotational \(\phi\) scans at different sample tilts \(\chi\). A pole figure was measured for \(\phi\) angles between 350 and 190 degrees. Data in the range between 190 and 350 degrees in \(\phi\) was assumed to be rotationally symmetric with an angle of 180 degrees. The lattice mismatch between film and substrate for certain epitaxial relationships was calculated based on a previously established approach by Wildes _et al._ [18]. Peak positions in 2\(\theta\) were determined by fitting with a Gaussian function, while rocking curve peaks were fitted with a Lorentzian profile. All error bars for fits of scattering data are statistical and do not include systematic errors arising from alignment or absorption. XRD patterns including Laue oscillations were additionally fitted with a custom Matlab code [24]. Details on how to calculate the intensity of theoretical diffraction patterns from the squared structure factor can be found elsewhere [25; 26]. The simulated pattern was fitted to the experimental data by adjusting selected parameters while minimising the reduced \(\chi^{2}\) employing the differential evolution algorithm. Fitted parameters included a Gaussian convolution to match the instrument resolution and a background for the diffraction pattern. The Bragg peak originating from the Al\({}_{2}\)O\({}_{3}\) substrate was fitted with a Lorentzian function. The fitted parameters relating to the W layer were the average number of coherently scattering planes contributing to the Laue oscillations, \(N_{\rm L}\), and the average out-of-plane atomic distance of \(\alpha\)-W, \(d_{\rm hkl}\). The substrate/W and W/cap interface roughnesses were taken into account and fitted, as detailed elsewhere [25; 27]. Furthermore, a strain profile may be applied and fitted, which can account for an asymmetry in the observed Laue oscillations around the Bragg peak [28; 29]. The displacement \(\epsilon_{\rm n}\) of the \(n\)th atom from its position in an unstrained lattice is given by
\[\epsilon_{\rm n}=e^{-\alpha_{1}\cdot d_{0}\cdot n}+e^{-\alpha_{2}\cdot d_{0} \cdot n}, \tag{1}\]
where \(d_{0}\) is the out-of-plane distance between the atomic planes for the unstrained lattice. The parameters \(\alpha_{1}\) and \(\alpha_{2}\) were fitted within the model to account for strain relaxation. The contribution of terraces to the asymmetry was omitted.
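To illustrate how the strain profile of Eq. (1) produces asymmetric Laue oscillations, a minimal kinematic sketch (in Python, not the fitting code of [24]) is given below; the relaxation parameters are assumed values, and the displacement is taken in units of \(d_{0}\):

```python
import numpy as np

d0 = 2.238e-10             # alpha-W (110) plane spacing in meters
N_L = 120                  # number of coherently scattering planes
alpha1, alpha2 = 5e8, 5e7  # assumed relaxation parameters (1/m)

n = np.arange(N_L)
eps_n = np.exp(-alpha1 * d0 * n) + np.exp(-alpha2 * d0 * n)  # Eq. (1)
z = (n + eps_n) * d0       # displaced plane positions

q = np.linspace(2.6e10, 3.0e10, 4000)          # scattering vector (1/m)
amp = np.exp(1j * np.outer(q, z)).sum(axis=1)  # kinematic amplitude sum
intensity = np.abs(amp) ** 2  # Bragg peak with asymmetric Laue fringes
```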
Electronic transport measurements of the resistivity and Hall coefficient were performed during warm-up from 10 to 320 K in a cryostat using a closed cycle He compressor. The temperature was controlled stepwise using a
37 W resistance heater and a Cernox® temperature sensor connected to a LakeShore 340 temperature controller. At each temperature step, a waiting time of 450 s was applied to stabilise the temperature. The resistivity was measured at remanence using a 4-point van der Pauw method including reversed polarity measurements [30]. A current of 0.001 A was applied by means of a Keithley 2400 SourceMeter. A Keithley 2182A NanoVoltMeter was used to measure the voltage.
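As a sketch of the data reduction (our assumption of the standard van der Pauw relation; the instrument and averaging details above are not reproduced), the resistivity follows from solving the van der Pauw equation for the sheet resistance:

```python
import numpy as np
from scipy.optimize import brentq

def vdp_resistivity(R_A, R_B, t):
    """Resistivity from the van der Pauw relation
    exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1, solved for the sheet
    resistance R_s; rho = R_s * t (R_A, R_B in ohm, t in m)."""
    f = lambda R_s: np.exp(-np.pi * R_A / R_s) + np.exp(-np.pi * R_B / R_s) - 1.0
    R_s = brentq(f, 1e-6, 1e6)   # bracket assumed wide enough
    return R_s * t

# Illustrative call with hypothetical four-point resistances for a 100 Angstrom film:
rho = vdp_resistivity(R_A=0.5, R_B=0.6, t=100e-10)
print(rho * 1e8, "micro-ohm cm")   # 1 ohm*m = 1e8 micro-ohm*cm
```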
To measure the Hall coefficient, nine magnetic fields of increasing flux densities between -0.5 and 0.5 T were applied using a GMW Model 5403 electromagnet and a Kepco BOP 20-50MG power supply. The magnetic field was measured using a Hall probe and a LakeShore 455 Gaussmeter. The same 4-point probe setup as for the resistivity measurements was used to measure the Hall coefficient. To determine the Hall coefficient, the current was applied along one diagonal of the sample, while the Hall voltage was measured perpendicular to it. The Hall coefficient was determined from the slope of the measured field dependent Hall voltages [31]. At each field step, the Hall-voltage was determined in two orientations of current and voltage, perpendicular to each other, to account for geometric effects in the pin placement. In each orientation, the current direction was alternated about once per second in a delta-measurement to account for electromotive forces [32], thermally induced through minuscule temperature gradients inside the sample. A HP 3488A Switch/Control unit was used to automatically change pin connections for resistivity and the Hall coefficient measurements. The error bars for the resistivity measurements represent a statistical standard deviation of 20 repeated measurements, while the error bars for the Hall coefficient depict the 1\(\sigma\) confidence interval of the fit to the Hall voltage dependence on the magnetic field.
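A minimal sketch of how the Hall coefficient can be extracted from the field-dependent Hall voltages, assuming the standard thin-film relation \(V_{H}=R_{H}IB/t\); the synthetic data below are purely illustrative:

```python
import numpy as np

def hall_coefficient(B, V_H, I, t):
    """R_H from the slope dV_H/dB of a linear fit, using V_H = R_H*I*B/t;
    returns R_H and a 1-sigma uncertainty from the fit covariance."""
    coeffs, cov = np.polyfit(B, V_H, 1, cov=True)
    slope = coeffs[0]
    return slope * t / I, np.sqrt(cov[0, 0]) * t / I

# Nine fields between -0.5 and 0.5 T; voltages are synthetic (hypothetical)
B = np.linspace(-0.5, 0.5, 9)
rng = np.random.default_rng(0)
V_H = -1.24e-10 * 1e-3 * B / 1000e-10 + rng.normal(0, 1e-8, B.size)
R_H, dR_H = hall_coefficient(B, V_H, I=1e-3, t=1000e-10)   # t = 1000 Angstrom
```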
Scanning transmission electron microscopy (STEM) measurements were performed to combine reciprocal and real space information from the same spatial location of the sample at high resolution. Selected samples were examined in cross-section geometry using Titan Themis 200 from FEI operated at 20000 V. The cross-section lamellae of W/CoFe bilayered films were prepared perpendicular to the side of the samples using a focused ion beam (FIB) Zeiss FIB/SEM Crossbeam 550 with Ga Ion-Sculptor gun system. The final polishing was performed at 5000 V ion acceleration voltage with XeF\({}_{2}\) gas assistance.
### Density functional theory calculations
Density functional theory (DFT) [33] was employed at 0 K to explore the atomic and electronic structure of two interfaces, namely W (110) on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) with W [1\(\bar{1}\)1]\(||\)Al\({}_{2}\)O\({}_{3}\) [0001] as well as W (001) on MgO (001) with W [100]\(||\)MgO [110]. The Vienna _ab initio_ simulation package was used. The projector augmented wave potentials were chosen for the basis set [34; 35; 36] and the generalized gradient approximation, as parameterized by Perdew, Burke, and Ernzerhof [37], was used to describe the exchange-correlation effects. The Blöchl correction was employed [38] for the interfaces and an integration over the Brillouin zone was performed with the Monkhorst-Pack approach [39] with a _k_-point mesh of 4\(\times\)4\(\times\)1 for W (110) on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) (198 atoms) and 8\(\times\)8\(\times\)1 for W (001) on MgO (001) (112 atoms). The orthorhombic description of corundum Al\({}_{2}\)O\({}_{3}\) was used to construct the interfaces [40]. Oxygen termination of the Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) surface was assumed as it is reported to be a more likely match of the actual substrate surface in the experiments [41; 42]. Six atomic layers of W were taken into account, whereby W atoms were placed at the top position of O atoms. The convergence criterion for the total energy was 0.01 meV and the cut-off energy was 500 eV. All interfaces were constrained to the calculated lattice parameters of bulk Al\({}_{2}\)O\({}_{3}\) and MgO at 0 K, acting as substrates. To construct these interfaces, a vacuum layer was inserted perpendicularly to the interface with a thickness of 10 A. The bottom layer of each substrate was frozen to mimic the infinite bulk. The interfaces were characterized by a work of separation \(W_{S}\)[40; 43], calculated from the total energy change per unit area upon separation of the corresponding slabs. All counterparts were fully relaxed at 0 K. The electronic structure was characterized by evaluating electron density distributions.
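For reference, the work of separation used to characterize the interfaces reduces to a simple energy balance. The sketch below assumes total slab energies in eV and cell areas in Å²; the example numbers are hypothetical, not the calculated values of this work.

```python
EV_PER_A2_TO_J_PER_M2 = 16.0218   # 1 eV/Angstrom^2 = 16.0218 J/m^2

def work_of_separation(E_substrate_slab, E_film_slab, E_interface, area):
    """W_S = (E_substrate_slab + E_film_slab - E_interface) / A,
    i.e. the total-energy change per unit area upon separating the slabs."""
    return (E_substrate_slab + E_film_slab - E_interface) / area * EV_PER_A2_TO_J_PER_M2

# Hypothetical slab energies (eV) and interface cell area (Angstrom^2):
print(work_of_separation(-850.0, -120.0, -977.0, 78.0))   # ~1.4 J/m^2
```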
## III Results and discussion
### Growth of epitaxial W thin films
To compare the epitaxial growth of single layers of W on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) and MgO (001) substrates, thin films, 100 A in thickness, were sputtered under the same deposition conditions. Their x-ray scattering patterns are displayed in the upper panel of Fig. 1. We start the discussion on the structural quality by having a closer look at the layering of the films, i.e., the low angle scattering regime displayed in detail in Fig. 1a. Both patterns show pronounced Kiessig fringes [44] up until 10 and 12 degrees in 2\(\theta\) for the films grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) and MgO (001), respectively. The presence of Kiessig fringes up to 10 degrees is typically only found in layers that are flat on a mesoscopic length scale of the order of the in-plane coherence length of the x-ray beam. Scattering length density profiles obtained from the fitting of the reflectivity are shown as insets and confirm the intended substrate/W/Al layering with well defined layer thicknesses on both substrates.
For the thin film grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), the interface widths and surface roughness of the substrate, the 97(1) A W layer, and the 35(1) A Al layer are 2(1), 2(1) and 9(1) A, respectively. The interface widths and surface roughness of the MgO (001) substrate, the 97(1) A W layer, and the 30(1) A Al layer are 2(1), 0(1), and 4(1) A, respectively. Furthermore, for Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), the
reflectivity data could only be fitted by including an additional 5(1) A layer with a roughness of 3(1) A at the substrate/W interface in the fit, with electronic density close to tungsten oxide. Since roughness and thickness of this layer are of the same order of magnitude, the roughness value may not be indicative of the real underlying roughness. A similar observation has been made for Nb growing on sapphire, relating the presence of an
Figure 1: X-ray scattering patterns of 100 Å W layers grown on single crystalline Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) and MgO (001) substrates. The films are capped with 30 Å Al. Indexed peaks relate to the \(\alpha\)-W structure. The highlighted regions in the upper panel are displayed as a) to d) in the panels below. Red curves correspond to fits of a) the reflectivity and b) to d) the indicated Bragg peaks indexed \(hkl\). From the fits, a) the scattering length density \(SLD\) profiles over thickness \(z\) and c) the evolution of out-of-plane lattice spacing \(d_{hkl}\) over \(n\) lattice planes were obtained and are shown as insets. The grey dashed line corresponds to the \(d_{hkl}\) spacing of the equilibrium \(\alpha\)-W structure [15; 16]. For b) and d) no strain profile was applied. The scattering patterns are shifted vertically for clarity.
oxide layer to a kinematic chemical reaction between a film and a substrate [18]. The observation is in line with the reported bonds forming between W atoms growing on Al\({}_{2}\)O\({}_{3}\) and the oxygen atoms of the substrate [45]. As the nominal thicknesses for all samples lie reasonably close (\(<\) 4 % for W and CoFe) to the fitted layer thicknesses, we will continue to refer to the nominal thickness henceforth.
Regarding the diffraction analysis of the W thin film grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), two peaks corresponding to the (110) and (220) \(\alpha\)-W structure Bragg reflections are visible at 40.254(0) and 86.962(1) degrees, corresponding to the out-of-plane atomic distances \(d_{110}\) and \(d_{220}\) in W of 2.239(0) and 1.119(0) A, respectively. Since the Bragg peaks of the \(\beta\)-W structure lie within a few degrees of the observed (110) and (220) peaks [46, 47, 48], reciprocal space mapping was conducted around the \(\alpha\)-W (002) reflection at 58 degrees [15, 16, 49] in \(2\theta\). At a tilt \(\chi\) of the sample by 45 degrees perpendicular to the scattering plane, we observed a sharp peak, confirming the phase-pure epitaxial growth of \(\alpha\)-W on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) with [110] out-of-plane growth direction under the above mentioned conditions. For simplicity, the \(\alpha\)-W is referred to as W for growth on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) henceforth. The sharp peaks, namely Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) and (22\(\bar{4}\)0), are attributed to the substrate, while the broad bump at around 18 degrees is attributed to the amorphous capping layer.
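The quoted out-of-plane distances follow directly from Bragg's law; a small sketch, assuming the Cu \(K_{\alpha_{1}}\) wavelength of 1.5406 Å:

```python
import numpy as np

LAMBDA = 1.5406  # Cu K_alpha1 wavelength (Angstrom)

def d_spacing(two_theta_deg, lam=LAMBDA):
    """Bragg's law: lambda = 2 d sin(theta)."""
    return lam / (2.0 * np.sin(np.radians(two_theta_deg / 2.0)))

print(d_spacing(40.254))  # -> ~2.239 Angstrom, the W (110) spacing
print(d_spacing(86.962))  # -> ~1.119 Angstrom, the W (220) spacing
```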
In addition, around the W (110) and (220) Bragg peaks, magnified in Fig. 1b and d, respectively, symmetric Laue oscillations are visible over a range of more than 10 degrees. The occurrence of the Laue oscillations is proof of a high degree of coherent scattering and, therefore, high crystal quality over the total thickness of the W layer [18, 50]. As defects and dislocations give rise to coherent diffuse scattering and do not contribute to the observed intensity of these oscillations, the shape and decay of the Laue oscillations can be used as a quantitative measure for the crystal quality of epitaxial thin films [50]. The diffracted intensity around the W (110) and (220) Bragg peaks was fitted in order to identify the degree of coherent scattering. Fits are shown as red lines in the respective figures. The symmetry of the oscillations indicates a negligible degree of strain in the 100 A thick W layer [51], which is in line with the reports on epitaxial W growth on sapphire by pulsed laser deposition [52]. Hence, no strain profile was included in the fitting for the sample grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0).
As is evident from Fig. 1, the fitting captures the features of the diffraction pattern. The main (110) Bragg peak intensity, however, is not entirely captured, with the fitted interface roughness being underestimated. Based on both fits, 99.9 % of the pure W layer scatters coherently. It has to be noted, however, that the thickness of the tungsten oxide resembling interface layer in the XRR fitting was not included in this calculation due to its unknown crystal structure. Including it by assuming the obtained average W layer spacing of the film above yields a reduced percentage of 94.7 %. The fitted average out-of-plane distances \(d_{110}\) and \(d_{220}\) lie within 0.1 % of the previously determined values. The interface roughnesses, contributing to the decay of the intensity of the Laue oscillations with increasing angular distance from the W (110) and (220) main peaks, are fitted to be 2 and 5 A, respectively, and thus in agreement with the fitted roughnesses based on the reflectivity data.
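The quoted percentages of coherent scattering amount to comparing the coherently scattering thickness \(N_{\rm L}d_{hkl}\) from the diffraction fit with the total layer thickness from XRR; a sketch with a hypothetical plane count:

```python
def coherent_fraction(N_L, d_hkl, t_xrr):
    """Fraction of the layer scattering coherently: coherent thickness
    N_L * d_hkl relative to the XRR-determined layer thickness."""
    return N_L * d_hkl / t_xrr

# Hypothetical plane count for the 97 Angstrom W layer with d_110 = 2.239 A:
print(coherent_fraction(N_L=43, d_hkl=2.239, t_xrr=97.0))   # ~0.99
```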
On MgO (001), the only observed specular W peak is at 57.404(2) degrees and corresponds to a \(d_{002}\) of 1.604(0) A of the \(\alpha\)-W structure. For simplicity, \(\alpha\)-W is henceforth also referred to as W for growth on MgO (001). The sharp MgO (002) and (004) peaks are attributed to the single crystalline substrate. W is growing epitaxially in the [001] growth direction on MgO (001), in line with results from previous studies [53]. However, in this study, Laue oscillations are observed on the low angle side, as visible in the magnified display of the W (002) Bragg peak in Fig. 1c. The oscillations are less pronounced, decay faster, and are more asymmetric compared to W grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). In the fitting, the asymmetry is accounted for by a strain profile. The fitted variation of the out-of-plane spacing \(d_{002}\) as a function of lattice planes \(n\) across the W layer thickness is shown in the inset. The out-of-plane spacing seems to be linearly decreasing over the W layer thickness, lying around 2 % above the equilibrium lattice spacing of bulk W [15, 16]. Lattice mismatch between film and substrate gives rise to misfit strain in epitaxial thin films causing a change in out-of-plane lattice spacing over film thickness. The origin of strain in W grown on MgO (001) will be discussed below. Based on the fitting, 80.4 % of the W layer grown on MgO (001) scatters coherently. The fitted \(d_{002}\) lies within 0.1 % from the previously determined value, and the interface roughness of 5 A is in agreement with the roughness obtained from XRR.
The degree of coherent scattering of the W layer can, as discussed above, be used as a quantitative measure of the crystal quality of epitaxial W, which is higher for Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) than that for MgO (001). Moreover, the observed difference in the crystal quality manifests itself in the peak intensity distribution in reciprocal space. The peak intensity of the Bragg peaks is one to three orders of magnitude higher for W on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) compared to W on MgO (001). This difference is related to a larger mosaic spread of W grown on MgO (001), as can be seen in the upper panel of Fig. 2. The rocking curve around the W (110) peak for W grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) is sharp and has a full width at half maximum (FWHM) of 0.02(0) degrees. The peak intensity of the rocking curve around the W (002) peak for W grown on MgO (001) is distributed over a two orders of magnitude wider angular range (FWHM = 2.19(5) degrees). The instrument resolution of 0.012 degrees was taken into account for determining these values. The mosaic spread, i.e. the misorientation of the W atomic planes relative to each other, is larger, corresponding to a lower crystal quality of the film grown on MgO (001).
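Correcting a measured rocking-curve width for the instrument resolution can be sketched as below; whether widths subtract in quadrature (Gaussian) or linearly (Lorentzian) depends on the assumed line shape, and the inputs here are hypothetical measured values rather than the corrected numbers quoted above:

```python
import numpy as np

def intrinsic_fwhm(fwhm_measured, fwhm_instrument=0.012, profile="lorentzian"):
    """Deconvolve the instrument resolution (degrees) from a measured
    rocking-curve FWHM: Lorentzian widths add linearly, Gaussian widths
    add in quadrature."""
    if profile == "gaussian":
        return np.sqrt(fwhm_measured**2 - fwhm_instrument**2)
    return fwhm_measured - fwhm_instrument

print(intrinsic_fwhm(0.032))                       # hypothetical measured width
print(intrinsic_fwhm(0.032, profile="gaussian"))
```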
As the mosaic spread of the epitaxial W thin film on
Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) is low, it is possible to grow relatively thick films still exhibiting a high degree of coherent scattering. The diffractogram around the W (110) Bragg peak of a 1000 A thick W layer is shown in the lower panel of Fig. 2. Even for this tenfold larger layer thickness, W grows epitaxially with [110] growth direction on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). The Bragg peak position relates to an average \(d_{110}\) spacing over 1000 A of 2.240(0) A, a deviation of less than 0.08 % from \(d_{110}\) for 100 A W. The Laue oscillations observed on both sides are symmetric and can be observed over a range of more than 4 degrees in 2\(\theta\), being proof of a small degree of strain over the W layer thickness. To the best of the authors' knowledge, the Laue oscillations of W have only been observed for thin films of 30 A layer thickness [52]. The existence of the Laue oscillations for a film thickness of 1000 A is proof of the superior crystal quality of the W films deposited in this study. The FWHM of 0.04(0) degrees of the (110) rocking curve for 1000 A W, shown in the inset, is comparable to the value for 100 A W. However, the shape of the rocking curve for 1000 A W includes two features indicating two different correlation lengths; a narrow feature, almost resolution limited, and a broader triangularly shaped feature. Similar thickness dependent observations of features in rocking curves have been reported for epitaxial Nb (110) grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) [18], where the broad feature is more pronounced for thicker films. For Nb, Wildes _et al._[18] show that flat growth planes over long length scales give rise to the narrow component, while strain and misfit dislocations causing height deviations give rise to the broader feature in the rocking curve. Hence, a semicoherent growth mode is expected for W (110) on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), especially at the substrate/W interface.
Hetero-epitaxy is restricted to specific relative orientations of substrate and film, as lattice matching is required. On Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), epitaxial W (110) growth is observed due to a match between these two crystal planes at the substrate interface. The W (110) crystal plane has a rectangular atomic shape with an atomic distance \(d_{110}\) on one side and \(d_{001}\) on the other side. The Al\({}_{2}\)O\({}_{3}\) [0001] and [1\(\bar{1}\)00] directions span the corresponding Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) plane [52; 54].
Figure 3: W {112} x-ray pole figure measured on a 1000 Å W layer grown on single crystalline Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). The pole figure displays the polar angle \(\chi\) (sample tilt) and the azimuthal angle \(\phi\) (sample rotation). Specific \(\chi\) and \(\phi\) values relevant for the discussion are marked as red circles and blue lines, respectively.
Figure 2: Top panel: Rocking curve measurements around the specular \(\alpha\)-W Bragg peak of 100 Å W layers grown on single crystalline Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) and MgO (001) substrates. For Al\({}_{2}\)O\({}_{3}\) the measurement was conducted around the W (110) Bragg peak and for MgO the measurement was conducted around the W (002) Bragg peak. In order to visually compare the widths, the rocking curve of the sample grown on MgO was multiplied by a factor of 200. Bottom panel: X-ray diffraction pattern around the W (110) Bragg peak of 1000 Å W grown on single crystalline Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). The rocking curve for this peak is displayed in the inset.
It is reported that a W (110) growth orientation with an in-plane rotated unit cell is energetically favored to match a rectangular structure on the Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) surface [55; 54; 18; 52].
Based on the work of Mc Grath _et al._[52], this rotation is calculated to be 54.7 degrees relative to Al\({}_{2}\)O\({}_{3}\) [0001] or 35.3 degrees relative to [1\(\bar{1}\)00]. To further investigate the in-plane orientation of the W unit cell in our thin films, a W {112} pole figure measurement was conducted on a 1000 A thick W layer. The results are displayed in Fig. 3. The off-specular W {112} peaks are expected to be observed in diffraction at the incident angle \(\theta\) corresponding to \(d_{112}\). However, to obtain a W {112} plane in diffraction, the sample needs to be tilted by a certain polar angle \(\chi\) and rotated by the azimuthal angle \(\phi\) based on the in-plane orientation of the unit cell relative to the (110) out-of-plane orientation. The pole figure displays the polar angle \(\chi\) (sample tilt) and the azimuthal angle \(\phi\) (sample rotation). At \(\phi=0\) degrees, the sample edges, corresponding to the Al\({}_{2}\)O\({}_{3}\) [0001] and [1\(\bar{1}\)00] directions, are oriented 90 and 180 degrees to the incoming x-ray beam. Sharp peaks are observed at specific \(\chi\) and \(\phi\) angles, confirming epitaxial growth of 1000 A W on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). The angles between [110] and \(<\)112\(>\) are all either 30.0, 54.7, 73.2, or 90.0 degrees, depending on the specific crystallographic plane from the \(<\)112\(>\) family. These \(\chi\) angles are displayed as red circles. Within the resolution of the measurements related to the angles \(\chi\) and \(\phi\), the {112} peaks are observed at these specific sample tilts, in line with the expected small degree of strain in the epitaxial W film. Moreover, a {112} peak is observed at \(\phi=0\) and 180 degrees for \(\chi=30.0\) degrees. Therefore, a W \(<\)112\(>\) crystallographic direction is assumed to be parallel to the edge of the substrate, which is either [0001] or [1\(\bar{1}\)00] spanning the Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) plane.
In order to determine the relative crystal orientation of the substrate and the film, an Al\({}_{2}\)O\({}_{3}\) {11\(\bar{2}\)0} pole figure was measured for \(\phi=\) -10 to 110 degrees and \(\chi=0\) to 80 degrees. The only visible {11\(\bar{2}\)0} peak measured within this range is at \(\phi=0\) degrees and for a sample tilt of \(\chi=60\) degrees. In a hexagonal crystal with (2\(\bar{1}\)10) orientation, (11\(\bar{2}\)0) satisfies the diffraction criterion for a 60 degrees sample tilt \(\chi\) with [0001] rotation axis. Therefore, at \(\phi=0\) degrees, [0001] is parallel to a W {112} plane and thus perpendicular to the respective \(<\)112\(>\) direction. Hence, our experimental diffraction study confirms the epitaxial relationships reported by Mc Grath _et al._[52]: W[111]\(\parallel\)Al\({}_{2}\)O\({}_{3}\)[0001] and W[1\(\bar{1}\)2]\(\parallel\)Al\({}_{2}\)O\({}_{3}\)[1\(\bar{1}\)00] or W[1\(\bar{1}\)1]\(\parallel\)Al\({}_{2}\)O\({}_{3}\)[0001] and W[1\(\bar{1}\)2]\(\parallel\)Al\({}_{2}\)O\({}_{3}\)[1\(\bar{1}\)00].
The lattice mismatches for these epitaxial relationships are 7.2 % and 19.4 % along the W [111] and W [\(\bar{1}\)12] directions, respectively [15; 56; 16]. The mismatch is expected to cause misfit strain in the growing W layer, eventually leading to the formation of misfit dislocations and strain release above the critical thickness for fully coherent growth. Due to the positive Poisson ratio of 0.284, determined from reported elastic constants [57], the strain in W is expected to be tensile in-plane and compressive out-of-plane since both atomic distances in Al\({}_{2}\)O\({}_{3}\) are larger compared to the corresponding atomic distances in bulk W. For comparison, for the epitaxial growth of similar sized Nb (110) on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), lattice mismatches of 1.9 % and 12.9 % are reported in the two different crystallographic directions [55] and the critical thickness is reported to be less than 100 A [18]. Hence, for the W films grown within this study, we expect the formation of misfit dislocations for strain release giving rise to coherent diffuse scattering, which reduces the coherence length of the epitaxial crystal [58]. However, part of this dislocation formation is expected directly at the substrate/film interface for two reasons: First, in the stated epitaxial relationship, some W atomic positions do not coincide with atomic positions of the Al\({}_{2}\)O\({}_{3}\) lattice, but with octahedral interstices in the Al\({}_{2}\)O\({}_{3}\) lattice
Figure 4: Results of density functional theory calculations of 6 monolayers of a) W (110) on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) with W [1\(\bar{1}\)1]|Al\({}_{2}\)O\({}_{3}\) [0001] and b) W (001) on MgO (001) with W [100]|MgO [110]. A schematic of the atomic structure and the electron density distributions of the interface are displayed in the middle and to the right of each figure, respectively. In-plane and out-of-plane atomic distances at the interfaces and in the W layers are indicated. The work of separation \(W_{S}\) is displayed on the left side. For Al\({}_{2}\)O\({}_{3}\), oxygen surface termination is assumed.
[18; 52]. Therefore, additional atomic relaxation is expected at the interface. Second, a miscut is common for the Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) substrates [18; 22] that causes atomic steps and terraces with an incommensurate step height of \(d_{0006}\). Therefore, the terraces will propagate into the growing film as defects. In between the defects, coherent regions with well-defined translational order are expected [18]. The presence of defects at the substrate/film interface is supported by the necessity to include an additional layer into the XRR fitting for both W and Nb [18] grown with (110) texture on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). The defects, as well as this additional interface layer, are expected to reduce the misfit strain in W already at the interface, below the critical thickness. For Nb (110) on Al\({}_{2}\)O\({}_{3}\), residual epitaxial strain, which depends on the layer thickness, is reported to usually lie below 0.05 % [17]. Hence, growth with reduced misfit strain above the interface is expected for W on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), in line with the result that 99.9 % of the relaxed W layer on top scatters coherently.
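The mismatch values quoted above can be reproduced with the simple misfit definition we assume from the Wildes _et al._ approach; the example spacings are placeholders, not values asserted by this work:

```python
def lattice_mismatch(d_substrate, d_film):
    """Misfit (percent), assumed convention: (d_substrate - d_film) / d_film."""
    return (d_substrate - d_film) / d_film * 100.0

# Placeholder atomic distances (Angstrom) along one in-plane direction:
print(lattice_mismatch(d_substrate=3.4, d_film=3.165))   # positive -> tensile in-plane
```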
The conundrum of obtaining epitaxy despite a large misfit strain requires further investigation. For that reason, DFT calculations of the epitaxial Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0)/W (110) interface structure were performed. The results on the crystal structure including the corresponding electron density distribution are displayed in Fig. 4a. Based on these, strain is introduced into the growing W layer at the interface, as evident from the tetragonal distortion of the W slab. To match the in-plane atomic distances in the substrate, the in-plane atomic spacing \(d_{1\bar{1}0}\) at the interface was calculated to be 2.774 A, an increase of 23 % compared to 2.242 A [15; 16] at equilibrium, which is in line with the expected in-plane tensile strain. In the out-of-plane direction, the calculations predict a significantly larger \(d_{110}\) = 2.671 A between the first and second atomic layers compared to the interplanar spacing of the following monolayers. Furthermore, the DFT calculations indicate that the first atomic layer of W exhibits a buckled atomic structure. The center atom in the (110) atomic plane has a larger distance to the interface as compared to the oxygen bound corner atoms in the unit cell. Such buckling often occurs due to high interfacial strains [59]. The exceptionally large interplanar spacing between the first and second atomic W layer in combination with the buckling of the atomic structure can be assigned to different chemical and structural properties of W directly at the interface, in agreement with the tungsten oxide resembling interface layer included in the XRR fitting as well as the expected strain relaxation at the interface. From the electron density distribution of the interface structure, a change in electron density of the first three monolayers is evident, showing an out-of-plane elongation of the area of high electron density around the atomic positions. First-principles calculations on the similar Al\({}_{2}\)O\({}_{3}\) (0001)/Nb (111) system also show appreciable interlayer relaxation near the interface for an oxygen terminated substrate surface [60]. However, all calculated atomic distances in W are larger than the equilibrium \(d_{011}\) of 2.242 A [15; 16]. This out-of-plane elongation is in contrast to the expected compressive strain in this direction and to the experimentally observed smaller \(d_{011}\) in the out-of-plane direction of around 2.239(0) and 2.240(0) A for the 100 and 1000 A thick W layers, respectively. It should be remarked that the high interfacial strains and hence high atomic relaxations are partly due to the interface size. Classical molecular dynamics modelling may reveal these particularities, but we are not aware of any available interatomic potentials.
The calculated work of separation \(W_{S}\) for the Al\({}_{2}\)O\({}_{3}\)/W (110) system with W [111]\(\parallel\)Al\({}_{2}\)O\({}_{3}\)[0001] is 1.44 J/m\({}^{2}\), corresponding to an intermediately strong interface where epitaxy may be possible. For substrate/film combinations known for their epitaxial growth, like Nb(111) on Al\({}_{2}\)O\({}_{3}\)(0001) or Cu(111) on Al\({}_{2}\)O\({}_{3}\)(0001), larger values for \(W_{S}\) are reported, namely 12.7 [61] and 5.48 J/m\({}^{2}\)[62], respectively. A comparable work of separation of 2.86 J/m\({}^{2}\) is reported for V\({}_{2}\)AlC(0001) on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) [40]. The interface of Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0)/V\({}_{2}\)AlC(0001) was characterized as semicoherent, i.e. having coherent regions separated by misfit dislocations due to a lattice mismatch of 8.16 % [40].
Results of temperature dependent electronic resistivity measurements of 1000 A W grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) are displayed in Fig. 5. The statistical error of each data point is of the order of \(1\times 10^{-4}\)\(\mu\Omega\)cm and thus smaller than the symbols depicted. The data exhibits a typical Bloch-Grüneisen metallic behaviour with a residual resistivity ratio (RRR) of 3.14. The ratio is comparable to the value reported by Choi _et al._[63], with an RRR of around 4 for annealed samples. Hence we conclude that the defect density in the present samples is comparable to the state of the art of epitaxial tungsten films grown with sputtering. While the previous discussion of the XRD data strongly suggests excellent crystal quality, the measured RRR in this case remains relatively low. One possible explanation for this disparity could lie in the origin of the XRD and resistivity signals. The XRD intensities arise from coherent scattering within the coherence
Figure 5: Temperature dependent electronic resistivity and Hall coefficient measurements of a 1000 Å thick W layer grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0).
volume of the x-ray beam, typically limited to a few thousand Angstroms. Conversely, the resistivity signal probes the entire sample area, with the probing pins separated by several millimeters.
The Hall coefficient continuously decreases from 0.5\(\times\)10\({}^{-10}\) m\({}^{3}\)/C at 10 K to -1.24\(\times\)10\({}^{-10}\) m\({}^{3}\)/C at 320 K, crossing zero at 86 K. Care is needed when comparing results of measurements on single crystals to polycrystalline materials, since the Hall effect depends on the crystallographic direction [64, 65, 31]. An increase in the crystallographic defect density is expected to increase the resistivity and alter the Hall coefficient. However, the influence of defects on the Hall coefficient is due to their impact on the anisotropy of the scattering rates rather than their absolute value [66]. Furthermore, the Fuchs-Sondheimer model of surface scattering predicts an increase of the resistivity and the Hall coefficient with decreasing film thickness [63, 67], with the present film being reasonably close to the bulk regime, where finite size effects do not seriously affect the results. The behaviour of the Hall coefficient versus temperature in Fig. 5 exhibits the same trend as that of \(\alpha\)-W films grown on thermally oxidized Si, as reported by Hao _et al._[9]. \(\beta\)-W, on the other hand, exhibits consistently negative Hall coefficients at temperatures between 10 and 300 K [9]. From the electronic transport we therefore confirm that the 1000 A W thin film is phase pure \(\alpha\)-W. Differences in the absolute values of the Hall coefficients compared to Hao _et al._[9] are likely due to the large difference in defect densities and/or crystallographic orientation between the epitaxially grown films here and the textured films grown by Hao _et al._[9]. Bastl [68] concluded from Hall measurements on thin polycrystalline films of W that the presence of grain boundaries together with defects resulted in a suppression of the Hall coefficient as compared to the bulk value, which is supported by our findings.
In contrast to the epitaxial W thin film grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), W grown on MgO (001) exhibits the [001] out-of-plane growth direction. Epitaxial growth of W on MgO (001) was confirmed as off-specular peaks were measured as sharp reflections occurring at specific sample tilts \(\chi\) and rotations \(\phi\) in reciprocal space. Measurements of the off-specular W {112} reflections revealed a relative in-plane rotation of 45 degrees between the W [100] and MgO [100] directions, i.e. W [100] is oriented parallel to MgO [110], confirming the observed epitaxial relationship reported elsewhere [53]. This in-plane rotation allows for a smaller mismatch of around -6.1 % between the respective atomic distances in film and substrate [15, 16, 53]. At elevated deposition temperatures like 843 K, the lattice mismatch is expected to be even smaller due to different thermal expansion coefficients of substrate and film [53]. Such a lattice rotation has been reported for other epitaxially growing thin films on MgO (001) substrates with similar atomic distances to W, e.g. Fe [69, 70, 71], V, Cr, Hf [23] or alloys thereof [72]. The mismatch is negative [18], yielding compressive in-plane and tensile out-of-plane strain in W and hence, a tetragonally distorted unit cell. It is the opposite strain state as compared to W grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). Hints of the expected tetragonal distortion of the unit cell can be found in the asymmetry of the Laue oscillations for W grown on MgO (001), displayed in Fig. 1c. The critical thickness for the introduction of misfit dislocations of W grown at 1173 K on MgO (001) with a mismatch of -5.1 % is reported to be 27 A, well below the W thicknesses in this study. Hence, misfit dislocation formation is expected for all samples of W grown on MgO (001). The variation of the out-of-plane atomic distance \(d_{002}\) over the thickness of the W layer lies above the equilibrium spacing for relaxed W [15, 16], in line with tensile out-of-plane strain. Over the layer thickness, \(d_{002}\) is decreasing towards the equilibrium value, indicating a partial strain relaxation.
These observations are in agreement with DFT calculations on the crystal structure of the MgO (001)/W (001) interface. The results of these calculations are displayed in Fig. 4b. The in-plane \(d_{100}\) lattice distance is calculated to be 3.004 A and, thus, smaller than the equilibrium value [14, 15, 16], but in line with the expected compressive in-plane strain. It appears that W is tetragonally distorted for the first six monolayers, whereby the out-of-plane interplanar spacing is larger than that of the equilibrium configuration, as expected for the tensile strain state in this direction. The spacing oscillates between roughly 1.60 and 1.69 A for every other layer. This highlights that the expected strain might affect atoms at different positions of the bcc cell differently, possibly due to different in-plane positions in relation to the substrate. Another reason may be the employed registry, since lateral relaxations may partly be inhibited due to the size of the interface considered by DFT.
The calculated work of separation \(W_{S}\) is similar to the value calculated for W (110) on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). The interface is thereby characterized to be intermediately strong, allowing for epitaxy, possibly locally or up to low W thicknesses. The interfacial bonds between tungsten and the oxygen atoms of the substrate, however, are weaker in the case of MgO (001). This difference is also visible in the electron density map. In the case of both interfaces, the interfacial bonds are characterized by covalent contributions (charge sharing) and ionic contributions (charge transfer). The bonds between W and O atoms across the Al\({}_{2}\)O\({}_{3}\)/W interface are shorter and thus stronger than those in the case of MgO, but a more transparent comparison should be made by comparing the bond lengths in the substrates (bulk counterparts). The Mg-O bond length is 2.124 A, while the corresponding interfacial bond (W-O) is 2.249 A. On the other hand, the Al-O bond length in the substrate is 1.874 A and the corresponding W-O bond is 2.157 A. Hence, a stacking sequence together with an expected bond length is better reproduced for W on MgO (001), which is mirrored in the slightly higher work of separation.
We established that the epitaxial growth and quality of W thin films are highly dependent on substrate and thin film thickness. Following this, the dependence on the deposition temperature will now be discussed.
Fig. 6 shows a diffraction pattern around the W (220) Bragg peak of two 100 A thick W thin films grown at different deposition temperatures on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) substrates. With an increase in deposition temperature of 50 K, the Bragg peak position shifts slightly towards higher \(2\theta\) angles, corresponding to an atomic spacing \(d_{220}\) of 1.119(0) A and 1.118(0) A for 793(2) and 843(2) K, respectively. A more distinct difference is visible in the Laue oscillations around the main peak, which are visible over an angular range of 8 and 11 degrees for 843(2) and 793(2) K, respectively.
While the Laue oscillations are symmetric for the sample grown at 793(2) K, they show an asymmetry for the sample deposited at 843(2) K, decaying faster on the low than on the high angle side. In contrast, epitaxial W grown on MgO (001) or Fe on MgAl\({}_{2}\)O\({}_{4}\) (001) [51] exhibit Laue oscillations around the main Bragg peaks which decay faster on the high than on the low angle side. This difference in decay is attributed to the opposite strain state and therefore, tensile instead of compressive out-of-plane strain. Hence, for W grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0), the out-of-plane lattice spacing is expected to increase over the layer thickness due to strain relaxation. The change in symmetry with growth temperature observed in Fig. 6 shows that the introduction of misfit dislocations for strain relaxation in W thin films is thermal energy dependent. The increase in deposition temperature by 50 K might lead to an increase in adatom mobility at the interface, partly preventing relaxation through the introduction of misfit dislocations. For the film grown at 843(2) K misfit dislocations are possibly incorporated into the growing W layer at slightly larger layer thicknesses while relaxation is expected to take place at the substrate/W interface for W grown at 793(2) K. This is in line with the smaller interplanar spacing of the sample deposited at 843(2) K, since it corresponds to an average over the W layer thickness. The degree of the observed asymmetry in this sample is, however, smaller than expected from the large lattice mismatch for Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0)/W (110), indicating that a large part of the strain is still released at the substrate/W interface. For W on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) grown at 843(2) K, the relative range of the oscillations on the faster decaying side is roughly 38 % of the whole range of oscillations. For W (001) grown on MgO (001) with a smaller lattice mismatch of -6.5 % and Fe (001) on MgAl\({}_{2}\)O\({}_{4}\) (001) with a lattice mismatch of only -0.2 % the relative range spans 29 % and 15 % [51], respectively.
### Growth of tungsten and CoFe bilayers
Building on the knowledge about epitaxial W single layer growth, W/CoFe bilayers of different layer thicknesses have been deposited. X-ray scattering patterns of a "thick" 60 A W/100 A CoFe and a "thin" 30 A W/25 A CoFe bilayer sample both grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) are displayed in Fig. 7. In the small angle regime, Kiessig fringes [44] are visible. As their spacing relates to the total film thickness including all layers, broader fringes are observed for the "thin" bilayer sample. The fringes decay at around 10 and 8 degrees in \(2\theta\) for the "thick" and "thin" bilayer sample, respectively.
In diffraction, sharp peaks corresponding to the Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) and (22\(\bar{4}\)0) reflections of the substrate are visible for both samples. Two peaks can be attributed to epitaxially growing W with (110) out-of-plane orientation, namely W (110) and (220), in line with the observations for W single layers on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) substrates. Based on scans around off-specular reflections, the sole presence of the \(\alpha\)-W structure is confirmed even for the thinnest W layer in this study with a thickness of 30 A. The W peak positions relate to atomic spacings \(d_{110}\) and \(d_{220}\) of 2.235(0) and 1.118(0) A for the "thick" sample and 2.221(0) and 1.118(0) A for the "thin" bilayer sample. The change in \(d_{110}\) and \(d_{220}\) with W layer thickness between 30 and 1000 A is below 0.9 %, in agreement with the described strain relaxation at the substrate/W interface and not over the W layer thickness.
For the "thick" sample, peaks corresponding to the CoFe (110) and (220) Bragg reflections [15; 73] are observed at 44.981(5) and 99.940(7) degrees in \(2\theta\). As no other peaks in the specular \(2\theta-\theta\) scan can be attributed to CoFe, the layer is assumed to grow at least highly textured, in line with the observed FWHM of the CoFe (220) rocking curve of 0.06(0) degrees. The corresponding \(d_{110}\) and \(d_{220}\) are 2.014(0) and 1.006(0) A, respectively. The reported equilibrium values of CoFe are smaller, 2.007 and 1.004 A for \(d_{110}\) and \(d_{220}\), respectively [15; 73]. The equilibrium interplanar spacings \(d_{001}\) and \(d_{110}\) in CoFe have a mismatch to the corresponding distances in W of approximately 12 % each [15; 16; 73]. Based on its positive Poisson's ratio of 0.397 [74], CoFe (110) is assumed to grow with tensile in-plane and compressive out-of-plane strain on W (110), in line with the measured
Figure 6: X-ray diffraction patterns of 100 Å W layers grown at 793(2) and 843(2) K on single crystalline Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) substrates. The films are capped with 30 Å Al. The indexed peak relates to the \(\alpha\)-W structure.
larger than equilibrium out-of-plane interplanar spacings in this study. Moreover, indications of fully epitaxial growth of parts of the CoFe layer are present in the form of Laue oscillations of two different average oscillation frequencies of approximately \(f_{1}=1.5\) degrees\({}^{-1}\) and \(f_{2}=0.6\) degrees\({}^{-1}\), between 35 and 48 degrees in 2\(\theta\).
This oscillation frequency is directly related to the coherently scattering thickness: the oscillations of the lower frequency \(f_{2}\) can be related to a thickness of 59 A, which is in the order of 98 % of the W layer thickness, while the higher frequency \(f_{1}\) oscillations relate to a thickness of around 147 A, which is in the order of 92 % of the bilayer thickness. The observation of Laue oscillations relating to the bilayer thickness is proof of coherent scattering throughout both layers [25]. In contrast, no Laue oscillations with a frequency relating to the CoFe single layer thickness are observed around the CoFe (110) or (220) Bragg peaks. This can be attributed to multiple possible reasons: First, based on the calculation above, if 98 % of the W layer and 92 % of the bilayer are assumed to scatter coherently, then only 88 % of the CoFe layer scatters coherently. As can be observed in Fig. 1c, the relative intensity of the Laue oscillations decreases substantially with a lower degree of coherent scattering. Second, the scattering intensity relates to the form factor and, hence, the atomic number squared \(Z^{2}\)[25]. Therefore, the scattering intensity from W with \(Z=74\) is expected to be higher by a factor of 7.8 compared to the scattering intensity from the alloy with \(Z=27\) and \(Z=26\) for Co and Fe, respectively.
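The mapping from an average Laue-fringe frequency to a coherently scattering thickness can be sketched as follows, using the kinematical fringe period \(\Delta(2\theta)\approx\lambda/(t\cos\theta)\) and evaluating near \(2\theta\approx 41.5\) degrees (our assumptions; refraction is neglected, so the numbers are approximate):

```python
import numpy as np

LAMBDA = 1.5406  # Cu K_alpha1 (Angstrom)

def coherent_thickness(freq_per_deg, two_theta_deg, lam=LAMBDA):
    """Fringe period in 2theta is ~ lambda/(t cos(theta)) (in radians), so a
    frequency f in degrees^-1 maps to t = f * (180/pi) * lambda / cos(theta)."""
    theta = np.radians(two_theta_deg / 2.0)
    return freq_per_deg * (180.0 / np.pi) * lam / np.cos(theta)

print(coherent_thickness(0.6, 41.5))   # ~57 Angstrom  (lower-frequency fringes)
print(coherent_thickness(1.5, 41.5))   # ~142 Angstrom (higher-frequency fringes)
```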
For the "thin" bilayer sample, only the peak at 45 degrees can be attributed to CoFe. Since its peak position overlaps with the Laue oscillations around the W (110) Bragg peak, a clear identification is difficult. However, the spacing between the peak's position and the Bragg reflection is different compared to the spacing between the Laue oscillations on the lower angle side and the main Bragg reflection, indicating that both do not have the same origin. The intensity of the CoFe (220) Bragg reflection is assumed to lie below the detection limit. However, the presence of a CoFe layer in all samples with finite thickness is confirmed by STEM imaging; the results are displayed in Fig. 9a and b.
Since epitaxial growth relies, just like for W, on a specific substrate, an alternative way of growing epitaxial W/CoFe bilayers was explored using a single crystalline MgO (001) substrate. To optimize the epitaxial growth, the lattice mismatch between MgO (001)/W (001) with 45 degrees in-plane rotation and between MgO (001)/CoFe were compared. The lattice mismatch between \(d_{110}=2.978\) A of MgO (001) [53] and \(d_{200}=2.840\) A of CoFe (001) [15; 73] is 4.4 %, possibly allowing for epitaxial growth with a 45 degrees in-plane rotated CoFe unit cell. A CoFe (110) growth orientation on MgO (001) is likely to be energetically less favored due to the slightly larger lattice mismatch of 4.5 %. Both mismatches are smaller than the expected mismatch between MgO (001) and W (001) with 45 degrees in-plane rotation of -6.5 %. Therefore, the order of the W and CoFe layers was reversed for the growth on MgO (001) in comparison to the bilayer grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0).
An x-ray scattering pattern of a 100 A CoFe/60 A W bilayer is shown in Fig. 8. Pronounced Kiessig fringes [44] are visible in the small angle regime up until 13 degrees
Figure 7: X-ray scattering patterns of W/CoFe bilayers grown on single crystalline Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) substrates. Indexed peaks relate to the \(\alpha\)-W structure. The film corresponding to the upper pattern consists of 60 Å W and 100 Å CoFe capped with 30 Å Al, while the film corresponding to the lower pattern is thinner and consists of 30 Å W and 25 Å CoFe capped with 60 Å Al\({}_{2}\)O\({}_{3}\). The lower pattern is vertically shifted for clarity.
in 2\(\theta\), a larger range compared to the bilayer grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). Based on the fitting of the reflectivity, the interfaces are flatter compared to the bilayer grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). In diffraction, two sharp peaks are observed corresponding to (002) and (004) planes in the MgO (001) substrate. The CoFe (002) Bragg peak is observed in the specular scan at 65.452(6) degrees, in agreement with the calculated energetically favored [001] growth direction on this substrate. The peak position corresponds to a \(d_{002}=1.425(0)\) A, which is close to the chemical-composition-dependent equilibrium value [73; 15; 74]. At 57.905(6) degrees, a peak is observed which is attributed to W (002). Low intensity Laue oscillations are observed around the W (002) peak, indicating epitaxial growth resulting in coherent scattering.
To support the results on crystal structure and epitaxial growth with a real space technique, atomic resolution cross-section STEM images were recorded for bilayers grown on both substrates. A cross-section dark-field STEM image of a W/CoFe bilayer grown on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) is shown in Fig. 9a. The image was recorded on the thin film corresponding to the lower scattering pattern displayed in Fig. 7. High resolution magnifications of the substrate/W and W/CoFe interfaces are displayed in Fig. 9c and b, respectively. Based on the STEM micrographs, the epitaxial growth of W and CoFe layers on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) is confirmed. The atomically sharp boundaries indicate a single-crystalline nature for both layers. Moreover, a few misfit dislocations directly at the substrate/film interface are visible as a consequence of the large lattice mismatch and strain discussed in detail earlier. In addition, the rather blurry interface over a width of 1 to 2 monolayers is an indication of relaxation directly at the interface for W grown on this substrate, in agreement with the results of the x-ray scattering and DFT studies. The presence of atomic terraces with a terrace width of a few tens of Angstroms for the Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) substrate is confirmed based on these images. Misfit dislocations are also observed at the W/CoFe interface, but the crystal structure of the CoFe layer appears to be single crystalline. Hence, semicoherent growth of W/CoFe on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) is confirmed.
For comparison, a cross-section dark-field STEM image of a CoFe/W bilayer grown on MgO (001) is shown in Fig. 9d. High resolution magnifications of the substrate/CoFe and CoFe/W interfaces are displayed in Fig. 9f and e, respectively. The images were recorded on the thin film corresponding to the scattering pattern displayed in Fig. 8. The STEM micrographs confirm the epitaxial growth of W/CoFe bilayers on MgO (001), in agreement with the x-ray scattering results. The substrate/CoFe interface seems to be sharp, with strain visible for at least the first 1 to 2 monolayers of CoFe. Dislocations are visible at the W/CoFe interface; however, the W layer itself seems to grow fully single crystalline above the interface.
## IV Conclusions
The structural properties of epitaxial \(\alpha\)-W thin films deposited on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) and MgO (001) substrates have been studied in in-plane and out-of-plane scattering experiments as well as with real space techniques and electronic transport measurements. Emphasis was given to the overall quality of layering and crystal structure, analyzing the epitaxial relationship and growth mode in combination with _ab initio_ calculations. The crystal quality of W (110) on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0) was found to be higher compared to films of equivalent thickness on MgO (001) even though the lattice mismatch is larger. The improvement in the crystal quality was attributed to a semicoherent growth mode including the introduction of misfit dislocations directly at and in the vicinity of the substrate/film interface, yielding nearly strain-free, highly coherent W layers for thicknesses between 30 and 1000 A. The degree of relaxation at the interface was, however, found to be highly temperature dependent.
Figure 8: X-ray scattering pattern of a 100 Å CoFe/60 Å W bilayered thin film grown on a single crystalline MgO (001) substrate. The film is capped with 30 Å Al. Indexed peaks relate to the \(\alpha\)-W structure.
Furthermore, the epitaxial growth of W and CoFe bilayers on both substrates was found to be of high crystal quality, exhibiting coherent scattering throughout the total bilayer thickness for W/CoFe films sputtered on Al\({}_{2}\)O\({}_{3}\) (11\(\bar{2}\)0). The results of the extensive x-ray scattering analysis on the epitaxial growth were confirmed by real space high resolution STEM imaging. The detailed analysis of the growth of these epitaxial thin films is of technological importance, as bilayers of W and CoFe might be essential in future spintronic applications.
###### Acknowledgements.
VK and PS would like to acknowledge financial support from the Swedish Research Council (Project Nos. 2019-03581 and 2021-0465). GKP acknowledges funding from the Swedish Energy Agency (Project No. 2020-005212). DM acknowledges the financial support from the Olle Engkvist Foundation (project number 217-0023). The computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre (NSC) in Linköping, Sweden, partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
## Data availability
The data that support the findings of this study are available from the authors upon reasonable request.
The authors have no conflicts of interest to disclose.
|
2302.08777 | Hate Speech and Offensive Language Detection using an Emotion-aware
Shared Encoder | The rise of emergence of social media platforms has fundamentally altered how
people communicate, and among the results of these developments is an increase
in online use of abusive content. Therefore, automatically detecting this
content is essential for banning inappropriate information, and reducing
toxicity and violence on social media platforms. The existing works on hate
speech and offensive language detection produce promising results based on
pre-trained transformer models; however, they considered only the analysis of
abusive content features generated through annotated datasets. This paper
addresses a multi-task joint learning approach which combines external
emotional features extracted from other corpora to deal with the
imbalance and scarcity of labeled datasets. Our analyses use two
well-known Transformer-based models, BERT and mBERT, where the latter is used to
address abusive content detection in multi-lingual scenarios. Our model jointly
learns abusive content detection with emotional features by sharing
representations through transformers' shared encoder. This approach increases
data efficiency, reduces overfitting via shared representations, and ensures fast
learning by leveraging auxiliary information. Our findings demonstrate that
emotional knowledge helps to more reliably identify hate speech and offensive
language across datasets. Our hate speech detection Multi-task model exhibited
a 3% performance improvement over baseline models, but the performance gains of the
multi-task models were not significant for the offensive language detection task.
More interestingly, in both tasks, multi-task models exhibit fewer false
positive errors compared to the single-task scenario. | Khouloud Mnassri, Praboda Rajapaksha, Reza Farahbakhsh, Noel Crespi | 2023-02-17T09:31:06Z | http://arxiv.org/abs/2302.08777v1 | # Hate Speech and Offensive Language Detection Using an Emotion-aware Shared Encoder
###### Abstract
The rise of social media platforms has fundamentally altered how people communicate, and among the results of these developments is an increase in the online use of abusive content. Therefore, automatically detecting this content is essential for banning inappropriate information and reducing toxicity and violence on social media platforms. Existing works on hate speech and offensive language detection produce promising results based on pre-trained transformer models; however, they considered only the analysis of abusive content features generated through annotated datasets. This paper addresses a multi-task joint learning approach which combines external emotional features extracted from other corpora to deal with the imbalance and scarcity of labeled datasets. Our analyses use two well-known Transformer-based models, BERT and mBERT, where the latter is used to address abusive content detection in multi-lingual scenarios. Our model jointly learns abusive content detection with emotional features by sharing representations through the transformers' shared encoder. This approach increases data efficiency, reduces overfitting via shared representations, and ensures fast learning by leveraging auxiliary information. Our findings demonstrate that emotional knowledge helps to more reliably identify hate speech and offensive language across datasets. Our hate speech detection multi-task model exhibited a 3% performance improvement over baseline models, but the performance gains of the multi-task models were not significant for the offensive language detection task. More interestingly, in both tasks, multi-task models exhibit fewer false positive errors compared to the single-task scenario.
Social media, Natural Language Processing, Hate speech, offensive language, Twitter, BERT, Multilingual BERT, Multi-task learning, emotional knowledge, shared encoder.
## I Introduction
People have become addicted to social media platforms in recent decades as a means of engaging and connecting with each other. Through social media platforms like Twitter, individuals increasingly communicate and express their opinions and emotions. However, their content can contain harmful information prejudiced against a certain person or group, manifested as abusive language. It is challenging to settle on a single, final definition of hate and offensive language, but in general, according to the United Nations1, "hate speech" refers to **offensive** discourse targeting a group or an individual based on inherent characteristics (race, religion or gender) that may threaten social peace. Offensive language is more general, referring to any content that can implicitly or explicitly offend others or make them uncomfortable. Today, it has become challenging, and even impossible, to manually track the substance of posts due to the huge and unregulated content that is uploaded online every day. However, there have been many research attempts at automating the detection of abusive content on online platforms. The majority of these attempts adopt supervised learning and deep learning methods trained on annotated datasets [1]. Substantial drawbacks of these approaches include the lack of training data and data bias, as well as the ambiguity of abusive content, which can be challenging to detect accurately using traditional NLP methods [2]. These methods started with machine learning [2], then moved to deep learning, transfer learning and ensemble learning [3]. We therefore aim to train a single transformer-based model on multiple tasks simultaneously, as this has been shown to improve the performance of abusive language detection. This paper proposes a multi-task joint learning approach that utilizes additional features (emotions) to improve model performance. Emotion categorization from text is deeply aligned with scientific concepts that have long been studied in the framework of Sentiment Analysis (SA) [4]. In fact, emotion classification seeks to automatically classify texts in a precise manner according to several classes such as anger, fear, etc. [5]. A psychology study [6] found a direct correlation between a speaker's psychological and emotional state and their abusive speech [7]. For instance, abusive content presents unpleasant attitudes and feelings like anger, disgust, fear, and sadness. Hence, in this paper, we study new methods for improving the detection of hate speech and offensive language by integrating emotional knowledge as an additional related feature. In the Multi-Task Learning (MTL) scenario, multiple tasks are learned in parallel while using a shared representation [8]. In comparison to learning multiple tasks individually and sequentially, this joint learning approach effectively increases the sample size while training a single model, which leads to improved performance through better generalization [9]. Based on the recent state-of-the-art results obtained by applying BERT to hate speech/offensive language detection [10, 11, 12], as well as mBERT (in cross-lingual settings) [4, 10], we have chosen to build our MTL model using these two language models. The main contributions of this paper are as follows:
Footnote 1: [https://www.un.org/en/hate-speech](https://www.un.org/en/hate-speech)
* A multi-task learning framework that shares representations between several related tasks and generalizes better, achieving better performance on the hate speech and offensive language detection tasks.
* Use of BERT and mBERT pre-trained models as shared encoders to create an MTL model, together with an analysis and comparison of their efficiency.
* Use of external related features (emotions) to improve the hate speech detection task, supporting the hypothesis of a relationship between the spread of hateful and abusive content and the emotional and psychological state of its writer.
* Model optimization by learning multiple tasks in parallel over a shared transformer representation, which avoids the computational expense of a separate task-specific fine-tuning step for each task. This makes predictions at inference time faster than training two different models for the two tasks independently.
Our joint learning approach uses a transformer-based shared encoder to implement a multi-task model for categorizing hate speech and offensive language. It addresses the scarcity of labeled data by sharing representations between tasks, using an auxiliary dataset from the secondary task (emotion knowledge). The proposed multi-task model exhibits higher performance with fewer classification errors than single-task baseline models in both hate speech and offensive language detection.
## II Literature Survey
### _Multi-task learning on hate/offensive speech_
The implementation of the multi-task learning approach in NLP-based abusive language detection is still a recent development. The first approaches used deep neural networks. Liu et al. [13] proposed a three-level framework that detects hate speech, its types, and its topics. They developed a fuzzy ensemble approach in a multi-task learning setting, treating each type of hate speech as a task prediction head; their ensemble gave a detection rate of 0.93. Moreover, Kapil et al. [14] proposed a deep learning shared-private multi-task model to leverage information from five abusive-language tasks. They built four deep neural networks and, training them on five datasets, obtained 26 model combinations. Their approaches outperformed the single-task ones by macro-F1 margins between 10% and 27%. Furthermore, Abu Farha et al. [15] worked on the Arabic language within a joint learning approach using sentiment analysis. Their best model is a multi-task learning framework based on CNN-BiLSTM, which gave macro F1-scores of 0.9 and 0.7 for the offensive language and hate speech tasks, respectively.
### _Multi-task learning based on transfer learning_
With the outstanding performance of transformers, most researchers have employed these pre-trained models for hate speech detection. Awal et al. [16] proposed "AngryBERT", a BERT-based multi-task model that jointly learns with sentiment analysis and target detection as auxiliary tasks, and demonstrated the ability of these auxiliary tasks to improve hate speech detection. Their model gave a macro F1 score of 90.71% on the Davidson dataset [2]. In addition, to address aggression identification, Sanghabadi et al. [17] provided a neural model that builds attention on top of BERT using a multi-task learning paradigm; it scored 0.8579 weighted F1 on the English "Misogynistic Aggression Identification" task. Moreover, Djandji et al. [18] enhanced the pre-trained AraBERT with multi-task learning to build a model able to learn well from small amounts of data, training it on several Arabic abusive speech datasets. Their model gave macro F1 scores of 90.15% and 83.41% on the offensive language and hate speech tasks, respectively.
### _Emotion knowledge to detect hate/offensive language_
In many fields, such as the identification of mental illnesses and social media analytics, understanding human emotional patterns is crucial. Since hate and offensive speech are integrally tied to the speaker's emotional state [6], detecting emotions has become an essential application as well. Markov et al. [19] studied how stylometric and emotional characteristics affect the detection of hate speech and demonstrated that, when integrated into an ensemble with deep learning models, these features surpass the commonly used ones for detecting hateful content. Moreover, Chiril et al. [20] investigated the affective knowledge extracted from Sentic computing resources and from structured hate lexicons. They implemented multi-task techniques and attained their best outcomes with models that used data from these affective resources. In addition, Plaza-del-Arco et al. [21] defended the hypothesis of a relationship between hate speech tasks and sentiment, emotion, and target information via a straightforward multi-task learning architecture; implementing a BERT-based multi-task model, they obtained a best overall result of F1 = 0.79. Using a transformer-based model, they also proposed a multi-task model that makes use of shared sentiment and emotional knowledge to identify hate speech in Spanish tweets [22], and their findings demonstrate that these knowledge sources work together to identify hate speech more reliably. Building on the above works, we propose several strategies for the same tasks by integrating the best of these approaches. We build BERT-based and mBERT-based multi-task models using knowledge extracted from emotion samples on social media platforms, drawing on a large and diverse emotional dataset, and we implement a shared representation to enable knowledge transfer between tasks and to reduce model complexity.
## III Methodology
### _Dataset_
We conducted our experiments on the Davidson dataset (Twitter) [2], related to hateful and abusive language detection, and the GoEmotions corpus (Reddit) [5], related to emotion analysis. The corpora statistics are displayed in Table I.
#### III-A1 **Hate/Offensive speech dataset**
For training our model on the hate/offensive task, we used the Davidson corpus [2]. This data was compiled from Twitter using a hate speech lexicon, and its roughly 24k labeled tweets are classified into _Hate speech_, _Offensive_, and _Neither_ classes. In this paper, we implement binary hate/offensive speech classification, so we created two corpora by separating the hateful labeled samples from the offensive ones. As a result, we obtained Davidson-HATE and Davidson-OFF as the hate speech and offensive language labeled datasets, respectively; one plausible construction is sketched below.
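For concreteness, the snippet below shows one plausible way to derive the two binary corpora from the public Davidson release. The column names ("class", "tweet") and label codes (0 = hate, 1 = offensive, 2 = neither) follow that release; the exact filtering used in our experiments may differ.

```python
# A minimal sketch for deriving Davidson-HATE and Davidson-OFF from the
# public Davidson release (t-davidson/hate-speech-and-offensive-language).
# Assumed schema: a "class" column (0 = hate, 1 = offensive, 2 = neither)
# and a "tweet" column; adjust if your copy of the data differs.
import pandas as pd

df = pd.read_csv("labeled_data.csv")

# Davidson-HATE: hate (1) vs. non-hate (0), dropping purely offensive tweets.
davidson_hate = df[df["class"] != 1].copy()
davidson_hate["label"] = (davidson_hate["class"] == 0).astype(int)

# Davidson-OFF: offensive (1) vs. non-offensive (0), dropping hate tweets.
davidson_off = df[df["class"] != 0].copy()
davidson_off["label"] = (davidson_off["class"] == 1).astype(int)

davidson_hate[["tweet", "label"]].to_csv("davidson_hate.csv", index=False)
davidson_off[["tweet", "label"]].to_csv("davidson_off.csv", index=False)
```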
#### III-A2 **Emotion dataset**
For training the emotion task, we use the GoEmotions corpus [5], developed by Demszky et al. in 2020. This corpus is among the largest manually annotated emotion datasets, consisting of 58k English Reddit comments categorized as either Neutral or one of 27 emotion groups. The Ekman level further collapses this categorization into anger, disgust, fear, joy, sadness, and surprise. In our experiments we use the corpus with its Ekman-level labels, in order to work with a more generalizable dataset with less noisy data; a sketch of this label collapse follows.
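As an illustration, the snippet below collapses fine-grained GoEmotions labels to the Ekman level. The grouping shown is a hand-written approximation for illustration only; in practice, the official mapping file shipped with the GoEmotions release should be used.

```python
# Illustrative collapse of fine-grained GoEmotions labels to the Ekman level.
# The grouping below is a partial, hand-written approximation; the official
# Ekman mapping from the GoEmotions repository should be used in practice.
EKMAN_MAP = {
    "anger":    ["anger", "annoyance", "disapproval"],
    "disgust":  ["disgust"],
    "fear":     ["fear", "nervousness"],
    "joy":      ["joy", "amusement", "approval", "gratitude", "love", "optimism"],
    "sadness":  ["sadness", "disappointment", "grief", "remorse"],
    "surprise": ["surprise", "realization", "curiosity", "confusion"],
}

def to_ekman(fine_label: str) -> str:
    """Map a fine-grained GoEmotions label to its Ekman group (else 'neutral')."""
    for ekman, members in EKMAN_MAP.items():
        if fine_label in members:
            return ekman
    return "neutral"  # GoEmotions' own "neutral" label also lands here
```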
### _Transformer-based multi-task approach MTL_
Unlike Single-Task Learning (STL), which learns task-specific features from one dataset at a time, Multi-Task Learning (MTL) aims to tackle multiple problems at once. In STL, input vectors \(x_{i}\) are mapped in a supervised fashion to labels \(y_{i}\) in order to train the model on a classification task \(T\). Each sentence \(x_{i}\) is processed through the model layers, and the resulting final representation \(h\) is passed through a softmax to predict the probability distribution over \(C\) classes. Given a dataset \(D\) with \(n\) labeled training pairs \((x_{i},y_{i})\), the model's weights are trained to reduce the cross-entropy between the predicted labels \(\hat{y}\) and the gold labels \(y\), where
\[\hat{y}=\mathrm{softmax}(Wh+b) \tag{1}\]
Here, \(W\) is the weight matrix of the linear classifier learned during training, \(h\) is the final representation, and \(b\) is a bias term [14]. The objective of the MTL scenario is to learn numerous tasks jointly so as to enhance performance on each of them [8]. Although these tasks may have different data or characteristics, they are correlated and share some similarities, so a trained model can exploit shared characteristics, using hints from one task to enhance the others. To describe the process of building an MTL model more precisely, Zhang et al. [9] define MTL as follows: given \(n\) learning tasks \(\{T_{i}\}_{i=1}^{n}\), where all or a subset of the tasks are related, MTL seeks to enhance the learning of a model for task \(T_{i}\) by leveraging the knowledge contained in some or all of the \(n\) tasks. The two most popular methods for sharing knowledge in multi-tasking are hard parameter sharing and soft parameter sharing [23]. **Hard parameter sharing** has all tasks share the hidden layers, with a number of task-specific output layers; this is the method we use by implementing shared BERT and mBERT encoders, since keeping separate task-specific parameters for each task can increase model complexity [24]. On the other side, each task can have its own layers with certain shareable components, known as **soft parameter sharing**. Overall, the MTL approach efficiently increases the sample size when training a model, which leads to enhanced performance by improving the generalization of the model in comparison to learning the tasks separately [9]. A minimal rendering of the single-task objective in Eq. (1) is sketched below.
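As a concrete anchor for Eq. (1), the following minimal PyTorch sketch implements the single-task objective: a linear classifier over the encoder's pooled representation \(h\), trained with cross-entropy. All names and shapes are illustrative, not our exact training code.

```python
# Minimal PyTorch rendering of Eq. (1): y_hat = softmax(W h + b),
# trained with cross-entropy against gold labels.
import torch
import torch.nn as nn

hidden_size, num_classes = 768, 2                  # BERT-base hidden size, binary task
classifier = nn.Linear(hidden_size, num_classes)   # implements W h + b
loss_fn = nn.CrossEntropyLoss()

h = torch.randn(8, hidden_size)                    # stand-in pooled encoder output
y = torch.randint(0, num_classes, (8,))            # stand-in gold labels

logits = classifier(h)                             # W h + b
y_hat = torch.softmax(logits, dim=-1)              # predicted distribution (Eq. 1)
loss = loss_fn(logits, y)                          # CrossEntropyLoss applies softmax internally
loss.backward()                                    # gradients for the classifier weights
```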
In this study, we use three related tasks: hate speech detection, offensive language detection, and emotion recognition. The purpose is to determine whether adding the emotion categorization task in an MTL scenario facilitates the identification of hate/offensive speech, regardless of the source of the social media data. We therefore build a typical contextualized embedding configuration in which the input is represented by a well-known model, the Bidirectional Encoder Representations from Transformers (BERT) [25], or by Multilingual BERT (mBERT)2. We use the latter model to build a cross-lingually generalizable approach: it could be tested on different target languages or trained on monolingual low-resource ones. We also use it to compare against the BERT-based MTL model, to better understand the influence of adding features from other languages. We add two sequence classification heads to the encoder, one for hate/offensive speech and another for emotion recognition. The two tasks jointly share the transformer encoder, as seen in Figure 1, so that one task profits from the other through shared features. The importance of using the shared encoder is to guarantee that any adjustment to its weights during training changes the same encoder weights for both tasks, without using any extra GPU memory. The output heads for each task are then generated, and each task head is connected to the common sentence encoder. The layers are then adjusted according to our set of downstream tasks. As shown in Figure 1, the input representation is BERT/mBERT-based tokenization, and each task corresponds to a specific classification head. In the first step, a given input is tokenized using the default BERT/mBERT tokenizer and converted into pre-trained BERT embeddings \(E_{b}=\{e_{b_{1}},e_{b_{2}},...,e_{b_{n}}\}\). These embeddings are then sent to the pre-trained BERT/mBERT shared encoder. After defining the feature extraction function using the corresponding tokenizer, we utilize the _"dataset.map"_ method from the NLP package to apply this function to our data inputs; this library efficiently manages the mapping and caches the features. We constructed a _"MultitaskDataloader"_ that combines several data loaders (one per task) into a single loader, which randomly samples from these data loaders, builds a task batch, and produces the associated task name (attached to each batch). Overall, information can flow from one task head to another through the shared encoder, which is updated during training via backpropagation: BERT/mBERT is tuned by the combined (cross-entropy) loss of both tasks in order to learn a set of information shared between them. As for the task-specific layers, these consist of a linear classification layer followed by a task-specific softmax activation, dedicated to extracting the unique information of each task and producing the final outputs. A sketch of this shared-encoder architecture and batch interleaving follows.
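The following sketch illustrates the hard-parameter-sharing design: one shared BERT/mBERT encoder, one linear head per task, and a loader that interleaves randomly sampled task batches. The class and function names (MultitaskModel, multitask_batches) mirror the description above rather than reproduce the exact implementation.

```python
# Sketch of the shared-encoder architecture: one BERT/mBERT encoder shared by
# two task heads, plus a loader that yields (task_name, batch) pairs.
import random
import torch.nn as nn
from transformers import BertModel

class MultitaskModel(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", num_labels=None):
        super().__init__()
        num_labels = num_labels or {"hs": 2, "emotion": 7}  # 6 Ekman classes + neutral
        self.encoder = BertModel.from_pretrained(encoder_name)  # shared weights
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in num_labels.items()})

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.heads[task](out.pooler_output)  # task-specific logits

def multitask_batches(loaders):
    """Randomly interleave (task_name, batch) pairs from per-task dataloaders."""
    iters = {task: iter(dl) for task, dl in loaders.items()}
    schedule = [task for task, dl in loaders.items() for _ in range(len(dl))]
    random.shuffle(schedule)
    for task in schedule:
        yield task, next(iters[task])

# Training step (illustrative): gradients from every task update the encoder.
# for task, batch in multitask_batches({"hs": hs_loader, "emotion": emo_loader}):
#     logits = model(task, batch["input_ids"], batch["attention_mask"])
#     nn.CrossEntropyLoss()(logits, batch["labels"]).backward()
#     optimizer.step(); optimizer.zero_grad()
```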
## IV Experiments and Results
### _Data preprocessing_
We pre-process the Twitter dataset with the Ekphrasis library [26] using the following steps: 1) convert to lowercase; 2) delete URLs and emails; 3) remove user names and mentions; 4) shorten elongated words and delete repeated characters (e.g., "yeeessss" to "yes"); 5) keep stop words; 6) remove punctuation, unknown unicode characters, and extra delimiting characters; 7) remove hashtag signs (#) and segment their text (e.g., "#notracism" to "not racism"); 8) eliminate tweets of length less than 2; and 9) remove emojis. A regex-based approximation of these steps is sketched below.
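The snippet below approximates these steps with plain regular expressions. It is a simplified, self-contained stand-in for the Ekphrasis pipeline, which additionally performs dictionary-based word normalization and hashtag segmentation.

```python
# Regex-based approximation of the listed cleaning steps (not Ekphrasis itself).
import re

def clean_tweet(text: str) -> str:
    text = text.lower()                                # 1) lowercase
    text = re.sub(r"https?://\S+|\S+@\S+", " ", text)  # 2) URLs and emails
    text = re.sub(r"@\w+", " ", text)                  # 3) user mentions
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)         # 4) "yeeessss" -> "yeess"
                                                       #    (Ekphrasis corrects to "yes")
    text = text.replace("#", " ")                      # 7) drop '#'; segmentation of the
                                                       #    hashtag text needs Ekphrasis
    text = re.sub(r"[^\w\s]", " ", text)               # 6)+9) punctuation, emojis, unicode
    return re.sub(r"\s+", " ", text).strip()

# Step 8 (dropping tweets of length < 2) is applied after cleaning.
```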
### _Data analysis platform and evaluation metrics_
The MTL models were implemented using PyTorch. We trained the models on the training set and tested them on the validation set, keeping the original data split for the GoEmotions corpus and partitioning Davidson into an 80% training and 20% validation set. The models were trained with batch size 8 on Google Colab Pro (Tesla T4 GPU environment with 32 GB of RAM), using an optimizer with a learning rate of 1e-5 and the cross-entropy loss function. Since the datasets are imbalanced, classifier performance is measured via multiple metrics: macro and weighted averaged F1 scores, precision, recall, and accuracy. Weighted F1 calculates the score for each class and combines them using a **weight** proportional to the number of true instances of each class; with an imbalanced dataset, this assigns larger contributions to the classes with more samples. These metrics can be computed as sketched below.
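For reference, the reported metrics can be computed directly with scikit-learn, as in the toy example below; macro F1 weights all classes equally, while weighted F1 scales each class by its support.

```python
# Computing the reported metrics with scikit-learn on toy labels.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

y_true = [0, 1, 1, 0, 1, 1]   # toy gold labels
y_pred = [0, 1, 0, 0, 1, 1]   # toy predictions

print("accuracy   :", accuracy_score(y_true, y_pred))
print("macro F1   :", f1_score(y_true, y_pred, average="macro"))
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print("precision  :", precision_score(y_true, y_pred, average="macro"))
print("recall     :", recall_score(y_true, y_pred, average="macro"))
print(confusion_matrix(y_true, y_pred))   # basis for the error analysis below
```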
### _Results and interpretations_
In this section, we compare the performance of single-task models with multi-task ones to understand the importance of using external but related features (emotions) in abusive language detection. The STL models proposed in our previous contribution achieved considerable performance in detecting abusive content [12], where the ensemble average voting of BERT-CNN + BERT-LSTM gave better results on the HS task and BERT-MLP (BERT-Multi-Layer Perceptron) on the OFF task. Hence, we use these two best-performing models from [12], as well as BERT and mBERT, as our single-task baselines for comparison with the MTL approaches. Furthermore, we carried out an error analysis to obtain more information about the performance of the suggested MTL models. Working with imbalanced datasets, we want to look more closely at the classification of each class, so we analyzed the confusion matrices and compared the MTL misclassification errors with those of the other models, as illustrated in Table II. Even though we also have an emotion classifier, as shown in Figure 1, this work mainly focuses on the hate speech and offensive language detection tasks. Hence, Table II presents our experimental results for the hate speech detection task ("HS task") and the offensive language detection task ("OFF task"). The emotion analysis task teaches the MTL network to identify emotion labels from the input samples, so the representations generated by the encoder include affective knowledge; this enables the MTL model to detect HS and OFF more accurately by leveraging the affective nature of the text input. Overall, the STL and MTL results in both tasks (hate speech detection and offensive language detection) reveal good performance when fine-tuning on small, imbalanced datasets.
Fig. 1: Overview of the BERT- and mBERT-based single-task and multi-task (leveraging emotion representations as an auxiliary task) model architectures for the detection of hate speech ("HS") and offensive language ("OFF") (main tasks) from text input.
#### IV-B1 Hate speech detection
As illustrated in Table II, for the HS detection task the mBERT-MTL model surpassed all the STL models except the BERT-based ensemble model. Compared with the BERT STL model, these two models improved performance by 3%, and interestingly there is no significant difference between them: the HS-task accuracy of the ensemble model and of the mBERT-MTL model is 0.9474 and 0.9413, respectively, and their macro F1 is 0.9288 and 0.9204, respectively. Therefore, to determine the better of the BERT-based ensemble model and the mBERT MTL model on the HS task, we conducted an error analysis based on the confusion matrix to explore which model exhibits fewer misclassification errors. We compare the confusion matrices of the mBERT-MTL and the BERT-based ensemble model in Figure 2. Although the ensemble model scores higher on the HS task, we note that mBERT-MTL detects hate speech samples more effectively than the ensemble model, which is worth noticing since we used an imbalanced hate speech dataset with \(\sim\)77% of samples labeled as "hate". This indicates that the MTL model makes fewer misclassification errors than the ensemble one, with smaller percentages of false positives and false negatives for both classes.
#### IV-B2 Offensive language detection
Table II shows that, on the OFF task, BERT-MTL gave the highest performance. However, compared to the mBERT-MTL model and the single-task models, the improvement is not very significant. The main reason is the highly imbalanced Davidson-OFF corpus (\(\sim\)91% offensive samples) [27]. Nevertheless, emotional features improve the OFF-task classification by a small margin, indicating that even with a highly imbalanced dataset, external features increase model performance. We further examined misclassification errors using the confusion matrices of the BERT-MTL and BERT-STL models, the two best-performing models on the OFF task, as shown in Figure 3. BERT-MTL correctly detects offensive content at the highest rate among all baseline models, achieving a true positive rate of 98.94% for the "offensive" class as well as the lowest misclassification error rate. In addition, the false positive rate of the BERT-MTL model (1.06%) is lower than that of the BERT-STL model (14.40%). This indicates that the multi-task models classify texts more correctly than the single-task models. As in the HS task, emotional features help to improve OFF-task performance. Overall, based on the results obtained by the proposed MTL models for both hate speech and offensive language detection, multi-task joint learning models outperform single-task models and exhibit fewer false positive errors, indicating that emotional knowledge helps to improve the classification.
## V Conclusion and Future Work
Recent years have seen a rise in the dissemination of abusive language, making it a significant issue for state governments and social media corporations to find and delete this kind of content.
Fig. 3: Confusion matrix of the offensive language detection task: (a) BERT single-task model, (b) BERT multi-task model.
Fig. 2: Confusion matrix of the hate speech detection task: (a) BERT-based ensemble model, (b) mBERT multi-task model.
As a result, we focused our paper on hate speech and offensive language detection through a BERT/mBERT-based multi-task model that benefits from emotion analysis as a related task. Due to the sensitivity and granularity of hate speech and offensive language, we conducted experiments on two datasets extracted from the Davidson corpus, considering hate speech detection and offensive language detection separately, each as a main classification task. The efficiency of our suggested model (in terms of performance and resource consumption) and a thorough investigation of the transfer of affective knowledge demonstrate how the emotion classification task enables the multi-task system to predict hate/offensive language more precisely by leveraging this associated information. To improve the multi-task approach, future work can focus on the training datasets, mainly by reducing the imbalance ratio through different data augmentation techniques that over-sample the corpora. As for related external features, the hate/offensive language detection task is not restricted to emotion analysis: it can also be combined with polarity, target (towards individuals or groups), irony, or sarcasm detection tasks, so we plan to explore further related features. Furthermore, given the good performance obtained with mBERT, we can measure the cross-lingual generalization of our approach using zero-shot or few-shot learning and test it on several low-resource target languages (Arabic, French, German, etc.). We also aim to propose feature-fusion approaches for hate speech detection as a joint learning approach using fuzzy ruling. In addition, we aim to compare model performance in terms of resource consumption (i.e., memory, run-time) to determine the most optimized solution to deploy in real environments.